Hybrid Computation at Louisiana State University.
ERIC Educational Resources Information Center
Corripio, Armando B.
Hybrid computation facilities have been in operation at Louisiana State University since the spring of 1969. In part, they consist of an Electronics Associates, Inc. (EAI) Model 680 analog computer, an EAI Model 693 interface, and a Xerox Data Systems (XDS) Sigma 5 digital computer. The hybrid laboratory is used in a course on hybrid computation…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-26
... Workers From Sun Microsystems, Inc., Dell Computer Corp., EMC Corp., EMC Corp. Total, Cisco Systems Capital Corporation, Microsoft Corp., Symantec Corp., Xerox Corp., Vmware, Inc., Sun Microsystems Federal... known as Electronic Data Systems, including on- site leased workers from Sun Microsystems, Inc., Dell...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-15
... Workers From Sun Microsystems, Inc., Dell Computer Corp., EMC Corp., EMC Corp. Total, Cisco Systems Capital Corporation, Microsoft Corp., Symantec Corp., Xerox Corp., VMWare, Inc., Sun Microsystems Federal...-- Services, formerly known as Electronic Data Systems, including on- site leased workers from Sun...
Circus: A Replicated Procedure Call Facility
1984-08-01
Computer Science Laboratory, Xerox PARC, July 1982. [24] Bruce Jay Nelson. Remote Procedure Call. Ph.D. dissertation, Computer Science Department... Ph.D. dissertation, Computer Science Division, University of California, Berkeley, Xerox PARC report number CSIF 82-7, December 1982. [30... Tandem Computers Inc. GUARDIAN Operating System Programming Manual, Volumes 1 and 2. Cupertino, California, 1982. [31] R. H. Thomas. A majority
1974-08-01
List-of-figures and table residue: Node Control Logic; 2.16 Pitch Channel Frequency Response; 2.17 Yaw Channel Frequency Response; 2.18 Analog Computer Mechanization of...; Table I, Elements of the Sigma 5 Digital Computer System (Xerox model numbers, performance characteristics, MIOP channel descriptions)... transfer control signals to or from the CPU. The MIOP can handle up to 32 I/O channels, each operating simultaneously, provided the overall data
Atmosphere Explorer control system software (version 2.0)
NASA Technical Reports Server (NTRS)
Mocarsky, W.; Villasenor, A.
1973-01-01
The Atmosphere Explorer Control System (AECS) was developed to provide automatic computer control of the Atmosphere Explorer spacecraft and experiments. The software performs several vital functions, such as issuing commands to the spacecraft and experiments, receiving and processing telemetry data, and allowing for extensive data processing by experiment analysis programs. The AECS was written for a 48K XEROX Data System Sigma 5 computer, and coexists in core with the XDS Real-time Batch Monitor (RBM) executive system. RBM is a flexible operating system designed for a real-time foreground/background environment, and hence is ideally suited for this application. Existing capabilities of RBM have been used as much as possible by AECS to minimize programming redundancy. The most important functions of the AECS are to send commands to the spacecraft and experiments, and to receive, process, and display telemetry data.
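To make the command-and-telemetry idea above concrete, here is a minimal sketch, assuming a hypothetical command queue and telemetry frame format (none of these names come from the AECS itself), of software that issues commands, receives telemetry, and hands decoded records to analysis code:

```python
# Minimal sketch (not the AECS code): a hypothetical command/telemetry loop
# illustrating the three functions the abstract lists -- issuing commands,
# receiving telemetry, and handing decoded frames to analysis routines.
import queue

command_queue = queue.Queue()    # commands destined for the spacecraft (assumed)
telemetry_queue = queue.Queue()  # raw telemetry frames received (assumed)

def issue_command(cmd: str) -> None:
    """Queue a command for uplink (hypothetical stand-in for the real uplink path)."""
    command_queue.put(cmd)

def process_telemetry(frame: bytes) -> dict:
    """Decode one telemetry frame into named fields (format is illustrative only)."""
    return {"experiment_id": frame[0], "payload": frame[1:]}

def control_cycle() -> None:
    """One pass of the control loop: send pending commands, then process telemetry."""
    while not command_queue.empty():
        print("uplink:", command_queue.get())
    while not telemetry_queue.empty():
        print("telemetry:", process_telemetry(telemetry_queue.get()))

issue_command("EXPERIMENT_3_ON")
telemetry_queue.put(bytes([3, 42, 17]))
control_cycle()
```

A real system such as AECS would run this cycle under the RBM executive against actual uplink and downlink hardware rather than in-memory queues.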
A serial digital data communications device [for real-time flight simulation]
NASA Technical Reports Server (NTRS)
Fetter, J. L.
1977-01-01
A general-purpose computer peripheral device is reported that provides a full-duplex, serial, digital data transmission link between a Xerox Sigma computer and a wide variety of external equipment, including computers, terminals, and special-purpose devices. The interface has an extensive set of user-defined options to assist the user in establishing the necessary data links. This report describes those options and other features of the serial communications interface and demonstrates its performance by discussing its application to a particular problem.
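The report concerns 1970s Sigma hardware, but the general pattern it describes (a full-duplex serial link whose rate, framing, and parity are user-selectable options) can be sketched with the pySerial library; the port name and settings below are assumptions for illustration only:

```python
# Illustrative only: a configurable full-duplex serial link, in the spirit of the
# user-defined options described above. Port name and parameters are assumptions.
import serial  # pip install pyserial

link = serial.Serial(
    port="/dev/ttyUSB0",          # hypothetical device name
    baudrate=9600,                # user-defined option, as in the Sigma interface
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_EVEN,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,
)

link.write(b"STATUS?\r")          # transmit toward the external equipment
reply = link.read(64)             # receive up to 64 bytes on the return path
print(reply)
link.close()
```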
1989-10-01
Vol. 18, No. 5, 1975, pp. 253-263. [CAR84] D.B. Carlin, J.P. Bednarz, C.J. Kaiser, J.C. Connolly, M.G. Harvey, "Multichannel optical recording using... Kellog [31] takes a similar approach to ILEX in the sense that it uses existing systems rather than developing specialized hardware (the Xerox 1100... parallel complexity. In Proceedings of the International Conference on Database Theory, pages 1-30, September 1986. [31] C. Kellog. From data management to
Software Sharing Enables Smarter Content Management
NASA Technical Reports Server (NTRS)
2007-01-01
In 2004, NASA established a technology partnership with Xerox Corporation to develop high-tech knowledge management systems while providing new tools and applications that support the Vision for Space Exploration. In return, NASA provides research and development assistance to Xerox to progress its product line. The first result of the technology partnership was a new system called the NX Knowledge Network (based on Xerox DocuShare CPX). Created specifically for NASA's purposes, this system combines Netmark (practical database content management software created by the Intelligent Systems Division of NASA's Ames Research Center) with complementary software from Xerox's global research centers and DocuShare. NX Knowledge Network was tested at the NASA Astrobiology Institute, and is widely used for document management at Ames, Langley Research Center, within the Mission Operations Directorate at Johnson Space Center, and at the Jet Propulsion Laboratory, for mission-related tasks.
1984-12-01
Table-of-contents residue: 3Com Corporation; Ethernet Controller Support; Host Systems Support; Personal Computers Support; VAX EtherSeries Software; Network Research Corporation; File Transfer Service; Virtual Terminal Service... Control office is planning to acquire a Digital Equipment Corporation VAX 11/780 mainframe computer with the Unix Berkeley 4.2BSD operating system. They
ERIC Educational Resources Information Center
Streibel, Michael J.
This paper discusses the implications of Lucy Suchman's conclusion that a theory of situated action--i.e., the actual sense that specific users make out of specific Xeroxing events--is truer to the lived experience of Xerox users than a cognitive account of the user's plans--e.g., the hierarchy of subprocedures for how Xerox machines should be…
A Guide to Computer Adaptive Testing Systems
ERIC Educational Resources Information Center
Davey, Tim
2011-01-01
Some brand names are used generically to describe an entire class of products that perform the same function. "Kleenex," "Xerox," "Thermos," and "Band-Aid" are good examples. The term "computerized adaptive testing" (CAT) is similar in that it is often applied uniformly across a diverse family of testing methods. Although the various members of…
System Engineering Concept Demonstration, Interface Standards Studies. Volume 4
1992-12-01
Xerox's Palo Alto Research Center (PARC) begat the Xerox Star; Steve Jobs visited PARC, saw the Star, went back to Apple, and begat the Mac. But... The author of Adobe Systems' PostScript Language Program Design has left Adobe to join Steve Jobs' NeXT, Inc. Reid worked for Adobe Systems for four and a
Application of the System Identification Technique to Goal-Directed Saccades.
1984-07-30
1983 to May 31, 1984 by the AFOSR under Grant No. AFOSR-83-0187. Budget summary: 1. Salaries & Wages, $7,257; 2. Employee Benefits, $4,186; 3. Indirect Costs, $1,177; ... Equipment, $2,127 (DEC VT100 terminal, computer terminal table & chair, computer interface); 5. Travel, $672; 6. Miscellaneous Expenses, $281 (computer costs, telephone, Xeroxing, report costs); Total, $12,000.
NASA Astrophysics Data System (ADS)
Chang, Shu
2000-03-01
Xerox has a very favorable reputation as an employer for women. In 1998, Xerox was cited three times as a top company for working women. "Working Mother" magazine, for the 13th consecutive year, chose Xerox as one of the 100 best companies for working mothers. "Working Women" magazine included Xerox as one of the top 25 public companies in the United States for executive women. "Latina Style" named Xerox as one of the 50 companies that offer the best professional opportunities for Hispanic women. Nevertheless, Xerox is striving to be the employer of choice for women. Xerox views diversity as a business necessity that goes beyond numbers and targets; to Xerox, diversity brings ideas, perspectives, and creativity that lead to more innovative solutions. To become the employer of choice for women, the approach of the Xerox Research and Technology organization (XR&T) is to improve the recruitment, retention, and advancement of women, with improvement measured by an increasing representation of women at all levels. Championed by Dr. Mark B. Myers, Senior Vice President and head of XR&T, a dual effort has been implemented. At the request of Dr. Myers, an XR&T Women's Council was formed in 1991. The mission of the Council has been to identify and promote opportunities for improving the work environment to support diversity and to advise XR&T management on how to achieve this goal. Alongside the Council, XR&T management has been responsible for following through on Dr. Myers' directions, Xerox policies, and the Council's recommendations. Through persistence, this dual effort is now paying off. Since 1991, the number of women among new hires and promotions has been steadily increasing. As for retention, XR&T is continuously creating, improving, and communicating policies and practices on career development programs, BWF tracking, diversity training, and related efforts. Due to these proactive actions, a more supportive climate for women is emerging in XR&T. In our talk, we will discuss the XR&T dual effort and its results in more detail.
xdamp Version 6 : an IDL-based data and image manipulation program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, William Parker
2012-04-01
The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA graphics package (available from Computer Associates International, Inc., Garden City, NY) as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix-based workstations, a replacement was needed. This package uses the IDL software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM workstations, Hewlett Packard workstations, SUN workstations, Microsoft Windows computers, Macintosh computers, and Digital Equipment Corporation VMS and Alpha systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 7 and Windows Vista; Unix platforms; and Macintosh computers. Version 6 is an update that uses the IDL Virtual Machine to resolve the need for licensing IDL.
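As a rough illustration of the kind of one-dimensional waveform manipulation xdamp provides (not xdamp code, which is written in IDL), a few lines of Python with NumPy and Matplotlib can generate, scale, shift, and plot a data-versus-time trace; the waveform and shift values are invented:

```python
# A minimal sketch (not xdamp itself) of one-dimensional waveform manipulation:
# build a data-vs-time trace, apply amplitude scaling and a time shift, and plot.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 1e-6, 1000)                      # time base, assumed seconds
v = np.exp(-t / 2e-7) * np.sin(2 * np.pi * 5e6 * t)   # synthetic accelerator-like signal

v_scaled = 2.0 * v                                    # amplitude scaling
v_shifted = np.interp(t - 5e-8, t, v, left=0.0)       # time shift by 50 ns

plt.plot(t, v, label="raw")
plt.plot(t, v_scaled, label="scaled x2")
plt.plot(t, v_shifted, label="shifted 50 ns")
plt.xlabel("time (s)")
plt.ylabel("signal")
plt.legend()
plt.show()
```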
Educational Reports That Scale across Users and Data
ERIC Educational Resources Information Center
Rolleston, Rob; Howe, Richard; Sprague, Mary Ann
2015-01-01
The field of education is undergoing fundamental change with the growing use of data. Fine-scale data collection at the item-response level is now possible. Xerox has developed a system that bridges the paper-to-digital divide by providing the well-established and easy-to-use paper interface to students, but digitizes the responses for scoring,…
NASA Astrophysics Data System (ADS)
Mueller, Daniel L.
1994-03-01
Xerox virtually created the plain-paper copier industry; it enjoyed unparalleled growth, and its name became synonymous with copying. In the 1970s, however, competitors aggressively attacked this attractive growth market and took away market share. An evaluation of the competition told Xerox that its competitors were selling products for what it cost Xerox to make them, that their quality was better, and that their goal was to capture all of Xerox's market share. The fundamental precept that Xerox pursued to meet this competitive threat and recapture market share was the recognition that long-term success depends on total mastery of quality, especially in manufacturing. In turning this precept into reality, Xerox Manufacturing made dramatic improvements in all of its processes and practices, focusing on quality as defined by the customer. Actions to accomplish this result included training all people in basic statistical tools and their applications, the use of employee involvement teams, and continuous quality improvement techniques. These and other actions not only enabled Xerox to turn back the competitive threat and recover market share, but also helped it win the Malcolm Baldrige National Quality Award in 1989.
18. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox ...
18. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox copy of original plans, dated 1932, by Wisconsin Highway Commission. Xerox copy in possession of Westbrook Associated Engineers, Spring Green, Wisconsin. SUPERSTRUCTURE DETAILS. - Chippewa River Bridge, Spanning Chippewa River at State Highway 35, Nelson, Buffalo County, WI
16. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox ...
16. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox copy of original plans, dated 1932, by Wisconsin Highway Commission. Xerox copy in possession of Westbrook Associated Engineers, Spring Green, Wisconsin. SUPERSTRUCTURE DETAILS. - Chippewa River Bridge, Spanning Chippewa River at State Highway 35, Nelson, Buffalo County, WI
17. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox ...
17. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox copy of original plans, dated 1932, by Wisconsin Highway Commission. Xerox copy in possession of Westbrook Associated Engineers, Spring Green, Wisconsin. SUPERSTRUCTURE DETAILS. - Chippewa River Bridge, Spanning Chippewa River at State Highway 35, Nelson, Buffalo County, WI
Copyright Development in the United States
ERIC Educational Resources Information Center
Stedman, John C.
1976-01-01
Some of the problems posed by the more significant new technologies in their relation to the copyright law are described. Included in the discussion are cable television, reprography (especially Xeroxing and comparable processes), and the computer. A federal Technology Commission is proposed. (LBH)
19. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox ...
19. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox copy of original plans, dated 1932, by Wisconsin Highway Commission. Xerox copy in possession of Westbrook Associated Engineers, Spring Green, Wisconsin. DETAILS FOR PIERS NO. 1, 2, 5 & 6. - Chippewa River Bridge, Spanning Chippewa River at State Highway 35, Nelson, Buffalo County, WI
15. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox ...
15. Wayne Chandler, Photographer, August 1993 Photographic copy of Xerox copy of original plans, dated 1932, by Wisconsin Highway Commission. Xerox copy in possession of Westbrook Associated Engineers, Spring Green, Wisconsin. GENERAL PLAN AND ELEVATION OF BRIDGE. - Chippewa River Bridge, Spanning Chippewa River at State Highway 35, Nelson, Buffalo County, WI
Neither Schools nor Photocopiers Are Flawless.
ERIC Educational Resources Information Center
Mecklenburger, James A.
1988-01-01
Compares David Kearns' resentment of school performance (in the same "Kappan" issue) to the author's own frustration with Xerox photocopiers. To achieve the restructuring and choice central to Kearns' educational recovery plan, the schools will need to depend increasingly on technology (computer simulations, interactive video, and…
ERIC Educational Resources Information Center
Cutcher-Gershenfeld, Joel
A combination of crises and innovative attempts to manage them that began in 1980 transformed the relationship between Xerox Corporation and the Amalgamated Clothing and Textile Workers Union, which represents most of Xerox's manufacturing employees. Eight pivotal episodes were largely responsible for the transformation. The first was a joint…
Chips: A Tool for Developing Software Interfaces Interactively.
ERIC Educational Resources Information Center
Cunningham, Robert E.; And Others
This report provides a detailed description of Chips, an interactive tool for developing software employing graphical/computer interfaces on Xerox Lisp machines. It is noted that Chips, which is implemented as a collection of customizable classes, provides the programmer with a rich graphical interface for the creation of rich graphical…
Duplicating Research Success at Xerox
NASA Astrophysics Data System (ADS)
Hays, Dan A.
2003-03-01
The genesis of Xerox is rooted in the invention of xerography by physicist Chester Carlson in 1938. The initial research by Carlson can be viewed as the first of four successful xerographic research eras that have contributed to the growth of Xerox. The second era began in 1944 when Carlson established a working relationship with Battelle Memorial Institute in Columbus, OH. Due to many research advances at Battelle, the Haloid Corporation in Rochester, NY acquired a license to the xerographic process in 1947. The name of the company was changed to Xerox Corporation in 1961 following the wide market acceptance of the legendary Xerox 914 copier. Rapid revenue growth of Xerox in the mid-'60s provided the foundation for a third successful research era in the '70s and '80s. A research center was established in Webster, NY for the purpose of improving the design of xerographic subsystems and materials. These research efforts contributed to the commercial success of the DocuTech family of digital production printers. The fourth successful research era was initiated in the '90s with the objective of identifying a high-speed color xerographic printing process. A number of research advances contributed to the design of a 100 page per minute printer recently introduced as the Xerox DocuColor iGen3 Digital Production Press. To illustrate the role of research in enabling these waves of successful xerographic products, the physics of photoreceptors, light exposure and development subsystems will be discussed. Since the annual worldwide revenue of the xerographic industry exceeds 100 billion dollars, the economic return on Carlson's initial research investment in the mid-'30s is astronomical. The future for xerography remains promising since the technology enables high-speed digital printing of high-quality color documents with variable information.
Xerox' Canadian Research Facility: The Multinational and the "Offshore" Laboratory.
ERIC Educational Resources Information Center
Marchessault, R. H.; Myers, M. B.
1986-01-01
The history, logistics, and strategy behind the Xerox Corporation's Canadian research laboratory, a subsidiary firm located outside the United States for reasons of manpower, tax incentives, and quality of life, are described. (MSE)
The Xerox Corporation campus is located at 800 Phillips Road in Webster, New York. The facility occupies approximately one thousand acres in the Town of Webster. The areas adjacent to the site to the east, south, and west are zoned for industrial, commercial…
NASA Technical Reports Server (NTRS)
White, R. J.
1973-01-01
A detailed description of Guyton's model and its modifications is provided. Also included are descriptions of several typical experiments which the model can simulate, illustrating the model's general utility. Also discussed are the problems associated with interfacing the model to other models, such as respiratory and thermal regulation models; this is of prime importance since these stimuli are not present in the current model. A user's guide for operating the model on the Xerox Sigma 3 computer is provided, and two programs are described. A verification plan and a procedure for performing experiments are also presented.
[Low-Fidelity Simulation of a Zero-G Robot]
NASA Technical Reports Server (NTRS)
Sweet, Adam
2001-01-01
The item to be cleared is a low-fidelity software simulation model of a hypothetical free-flying robot designed for use in zero-gravity environments. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has previously been cleared for distribution. When used with the HCC software, the model computes the location and orientation of the simulated robot over time. Failures (such as a broken motor) can be injected into the simulation to produce simulated behavior corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated behavior. This model does not contain any encryption software, nor can it perform any control tasks that might be export controlled.
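A toy sketch of the same idea, not the HCC model itself: propagate a free-flying robot's state over time and inject a failure (here a dead thruster) so that a diagnosis system could be exercised against the nominal and faulty trajectories. All quantities are invented for illustration.

```python
# Toy failure-injection simulation: integrate a 2-D position under thruster
# forces; an injected failure removes one thruster's contribution.
import numpy as np

def simulate(steps, dt, failed_thruster=None):
    pos = np.zeros(2)
    vel = np.zeros(2)
    thrust = np.array([[0.1, 0.0], [0.0, 0.1]])    # two thrusters, hypothetical forces
    trajectory = []
    for _ in range(steps):
        accel = np.zeros(2)
        for i, force in enumerate(thrust):
            if i != failed_thruster:               # a failed thruster produces no force
                accel += force
        vel += accel * dt
        pos += vel * dt
        trajectory.append(pos.copy())
    return np.array(trajectory)

nominal = simulate(100, 0.1)
faulty = simulate(100, 0.1, failed_thruster=1)     # simulated "broken motor"
print("final positions:", nominal[-1], faulty[-1])
```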
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects imply collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Integrated system for well-to-well correlation with geological knowledge base
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saito, K.; Doi, E.; Uchiyama, T.
1987-05-01
A task of well-to-well correlation is an essential part of the reservoir description study. Since the task is involved with diverse data such as logs, dipmeter, seismic, and reservoir engineering, a system with simultaneous access to such data is desirable. A system is developed to aid stratigraphic correlation under a Xerox 1108 workstation, written in INTERLISP-D. The system uses log, dipmeter, seismic, and computer-processed results such as Litho-Analysis and LSA (Log Shape Analyzer). The system first defines zones which are segmentations of log data into consistent layers using Litho-Analysis and LSA results. Each zone is defined as a minimum unit for correlation with slot values of lithology, thickness, log values, and log shape such as bell, cylinder, and funnel. Using a user's input of local geological knowledge such as depositional environment, the system selects marker beds and performs correlation among the wells chosen from the base map. Correlation is performed first with markers and then with sandstones of lesser lateral extent. Structural dip and seismic horizon are guides for seeking a correlatable event. Knowledge of sand body geometry such as ratio of thickness and width is also used to provide a guide on how far a correlation should be made. Correlation results performed by the system are displayed on the screen for the user to examine and modify. The system has been tested with data sets from several depositional settings and has shown to be a useful tool for correlation work. The results are stored as a data base for structural mapping and reservoir engineering study.
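The zone-and-slot representation described above can be sketched in a few lines of Python (the original system is written in INTERLISP-D; the field names and matching rule below are illustrative assumptions, not the paper's knowledge base):

```python
# Sketch of the "zone" idea: each zone carries slot values, and marker zones
# are matched between wells on simple criteria as a stand-in for the
# knowledge-based correlation described in the abstract.
from dataclasses import dataclass

@dataclass
class Zone:
    lithology: str        # e.g. "sandstone", "shale"
    thickness: float      # meters (assumed unit)
    log_shape: str        # "bell", "cylinder", or "funnel"
    is_marker: bool = False

well_a = [Zone("shale", 12.0, "cylinder", is_marker=True),
          Zone("sandstone", 4.5, "funnel")]
well_b = [Zone("shale", 11.2, "cylinder", is_marker=True),
          Zone("sandstone", 3.8, "bell")]

def correlate_markers(a, b):
    """Pair marker zones with the same lithology and log shape (a crude stand-in
    for the system's knowledge-based matching)."""
    return [(za, zb) for za in a for zb in b
            if za.is_marker and zb.is_marker
            and za.lithology == zb.lithology and za.log_shape == zb.log_shape]

print(correlate_markers(well_a, well_b))
```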
Experiments with microcomputer-based artificial intelligence environments
Summers, E.G.; MacDonald, R.A.
1988-01-01
The U.S. Geological Survey (USGS) has been experimenting with the use of relatively inexpensive microcomputers as artificial intelligence (AI) development environments. Several AI languages are available that perform fairly well on desk-top personal computers, as are low-to-medium cost expert system packages. Although performance of these systems is respectable, their speed and capacity limitations are questionable for serious earth science applications foreseen by the USGS. The most capable artificial intelligence applications currently are concentrated on what is known as the "artificial intelligence computer," and include Xerox D-series, Tektronix 4400 series, Symbolics 3600, VAX, LMI, and Texas Instruments Explorer. The artificial intelligence computer runs expert system shells and Lisp, Prolog, and Smalltalk programming languages. However, these AI environments are expensive. Recently, inexpensive 32-bit hardware has become available for the IBM/AT microcomputer. USGS has acquired and recently completed Beta-testing of the Gold Hill Systems 80386 Hummingboard, which runs Common Lisp on an IBM/AT microcomputer. Hummingboard appears to have the potential to overcome many of the speed/capacity limitations observed with AI-applications on standard personal computers. USGS is a Beta-test site for the Gold Hill Systems GoldWorks expert system. GoldWorks combines some high-end expert system shell capabilities in a medium-cost package. This shell is developed in Common Lisp, runs on the 80386 Hummingboard, and provides some expert system features formerly available only on AI-computers including frame and rule-based reasoning, on-line tutorial, multiple inheritance, and object-programming. © 1988 International Association for Mathematical Geology.
COMPUTER DATA PROCESSING SYSTEM. PROJECT ROVER, 1962
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narin, F.
A system was created for processing large volumes of data from Project ROVER tests at the Nevada Test Site. The data are compiled as analog, frequency-modulated tape, which is translated in a Packard-Bell tape-to-tape converter into a binary coded decimal (BCD) IBM 7090 computer input tape. This input tape, tape A5, is processed on the 7090 by the RDH-D FORTRAN-II code and its 20 FAP and FORTRAN subroutines. Outputs from the 7090 run are tape A3, a BCD tape used for listing on the IBM 1401 input-output computer; tape B5, a binary tape used as input to a Stromberg-Carlson 4020 cathode ray tube (CRT) plotter; and tape B6, a binary tape used for permanent data storage and as input to specialized subcodes. The information on tape B5 commands the 4020 to write grids, data points, and other information on the face of a CRT; the information on the CRT is photographed on 35 mm film which is subsequently developed, and full-size (10" x 10") plots are made from the 35 mm film on a Xerox 1824 printer. The 7090 processes a data channel in approximately 4 seconds plus 4 seconds per plot to be made on the 4020 for that channel. Up to 4500 data and calibration points on any one channel may be processed in one pass of the RDH-D code. This system has been used to produce more than 100,000 prints on the 1824 printer from more than 10,000 different 4020 plots. At 00 per minute of 7090 time, it costs 60 to process a typical 3-plot data channel on the 7090; each print on the 1824 costs between 5 and 10 cents including rental, supplies, and operator time. All automatic computer stops in the codes and subroutines are accompanied by on-line instructions to the operator. Extensive redundancy checking is incorporated in the FAP tape handling subroutines. (auth)
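A small worked example of the processing-time figure quoted above (roughly 4 seconds per data channel plus 4 seconds per plot produced for that channel on the 7090):

```python
# Worked example of the stated 7090 processing time: 4 s per channel
# plus 4 s per plot made for that channel.
def channel_seconds(plots: int) -> int:
    return 4 + 4 * plots

print(channel_seconds(3))   # a typical 3-plot channel: about 16 seconds of 7090 time
```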
Evaluation of the Rational Environment
1988-07-01
Trademark notice and table-of-contents residue: ...Computer, Inc. Rational, R1000, and Rational Environment are trademarks of Rational. Smalltalk-80 is a trademark of Xerox. Sun is a trademark of Sun... Contents include: 1. Introduction; 1.1 Background; 1.2 The Rational Environment as Evaluated; 1.3 Scope of Evaluation; 1.4 Road Map for the Reader; 2. ...; 2.3.1 CMVC Implementation; 2.3.2 Workorder Management; 3. Capabilities of the Rational Environment; 3.1 ...
2000-10-01
control systems and prototyped the approach by porting the ILU ORB from Xerox to the Lynx real-time operating system. They then provided a distributed... compliant real-time operating system, a real-time ORB, and an ODMG-compliant real-time ODBMS [12]. The MITRE system is an infrastructure for... the server's local operating system can handle. For instance, on a node controlled by the VxWorks real-time operating system with 256 local
2012 ARPA-E Energy Innovation Summit Keynote Presentation (Ursula Burns, Xerox Corporation)
Burns, Ursula
2018-01-16
The third annual ARPA-E Energy Innovation Summit was held in Washington D.C. in February, 2012. The event brought together key players from across the energy ecosystem - researchers, entrepreneurs, investors, corporate executives, and government officials - to share ideas for developing and deploying the next generation of energy technologies. Ursula Burns, Chairman and CEO of the Xerox Corporation, gave the second keynote address of the third day's sessions on February 29.
NASA Technical Reports Server (NTRS)
Kirshten, P. M.; Black, S.; Pearson, R.
1979-01-01
The ESS-EDS and EDS-Sigma interfaces within the standalone engine simulator are described. The operation of these interfaces, including the definition and use of special function signals and data flow paths within them during data transfers, is presented along with detailed schematics and circuit layouts of the described equipment.
1982-06-30
treatments, and cure (or kill) a patient. Administratively, the items were in a multiple-choice format and the simulation proceeded by branching... Discs: dual 5 1/4 inch floppies (IM); Bus: N/A; Operating System: CP/M, MmmOST; Price: $3,495 ... Model 820, Xerox, 1341 West Mockingbird Lane
Development and operations of the astrophysics data system
NASA Technical Reports Server (NTRS)
Murray, Stephen S.; Oliversen, Ronald (Technical Monitor)
2005-01-01
Abstract service:
- Continued regular updates of abstracts in the databases, both at SAO and at all mirror sites.
- Modified loading scripts to accommodate changes in data format (PhyS).
- Discussed data deliveries with providers to clear up problems with format or other errors (EGU).
- Continued inclusion of large numbers of historical literature volumes and physics conference volumes xeroxed from the library.
- Performed systematic fixes on some data sets in the database to account for changes in article numbering (AGU journals).
- Implemented linking of ADS bibliographic records with multimedia files.
- Debugged and fixed obscure connection problems with the ADS Korean mirror site which were preventing successful updates of the data holdings.
- Wrote a procedure to parse citation data and characterize an ADS record based on its citation ratios within each database.
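The citation-ratio characterization mentioned in the last item might look roughly like the following sketch; the function name, database labels, and counts are hypothetical, not the ADS implementation:

```python
# Hypothetical sketch: characterize a record by the fraction of its citations
# falling in each database, then label it by the dominant database.
def citation_ratios(cites_by_db):
    """Fraction of a record's citations that fall in each database."""
    total = sum(cites_by_db.values())
    return {db: (n / total if total else 0.0) for db, n in cites_by_db.items()}

record = {"astronomy": 42, "physics": 6, "general": 2}   # made-up counts
ratios = citation_ratios(record)
primary = max(ratios, key=ratios.get)
print(ratios, "-> classified as", primary)
```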
1988-09-01
Interleaved body-text and endnote residue; recoverable fragments include references to Polaroid and Xerox; William Shakespeare (1564-1616), Hamlet, Act I, Scene...; the maxim "Know thyself"; an ethics institute; and a former philosophy teacher and magazine editor.
Computational Nanotechnology at NASA Ames Research Center, 1996
NASA Technical Reports Server (NTRS)
Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996 held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at CalTech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.
1984-01-01
Table residue from a contractor listing with associated counts; entries include Ward 79 Ltd, Weeks Construction Co, Westinghouse Electric Corp, Williams Services Inc, Xerox Corporation, University of South Carolina, Xerox Corporation (202, 163, 39), a Columbia total (28,454; 17,409; 791; 439; 2,499; 7,316), Fennell Cntr Co Chrlstn, Flad & Associates of Florida, Freeman William F Inc, and Generation III Inc.
StrateGene: object-oriented programming in molecular biology.
Carhart, R E; Cash, H D; Moore, J F
1988-03-01
This paper describes some of the ways that object-oriented programming methodologies have been used to represent and manipulate biological information in a working application. When running on a Xerox 1100 series computer, StrateGene functions as a genetic engineering workstation for the management of information about cloning experiments. It represents biological molecules, enzymes, fragments, and methods as classes, subclasses, and members in a hierarchy of objects. These objects may have various attributes, which themselves can be defined and classified. The attributes and their values can be passed from the classes of objects down to the subclasses and members. The user can modify the objects and their attributes while using them. New knowledge and changes to the system can be incorporated relatively easily. The operations on the biological objects are associated with the objects themselves. This makes it easier to invoke them correctly and allows generic operations to be customized for the particular object.
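A compact Python analogue of the object model described above (StrateGene itself runs in Lisp on Xerox 1100-series machines; the class names and example molecules here are illustrative, not taken from the paper):

```python
# Sketch of the described object model: attributes defined on a class are
# inherited by subclasses and instances, and operations live on the objects.
class Molecule:
    topology = "unknown"              # class-level attribute, inherited downward

    def __init__(self, name, length_bp):
        self.name = name
        self.length_bp = length_bp

    def describe(self):
        return f"{self.name}: {self.length_bp} bp, {self.topology}"

class Plasmid(Molecule):
    topology = "circular"             # subclass overrides the inherited value

class Fragment(Molecule):
    topology = "linear"

    def ligate(self, other):
        """Generic operation customized for fragments: join two pieces."""
        return Fragment(f"{self.name}+{other.name}", self.length_bp + other.length_bp)

vector = Plasmid("pBR322", 4361)
insert = Fragment("geneX", 1200)
print(vector.describe())
print(insert.ligate(Fragment("linker", 30)).describe())
```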
NASA Technical Reports Server (NTRS)
Lee, Mark
1991-01-01
Many companies, including Xerox and Texas Instruments, are using cross functional systems to deal with the increasingly complex and competitive business environment. However, few firms within the aerospace industry appear to be aware of the significant benefits that cross functional systems can provide. Those benefits are examined and a flexible methodology is discussed that companies can use to identify and develop cross functional systems that will help improve organizational performance. In addition, some of the managerial issues are addressed that cross functional systems may raise and specific examples are used to explore networking's contributions to cross functional systems.
Distributed Systems Technology Survey.
1987-03-01
and protocols. 2. Hardware Technology. Economic factors are a major reason for the proliferation of distributed systems. Processors, memory, and magnetic and optical... destined messages and perform the appropriate forwarding. There ... agreement that a lightweight process mechanism is essential to support commonly used... Xerox PARC environment [31]. Shared file servers, discussed below, are essential to the success of such a scheme. 11. Security. A distributed
NASA Astrophysics Data System (ADS)
Brodersen, R. W.
1984-04-01
A scaled version of the RISC II chip has been fabricated and tested and these new chips have a cycle time that would outperform a VAX 11/780 by about a factor of two on compiled integer C programs. The architectural work on a RISC chip designed for a Smalltalk implementation has been completed. This chip, called SOAR (Smalltalk On a RISC), should run programs 4-15 times faster than the Xerox 1100 (Dolphin), a TTL minicomputer, and about as fast as the Xerox 1132 (Dorado), a $100,000 ECL minicomputer. The 1983 VLSI tools tape has been converted for use under the latest UNIX release (4.2). The Magic (formerly called Caddy) layout system will be a unified set of highly automated tools that cover all aspects of the layout process, including stretching, compaction, tiling and routing. A multiple window package and design rule checker for this system have just been completed and compaction and stretching are partially implemented. New slope-based timing models for the Crystal timing analyzer are now fully implemented and in regular use. In an accuracy test using a dozen critical paths from the RISC II processor and cache chips it was found that Crystal's estimates were within 5-10% of SPICE's estimates, while being a factor of 10,000 times faster.
Design and field performance of the KENETECH photovoltaic inverter system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behnke, M.R.
1995-11-01
KENETECH Windpower has recently adapted the power conversion technology developed for the company's variable speed wind turbine to grid-connected photovoltaic applications. KENETECH PV inverter systems are now in successful operation at the Sacramento Municipal Utility District's (SMUD) Hedge Substation and the PVUSA-Davis site, with additional systems scheduled to be placed into service by the end of 1995 at SMUD, the New York Power Authority, Xerox Corporation's Clean Air Now project, and the Georgia Tech Aquatic Center. The features of the inverter are described.
NASA Technical Reports Server (NTRS)
1998-01-01
Under SBIR (Small Business Innovative Research) contracts with Lewis Research Center, Nektonics, Inc., developed coating process simulation tools, known as Nekton. This powerful simulation software is used specifically for the modeling and analysis of a wide range of coating flows including thin film coating analysis, polymer processing, and glass melt flows. Polaroid, Xerox, 3M, Dow Corning, Mead Paper, BASF, Mitsubishi, Chugai, and Dupont Imaging Systems are only a few of the companies that presently use Nekton.
[Digital archiving of imaged heart catheter studies on CD-R. Detection of irreversible CD damage].
Erbel, R; Ge, J; Haude, M
1998-12-01
Digital archiving has great advantages compared with standard 35-mm X-ray cine film documentation. The data are immediately available and quantitative coronary angiography is possible. In addition, ongoing technical progress is improving the availability of data. The loss of films is nearly eliminated, as only copies of the digital archive data are delivered. Using CD-Rs also offers a clear environmental advantage. We report damage to CD-Rs after 89, 162, 181, and 252 days when they were stored in envelopes containing polypropylene. The damaged CD-Rs all came from the manufacturer Verbatim, whereas CD-Rs from Rank Xerox or Kodak were never damaged. In contrast to Verbatim, Rank Xerox gave written confirmation of 10-year storage and written confirmation that storage in the polypropylene envelopes is acceptable. Mechanical and thermal damage and damage from humidity have to be considered, as well as chemical interactions of the CD-R surface with the polypropylene material. As digital storage of X-ray images must be provided for 10 years in Germany, it is concluded that storage in polypropylene envelopes has to be avoided when written confirmation by the company is not given. These observations should stimulate better control and analysis of the real storage reliability of digital data and the provision of media other than CD-R for long-term archiving in the future.
Ink Jet For Business Graphic Application
NASA Astrophysics Data System (ADS)
Hooper, Dana H.
1987-04-01
This talk covers the use of computer-generated color output in the preparation of professional, memorable presentations, with a focus on this application and today's business graphics marketplace. As background, an overview of the factors and trends influencing the market for color hard-copy output is essential. The availability of lower-cost computing technology, improved graphics software and user interfaces, and the availability of color copiers are combining with the latest generation of color ink jet printers to drive strong growth in the use of color hard-copy devices in the business graphics marketplace. The market is expected to grow at a compound annual growth rate in excess of 25% and reach a level of 5 billion by 1990. Color lasography and ink-jet-based products are expected to increase share significantly, primarily at the expense of pen plotters. Essential to this growth is the latest generation of products. The Xerox 4020 Color Ink Jet Printer embodies the latest ink jet technology and is a good example of this new generation of products. The printer brings highly reliable color to a broad range of business users. The 4020 is driven by over 50 software packages, giving users compatibility and supporting a variety of applications. The 4020 is easy to operate and maintain and is capable of producing excellent hard copy and transparencies at an attractive price point. Several specific application areas were discussed. Images were typically created on an IBM PC or compatible with a graphics application package and output to the Xerox 4020 Color Ink Jet Printer. Bar charts, line graphs, pie charts, integrated text and graphics, reports, and maps were displayed with a brief description. Additionally, the use of color in brain scanning to discern and communicate information and in computer-generated art demonstrates the wide variety of potential applications. Images may be output to paper or to transparency for overhead presentation. The future of color in the business graphics market looks bright and will continue to be strongly influenced by future product introductions.
Micromachining technology for thermal ink-jet products
NASA Astrophysics Data System (ADS)
Verdonckt-Vandebroek, Sophie
1997-09-01
This paper reviews recent trends and evolution in the low-end color printing market, which is currently dominated by thermal ink jet (TIJ) based products. Microelectromechanical systems technology has been an enabler for the unprecedented cost/performance ratio of these printing products. The generic TIJ operating principles are based on an intimate blend of thermodynamics, fluid dynamics, and LSI electronics. The key principles and design issues are outlined, and the fabrication of TIJ printheads is illustrated with an implementation by the Xerox Corporation.
2010-09-01
articulating perception, interpretation, and actionable prediction in an operational environment. BCKS' success with digital storytelling has far... podcasts; wikis and other collaborative spaces; social networks such as Facebook and LinkedIn; other user-generated content; virtual social environments... study of Xerox's knowledge management systems, noting that 80% of its IT was focused on adapting to the social dynamics of its workplace environment
E-Leadership: A Two-Pronged Idea.
ERIC Educational Resources Information Center
Pulley, Mary Lynn; Sessa, Valerie; Malloy, Michelle
2002-01-01
The electronic leadership development program created for Xerox has the following goals: improving leadership skills, morale, and retention; maximizing efficiency of resources; and teaching leaders how to function in an online environment. (Author/JOW)
Balancing Act: How to Capture Knowledge without Killing It.
ERIC Educational Resources Information Center
Brown, John Seely; Duguid, Paul
2000-01-01
Top-down processes for institutionalizing ideas can stifle creativity. Xerox researchers learned how to combine process-based and practice-based methods in order to disseminate best practices from a community of repair technicians. (JOW)
Office Automation Boosts University's Productivity.
ERIC Educational Resources Information Center
School Business Affairs, 1986
1986-01-01
The University of Pittsburgh has a 2-year agreement designating the Xerox Corporation as the primary supplier of word processing and related office automation equipment, in order to increase productivity and make more efficient use of campus resources. (MLF)
Malcolm Baldrige National Quality Award winners 1989
NASA Technical Reports Server (NTRS)
1990-01-01
The 1989 Malcolm Baldrige Award winners, Milliken and Company and Xerox Business Products and Services, are highlighted in this video. Their strategies for producing quality products are discussed, along with their applications and importance in today's competitive workplace.
Benchmarking study of corporate research management and planning practices
NASA Astrophysics Data System (ADS)
McIrvine, Edward C.
1992-05-01
During 1983-84, Xerox Corporation was undergoing a change in corporate style through a process of training and altered behavior known as Leadership Through Quality. One tenet of Leadership Through Quality was benchmarking, a procedure whereby all units of the corporation were asked to compare their operation with the outside world. As a part of the first wave of benchmark studies, Xerox Corporate Research Group studied the processes of research management, technology transfer, and research planning in twelve American and Japanese companies. The approach taken was to separate `research yield' and `research productivity' (as defined by Richard Foster) and to seek information about how these companies sought to achieve high- quality results in these two parameters. The most significant findings include the influence of company culture, two different possible research missions (an innovation resource and an information resource), and the importance of systematic personal interaction between sources and targets of technology transfer.
78 FR 771 - Notice of Determinations Regarding Eligibility To Apply for Worker Adjustment Assistance
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-04
... Technologies, Eco Energy Solutions. 81,965: Melco Engraving, Inc., Rochester Hills, MI. 81,975: Xerox Corporation, Solid Ink Development Group, Global Technology Development Group, Wilsonville, OR. I hereby ..., HCL Technologies Limited.
Data-Driven Simulation-Enhanced Optimization of People-Based Print Production Service
NASA Astrophysics Data System (ADS)
Rai, Sudhendu
This paper describes a systematic six-step data-driven simulation-based methodology for optimizing people-based service systems on a large distributed scale that exhibit high variety and variability. The methodology is exemplified through its application within the printing services industry, where it has been successfully deployed by Xerox Corporation across small, mid-sized, and large print shops, generating over 250 million in profits across the customer value chain. Each step of the methodology is described in detail: co-development and testing of innovative concepts in partnership with customers; development of software and hardware tools to implement the innovative concepts; establishment of work processes and practices for customer engagement and service implementation; creation of training and infrastructure for large-scale deployment; integration of the innovative offering within the framework of existing corporate offerings; and lastly, monitoring and deployment of the financial and operational metrics for estimating the return on investment and continually renewing the offering.
Instructional Television In Industry (ITVI): A Survey.
ERIC Educational Resources Information Center
Stasheff, Edward; Lavi, Aryeh
Fifteen industrial organizations were surveyed about their use of instructional television (ITV) in their educational programs for employees. The firms surveyed included Xerox Corporation, RCA Corporation, General Electric Company, International Telephone and Telegraph, Lockheed Aircraft Corporation, International Business Machines Corporation, etc.…
Technology, Time, and Participation: How a Principal Supports Teachers.
ERIC Educational Resources Information Center
McCarthy, Robert B.
1985-01-01
Argues that administrations might better support teachers if they provided: (1) up-to-date technology, such as Xerox machines; (2) "guilt-free" time for curriculum work or special projects; and (3) more opportunities for teacher participation in school decision making. (KH)
Where Is the Xerox Corporation of the LIS Sector?
ERIC Educational Resources Information Center
Gilchrist, Alan; Brockman, John
1996-01-01
Discusses barriers to the implementation of quality management in the library and information science sector in Europe. Topics include Total Quality Management and other business experiences, an information quality infrastructure, supplier/customer relations, customer satisfaction, and a European Quality Model. (LRW)
A Survey of Electronic Color Printer Technologies
NASA Astrophysics Data System (ADS)
Starkweather, Gary K.
1989-08-01
Electronic printing in black and white has now come of age. Both high and low speed laser printers now heavily populate the electronic printing marketplace. On the high end of the market, the Xerox 9700 printer is the market dominator while the Canon LBP-SX and CX engines dominate the low end of the market. Clearly, laser printers are the predominant monochrome electronic printing technology. Ink jet is now beginning to engage the low end printer market but still fails to attain laser printer image quality. As yet, ink jet is not a serious contender for the substantial low end laser printer marketplace served by Apple Computer's LaserWriter II and Hewlett-Packard's LaserJet printers. Laser printing generally dominates because of its cost/performance as well as the reliability of the cartridge serviced low end printers.
ERIC Educational Resources Information Center
Livingston-Webber, Joan
According to Lawrence Chua, "zines" are "xeroxed broadsides" which "make marginality their starting point, empowering voices excluded from the slicker journals." According to "Ms," they are "downsized stapled rags...often defiantly tasteless." A subgenre of zines, "Grrrl zines" are those written, produced, and distributed by young women, usually…
The Pericles Space Case: Preserving Earth Observation Data for the Future
NASA Astrophysics Data System (ADS)
Muller, C.; Pandey, P.; Pericles Consortium
2016-08-01
PERICLES (Promoting and Enhancing the Reuse of Information throughout the Content Lifecycle exploiting Evolving Semantics) is an FP7 project started in February 2013. It aims to preserve large and complex data sets by design. PERICLES is coordinated by King's College London, UK; its partners are the University of Borås (Sweden), CERT (Greece), DotSoft (Greece), Georg-August-Universität Göttingen (Germany), the University of Liverpool (UK), Space Application Services (Belgium), XEROX France, and the University of Edinburgh (UK). Two additional partners provide the case studies: Tate Gallery (UK) brings the digital art and media case study, and B.USOC (Belgian Users Support and Operations Centre) brings the space science case study.
Integrated Communications at America's Leading Total Quality Management Corporations.
ERIC Educational Resources Information Center
Gronstedt, Anders
1996-01-01
Examines how to create organizational processes that allow communication professionals with a variety of expertise to support each other through coordination and integration. Studies eight of America's leading total quality management corporations, including AT&T, Federal Express, Saturn, and Xerox. Explores how various total quality…
TQM for Professors and Students.
ERIC Educational Resources Information Center
Bateman, George R.; Roberts, Harry V.
This paper offers suggestions on how individual faculty can apply Total Quality Management (TQM) practices to their teaching. In particular the paper describes the experiences and lessons learned by two business school faculty members who took to heart the "Galvin Challenge," Bob Galvin's challenge to professors at the Xerox Quality…
Can Man Control His Biological Evolution? A Symposium on Genetic Engineering. Xeroxing Human Beings
ERIC Educational Resources Information Center
Freund, Paul A.
1972-01-01
If the aim of new research is to improve the genetic inheritance of future generations, then it needs to be established who should decide what research is to be done. Positive and negative eugenics need to be considered thoroughly. (PS)
Science and Technology Test Mining: Disruptive Technology Roadmaps
2003-07-23
integrating breakthrough technologies to maintain the U.S.'s technological advantage and the role of naval engineers in fostering and managing innovation. It... highlights these. Loutfy, R., Belkhir, L., "Managing innovation at Xerox," Research-Technology Management, 44:4, July-Aug 2001. The careful and painstaking
A Target That Beckons--A Man on the Moon.
ERIC Educational Resources Information Center
Wiler, Ann L.
1994-01-01
Examines the significance and uses (for a company) of a vision statement, how to set one, and what to include. Illustrates this by discussing the visions for Xerox Corporation over the last 40 years, and John F. Kennedy's national vision of landing a man on the moon. (SR)
Speaking Personally--With John Seely Brown
ERIC Educational Resources Information Center
American Journal of Distance Education, 2008
2008-01-01
This article presents an interview with John Seely Brown, a visiting scholar at the University of Southern California and a former chief scientist of Xerox Corporation and director of its Palo Alto Research Center (PARC)--a position he held for nearly two decades. While head of PARC, Brown expanded the role of corporate research to include such…
Break-Even Point for a Proof Slip Operation
ERIC Educational Resources Information Center
Anderson, James F.
1972-01-01
Break-even analysis is applied to determine what magnitude of titles added per year is sufficient to utilize economically Library of Congress proof slips and a Xerox 914 copying machine in the cataloging operation of a library. A formula is derived, and an example of its use is given. (1 reference) (Author/SJ)
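A hedged sketch of the kind of break-even formula the article derives: the proof-slip and Xerox 914 route pays off once a fixed equipment cost is spread over enough titles per year that its lower per-title cost offsets it. The cost figures below are placeholders, not the article's numbers:

```python
# Illustrative break-even calculation (all dollar figures are assumed, not the
# article's data): titles per year at which proof-slip cataloging with a copier
# equals fully manual cataloging.
def break_even_titles(fixed_cost, cost_manual, cost_proof_slip):
    """Annual titles at which the two cataloging methods cost the same."""
    return fixed_cost / (cost_manual - cost_proof_slip)

print(break_even_titles(fixed_cost=1200.0, cost_manual=3.50, cost_proof_slip=1.25))
# -> about 533 titles per year under these assumed costs
```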
Electronic Paper Turns the Page.
ERIC Educational Resources Information Center
Mann, Charles C.
2001-01-01
Suggests that, rather than the electronic book, the technology that is most likely to transform reading and writing will be electronic paper (e-paper). Traces the evolution of e-paper from its prototype created by Xerox PARC's Nick Sheridon in 1975 to the E Ink/Lucent e-paper made from e-ink and plastic transistors. Highlights future…
Peel-and-Stick Sensors Powered by Directed RF Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalau-Keraly, Christopher; Daniel, George; Lee, Joseph
PARC, a Xerox Company, is developing a low-cost system of peel-and-stick wireless sensors that will enable widespread building environment sensor deployment with the potential to deliver up to 30% energy savings. The system is embodied by a set of RF hubs that provide power to automatically located sensor nodes, and relay data wirelessly to the building management system (BMS). The sensor nodes are flexible electronic labels powered by rectified RF energy transmitted by an RF hub and can contain multiple printed and conventional sensors. The system design overcomes limitations in wireless sensors related to power delivery, lifetime, and cost by eliminating batteries and photovoltaic devices. Sensor localization is performed automatically by the inclusion of a programmable multidirectional antenna array in the RF hub. Comparison of signal strengths while the RF beam is swept allows for sensor localization, reducing installation effort and enabling automatic recommissioning of sensors that have been relocated, overcoming a significant challenge in building operations. PARC has already demonstrated wireless power and temperature data transmission up to a distance of 20 m with less than one minute between measurements, using power levels well within the FCC regulation limits in the 902-928 MHz ISM band. The sensor’s RF energy harvesting antenna achieves high performance with dimensions below 5 cm x 9 cm.
RF Energy Harvesting Peel-and-Stick Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalau-Keraly, Christopher; Schwartz, David; Daniel, George
PARC, a Xerox Company, is developing a low-cost system of peel-and-stick wireless sensors that will enable widespread building environment sensor deployment with the potential to deliver up to 30% energy savings. The system is embodied by a set of RF hubs that provide power to the automatically located sensor nodes, and relay data wirelessly to the building management system (BMS). The sensor nodes are flexible electronic labels powered by rectified RF energy transmitted by an RF hub and can contain multiple printed and conventional sensors. The system design overcomes limitations in wireless sensors related to power delivery, lifetime, and cost by eliminating batteries and photovoltaic devices. Sensor localization is performed automatically by the inclusion of a programmable multidirectional antenna array in the RF hub. Comparison of signal strengths when the RF beam is swept allows for sensor localization, further reducing installation effort and enabling automatic recommissioning of sensors that have been relocated, overcoming a significant challenge in building operations. PARC has already demonstrated wireless power and temperature data transmission up to a distance of 20 m with a duty cycle of less than a minute between measurements, using power levels well within the FCC regulation limits in the 902-928 MHz ISM band. The sensor’s RF energy harvesting antenna dimensions were less than 5 cm x 9 cm, demonstrating the possibility of a small form factor for the sensor nodes.
NASA Technical Reports Server (NTRS)
1981-01-01
Preparation for the Apollo Soyuz mission entailed large-scale informational exchange that was accomplished by a computerized translation system. Based on commercial machine translation technology, a system known as SYSTRAN II was developed by LATSEC, Inc. and the World Translation Company of Canada. This system increases the output of a human translator by five to eight times, affording cost savings by allowing a large increase in document production without hiring additional people. Extra savings accrue from automatic production of camera-ready copy. Applications include translation of service manuals, proposals and tenders, planning studies, catalogs, lists of parts and prices, textbooks, technical reports, and education/training materials. The system is operational for six language pairs. SYSTRAN users include Xerox Corporation, General Motors of Canada, Bell Northern Research of Canada, the U.S. Air Force and the European Commission. The company responsible for the production of SYSTRAN II has changed its name to SYSTRAN.
Genericization: A Theory of Semantic Broadening in the Marketplace.
ERIC Educational Resources Information Center
Clankie, Shawn M.
2000-01-01
Genericization theory developed as a response to claims from outside of linguistics that generic use in brand names (for example, using Kleenex as a generic noun for all facial tissues, or Xerox for all photocopiers) is the result of marketing factors or misuse by consumers. This paper examines the linguistic factors that create an environment…
ERIC Educational Resources Information Center
Kenney, Anne R.; Personius, Lynne K.
In cooperation with the Commission on Preservation and Access, Xerox Corporation, Sun Microsystems, Inc., and the New York State Program for the Conservation and Preservation of Library Research Materials, Cornell University (New York) studied and established the effectiveness of digital technology to preserve and make available research library…
A Componential Approach to Training Reading Skills.
1983-03-17
[Extraction fragment: tables of one- and two-syllable word lists with mixed vowels, followed by a distribution-list excerpt including the ERIC Facility-Acquisitions and Dr. John S. Brown, Xerox Palo Alto Research Center, 3333 Coyote Road, Palo Alto, CA.]
ERIC Educational Resources Information Center
Kapoor, Kanta
2010-01-01
Purpose: The purpose of this paper is to quantify the use of electronic journals in comparison with the print collections in the Guru Gobind Singh Indraprastha University Library. Design/methodology/approach: A detailed analysis was made of the use of lending services, the Xerox facility and usage of electronic journals such as Science Direct,…
Context-Aware Mobile Role Playing Game for Learning--A Case of Canada and Taiwan
ERIC Educational Resources Information Center
Lu, Chris; Chang, Maiga; Kinshuk; Huang, Echo; Chen, Ching-Wen
2014-01-01
The research presented in this paper is part of a 5-year renewable national research program in Canada, namely the NSERC/iCORE/Xerox/Markin research chair program that aims to explore possibilities of adaptive mobile learning and to provide learners with a learning environment which facilitates personalized learning at any time and any place. One…
33 CFR Appendix C to Part 230 - Notice of Intent To Prepare a Draft EIS
Code of Federal Regulations, 2011 CFR
2011-07-01
... Draft EIS C Appendix C to Part 230 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCEDURES FOR IMPLEMENTING NEPA Pt. 230, App. C Appendix C to Part 230... copies sent forward must be signed in ink. A xerox copy of the signature is not allowed. c. A six-digit...
33 CFR Appendix C to Part 230 - Notice of Intent To Prepare a Draft EIS
Code of Federal Regulations, 2013 CFR
2013-07-01
... Draft EIS C Appendix C to Part 230 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCEDURES FOR IMPLEMENTING NEPA Pt. 230, App. C Appendix C to Part 230... copies sent forward must be signed in ink. A xerox copy of the signature is not allowed. c. A six-digit...
33 CFR Appendix C to Part 230 - Notice of Intent To Prepare a Draft EIS
Code of Federal Regulations, 2012 CFR
2012-07-01
... Draft EIS C Appendix C to Part 230 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCEDURES FOR IMPLEMENTING NEPA Pt. 230, App. C Appendix C to Part 230... copies sent forward must be signed in ink. A xerox copy of the signature is not allowed. c. A six-digit...
1984-01-01
[Table fragment: contract listings by city (Almont, Alpena, Ann Arbor) naming firms such as Almont Welding Works Inc., Fletcher Motel, Harrys Oil Co., Koss Industries Inc., Xerox Corporation, and Advertel Communication Inc., with dollar totals.]
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
This document reports on a congressional hearing on the impact of technological advancements on employment. Testimony includes statements and prepared statements from individuals representing conservation of human resources, Columbia University; United Steelworkers of America; The Brookings Institution; Xerox Corporation; and Panel on Technology…
Medium Fidelity Simulation of Oxygen Tank Venting
NASA Technical Reports Server (NTRS)
Sweet, Adam; Kurien, James; Lau, Sonie (Technical Monitor)
2001-01-01
The item to be cleared is a medium-fidelity software simulation model of a vented cryogenic tank. Such tanks are commonly used to transport cryogenic liquids such as liquid oxygen via truck, and have appeared on liquid-fueled rockets for decades. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model generates simulated readings for the tank pressure and temperature as the simulated cryogenic liquid boils off and is vented. Failures (such as a broken vent valve) can be injected into the simulation to produce readings corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated readings. This model does not contain any encryption software nor can it perform any control tasks that might be export controlled.
ERIC Educational Resources Information Center
Markowitsch, Jorg; Kollinger, Iris; Warmerdam, John; Moerel, Hans; Konrad, John; Burell, Catherine; Guile, David
A comparative analysis of human resources development and management in the subsidiaries of three multinational companies (Xerox, Glaxo Wellcome, and AXA Nordstern Colonia) was conducted in these three European Union (EU) member states: Austria, the United Kingdom, and the Netherlands. Case studies were used, focusing on competence needs and…
Assessment of Industrial Attitudes Toward Generic Research Needs in Tribology
1985-09-01
[Survey fragment: companies contacted included Ford, Chrysler, Pratt and Whitney, Williams International, Hughes, Xerox, and others; the correlation of the 4-ball test with the car tests was found interesting and valuable, with some reservations about its applicability over wider conditions; respondents were generally interested in the DOE work presented, least interested in high-temperature applications, and noted failures resulting from excessive vibration.]
The H-Metaphor as a Guideline for Vehicle Automation and Interaction
NASA Technical Reports Server (NTRS)
Flemisch, Frank O.; Adams, Catherine A.; Conway, Sheila R.; Goodrich, Ken H.; Palmer, Michael T.; Schutte, Paul C.
2003-01-01
Good design is not free of form. It does not necessarily happen through a mere sampling of technologies packaged together, through pure analysis, or just by following procedures. Good design begins with inspiration and a vision, a mental image of the end product, which can sometimes be described with a design metaphor. A successful example from the 20th century is the desktop metaphor, which took a real desktop as an orientation for the manipulation of electronic documents on a computer. Initially defined by Xerox, then refined by Apple and others, it could be found on almost every computer by the turn of the 21st century. This paper sketches a specific metaphor for the emerging field of highly automated vehicles, their interactions with human users and with other vehicles. In the introduction, general questions on vehicle automation are raised and related to the physical control of conventional vehicles and to the automation of some late 20th century vehicles. After some words on design metaphors, the H-Metaphor is introduced. More details of the metaphor's source are described, and their application to human-machine interaction, automation, and management of intelligent vehicles is sketched. Finally, risks and opportunities to apply the metaphor to technical applications are discussed.
A High-Average-Power Free Electron Laser for Microfabrication and Surface Applications
NASA Technical Reports Server (NTRS)
Dylla, H. F.; Benson, S.; Bisognano, J.; Bohn, C. L.; Cardman, L.; Engwall, D.; Fugitt, J.; Jordan, K.; Kehne, D.; Li, Z.;
1995-01-01
CEBAF has developed a comprehensive conceptual design of an industrial user facility based on a kilowatt ultraviolet (UV) (160-1000 nm) and infrared (IR) (2-25 micron) free electron laser (FEL) driven by a recirculating, energy recovering 200 MeV superconducting radio frequency (SRF) accelerator. FEL users, CEBAF's partners in the Laser Processing Consortium, including AT&T, DuPont, IBM, Northrop Grumman, 3M, and Xerox, are developing applications such as metal, ceramic, and electronic material micro-fabrication and polymer and metal surface processing, with the overall effort leading to later scale-up to industrial systems at 50-100 kW. Representative applications are described. The proposed high-average-power FEL overcomes limitations of conventional laser sources in available power, cost-effectiveness, tunability, and pulse structure.
A high-average-power FEL for industrial applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dylla, H.F.; Benson, S.; Bisognano, J.
1995-12-31
CEBAF has developed a comprehensive conceptual design of an industrial user facility based on a kilowatt UV (150-1000 nm) and IR (2-25 micron) FEL driven by a recirculating, energy-recovering 200 MeV superconducting radio-frequency (SRF) accelerator. FEL users, CEBAF's partners in the Laser Processing Consortium (including AT&T, DuPont, IBM, Northrop-Grumman, 3M, and Xerox), plan to develop applications such as polymer surface processing, metals and ceramics micromachining, and metal surface processing, with the overall effort leading to later scale-up to industrial systems at 50-100 kW. Representative applications are described. The proposed high-average-power FEL overcomes limitations of conventional laser sources in available power, cost-effectiveness, tunability and pulse structure. 4 refs., 3 figs., 2 tabs.
A Love Supreme--Riffing on the Standards: Placing Ideas at the Center of High Stakes Schooling
ERIC Educational Resources Information Center
Kohl, Herbert
2006-01-01
The Fake Book is a square spiral bound Xeroxed book, about 7" by 7", maybe 250 pages long. It's all music--the notes, usually in C or B minor, of hundreds of standard tunes, jazz, pop, and every once in a while, classical. The Fake Book and all of its variants provide an evolving canon of tunes that defines a set of common standards for…
First Annual FAA General Aviation Forecast Conference Proceedings
1991-03-01
[Attendee-directory fragment listing conference participants and contact details, including representatives of the Port of Portland, Muller, Sirhall & Associates, Inc., the University of Illinois, Xerox (a manager of maintenance at Westchester Airport), the FAA in Washington, DC, and Robert Yancey of BFGoodrich Aerospace & Defense Division, Troy, OH.]
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Special Committee on Aging.
Flexible retirement policies have worked very well for four major United States corporations, accordinq to testimony of their executives during the second part of a U.S. Senate hearing on work after age 65, conducted in Washington, D.C., in May, 1980. Executives of Xerox, Polaroid, Bankers Life and Casualty, and Atlantic Richfield told the special…
Speech Recognition: Acoustic-Phonetic Knowledge Acquisition and Representation.
1987-09-25
[Extraction fragment from the abstract and acknowledgments: the release duration is the voice onset time (VOT); for this investigation, alveolar flaps (as in "butter") and glottalized /t/'s were considered; a framework for acoustic-phonetic analysis is discussed; the test sentences were spoken by several female and male talkers and contained a number of semivowels; the work was supported in part by a Xerox Fellowship; a table lists features which characterize the semivowels.]
Becker, Cinda
2003-05-05
The hospital outlook might seem bleak to some investors, but a bevy of diverse companies are seeking the financial cure they believe the healthcare industry can provide. Everyone from carpet companies to trucking firms has been drawn to healthcare because of its seemingly endless consumer demand. Jeannine Rossignol, left, senior marketing manager at Xerox Corp., demonstrates a product at the recent VHA Leadership Conference in Boston.
Evaluation of the MTF for a-Si:H imaging arrays
NASA Astrophysics Data System (ADS)
Yorkston, John; Antonuk, Larry E.; Seraji, N.; Huang, Weidong; Siewerdsen, Jeffrey H.; El-Mohri, Youcef
1994-05-01
Hydrogenated amorphous silicon imaging arrays are being developed for numerous applications in medical imaging. Diagnostic and megavoltage images have previously been reported and a number of the intrinsic properties of the arrays have been investigated. This paper reports on the first attempt to characterize the intrinsic spatial resolution of the imaging pixels on a 450 micrometer pitch, n-i-p imaging array fabricated at Xerox PARC. The pre-sampled modulation transfer function was measured by scanning an approximately 25 micrometer wide slit of visible wavelength light across a pixel in both the DATA and FET directions. The results show that the response of the pixel in these orthogonal directions is well described by a simple model that accounts for asymmetries in the pixel response due to geometric aspects of the pixel design.
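The pre-sampled MTF is conventionally obtained as the normalized Fourier transform magnitude of the measured line spread function from such a slit scan. The sketch below shows that textbook computation only; the sample spacing and synthetic LSF are hypothetical, and this is not necessarily the authors' exact processing chain.

```python
# Generic pre-sampled MTF computation from a slit-scan line spread function (LSF):
# MTF(f) = |FFT(LSF)|, normalized to its zero-frequency value. Standard textbook
# procedure; not claimed to match the authors' exact analysis.
import numpy as np

def presampled_mtf(lsf: np.ndarray, step_um: float):
    """Return spatial frequencies (cycles/mm) and normalized MTF from an LSF."""
    lsf = lsf / lsf.sum()                                   # normalize LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                           # force MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=step_um / 1000.0)   # scan step in mm
    return freqs, mtf

if __name__ == "__main__":
    x = np.arange(-20, 21) * 25.0                 # hypothetical 25 um scan steps
    lsf = np.exp(-0.5 * (x / 150.0) ** 2)         # hypothetical Gaussian-like LSF
    f, m = presampled_mtf(lsf, step_um=25.0)
    print(np.round(m[:5], 3))
```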
The Excessive Profits of Defense Contractors: Evidence and Determinants
2012-02-08
[Table fragment: company listing with revenue figures and tickers, including Unilever NV ($112,089,508), Moog, Inc., Alon USA LP, Coca-Cola Enterprises Inc., Xerox Corp. ($91,275,424), Johnson & Johnson, and American...; the accompanying text concerns inferring the defense contractors' normal profitability, treating defense contractors (as a whole or as individual firms) and the broad market as two]
1994-01-01
[OCR-damaged fragment: departmental affiliations (Computer Science and Engineering, Mathematics, Statistics); an acknowledgment that the work was supported in part by Xerox Corporation, IBM, Hewlett-Packard, AT&T Bell Labs, and Digital Equipment C...; and the statement that a large value of Crep indicates that a sparse representation is to be strongly preferred over a dense one, usually at the expense of degrading...]
1993-09-01
against Japanese competitors (Camp, 1989:6; Geber, 1990:38). Due to their incredible success in controlling costs, Xerox adopted the technique company...important first step to benchmarking outside the organization (Geber, 1990:40). Due to availability of information and cooperation of partners, this is...science (Geber, 1990:42). Several avenues can be pursued to find best-in-class companies. Search business publications for companies frequently
1978-08-01
Engineer School of Application in 1885; and the School for Cavalry and Light Artillery was established in 1892. This period of intellectual ferment has...one of tremendous ferment and dissatisfaction among the younger officers who had won a victory by the institution of the “plucking” boards. However...1954, pp. 637-646, and July 1954, pp. 761-771. Vinegar, J. W. (Manager of Training and Development for Xerox), “Personnel Development Through VERT
1993-04-16
[Front-matter and text fragments: author and subject index entries; a preface noting that the symposium showcased advances in processing technology; a passage stating that layers of this thickness are still in advance of current fabrication technology but now appear to be within the bounds of possibility (Figure 6); and an acknowledgment of Krusor of Xerox PARC for technical assistance, with support in part from the Department of Commerce Advanced Technology Program.]
Analyst-centered models for systems design, analysis, and development
NASA Technical Reports Server (NTRS)
Bukley, A. P.; Pritchard, Richard H.; Burke, Steven M.; Kiss, P. A.
1988-01-01
Much has been written about the possible use of Expert Systems (ES) technology for strategic defense system applications, particularly for battle management algorithms and mission planning. It is proposed that ES (or, more accurately, Knowledge-Based System (KBS)) technology can be used in situations for which no human expert exists, namely to create design and analysis environments that allow an analyst to rapidly pose many different possible problem resolutions in game-like fashion and to then work through the solution space in search of the optimal solution. Portions of such an environment exist for expensive AI hardware/software combinations such as the Xerox LOOPS and Intellicorp KEE systems. Efforts are discussed to build an analyst-centered model (ACM) using an ES programming environment, ExperOPS5, for a simple missile system tradeoff study. By analyst-centered, it is meant that the focus of learning is for the benefit of the analyst, not the model. The model's environment allows the analyst to pose a variety of what-if questions without resorting to programming changes. Although not an ES per se, the ACM would allow for a design and analysis environment that is much superior to that of current technologies.
A Method of Utilizing Small Astronomical Telescopes in Earth Science Instruction
NASA Astrophysics Data System (ADS)
Kim, Kyung-Im; Lee, Young Bom
1985-12-01
Four observational astronomical items have been pilot-tested with a 150 mm refracting telescope in order to lay out the detailed procedures for the suggested (inquiry) activities listed in the high school earth science curriculum and to contrive adequate instructions for students, stressing how to make proper treatments of the collected materials. The tested items were sunspots' motion, the size of lunar craters, the Galilean satellites' revolution, and the galactic distribution of stars. The following series of activities is suggested with respect to the way of collecting observational data and of giving proper instruction to students in class: 1) photography and other materials be made by the teacher and/or an extracurricular group of students; 2) replicas (xeroxed copies, photographs, or slides) be made from the collected materials, so that they are available to all the students in class; 3) quantitative analyses be taken as students' activities
Answering Questions from Oceanography Texts: Learner, Task and Text Characteristics.
1987-09-15
[Extraction fragment: a reference to Stowe, K. (1983), 2nd edition, New York, NY: John Wiley & Sons, followed by a distribution-list excerpt including Dr. Ed Aiken (Navy Personnel R&D Center, San Diego, CA), Leo Seltracchi (U.S. Nuclear Regulatory Commission, Washington, DC), Dr. John S. Brown (Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA), Dr. John R. Anderson, Dr. Mark H. Bickhard, and Dr. Ann Brown.]
High-quality digital color xerography
NASA Astrophysics Data System (ADS)
Takiguchi, Koichi
1993-06-01
Image noise, tone reproduction, color reproduction, fine line reproduction, and OHP performance are the most important characteristics for a high quality color copier. Technologies enabling such quality include the use of fine toner, a halftone algorithm to ensure good highlight reproduction, a soft roll fuser with good release performance, a smooth surface, and high thermal conductivity, white and smooth paper, and selection of a coating material for the surface layer of the OHP sheets. These technologies are integrated in the Fuji Xerox 'A-Color' product. Utilizing 7 micrometer color toner, 'A-Color' can make very high quality color copies.
1983-01-01
[Table fragment: contractor listings by location in Alabama (Anniston, Bay Minette, Fort Rucker) naming firms such as Weston Roy F, Williams Construction Co Inc, Ernest Construction Co, Syntex Dental Co, and Xerox Corp, with dollar totals.]
Systems engineering implementation in the preliminary design phase of the Giant Magellan Telescope
NASA Astrophysics Data System (ADS)
Maiten, J.; Johns, M.; Trancho, G.; Sawyer, D.; Mady, P.
2012-09-01
Like many telescope projects today, the 24.5-meter Giant Magellan Telescope (GMT) is truly a complex system. The primary and secondary mirrors of the GMT are segmented and actuated to support two operating modes: natural seeing and adaptive optics. GMT is a general-purpose telescope supporting multiple science instruments operated in those modes. GMT is a large, diverse collaboration, and development includes geographically distributed teams. The need to implement good systems engineering processes for managing the development of systems like GMT becomes imperative. The management of the requirements flow down from the science requirements to the component level requirements is an inherently difficult task in itself. The interfaces must also be negotiated so that the interactions between subsystems and assemblies are well defined and controlled. This paper will provide an overview of the systems engineering processes and tools implemented for the GMT project during the preliminary design phase. This will include requirements management, documentation and configuration control, interface development and technical risk management. Because of the complexity of the GMT system and the distributed team, using web-accessible tools for collaboration is vital. To accomplish this, GMTO has selected three tools: Cognition Cockpit, Xerox Docushare, and Solidworks Enterprise Product Data Management (EPDM). Key to this is the use of Cockpit for managing and documenting the product tree, architecture, error budget, requirements, interfaces, and risks. Additionally, drawing management is accomplished using an EPDM vault. Docushare, a documentation and configuration management tool, is used to manage workflow of documents and drawings for the GMT project. These tools electronically facilitate collaboration in real time, enabling the GMT team to track, trace and report on key project metrics and design parameters.
NASA Astrophysics Data System (ADS)
Mihlan, G. J.; Ungers, L. J.; Smith, R. K.; Mitchell, R. I.; Jones, J. H.
1983-05-01
A preliminary control technology assessment survey was conducted at the facility which manufactures N-channel metal oxide semiconductor (NMOS) integrated circuits. The facility has industrial hygiene review procedures for evaluating all new and existing process equipment. Employees are trained in safety, use of personal protective equipment, and emergency response. Workers potentially exposed to arsenic are monitored for urinary arsenic levels. The facility should be considered a candidate for detailed study based on the diversity of process operations encountered and the use of state-of-the-art technology and process equipment.
1981-05-01
be allocated to targets on the battlefield and in the rear area. The speaker describes the VECTOR I/NUCLEAR model, a combination of the UNICORN target...outlined. UNICORN is compatible with VECTOR 1 in level of detail. It is an expected value damage model and uses linear programming to optimize the...and a growing appreciation for the power of simulation in addressing large, complex problems, it was only a few short years before these games had
Infrared/submillimeter optical properties data base
NASA Technical Reports Server (NTRS)
Alley, Phillip W.
1989-01-01
The general goal was to build a data base containing optical properties, such as reflectance, transmittance, and refractive index, in the far infrared to submillimeter wavelength region. This data base would be limited to selected crystalline materials and temperatures between 300 and 2 K. The selected materials were: lithium, lead, and strontium; the bromides of potassium and thallium; the carbides of silicon and tungsten; and the materials KRS5, KRS6, diamond, and sapphire. Last summer, barium fluoride was selected as the prototype material for building the data base. This summer the literature search and preparation of the data for barium fluoride were completed. In addition, the literature search for data related to the compounds mentioned was completed. The current status is that barium fluoride is in a form suitable for a NASA internal publication. The papers containing the data on the other materials were xeroxed and they are ready to be reduced. On the reverse side, the top figure is a sample combination of data for the index of refraction at 300 K. The lower figure shows the transmittance vs wavelength at 300 and 80 K. These figures are a sample of many which were developed. Since barium fluoride was studied more than most of the materials listed above, it is clear that additional measurements should be made to fill in the gaps present in both temperature and wavelength data.
NASA Astrophysics Data System (ADS)
Chan, Hau P.; Bao, Nai-Keng; Kwok, Wing O.; Wong, Wing H.
2002-04-01
The application of the Digital Pixel Hologram (DPH) as an anti-counterfeiting technology for products such as commercial goods, credit cards, identity cards, and paper money banknotes is growing in importance. It offers many advantages over other anti-counterfeiting tools, including a high diffraction effect, high resolving power, resistance to photocopying using two-dimensional Xeroxes, and the potential for mass production of patterns at very low cost. Recently, we have succeeded in fabricating high definition DPHs with resolution higher than 2500 dpi for the purpose of anti-counterfeiting by applying modern optical diffraction theory to computer pattern generation techniques with the assistance of electron beam lithography (EBL). In this paper, we introduce five levels of encryption techniques which can be embedded in the design of such DPHs to further improve their anti-counterfeiting performance at negligible added cost. The techniques involved, in ascending order of decryption complexity, are Gray-level Encryption, Pattern Encryption, Character Encryption, Image Modification Encryption and Codebook Encryption. A Hong Kong Special Administrative Region (HKSAR) DPH emblem was fabricated at a resolution of 2540 dpi using the facilities housed in our Optoelectronics Research Center. This emblem will be used as an illustration to discuss each encryption idea in detail during the conference.
High-Performance Computing Data Center | Energy Systems Integration Facility | NREL
The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing…
Fempress: a communication strategy for women.
Santa Cruz, A
1995-02-01
In 1981, two Chilean women living in exile in Mexico started Fempress, the Latin American Media Network which puts out a monthly magazine, operates a press service on women's issues, and provides a Radio Press Service which covers Latin America. The monthly magazine, Mujer-Fempress, started out as 200 copies of a xeroxed bulletin and now runs 5000 copies per issue. This magazine has played an important role in achieving communication within the far-flung women's movement in Latin America. Early in its existence, Fempress began to concentrate on creating alternative media channels as a means of empowering women through raising awareness and stimulating change. Since the market will not support such alternative work, Fempress is dependent upon international cooperation for funding.
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
From photons to big-data applications: terminating terabits
2016-01-01
Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573
From photons to big-data applications: terminating terabits.
Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A
2016-03-06
Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.
Data processing for water monitoring system
NASA Technical Reports Server (NTRS)
Monford, L.; Linton, A. T.
1978-01-01
The water monitoring data acquisition system is structured around a central computer that controls sampling and sensor operation, and analyzes and displays data in real time. The unit is essentially separated into two systems: a computer system, and a hard-wire backup system which may function separately or with the computer.
The snow system: A decentralized medical data processing system.
Bellika, Johan Gustav; Henriksen, Torje Starbo; Yigzaw, Kassaye Yitbarek
2015-01-01
Systems for large-scale reuse of electronic health record data are claimed to have the potential to transform the current health care delivery system. In principle, three alternative solutions for reuse exist: centralized, data warehouse, and decentralized solutions. This chapter focuses on the decentralized system alternative. Decentralized systems may be categorized into approaches that move data to enable computations and approaches that move computations to where the data is located. We describe a system that moves computations to where the data is located. Only this kind of decentralized solution has the capability to become an ideal system for reuse, as the decentralized alternative enables computation and reuse of electronic health record data without moving or exposing the information to outsiders. This chapter describes the Snow system, which is a decentralized medical data processing system, its components, and how it has been used. It also describes the requirements this kind of system needs to support to become sustainable and successful in recruiting voluntary participation from health institutions.
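The "move the computation to the data" pattern that the chapter contrasts with centralized reuse can be sketched very simply: each site runs the query locally and shares only an aggregate, so record-level data never leaves its institution. The institutions, records, and query below are hypothetical, and this is not the Snow system's actual protocol.

```python
# Sketch of moving the computation to the data: every site evaluates the query
# against its own records and returns only a count; only aggregates are shared.
# All data and the query are hypothetical illustrations.

def local_count(records: list[dict], predicate) -> int:
    """Run the computation where the data lives; return only an aggregate."""
    return sum(1 for r in records if predicate(r))

if __name__ == "__main__":
    site_a = [{"age": 70, "dx": "flu"}, {"age": 34, "dx": "cold"}]
    site_b = [{"age": 81, "dx": "flu"}, {"age": 66, "dx": "flu"}]
    query = lambda r: r["dx"] == "flu" and r["age"] >= 65
    # Only the per-site counts (not patient records) are combined centrally.
    total = local_count(site_a, query) + local_count(site_b, query)
    print("influenza cases aged 65+ across sites:", total)
```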
Digital processing of mesoscale analysis and space sensor data
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.
1985-01-01
The mesoscale analysis and space sensor (MASS) data management and analysis system on the research computer system is presented. The MASS data base management and analysis system was implemented on the research computer system which provides a wide range of capabilities for processing and displaying large volumes of conventional and satellite derived meteorological data. The research computer system consists of three primary computers (HP-1000F, Harris/6, and Perkin-Elmer 3250), each of which performs a specific function according to its unique capabilities. The overall tasks performed concerning the software, data base management and display capabilities of the research computer system in terms of providing a very effective interactive research tool for the digital processing of mesoscale analysis and space sensor data is described.
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
Mentat: An object-oriented macro data flow system
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.; Liu, Jane W. S.
1988-01-01
Mentat, an object-oriented macro data flow system designed to facilitate parallelism in distributed systems, is presented. The macro data flow model is a model of computation similar to the data flow model with two principal differences: the computational complexity of the actors is much greater than in traditional data flow systems, and there are persistent actors that maintain state information between executions. Mentat is a system that combines the object-oriented programming paradigm and the macro data flow model of computation. Mentat programs use a dynamic structure called a future list to represent the future of computations.
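Mentat's future list holds results of computations that have not yet completed. The snippet below illustrates the general future concept using Python's standard library only; it is not Mentat's runtime, syntax, or scheduling model.

```python
# Generic illustration of the "future" idea underlying Mentat's future lists:
# a handle to a result that has not been computed yet. Uses Python's standard
# concurrent.futures as a stand-in, not Mentat itself.
from concurrent.futures import ThreadPoolExecutor

def actor(x: int) -> int:
    # Stand-in for a Mentat actor with non-trivial computational complexity.
    return x * x

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as pool:
        future_list = [pool.submit(actor, i) for i in range(8)]  # futures of pending results
        results = [f.result() for f in future_list]              # block only when values are needed
    print(results)
```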
Hybrid data storage system in an HPC exascale environment
Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.
2015-08-18
A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system, including a first burst buffer node and a second burst buffer node, the computer-executable method, system, and computer program product comprising striping data on the first burst buffer node and the second burst buffer node, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node, processing the first portion of the data at the first burst buffer node, and processing the second portion of the data at the second burst buffer node.
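The striping described in the abstract, alternating portions of an I/O request between two burst buffer nodes that each process their own share, can be sketched as follows. The chunk size and node behavior are hypothetical; this is an illustration of the idea, not the patented implementation.

```python
# Sketch of striping an I/O request across two burst buffer nodes: deal
# fixed-size chunks alternately to node 1 and node 2, each of which then
# processes its own portion. Chunk size and contents are hypothetical.

def stripe(data: bytes, chunk_size: int = 4) -> tuple[list[bytes], list[bytes]]:
    """Split data into chunks and deal them alternately to two buffer nodes."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return chunks[0::2], chunks[1::2]   # portions for node 1 and node 2

if __name__ == "__main__":
    node1, node2 = stripe(b"ABCDEFGHIJKLMNOP")
    print(node1)  # [b'ABCD', b'IJKL']  -> processed on burst buffer node 1
    print(node2)  # [b'EFGH', b'MNOP']  -> processed on burst buffer node 2
```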
Hand-held computer operating system program for collection of resident experience data.
Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J
2000-11-01
To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3 Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data among other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database is accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike, than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., financial records, and automated data systems; (ii) The data are free from computational errors and are... records, financial records, and automated data systems; (ii) The data are free from computational errors... records, and automated data systems; (ii) The data are free from computational errors and are internally...
The 1985 pittsburgh conference: a special instrumentation report.
1985-03-29
For the first time in its 36 years of operation, the Pittsburgh Conference and Exposition on Analytical Chemistry and Applied Spectroscopy had a sharp drop in attendance-down 16 percent to 20,731. That loss was attributed to the fact that the meeting was held in New Orleans for the first time, and most of the lost attendees were students and young professionals who had previously come for only 1 day. The number of exhibitors and the number of booths, however, were both up about 15 percent, to 730 and 1856, respectively. A large proportion of that increase was contributed by foreign companies exhibiting for the first time, but there were also some well-known names, such as General Electric and Xerox, making first forays into analytical chemistry. There was also a sharp increase in the number and type of instruments displayed. "The key skill now in analytical chemistry," says Perkin-Elmer president Horace McDonell, Jr., "may be simply finding the right tool to obtain the answers you need." The predominant theme of the show, as it has been for the past few years, was automation of both laboratories and instruments. That trend is having major effects in chemical laboratories, but it is also affecting the instrument companies themselves. At large companies such as Varian, Beckman, and Perkin-Elmer, as much as 50 percent of the research and development budget is now going toward development of software-a much higher percentage than it was even 5 years ago. Another trend in automation also seemed clear at the show. As recently as 2 or 3 years ago, much of the available software for chemistry was designed for Apple and similar computers. Now, the laboratory standard is the IBM PC. As a representative of another company that manufactures computers noted with only slight exaggeration, "There's probably not a booth on the floor that doesn't have one."
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
A computer system for the storage and retrieval of gravity data, Kingdom of Saudi Arabia
Godson, Richard H.; Andreasen, Gordon H.
1974-01-01
A computer system has been developed for the systematic storage and retrieval of gravity data. All pertinent facts relating to gravity station measurements and computed Bouguer values may be retrieved either by project name or by geographical coordinates. Features of the system include visual display in the form of printer listings of gravity data and printer plots of station locations. The retrieved data format interfaces with the format of GEOPAC, a system of computer programs designed for the analysis of geophysical data.
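The two retrieval paths described, by project name or by geographical coordinates, amount to simple filters over station records. The sketch below uses hypothetical field names and example values; it is not the original Fortran-era system.

```python
# Sketch of the two retrieval modes in the abstract: by project name, or by a
# latitude/longitude window. Station fields and values are hypothetical.

def by_project(stations: list[dict], project: str) -> list[dict]:
    return [s for s in stations if s["project"] == project]

def by_window(stations, lat_min, lat_max, lon_min, lon_max):
    return [s for s in stations
            if lat_min <= s["lat"] <= lat_max and lon_min <= s["lon"] <= lon_max]

if __name__ == "__main__":
    stations = [
        {"project": "PROJECT-A", "lat": 24.7, "lon": 46.7, "bouguer_mgal": -112.3},
        {"project": "PROJECT-B", "lat": 21.5, "lon": 39.2, "bouguer_mgal": -64.8},
    ]
    print(by_project(stations, "PROJECT-A"))
    print(by_window(stations, 20.0, 23.0, 38.0, 41.0))
```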
NASA Astrophysics Data System (ADS)
Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey
2018-02-01
At the present stage of computer technology development it is possible to study the properties and processes in complex systems at molecular and even atomic levels, for example, by means of molecular dynamics methods. The most interesting are problems related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example, GRID systems and HPC clusters. Considering such time-consuming computational tasks, the need arises for software for automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems. It requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also quite a problem. It requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of atomistic systems of large volume for further detailed molecular dynamics calculations and computational management for these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.
Non-harmful insertion of data mimicking computer network attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neil, Joshua Charles; Kent, Alexander; Hash, Jr, Curtis Lee
Non-harmful data mimicking computer network attacks may be inserted in a computer network. Anomalous real network connections may be generated between a plurality of computing systems in the network. Data mimicking an attack may also be generated. The generated data may be transmitted between the plurality of computing systems using the real network connections and measured to determine whether an attack is detected.
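The evaluation loop outlined in the abstract, inject clearly labeled synthetic records among real ones and measure whether the detector flags them, can be sketched as below. The detector, record fields, and threshold are hypothetical stand-ins, not the patented method.

```python
# Sketch of the evaluation idea: mix clearly labeled, non-harmful synthetic
# connection records with real ones, then measure whether an anomaly detector
# flags the synthetic ones. Detector and record fields are hypothetical.

def inject(real_events: list[dict], synthetic_events: list[dict]) -> list[dict]:
    return real_events + [dict(e, synthetic=True) for e in synthetic_events]

def detection_rate(events: list[dict], detector) -> float:
    synthetic = [e for e in events if e.get("synthetic")]
    flagged = [e for e in synthetic if detector(e)]
    return len(flagged) / len(synthetic) if synthetic else 0.0

if __name__ == "__main__":
    real = [{"src": "hostA", "dst": "hostB", "count": 3}]
    fake = [{"src": "hostC", "dst": "hostD", "count": 500}]
    toy_detector = lambda e: e["count"] > 100   # hypothetical threshold rule
    print(detection_rate(inject(real, fake), toy_detector))  # 1.0
```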
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend of system architecture in extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, the system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. This work provides a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
The theme of the 1976 convention of the Association for Educational Data Systems (AEDS) was educational data processing and information systems. Special attention was focused on educational management information systems, computer centers and networks, computer assisted instruction, computerized testing, guidance, and higher education. This…
RDTC [Restricted Data Transmission Controller] global variable definitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grambihler, A.J.; O'Callaghan, P.B.
The purpose of the Restricted Data Transmission Controller (RDTC) is to demonstrate a methodology for transmitting data between computers which have different levels of classification. The RDTC does this by logically filtering the data being transmitted between the two computers. This prototype is set up to filter data from the classified computer so that only numeric data is passed to the unclassified computer. The RDTC allows all data from the unclassified computer to be sent to the classified computer. The classified system is referred to as LUA and the unclassified system is referred to as LUB. 9 tabs.
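The one-way filtering rule described here, pass only numeric data from the classified system (LUA) to the unclassified system (LUB), and pass everything in the other direction, is sketched below. This is an illustration of the rule, not the RDTC implementation, and the field values are hypothetical.

```python
# Minimal sketch of the RDTC filtering rule: LUA (classified) -> LUB
# (unclassified) traffic is reduced to numeric fields only; LUB -> LUA traffic
# passes unchanged. Illustration only, not the RDTC code.

def is_numeric(token: str) -> bool:
    try:
        float(token)
        return True
    except ValueError:
        return False

def lua_to_lub(fields: list[str]) -> list[str]:
    """Classified -> unclassified: pass numeric fields only."""
    return [f for f in fields if is_numeric(f)]

def lub_to_lua(fields: list[str]) -> list[str]:
    """Unclassified -> classified: pass everything."""
    return list(fields)

if __name__ == "__main__":
    print(lua_to_lub(["42", "3.14", "PROJECT-NAME", "7e3"]))  # ['42', '3.14', '7e3']
    print(lub_to_lua(["status", "ok", "17"]))
```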
Cloud Computing and Its Applications in GIS
NASA Astrophysics Data System (ADS)
Kang, Cao
2011-12-01
Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one pixel deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. 
It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
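The second article's subdivide-and-layer-wrap strategy splits the raster into sub-images, wraps each with a one-pixel edge layer, processes the tiles independently, and reassembles the result. The sketch below shows only that tiling scaffolding, with SciPy's Euclidean distance transform as a stand-in per-tile worker; it does not reproduce the paper's distributed algorithm, and a plain per-tile distance transform is not globally correct near tile edges (in the paper it is the precomputed distance information carried in the wrapped edge layer that makes the global result correct).

```python
# Scaffolding sketch of "subdivide and layer-wrap": split a raster into tiles
# padded by a one-pixel halo, process each tile independently (here with SciPy's
# Euclidean distance transform as a stand-in worker), crop the halos, and stitch
# the tiles back together. Tiling pattern only; per-tile results are local to
# each padded tile, unlike the paper's edge-layer-corrected global distances.
import numpy as np
from scipy.ndimage import distance_transform_edt

def tile_map(raster: np.ndarray, worker, tile: int = 64, halo: int = 1) -> np.ndarray:
    out = np.empty_like(raster, dtype=float)
    rows, cols = raster.shape
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            r1, c1 = min(r0 + tile, rows), min(c0 + tile, cols)
            rp0, cp0 = max(r0 - halo, 0), max(c0 - halo, 0)      # padded bounds
            rp1, cp1 = min(r1 + halo, rows), min(c1 + halo, cols)
            padded = worker(raster[rp0:rp1, cp0:cp1])            # per-tile worker
            out[r0:r1, c0:c1] = padded[r0 - rp0:r1 - rp0, c0 - cp0:c1 - cp0]
    return out

if __name__ == "__main__":
    grid = np.ones((128, 128))
    grid[10, 10] = grid[100, 90] = 0                 # hypothetical source cells
    print(tile_map(grid, distance_transform_edt).shape)
```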
NASA Technical Reports Server (NTRS)
Byrne, F. (Inventor)
1981-01-01
A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.
The revolution in data gathering systems
NASA Technical Reports Server (NTRS)
Cambra, J. M.; Trover, W. F.
1975-01-01
Data acquisition systems used in NASA's wind tunnels from the 1950s through the present time are summarized as a baseline for assessing the impact of minicomputers and microcomputers on data acquisition and data processing. Emphasis is placed on the cyclic evolution in computer technology, which transformed the central computer system and ultimately led to the distributed computer system. Other developments discussed include: medium scale integration, large scale integration, combining the functions of data acquisition and control, and micro- and minicomputers.
Evaluation of a data dictionary system. [information dissemination and computer systems programs
NASA Technical Reports Server (NTRS)
Driggers, W. G.
1975-01-01
The usefulness of a data dictionary/directory system was investigated as a means of achieving optimum benefits from existing and planned investments in computer data files in the Data Systems Development Branch and the Institutional Data Systems Division. Potential applications of the data catalogue system are discussed, along with an evaluation of the system. Other topics discussed include data description, data structure, programming aids, programming languages, program networks, and test data.
NASA Technical Reports Server (NTRS)
Peri, Frank, Jr.
1992-01-01
A flight digital data acquisition system that uses the MIL-STD-1553B bus for transmission of data to a host computer for control law processing is described. The instrument, the Remote Interface Unit (RIU), can accommodate up to 16 input channels and eight output channels. The RIU employs a digital signal processor to perform local digital filtering before sending data to the host. The system allows flexible sensor and actuator data organization to facilitate quick control law computations on the host computer. The instrument can also run simple control laws autonomously without host intervention. The RIU and host computer together have replaced a similar but larger ground minicomputer system with favorable results.
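To make the "filter locally, then send" flow concrete, here is a small hedged Python sketch. The channel count comes from the abstract; the moving-average filter, the frame layout, and the helper names are assumptions for illustration and do not reflect the RIU's actual firmware or the MIL-STD-1553B word format.

```python
# Sketch: smooth each input channel locally, then pack one filtered value per
# channel into a flat frame for the host. Filter choice and packing are assumed.
import numpy as np

NUM_INPUT_CHANNELS = 16   # per the abstract

def smooth(samples, window=4):
    """Simple moving-average low-pass filter over a channel's recent samples."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="valid")

def build_host_frame(channel_buffers):
    """Take the latest filtered value from each channel buffer."""
    return np.array([smooth(buf)[-1] for buf in channel_buffers], dtype=np.float32)

# Example: 16 channels, 32 raw samples each
raw = [np.random.default_rng(c).normal(size=32) for c in range(NUM_INPUT_CHANNELS)]
frame = build_host_frame(raw)
```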
Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis
NASA Technical Reports Server (NTRS)
Kopp, H.; Trettau, R.; Zolotar, B.
1984-01-01
The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.
A characterization of workflow management systems for extreme-scale applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
A characterization of workflow management systems for extreme-scale applications
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...
2017-02-16
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
Use of MCIDAS as an earth science information systems tool
NASA Technical Reports Server (NTRS)
Goodman, H. Michael; Karitani, Shogo; Parker, Karen G.; Stooksbury, Laura M.; Wilson, Gregory S.
1988-01-01
The application of the man computer interactive data access system (MCIDAS) to information processing is examined. The computer systems that interface with the MCIDAS are discussed. Consideration is given to the computer networking of MCIDAS, data base archival, and the collection and distribution of real-time special sensor microwave/imager data.
An information retrieval system for research file data
Joan E. Lengel; John W. Koning
1978-01-01
Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed....
Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi
2014-01-01
Background and objective: While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Materials and methods: Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software 'R' by effectively combining secret-sharing-based secure computation with original computation. Results: Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50 000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. Discussion: If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in an encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using 'R', which works interactively, while secure computation protocols generally require a significant amount of processing time. Conclusions: We propose a secure statistical analysis system using 'R' for medical data that effectively integrates secret-sharing-based secure computation and original computation. PMID:24763677
Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi
2014-10-01
While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software 'R' by effectively combining secret-sharing-based secure computation with original computation. Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50,000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using 'R' that works interactively while secure computation protocols generally require a significant amount of processing time. We propose a secure statistical analysis system using 'R' for medical data that effectively integrates secret-sharing-based secure computation and original computation.
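As a rough illustration of the secret-sharing idea behind such systems, here is a toy additive-sharing demo in Python. It is not NTT's protocol and has no 'R' integration; the modulus, server count, fixed-point scale, and dummy values are all assumptions. Each record's value x (and x*x) is split into additive shares, the servers only ever sum shares locally, and mean and unbiased variance are derived from the recombined aggregates alone.

```python
# Toy additive secret sharing over a prime field; statistics come from aggregates only.
import random

P = 2**61 - 1          # prime modulus (assumption)
N_SERVERS = 3
SCALE = 100            # fixed-point scaling for non-integer values (assumption)

def share(value):
    parts = [random.randrange(P) for _ in range(N_SERVERS - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reveal(shares):
    total = sum(shares) % P
    return total if total < P // 2 else total - P   # map back to a signed value

records = [123, 150, 98, 170, 110]                   # dummy "claims" values
scaled = [v * SCALE for v in records]

sums = [0] * N_SERVERS       # each server's running sum of value shares
sums_sq = [0] * N_SERVERS    # each server's running sum of squared-value shares
for x in scaled:
    for i, s in enumerate(share(x)):
        sums[i] = (sums[i] + s) % P
    for i, s in enumerate(share(x * x)):
        sums_sq[i] = (sums_sq[i] + s) % P

n = len(records)
mean = reveal(sums) / SCALE / n
total_sq = reveal(sums_sq) / SCALE**2
unbiased_var = (total_sq - n * mean**2) / (n - 1)
```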
Trusted Computing Technologies, Intel Trusted Execution Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guise, Max Joseph; Wendt, Jeremy Daniel
2011-01-01
We describe the current state-of-the-art in Trusted Computing Technologies - focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release and unauthorized alteration: unauthorized users should not access the sensitive input and sensitive output data, nor be able to alter them; the computation contains intermediate data with the same requirements, and executes algorithms that the unauthorized should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network system components increase the risk that sensitive input data, computation, and output data may be compromised.
The DFVLR main department for central data processing, 1976 - 1983
NASA Technical Reports Server (NTRS)
1983-01-01
Data processing, equipment and systems operation, operative and user systems, user services, computer networks and communications, text processing, computer graphics, and high power computers are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.
1979-07-01
User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate the supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer offers excellent universality and expansibility and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter setting, the operation of the filter wheel and stabilized platform, and image and POS data acquisition, and it stores the image and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. Imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
A data-management system for detailed areal interpretive data
Ferrigno, C.F.
1986-01-01
A data storage and retrieval system has been developed to organize and preserve areal interpretive data. This system can be used by any study where there is a need to store areal interpretive data that generally is presented in map form. This system provides the capability to grid areal interpretive data for input to groundwater flow models at any spacing and orientation. The data storage and retrieval system is designed to be used for studies that cover small areas such as counties. The system is built around a hierarchically structured data base consisting of related latitude-longitude blocks. The information in the data base can be stored at different levels of detail, with the finest detail being a block of 6 sec of latitude by 6 sec of longitude (approximately 0.01 sq mi). This system was implemented on a mainframe computer using a hierarchical data base management system. The computer programs are written in Fortran IV and PL/1. The design and capabilities of the data storage and retrieval system, and the computer programs that are used to implement the system are described. Supplemental sections contain the data dictionary, user documentation of the data-system software, changes that would need to be made to use this system for other studies, and information on the computer software tape. (Lantz-PTT)
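To illustrate the nested latitude-longitude block addressing described above, here is a small hedged Python sketch. The 6-arc-second finest block comes from the report; the intermediate degree and minute levels and the function name are assumptions chosen for illustration, not the system's actual key structure.

```python
# Sketch: map a point to a hierarchical (degree, minute, 6-second) block key.
def block_key(lat_deg, lon_deg):
    """Return a nested key: degree block, minute block within it, 6-second block within that."""
    lat_sec = round(lat_deg * 3600)
    lon_sec = round(lon_deg * 3600)
    deg_block = (lat_sec // 3600, lon_sec // 3600)
    min_block = ((lat_sec % 3600) // 60, (lon_sec % 3600) // 60)
    six_sec_block = ((lat_sec % 60) // 6, (lon_sec % 60) // 6)
    return deg_block, min_block, six_sec_block

# Example: a point near Baton Rouge, LA
print(block_key(30.4515, -91.1871))
```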
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
A distributed system for fast alignment of next-generation sequencing data.
Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D
2010-12-01
We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to those of a microarray sample. Results indicate that the distributed alignment system achieves an approximately linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
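The work-unit pattern behind such a system can be sketched in a few lines of Python. This is only a hedged illustration: a local process pool stands in for BOINC clients, the trivial "aligner" is a placeholder rather than a real alignment algorithm, and the read format and chunk size are assumptions.

```python
# Sketch: split reads into chunks, "align" each chunk independently, gather results.
from multiprocessing import Pool

def align_chunk(reads):
    # placeholder aligner: pretend every read maps at position hash(read) % 1000
    return [(read, hash(read) % 1000) for read in reads]

def chunked(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

if __name__ == "__main__":
    reads = [f"ACGT{i:06d}" for i in range(10_000)]      # dummy reads
    with Pool(processes=4) as pool:
        results = [hit for part in pool.map(align_chunk, list(chunked(reads, 1000)))
                   for hit in part]
    print(len(results), "alignments gathered")
```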
NASA Astrophysics Data System (ADS)
German, Kristine A.; Kubby, Joel; Chen, Jingkuang; Diehl, James; Feinberg, Kathleen; Gulvin, Peter; Herko, Larry; Jia, Nancy; Lin, Pinyen; Liu, Xueyuan; Ma, Jun; Meyers, John; Nystrom, Peter; Wang, Yao Rong
2004-07-01
Xerox Corporation has developed a technology platform for on-chip integration of latching MEMS optical waveguide switches and Planar Light Circuit (PLC) components using a Silicon On Insulator (SOI) based process. To illustrate the current state of this new technology platform, working prototypes of a Reconfigurable Optical Add/Drop Multiplexer (ROADM) and a λ-router will be presented along with details of the integrated latching MEMS optical switches. On-chip integration of optical switches and PLCs can greatly reduce the size, manufacturing cost and operating cost of multi-component optical equipment. It is anticipated that low-cost, low-overhead optical network products will accelerate the migration of functions and services from high-cost long-haul markets to price sensitive markets, including networks for metropolitan areas and fiber to the home. Compared to the more common silica-on-silicon PLC technology, the high index of refraction of silicon waveguides created in the SOI device layer enables miniaturization of optical components, thereby increasing yield and decreasing cost projections. The latching SOI MEMS switches feature moving waveguides, and are advantaged across multiple attributes relative to alternative switching technologies, such as thermal optical switches and polymer switches. The SOI process employed was jointly developed under the auspices of the NIST APT program in partnership with Coventor, Corning IntelliSense Corp., and MicroScan Systems to enable fabrication of a broad range of free space and guided wave MicroOptoElectroMechanical Systems (MOEMS).
Computer Storage and Retrieval of Position-Dependent Data.
1982-06-01
This thesis covers the design of a new digital database system to replace the merged (observation and geographic location) record, one file per cruise... ("The Digital Data Library System: Library Storage and Retrieval of Digital Geophysical Data" by Robert C. Groan) provided a relatively simple... dependent, 'geophysical' data. The system is operational on a Digital Equipment Corporation VAX-11/780 computer. Values of measured and computed...
Shick, G L; Hoover, L W; Moore, A N
1979-04-01
A data base was developed for a computer-assisted personnel data system for a university hospital department of dietetics which would store data on employees' employment, personnel information, attendance records, and termination. Development of the data base required designing computer programs and files, coding directions and forms for card input, and forms and procedures for on-line transmission. A program was written to compute accrued vacation, sick leave, and holiday time, and to generate historical records.
The Metadata Cloud: The Last Piece of a Distributed Data System Model
NASA Astrophysics Data System (ADS)
King, T. A.; Cecconi, B.; Hughes, J. S.; Walker, R. J.; Roberts, D.; Thieman, J. R.; Joy, S. P.; Mafi, J. N.; Gangloff, M.
2012-12-01
Distributed data systems have existed ever since systems were networked together. Over the years, the model for distributed data systems has evolved from basic file transfer to client-server to multi-tiered to grid and finally to cloud based systems. Initially, metadata was tightly coupled to the data, either by embedding the metadata in the same file containing the data or by co-locating the metadata in commonly named files. As the sources of data multiplied, data volumes increased and services specialized to improve efficiency, and a cloud system model emerged. In a cloud system, computing and storage are provided as services, with accessibility emphasized over physical location. Computation and data clouds are common implementations. Effectively using the data and computation capabilities requires metadata. When metadata is stored separately from the data, a metadata cloud is formed. With a metadata cloud, information and knowledge about data resources can migrate efficiently from system to system, enabling services and allowing the data to remain efficiently stored until used. This is especially important with "Big Data", where movement of the data is limited by bandwidth. We examine how the metadata cloud completes a general distributed data system model, how standards play a role, and relate this to the existing types of cloud computing. We also look at the major science data systems in existence and compare each to the generalized cloud system model.
Ku-band signal design study. [space shuttle orbiter data processing network
NASA Technical Reports Server (NTRS)
Rubin, I.
1978-01-01
Analytical tools, methods, and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.
Hively, Lee M [Philadelphia, TN
2011-07-12
The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.
NASA Technical Reports Server (NTRS)
Hickey, J. S.
1983-01-01
The Mesoscale Analysis and Space Sensor (MASS) Data Management and Analysis System developed by Atsuko Computing International (ACI) on the MASS HP-1000 Computer System within the Systems Dynamics Laboratory of the Marshall Space Flight Center is described. The MASS Data Management and Analysis System was successfully implemented and utilized daily by atmospheric scientists to graphically display and analyze large volumes of conventional and satellite-derived meteorological data. The scientists can interactively process various atmospheric data (Sounding, Single Level, Grid, and Image) by utilizing the MASS (AVE80) software, which shares common data and user inputs, thereby reducing overhead, optimizing execution time, and thus enhancing user flexibility, usability, and understandability of the total system/software capabilities. In addition, ACI installed eight APPLE III graphics/imaging computer terminals in individual scientists' offices and integrated them into the MASS HP-1000 Computer System, thus providing a significant enhancement to the overall research environment.
MICROPROCESSOR-BASED DATA-ACQUISITION SYSTEM FOR A BOREHOLE RADAR.
Bradley, Jerry A.; Wright, David L.
1987-01-01
An efficient microprocessor-based system is described that permits real-time acquisition, stacking, and digital recording of data generated by a borehole radar system. Although the system digitizes, stacks, and records independently of a computer, it is interfaced to a desktop computer for program control over system parameters such as sampling interval, number of samples, and the number of times the data are stacked prior to recording on nine-track tape, and for graphics display of the digitized data. The data can be transferred to the desktop computer during recording, or played back from a tape at a later time. Using the desktop computer, the operator observes results while recording data and generates hard-copy graphics in the field. Thus, the radar operator can immediately evaluate the quality of data being obtained, modify system parameters, study the radar logs before leaving the field, and rerun borehole logs if necessary. The system has proven to be reliable in the field and has increased productivity both in the field and in the laboratory.
A data management system to enable urgent natural disaster computing
NASA Astrophysics Data System (ADS)
Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton
2014-05-01
Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision making process. Getting the data to the required resources is a critical requirement for enabling the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform, and a data manager to initiate and perform the data activities. These managers enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline renders the computation less useful, resulting in a cost that can have severe consequences. Hard deadline: missing a hard deadline renders the computation useless and results in fully catastrophic consequences. A prototype of this system has a REST-based service manager. The REST-based implementation provides a uniform interface that is easy to use. New and upcoming file transfer protocols can easily be added and accessed via the service manager. The service manager interacts with the other four managers to coordinate the data activities so that the fundamental natural disaster urgent computing requirement, i.e. the deadline, can be fulfilled in a reliable manner. A data activity can include data staging, data archiving and data storing. Reliability is ensured by the choice of a network-of-managers organisation model [1], the configuration manager and the fault tolerance manager. With this proposed design, an easy to use, resource-independent data management system that can support and fulfill the computation of a natural disaster prediction within stipulated deadlines can thus be realised. References: [1] H. G. Hegering, S. Abeck, and B. Neumair, Integrated management of networked systems - concepts, architectures, and their operational application, Morgan Kaufmann Publishers, 340 Pine Street, Sixth Floor, San Francisco, CA 94104-3205, USA, 1999. [2] H. Kopetz, Real-time systems design principles for distributed embedded applications, second edition, Springer, LLC, 233 Spring Street, New York, NY 10013, USA, 2011. [3] S. H. Leong, A. Frank, and D. Kranzlmüller, Leveraging e-infrastructures for urgent computing, Procedia Computer Science 18 (2013), 2177-2186, 2013 International Conference on Computational Science. [4] N. Trebon, Enabling urgent computing within the existing distributed computing infrastructure, Ph.D. thesis, University of Chicago, August 2011, http://people.cs.uchicago.edu/~ntrebon/docs/dissertation.pdf.
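The deadline-driven selection performed by such a scheduler manager can be sketched as follows. This is a hedged toy example, not the authors' scheduler: the option names, bandwidth estimates, and safety margin are assumptions; the point is only that an option is chosen because its estimated staging time fits within the hard deadline.

```python
# Sketch: pick the first transfer option whose estimated staging time fits the deadline.
from dataclasses import dataclass

@dataclass
class TransferOption:
    name: str
    bandwidth_mb_s: float   # estimated sustained bandwidth (assumption)

def pick_option(options, data_size_mb, seconds_to_deadline, margin=0.8):
    """Return the fastest option expected to finish within margin * deadline, or None."""
    estimates = [(data_size_mb / o.bandwidth_mb_s, o) for o in options]
    feasible = [(t, o) for t, o in estimates if t <= margin * seconds_to_deadline]
    return min(feasible, key=lambda pair: pair[0])[1] if feasible else None

options = [TransferOption("gridftp", 250.0), TransferOption("https", 40.0)]
choice = pick_option(options, data_size_mb=50_000, seconds_to_deadline=600)
print(choice.name if choice else "no option meets the hard deadline")
```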
NASA Astrophysics Data System (ADS)
Ceres, M.; Heselton, L. R., III
1981-11-01
This manual describes the computer programs for the FIREFINDER Digital Topographic Data Verification-Library-Dubbing System (FFDTDVLDS), and will assist in the maintenance of these programs. The manual contains detailed flow diagrams and associated descriptions for each computer program routine and subroutine. Complete computer program listings are also included. This information should be used when changes are made in the computer programs. The operating system has been designed to minimize operator intervention.
Embedded Data Processor and Portable Computer Technology testbeds
NASA Technical Reports Server (NTRS)
Alena, Richard; Liu, Yuan-Kwei; Goforth, Andre; Fernquist, Alan R.
1993-01-01
Attention is given to current activities in the Embedded Data Processor and Portable Computer Technology testbed configurations that are part of the Advanced Data Systems Architectures Testbed at the Information Sciences Division at NASA Ames Research Center. The Embedded Data Processor Testbed evaluates advanced microprocessors for potential use in mission and payload applications within the Space Station Freedom Program. The Portable Computer Technology (PCT) Testbed integrates and demonstrates advanced portable computing devices and data system architectures. The PCT Testbed uses both commercial and custom-developed devices to demonstrate the feasibility of functional expansion and networking for portable computers in flight missions.
Techniques for digital enhancement of Landsat MSS data using an Apple II+ microcomputer
NASA Technical Reports Server (NTRS)
Harrington, J. A., Jr.; Cartin, K. F.
1984-01-01
The information provided by remotely sensed data collected from orbiting platforms has been useful in many research fields. Particularly convenient for evaluation are digital data stored on computer compatible tapes (CCTs). The major advantages of CCTs are the quality of the data and their accessibility to computer manipulation. Minicomputer systems are widely used for the required computer processing operations. However, microprocessor-related technological advances now make it possible to process CCT data with computing systems that can be obtained at a much lower price than minicomputer systems. A detailed description is provided of the design considerations of a microcomputer-based Digital Image Analysis System (DIAS). Particular attention is given to the algorithms incorporated for either edge enhancement or smoothing of Landsat multispectral scanner data.
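Both per-pixel operations mentioned above can be expressed as small 3x3 convolutions over a single band. The hedged Python sketch below uses common textbook kernels (a mean filter for smoothing and a Laplacian-based sharpening kernel for edge enhancement); they are not necessarily the kernels implemented in DIAS, and the dummy band is synthetic.

```python
# Sketch: smoothing and edge enhancement of a Landsat-like band via 3x3 convolution.
import numpy as np
from scipy.ndimage import convolve

smooth_kernel = np.full((3, 3), 1 / 9.0)                    # mean (low-pass) filter
edge_kernel = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], dtype=float)          # sharpening kernel

band = np.random.default_rng(0).integers(0, 128, size=(64, 64)).astype(float)
smoothed = convolve(band, smooth_kernel, mode="nearest")
enhanced = convolve(band, edge_kernel, mode="nearest")
```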
ERIC Educational Resources Information Center
Cho, Vincent; Wayman, Jeffrey C.
2014-01-01
Background: Increasingly, teachers and other educators are expected to leverage data in making educational decisions. Effective data use is difficult, if not impossible, without computer data systems. Nonetheless, these systems may be underused or even rejected by teachers. One potential explanation for such troubles may relate to how teachers…
76 FR 4435 - Privacy Act of 1974; Report of Modified or Altered System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... entry, computer systems analysis and computer programming services. The contractors promptly return data.... Proposed Routine Use Disclosures of Data in the System This System of Records contains information such as... compatible use of data is known as a ``routine use''. The routine uses proposed for this System are...
Pen-based computers: Computers without keys
NASA Technical Reports Server (NTRS)
Conklin, Cheryl L.
1994-01-01
The National Space Transportation System (NSTS) is comprised of many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging due to the many separate processing locations where data are generated. This presents a significant problem when the timely collection, transfer, collation, and storage of data is required. This paper describes how new technology, referred to as pen-based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.
Logistical Consideration in Computer-Based Screening of Astronaut Applicants
NASA Technical Reports Server (NTRS)
Galarza, Laura
2000-01-01
This presentation reviews the logistical, ergonomic, and psychometric issues and data related to the development and operational use of a computer-based system for the psychological screening of astronaut applicants. The Behavioral Health and Performance Group (BHPG) at the Johnson Space Center upgraded its astronaut psychological screening and selection procedures for the 1999 astronaut applicants and subsequent astronaut selection cycles. The questionnaires, tests, and inventories were upgraded from a paper-and-pencil system to a computer-based system. Members of the BHPG and a computer programmer designed and developed the needed interfaces (screens, buttons, etc.) and programs for the astronaut psychological assessment system. This intranet-based system included the user-friendly computer-based administration of tests, test scoring, generation of reports, the integration of test administration and test output into a single system, and a complete database for past, present, and future selection data. Upon completion of the system development phase, four beta and usability tests were conducted with the newly developed system. The first three tests included 1 to 3 participants each. The final system test was conducted with 23 participants tested simultaneously. Usability and ergonomic data were collected from the system (beta) test participants and from 1999 astronaut applicants who volunteered the information in exchange for anonymity. Beta and usability test data were analyzed to examine operational, ergonomic, programming, test administration, and scoring issues related to computer-based testing. Results showed a preference for computer-based testing over paper-and-pencil procedures. The data also reflected specific ergonomic, usability, psychometric, and logistical concerns that should be taken into account in future selection cycles. In conclusion, psychological, psychometric, human, and logistical factors must be examined and considered carefully when developing and using a computer-based system for psychological screening and selection.
Distributed metadata in a high performance computing environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Zhang, Zhenhua
A computer-executable method, system, and computer program product for managing meta-data in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for meta-data associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the meta-data is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
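The lookup flow in the claim can be sketched in a few lines of Python. This is a hedged illustration only: the hash-based placement rule and the dict-backed local store are assumptions standing in for the patented system's actual key-value partitioning.

```python
# Sketch: decide which burst buffer's key-value partition owns a key, then read it there.
import hashlib

class BurstBuffer:
    def __init__(self, name):
        self.name = name
        self.kv = {}                    # stand-in for the local key-value store

    def put(self, key, metadata):
        self.kv[key] = metadata

    def get(self, key):
        return self.kv.get(key)

def owner(buffers, key):
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return buffers[h % len(buffers)]    # assumed placement rule

buffers = [BurstBuffer(f"bb{i}") for i in range(4)]
owner(buffers, "block:001").put("block:001", {"offset": 0, "length": 4096})
print(owner(buffers, "block:001").get("block:001"))
```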
Voter comparator switch provides fail safe data communications system - A concept
NASA Technical Reports Server (NTRS)
Koczela, L. J.; Wilgus, D. S.
1971-01-01
System indicates status of computers and controls operational modes. Two matrices are used - one relating to permissible system states, the other relating to requested system states. Concept is useful to designers of digital data transmission systems and time shared computer systems.
CANFAR+Skytree: A Cloud Computing and Data Mining System for Astronomy
NASA Astrophysics Data System (ADS)
Ball, N. M.
2013-10-01
This is a companion Focus Demonstration article to the CANFAR+Skytree poster (Ball 2013, this volume), demonstrating the usage of the Skytree machine learning software on the Canadian Advanced Network for Astronomical Research (CANFAR) cloud computing system. CANFAR+Skytree is the world's first cloud computing system for data mining in astronomy.
77 FR 64439 - Airworthiness Directives; Bell Helicopter Textron Canada (Bell) Model Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-22
... Control System] Air Data Computer.'' TCAA issued AD CF-2005-30 to require the procedures in Bell Alert... overspeed warning system, replacing the overspeed warning computer, V ne converter, and pilot and copilot... Aircraft System/Component Code: 3417 Air Data Computer. Issued in Fort Worth, Texas, on October 12, 2012...
A portable fNIRS system with eight channels
NASA Astrophysics Data System (ADS)
Si, Juanning; Zhao, Ruirui; Zhang, Yujin; Zuo, Nianming; Zhang, Xin; Jiang, Tianzi
2015-03-01
Abundant studies on the hemodynamic response of the brain have brought quite a few advances in the technologies for measuring it, and functional near-infrared spectroscopy (fNIRS) has benefited the most. A variety of devices have been developed for different applications. Because portable fNIRS systems are better suited to measuring responses either from special subjects or in natural environments, several kinds of portable fNIRS systems have been reported. However, they all required a computer for receiving data. The extra computer increases the cost of an fNIRS system. More importantly, the space required to accommodate the computer, even for a portable system, reduces the portability of the fNIRS system. We therefore designed a self-contained eight-channel fNIRS system that does not require a separate computer to receive data and display it on a monitor. Instead, the system is built around an ARM-core CPU, which organizes and saves the data and then displays it on a touch screen. The system has been validated by experiments on phantoms and on subjects performing tasks.
1981-11-30
COMPUTER PROGRAM USER'S MANUAL FOR FIREFINDER DIGITAL TOPOGRAPHIC DATA VERIFICATION LIBRARY DUBBING SYSTEM, 30 November 1981, by Marie Ceres and Leslie R. Heselton, III. This manual describes the computer programs for the FIREFINDER Digital Topographic Data Verification-Library-Dubbing System (FFDTDVLDS)...
Requirements for company-wide management
NASA Technical Reports Server (NTRS)
Southall, J. W.
1980-01-01
Computing system requirements were developed for company-wide management of information and computer programs in an engineering data processing environment. The requirements are essential to the successful implementation of a computer-based engineering data management system; they exceed the capabilities provided by the commercially available data base management systems. These requirements were derived from a study entitled The Design Process, which was prepared by design engineers experienced in development of aerospace products.
Edwards, Roger L; Edwards, Sandra L; Bryner, James; Cunningham, Kelly; Rogers, Amy; Slattery, Martha L
2008-04-01
We describe a computer-assisted data collection system developed for a multicenter cohort study of American Indian and Alaska Native people. The study computer-assisted participant evaluation system or SCAPES is built around a central database server that controls a small private network with touch screen workstations. SCAPES encompasses the self-administered questionnaires, the keyboard-based stations for interviewer-administered questionnaires, a system for inputting medical measurements, and administrative tasks such as data exporting, backup and management. Elements of SCAPES hardware/network design, data storage, programming language, software choices, questionnaire programming including the programming of questionnaires administered using audio computer-assisted self-interviewing (ACASI), and participant identification/data security system are presented. Unique features of SCAPES are that data are promptly made available to participants in the form of health feedback; data can be quickly summarized for tribes for health monitoring and planning at the community level; and data are available to study investigators for analyses and scientific evaluation.
Parallel compression of data chunks of a shared data object using a log-structured file system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-10-25
Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
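The compress-on-write, decompress-on-read flow can be illustrated with a short hedged Python sketch. zlib stands in for whatever codec the patented system uses, an in-memory dict plays the storage node, and a local process pool plays the parallel clients; none of this reflects the actual burst-buffer or log-structured-file implementation.

```python
# Sketch: compress chunks in parallel before "storing" them; decompress on read.
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    return zlib.compress(chunk, level=6)

def write_shared_object(chunks):
    """Compress chunks in parallel and store them keyed by chunk index."""
    with ProcessPoolExecutor() as pool:
        compressed = list(pool.map(compress_chunk, chunks))
    return {i: c for i, c in enumerate(compressed)}

def read_chunk(store, index: int) -> bytes:
    return zlib.decompress(store[index])

if __name__ == "__main__":
    chunks = [bytes([i]) * 1_000_000 for i in range(8)]      # dummy 1 MB chunks
    store = write_shared_object(chunks)
    assert read_chunk(store, 3) == chunks[3]
```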
Low-cost data analysis systems for processing multispectral scanner data
NASA Technical Reports Server (NTRS)
Whitely, S. L.
1976-01-01
The basic hardware and software requirements are described for four low cost analysis systems for computer generated land use maps. The data analysis systems consist of an image display system, a small digital computer, and an output recording device. Software is described together with some of the display and recording devices, and typical costs are cited. Computer requirements are given, and two approaches are described for converting black-white film and electrostatic printer output to inexpensive color output products. Examples of output products are shown.
Visual Analysis of Cloud Computing Performance Using Behavioral Lines.
Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu
2016-02-29
Cloud computing is an essential technology for Big Data analytics and services. A cloud computing system is often composed of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues, but profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear, suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual analysis approach is effective in identifying trends and anomalies of the systems.
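The underlying idea of comparing per-node profiles can be sketched briefly. The hedged Python example below summarizes each node as a (time x metric) matrix and compares nodes with a simple Euclidean distance on z-scored series; the actual similarity measure and behavioral-line layout in the paper are more involved, and the synthetic data and names are assumptions.

```python
# Sketch: compare compute nodes by the distance between their normalized metric profiles.
import numpy as np

def zscore(series):
    s = np.asarray(series, dtype=float)
    return (s - s.mean(axis=0)) / (s.std(axis=0) + 1e-9)

def node_distance(node_a, node_b):
    """Distance between two nodes' (time x metric) profile matrices."""
    return float(np.linalg.norm(zscore(node_a) - zscore(node_b)))

rng = np.random.default_rng(1)
node_a = rng.normal(size=(120, 3))                        # 120 samples of 3 metrics
node_b = node_a + rng.normal(scale=0.1, size=(120, 3))    # behaves like node_a
node_c = rng.normal(loc=2.0, size=(120, 3))               # anomalous node
print(node_distance(node_a, node_b), node_distance(node_a, node_c))
```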
Dynamic Docking Test System (DDTS) active table computer program NASA Advanced Docking System (NADS)
NASA Technical Reports Server (NTRS)
Gates, R. M.; Jantz, R. E.
1974-01-01
A computer program was developed to describe the three-dimensional motion of the Dynamic Docking Test System active table. The input consists of inertia and geometry data, actuator structural data, forcing function data, hydraulics data, servo electronics data, and integration control data. The output consists of table responses, actuator bending responses, and actuator responses.
I/O routing in a multidimensional torus network
Chen, Dong; Eisley, Noel A.; Heidelberger, Philip
2017-02-07
A method, system and computer program product are disclosed for routing data packets in a computing system comprising a multidimensional torus compute node network including a multitude of compute nodes, and an I/O node network including a plurality of I/O nodes. In one embodiment, the method comprises assigning to each of the data packets a destination address identifying one of the compute nodes; providing each of the data packets with a toio value; routing the data packets through the compute node network to the destination addresses of the data packets; and when each of the data packets reaches the destination address assigned to said each data packet, routing said each data packet to one of the I/O nodes if the toio value of said each data packet is a specified value. In one embodiment, each of the data packets is also provided with an ioreturn value used to route the data packets through the compute node network.
I/O routing in a multidimensional torus network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Eisley, Noel A.; Heidelberger, Philip
A method, system and computer program product are disclosed for routing data packets in a computing system comprising a multidimensional torus compute node network including a multitude of compute nodes, and an I/O node network including a plurality of I/O nodes. In one embodiment, the method comprises assigning to each of the data packets a destination address identifying one of the compute nodes; providing each of the data packets with a toio value; routing the data packets through the compute node network to the destination addresses of the data packets; and when each of the data packets reaches the destination address assigned to said each data packet, routing said each data packet to one of the I/O nodes if the toio value of said each data packet is a specified value. In one embodiment, each of the data packets is also provided with an ioreturn value used to route the data packets through the compute node network.
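The routing rule in the claim can be illustrated with a short hedged Python sketch: a packet is stepped through a torus toward its destination coordinates and, if its toio flag holds the specified value, handed off to an attached I/O node. The torus size, the dimension-ordered hop function, and the I/O attachment map are assumptions for illustration, not the patented implementation.

```python
# Sketch: torus routing toward a destination, then optional hand-off to an I/O node.
TORUS = (4, 4, 4)                    # 3-D torus for illustration
TOIO_FORWARD = 1                     # assumed "specified value" meaning: forward to I/O

def next_hop(current, dest):
    """Dimension-ordered routing with wrap-around (shortest direction per axis)."""
    cur = list(current)
    for d in range(len(TORUS)):
        if cur[d] != dest[d]:
            size = TORUS[d]
            forward = (dest[d] - cur[d]) % size
            cur[d] = (cur[d] + (1 if forward <= size - forward else -1)) % size
            return tuple(cur)
    return tuple(cur)

def route(packet, io_attachment):
    node = packet["source"]
    while node != packet["dest"]:
        node = next_hop(node, packet["dest"])
    if packet.get("toio") == TOIO_FORWARD:
        return io_attachment[packet["dest"]]      # hand off to the attached I/O node
    return node

packet = {"source": (0, 0, 0), "dest": (2, 3, 1), "toio": 1}
print(route(packet, io_attachment={(2, 3, 1): "ionode-7"}))
```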
Analysis Report for Exascale Storage Requirements for Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruwart, Thomas M.
Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale Computing, resulting in data storage, networking, and infrastructure requirements increasing by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how they will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report is focused on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.
Real time computer data system for the 40 x 80 ft wind tunnel facility at Ames Research Center
NASA Technical Reports Server (NTRS)
Cambra, J. M.; Tolari, G. P.
1974-01-01
The wind tunnel realtime computer system is a distributed data gathering system that features a master computer subsystem, a high speed data gathering subsystem, a quick look dynamic analysis and vibration control subsystem, an analog recording back-up subsystem, a pulse code modulation (PCM) on-board subsystem, a communications subsystem, and a transducer excitation and calibration subsystem. The subsystems are married to the master computer through an executive software system and standard hardware and FORTRAN software interfaces. The executive software system has four basic software routines. These are the playback, setup, record, and monitor routines. The standard hardware interfaces along with the software interfaces provide the system with the capability of adapting to new environments.
Study of USGS/NASA land use classification system. [computer analysis from LANDSAT data
NASA Technical Reports Server (NTRS)
Spann, G. W.
1975-01-01
The results of a computer mapping project using LANDSAT data and the USGS/NASA land use classification system are summarized. During the computer mapping portion of the project, accuracies of 67 percent to 79 percent were achieved using Level II of the classification system and a 4,000 acre test site centered on Douglasville, Georgia. Analysis of responses to a questionnaire circulated to actual and potential LANDSAT data users reveals several important findings: (1) there is a substantial desire for additional information related to LANDSAT capabilities; (2) a majority of the respondents feel computer mapping from LANDSAT data could aid present or future projects; and (3) the costs of computer mapping are substantially less than those of other methods.
Data Security Policy | High-Performance Computing | NREL
to use its high-performance computing (HPC) systems. NREL HPC systems are operated as research systems and may only contain data related to scientific research. These systems are categorized as low risk; the data they hold may be sensitive or non-sensitive. One example of sensitive data would be personally identifiable information (PII).
Okamoto, E; Shimanaka, M; Suzuki, S; Baba, K; Mitamura, Y
1999-01-01
The usefulness of a remote monitoring system that uses a personal handy phone for patients with an implanted artificial heart was investigated. The type of handy phone used in this study was a personal handy phone system (PHS), a system developed in Japan that uses the NTT (Nippon Telephone and Telegraph, Inc.) telephone network service. The PHS has several advantages: high-speed data transmission, low power output, little electromagnetic interference with medical devices, and easy locating of patients. In our system, patients have a mobile computer (Toshiba, Libretto 50, Kawasaki, Japan) for data transmission control between an implanted controller and a host computer (NEC, PC-9821V16) in the hospital. Information on the motor rotational angle (8 bits) and motor current (8 bits) of the implanted motor-driven heart is fed into the mobile computer from the implanted controller (Hitachi, H8/532, Yokohama, Japan) according to 32-bit command codes from the host computer. Motor current and motor rotational angle data from inside the body are framed together with a control code (frame number and parity) for data error checking and correcting at the receiving site, and the data are sent through the PHS connection to the mobile computer. The host computer calculates pump outflow and arterial pressure from the motor rotational angle and motor current values and displays the data in real-time waveforms. The results of this study showed that accurate data on motor rotational angle and current could be transmitted from the subjects to the host computer at a data transmission rate of 9600 bps while they were walking or driving a car. This system is useful for remote monitoring of patients with an implanted artificial heart.
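A hedged sketch of the framing step the abstract describes: an 8-bit motor rotational angle and an 8-bit motor current framed with a frame number and a parity byte so the receiving site can detect corrupted frames. The exact frame layout and the XOR parity choice are assumptions; the paper only names the fields.

```python
# Hedged sketch of the telemetry framing described above. All layout details
# (4-byte frame, XOR parity) are illustrative assumptions.

def make_frame(frame_no, angle, current):
    """Build a 4-byte frame: [frame number, angle, current, XOR parity]."""
    body = bytes([frame_no & 0xFF, angle & 0xFF, current & 0xFF])
    parity = body[0] ^ body[1] ^ body[2]
    return body + bytes([parity])

def parse_frame(frame):
    """Return (frame_no, angle, current) or None if the parity check fails."""
    if len(frame) != 4 or (frame[0] ^ frame[1] ^ frame[2]) != frame[3]:
        return None
    return frame[0], frame[1], frame[2]

frame = make_frame(frame_no=7, angle=0x5A, current=0x21)
print(parse_frame(frame))                      # (7, 90, 33)
print(parse_frame(frame[:3] + bytes([0x00])))  # None: corrupted frame detected
```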
UNH Data Cooperative: A Cyber Infrastructure for Earth System Studies
NASA Astrophysics Data System (ADS)
Braswell, B. H.; Fekete, B. M.; Prusevich, A.; Gliden, S.; Magill, A.; Vorosmarty, C. J.
2007-12-01
Earth system scientists and managers have a continuously growing demand for a wide array of earth observations derived from various data sources including (a) modern satellite retrievals, (b) "in-situ" records, (c) various simulation outputs, and (d) assimilated data products combining model results with observational records. The sheer quantity of data and formatting inconsistencies make it difficult for users to take full advantage of this important information resource. Thus the system could benefit from a thorough retooling of our current data processing procedures and infrastructure. Emerging technologies like OPeNDAP and OGC map services, open standard data formats (NetCDF, HDF), and data cataloging systems (NASA-Echo, Global Change Master Directory, etc.) are providing the basis for a new approach to data management and processing, where web services are increasingly designed to serve computer-to-computer communications without human interactions and complex analysis can be carried out over distributed computer resources interconnected via cyber infrastructure. The UNH Earth System Data Collaborative is designed to utilize the aforementioned emerging web technologies to offer new means of access to earth system data. While the UNH Data Collaborative serves a wide array of data ranging from weather station data (Climate Portal) to ocean buoy records and ship tracks (Portsmouth Harbor Initiative) to land cover characteristics, etc., the underlying data architecture shares common components for data mining and data dissemination via web services. Perhaps the most unique element of the UNH Data Cooperative's IT infrastructure is its prototype modeling environment for regional ecosystem surveillance over the Northeast corridor, which allows the integration of complex earth system model components with the Cooperative's data services. While the complexity of the IT infrastructure needed to perform complex computations is continuously increasing, scientists are often forced to spend a considerable amount of time solving basic data management and preprocessing tasks and dealing with low-level computational design problems such as parallelization of model codes. Our modeling infrastructure is designed to take care of the bulk of the common tasks found in complex earth system models, such as I/O handling, computational domain and time management, and parallel execution of the modeling tasks. The modeling infrastructure allows scientists to focus on the numerical implementation of the physical processes on single computational objects (typically grid cells) while the framework takes care of the preprocessing of input data, the establishment of data exchange between computational objects, and the execution of the science code. In our presentation, we will discuss the key concepts of our modeling infrastructure. We will demonstrate integration of our modeling framework with data services offered by the UNH Earth System Data Collaborative via web interfaces. We will lay out the road map to turn our prototype modeling environment into a truly community framework for a wide range of earth system scientists and environmental managers.
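A toy illustration of the framework idea described above: the scientist supplies only the per-cell physics, while the framework owns the computational domain, the time loop, and the parallel execution. All class and function names (and the linear-reservoir physics) are invented for illustration; this is not the UNH framework's actual API.

```python
# Toy sketch of a grid-cell modeling framework: per-cell science code on one
# side, framework-managed domain, time stepping, and parallelism on the other.

from multiprocessing import Pool

def cell_step(args):
    """Per-cell science code: here, a simple linear-reservoir water balance."""
    storage, rainfall = args
    runoff = 0.1 * storage            # assumed outflow coefficient
    return storage + rainfall - runoff

class Framework:
    def __init__(self, n_cells):
        self.state = [0.0] * n_cells   # framework-managed domain state

    def run(self, forcing_by_step, workers=4):
        """Drive the time loop and parallelize the per-cell calls."""
        with Pool(workers) as pool:
            for forcing in forcing_by_step:           # time management
                args = list(zip(self.state, forcing))  # data exchange / preprocessing
                self.state = pool.map(cell_step, args) # parallel execution
        return self.state

if __name__ == "__main__":
    fw = Framework(n_cells=8)
    final = fw.run(forcing_by_step=[[1.0] * 8, [0.5] * 8, [0.0] * 8])
    print(final)
```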
Controller/Computer Interface with an Air-Ground Data Link
DOT National Transportation Integrated Search
1976-06-01
This report describes the results of an experiment for evaluating the controller/computer interface in an ARTS III/M&S system modified for use with a simulated digital data link and a voice link utilizing a computer-generated voice system. A modified...
NASA Technical Reports Server (NTRS)
Stocks, Dana R.
1986-01-01
The Dynamic Gas Temperature Measurement System compensation software accepts digitized data from two different diameter thermocouples and computes a compensated frequency response spectrum for one of the thermocouples. Detailed discussions of the physical system, analytical model, and computer software are presented in this volume and in Volume 1 of this report under Task 3. Computer program software restrictions and test cases are also presented. Compensated and uncompensated data may be presented in either the time or frequency domain. Time domain data are presented as instantaneous temperature vs time. Frequency domain data may be presented in several forms such as power spectral density vs frequency.
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
Two abstracts and seventeen articles on computer assisted instruction (CAI) presented at the 1976 Association for Educational Data Systems (AEDS) convention are included here. Four new computer programs are described: Author System for Education and Training (ASET); GNOSIS, a Swedish/English CAI package; Statistical Interactive Programming System…
Network support for system initiated checkpoints
Chen, Dong; Heidelberger, Philip
2013-01-29
A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.
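A simplified sketch of the control sequence implied by the abstract: pause injection of new user messages, drain in-flight traffic, save the node's system-related data, then resume. The class and function names are invented stand-ins for the node's messaging interface, not the patented mechanism itself.

```python
# Simplified sketch of a system-initiated checkpoint in the presence of user
# messaging activity. All names are illustrative assumptions.

import json, time

class NodeNetworkStub:
    """Stand-in for the node's messaging hardware/software interface."""
    def __init__(self):
        self.injection_enabled = True
        self.in_flight = 3                      # pretend messages still in the network

    def pause_injection(self): self.injection_enabled = False
    def resume_injection(self): self.injection_enabled = True
    def drain(self):
        while self.in_flight:                   # wait for outstanding traffic to land
            self.in_flight -= 1

def system_checkpoint(node_id, network, system_state, path):
    """Checkpoint system-related data without losing ongoing user messages."""
    network.pause_injection()                   # selective control signal
    network.drain()                             # quiesce messaging activity
    with open(path, "w") as f:
        json.dump({"node": node_id, "time": time.time(), "state": system_state}, f)
    network.resume_injection()

system_checkpoint(0, NodeNetworkStub(), {"routing_table": [1, 2, 3]}, "ckpt_node0.json")
```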
Developing Data System Engineers
NASA Astrophysics Data System (ADS)
Behnke, J.; Byrnes, J. B.; Kobler, B.
2011-12-01
In the early days of general computer systems for science data processing, staff members working on NASA's data systems would most often be hired as mathematicians. Computer engineering was very often filled by those with electrical engineering degrees. Today, the Goddard Space Flight Center has special position descriptions for data scientists or as they are more commonly called: data systems engineers. These staff members are required to have very diverse skills, hence the need for a generalized position description. There is always a need for data systems engineers to develop, maintain and operate the complex data systems for Earth and space science missions. Today's data systems engineers however are not just mathematicians, they are computer programmers, GIS experts, software engineers, visualization experts, etc... They represent many different degree fields. To put together distributed systems like the NASA Earth Observing Data and Information System (EOSDIS), staff are required from many different fields. Sometimes, the skilled professional is not available and must be developed in-house. This paper will address the various skills and jobs for data systems engineers at NASA. Further it explores how to develop staff to become data scientists.
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN
2009-01-13
A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
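A hedged sketch of the rsync-like idea in the abstract: each node compares its checkpoint blocks against the checksums of a previously stored template and transmits only the blocks that differ, compressed with a conventional lossless codec. The block size and hash choice are assumptions for illustration.

```python
# Hedged sketch of template-based checkpoint deltas; block size and MD5 are
# illustrative choices, not details from the patent.

import hashlib, zlib

BLOCK = 4096  # assumed block size

def blocks(data, size=BLOCK):
    return [data[i:i + size] for i in range(0, len(data), size)]

def checksums(data):
    return [hashlib.md5(b).hexdigest() for b in blocks(data)]

def delta_checkpoint(current, template_sums):
    """Return {block_index: compressed block} for blocks differing from the template."""
    delta = {}
    for i, block in enumerate(blocks(current)):
        unchanged = (i < len(template_sums)
                     and hashlib.md5(block).hexdigest() == template_sums[i])
        if not unchanged:
            delta[i] = zlib.compress(block)     # non-lossy compression before transmission
    return delta

template = bytes(16384)                          # previously produced template checkpoint
current = bytearray(template)
current[5000] = 0xFF                             # dirty one block
print(sorted(delta_checkpoint(bytes(current), checksums(template))))  # [1]
```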
DIALOG: An executive computer program for linking independent programs
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Watson, D. A.
1973-01-01
A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and uses of the DIALOG executive system are described.
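A toy sketch of the executive idea described above: independent "programs" run in sequence while common information passes through a dynamically maintained data base. The programs here are trivial placeholders; DIALOG itself linked separate executables on CDC 6000 series machines.

```python
# Toy executive loop in the spirit of the DIALOG description; all program
# contents and variable names are invented for illustration.

def geometry(db):                 # each "program" only reads/writes the data base
    db["wing_area"] = 30.0

def aerodynamics(db):
    db["lift"] = 0.5 * db["wing_area"] * db.get("cl", 1.2)

def report(db):
    print(f"lift = {db['lift']:.1f}")

class Executive:
    def __init__(self):
        self.database = {}        # dynamically constructed common data base

    def run(self, sequence):
        for program in sequence:  # controls the sequence of execution
            program(self.database)

Executive().run([geometry, aerodynamics, report])   # lift = 18.0
```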
Arthur, J.K.; Taylor, R.E.
1986-01-01
As part of the Gulf Coast Regional Aquifer System Analysis (GC RASA) study, data from 184 geophysical well logs were used to define the geohydrologic framework of the Mississippi embayment aquifer system in Mississippi for flow model simulation. Five major aquifers of Eocene and Paleocene age were defined within this aquifer system in Mississippi. A computer data storage system was established to assimilate the information obtained from the geophysical logs. Computer programs were developed to manipulate the data to construct geologic sections and structure maps. Data from the storage system will be input to a five-layer, three-dimensional, finite-difference digital computer model that is used to simulate the flow dynamics in the five major aquifers of the Mississippi embayment aquifer system.
NASA Technical Reports Server (NTRS)
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
Prototyping manufacturing in the cloud
NASA Astrophysics Data System (ADS)
Ciortea, E. M.
2017-08-01
This paper attempts a theoretical approach to cloud systems and their impact on production systems. Cloud computing is a relatively new concept in the field of informatics, representing distributed computing services, applications, access to information, and data storage without the user needing to know the physical location and configuration of the systems. The advantages of this approach are chiefly computing speed and storage capacity without investment in additional configurations, synchronization of user data, and data processing using web applications. The disadvantage is the need to identify a solution for data security, which leads to mistrust among users. The case study is applied to a module of the production system because the system is complex.
Cooperative storage of shared files in a parallel computing system with dynamic block size
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-11-10
Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
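A sketch of the block-size rule quoted in the abstract (total data divided by the number of parallel processes) and the exchange-then-write step, using an in-memory byte array as a stand-in for the parallel file system; the real target is a log-structured system such as PLFS.

```python
# Hedged sketch of dynamic block sizing for a shared object; the in-memory
# "shared_object" stands in for the parallel file system.

def write_shared_object(per_process_data):
    """per_process_data: list of byte strings, one per parallel process."""
    total = sum(len(d) for d in per_process_data)
    nprocs = len(per_process_data)
    block_size = total // nprocs                  # dynamically determined block size
    stream = b"".join(per_process_data)           # stands in for the exchange step
    shared_object = bytearray(total)
    for rank in range(nprocs):                    # each process writes one aligned block
        offset = rank * block_size
        end = total if rank == nprocs - 1 else offset + block_size
        shared_object[offset:end] = stream[offset:end]
    return bytes(shared_object)

data = [b"a" * 10, b"b" * 2, b"c" * 12]           # unevenly sized process outputs
print(write_shared_object(data) == b"a" * 10 + b"b" * 2 + b"c" * 12)   # True
```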
System Collects And Displays Demultiplexed Data
NASA Technical Reports Server (NTRS)
Reschke, Millard F.; Fariss, Julie L.; Kulecz, Walter B.; Paloski, William H.
1992-01-01
Electronic system collects, manipulates, and displays in real time results of manipulation of streams of data transmitted from remote scientific instrumentation. Interface circuit shifts data-and-clock signal from differential logic levels of multiplexer to single-ended logic levels of computer. System accommodates nonstandard data-transmission protocol. Software useful in applications where Macintosh computers used in real-time display and recording of data.
System enhancements of Mesoscale Analysis and Space Sensor (MASS) computer system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.
1985-01-01
The interactive information processing for the mesoscale analysis and space sensor (MASS) program is reported. The development and implementation of new spaceborne remote sensing technology to observe and measure atmospheric processes is described. The space measurements and conventional observational data are processed together to gain an improved understanding of the mesoscale structure and dynamical evolution of the atmosphere relative to cloud development and precipitation processes. A Research Computer System consisting of three primary computers was developed (HP-1000F, Perkin-Elmer 3250, and Harris/6) which provides a wide range of capabilities for processing and displaying interactively large volumes of remote sensing data. The development of a MASS data base management and analysis system on the HP-1000F computer and extending these capabilities by integration with the Perkin-Elmer and Harris/6 computers using the MSFC's Apple III microcomputer workstations is described. The objectives are: to design hardware enhancements for computer integration and to provide data conversion and transfer between machines.
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
ERIC Educational Resources Information Center
Cox, John
This paper documents the program used in the application of the INFO system for data storage and retrieval in schools, from the viewpoints of both the unsophisticated user and the experienced programmer interested in using the INFO system or modifying it for use within an existing school's computer system. The opening user's guide presents simple…
Engineering study for the functional design of a multiprocessor system
NASA Technical Reports Server (NTRS)
Miller, J. S.; Vandever, W. H.; Stanten, S. F.; Avakian, A. E.; Kosmala, A. L.
1972-01-01
The results are presented of a study to generate a functional system design of a multiprocessing computer system capable of satisfying the computational requirements of a space station. These data management system requirements were specified to include: (1) real time control, (2) data processing and storage, (3) data retrieval, and (4) remote terminal servicing.
12 CFR 1271.22 - Computer data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... computer system. Any such arrangement shall ensure the security of the computerized data stored in a Bank's... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Computer data. 1271.22 Section 1271.22 Banks... BANK OPERATIONS AND AUTHORITIES Bank Requests for Information § 1271.22 Computer data. Nothing in this...
Estimating costs and performance of systems for machine processing of remotely sensed data
NASA Technical Reports Server (NTRS)
Ballard, R. J.; Eastwood, L. F., Jr.
1977-01-01
This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.
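A hedged sketch of the kind of estimate the paper describes: processing time is computation plus overhead, and cost follows from the machine's hourly rate. The operation counts, throughput, and rates below are made-up inputs, not figures from the paper.

```python
# Illustrative processing time/cost estimate; every numeric input is an
# assumption chosen only to show the shape of the calculation.

def processing_estimate(n_pixels, ops_per_pixel, machine_ops_per_sec,
                        overhead_fraction, dollars_per_cpu_hour):
    compute_sec = n_pixels * ops_per_pixel / machine_ops_per_sec
    total_sec = compute_sec * (1.0 + overhead_fraction)      # computation + overhead
    return total_sec, total_sec / 3600.0 * dollars_per_cpu_hour

# Example: a multi-state region at MSS resolution (illustrative numbers only).
seconds, dollars = processing_estimate(n_pixels=3e8, ops_per_pixel=200,
                                       machine_ops_per_sec=1e6,
                                       overhead_fraction=0.5,
                                       dollars_per_cpu_hour=300.0)
print(f"{seconds/3600:.0f} CPU hours, ${dollars:,.0f}")      # 25 CPU hours, $7,500
```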
NASA Technical Reports Server (NTRS)
Snyder, W. V.; Hanson, R. J.
1986-01-01
Text Exchange System (TES) exchanges and maintains organized textual information including source code, documentation, data, and listings. System consists of two computer programs and definition of format for information storage. Comprehensive program used to create, read, and maintain TES files. TES developed to meet three goals: First, easy and efficient exchange of programs and other textual data between similar and dissimilar computer systems via magnetic tape. Second, provide transportable management system for textual information. Third, provide common user interface, over wide variety of computing systems, for all activities associated with text exchange.
Parallel checksumming of data chunks of a shared data object using a log-structured file system
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-09-06
Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
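A hedged sketch of the per-chunk checksumming described above: the client computes a checksum for each data chunk, stores it alongside the chunk on the storage node, and re-checks it when the chunk is read back. A dictionary stands in for the storage node; the real system targets PLFS on compute nodes or burst buffers.

```python
# Hedged sketch of write-time checksumming and read-time verification; the
# dict-based "storage node" and CRC32 choice are illustrative assumptions.

import zlib

storage_node = {}    # chunk_id -> (data, checksum), stand-in for the storage node

def write_chunk(chunk_id, data):
    storage_node[chunk_id] = (data, zlib.crc32(data))   # checksum travels with the chunk

def read_chunk(chunk_id):
    data, stored_crc = storage_node[chunk_id]
    if zlib.crc32(data) != stored_crc:                   # verify integrity on read
        raise IOError(f"checksum mismatch for chunk {chunk_id}")
    return data

write_chunk(0, b"burst buffer payload")
print(read_chunk(0))    # b'burst buffer payload'
```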
1982-01-29
Computer Program User's Manual for FIREFINDER Digital Topographic Data Verification Library Dubbing System, Volume II: Dubbing, 29 January 1982. Report documentation page; keywords: Software Library, FIREFINDER, Dubbing. Abstract (truncated): This manual describes the computer...
Surveillance of industrial processes with correlated parameters
White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.
1996-01-01
A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
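The patented method removes serial correlation and applies a probability ratio test; the hedged sketch below shows only the Mahalanobis-distance portion of the idea, flagging a multivariate sensor reading that falls far from the nominal operating distribution, with an assumed fixed threshold standing in for the probability ratio test.

```python
# Hedged sketch of Mahalanobis-distance surveillance over correlated sensors.
# The fixed threshold replaces the patent's probability ratio test and the
# nominal distribution is simulated, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
nominal = rng.multivariate_normal([10.0, 5.0], [[1.0, 0.6], [0.6, 1.0]], size=2000)

mean = nominal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(nominal, rowvar=False))

def mahalanobis(x):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

THRESHOLD = 3.5   # assumed alarm threshold in units of Mahalanobis distance

for reading in ([10.2, 5.1], [14.0, 1.0]):        # normal vs. anomalous sensor pair
    print(reading, mahalanobis(np.array(reading)) > THRESHOLD)
# [10.2, 5.1] False ... [14.0, 1.0] True
```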
Adaptive voting computer system
NASA Technical Reports Server (NTRS)
Koczela, L. J.; Wilgus, D. S. (Inventor)
1974-01-01
A computer system is reported that uses adaptive voting to tolerate failures and operates in a fail-operational, fail-safe manner. Each of four computers is individually connected to one of four external input/output (I/O) busses which interface with external subsystems. Each computer is connected to receive input data and commands from the other three computers and to furnish output data commands to the other three computers. An adaptive control apparatus including a voter-comparator-switch (VCS) is provided for each computer to receive signals from each of the computers and permits adaptive voting among the computers to permit the fail-operational, fail-safe operation.
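A minimal sketch of the adaptive-voting idea described above: each computer's output is compared against the others, the majority value wins, and a computer that repeatedly disagrees is voted out of the configuration (fail-operational, then fail-safe). The strike threshold and data structures are assumptions for illustration, not the VCS hardware design.

```python
# Hedged sketch of adaptive voting among four redundant computers.

from collections import Counter

DISAGREE_LIMIT = 3            # assumed: strikes before a channel is excluded
strikes = {0: 0, 1: 0, 2: 0, 3: 0}
active = {0, 1, 2, 3}

def vote(outputs):
    """outputs: dict {computer_id: value}. Returns the voted (majority) value."""
    votes = Counter(outputs[c] for c in active)
    value, _ = votes.most_common(1)[0]
    for c in list(active):                     # adapt: track and drop dissenters
        if outputs[c] != value:
            strikes[c] += 1
            if strikes[c] >= DISAGREE_LIMIT:
                active.discard(c)
    return value

for cycle in range(4):                         # computer 3 is stuck at a bad value
    print(vote({0: 42, 1: 42, 2: 42, 3: 99}), sorted(active))
# 42 [0, 1, 2, 3] on early cycles, then 42 [0, 1, 2] once computer 3 is voted out
```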
The impact of the pervasive information age on healthcare organizations.
Landry, Brett J L; Mahesh, Sathi; Hartman, Sandra J
2005-01-01
New information technologies place data on integrated information systems, and provide access via pervasive computing technologies. Pervasive computing puts computing power in the hands of all employees, available wherever it is needed. Integrated systems offer seamless data and process integration over diverse information systems. In this paper we look at the impact of these technologies on healthcare organizations in the future.
MIRADS-2 Implementation Manual
NASA Technical Reports Server (NTRS)
1975-01-01
The Marshall Information Retrieval and Display System (MIRADS), a data base management system designed to provide the user with a set of generalized file capabilities, is presented. The system provides a wide variety of ways to process the contents of the data base and includes capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process, which include (1) structuring, (2) creating, (3) defining, and (4) implementing the data base for use by MIRADS, are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. The MIRADS Library must be established as a cataloged mass storage file as the first step in MIRADS implementation, and the procedure for establishing the library is given. The system is currently operational for the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.
DEBRIS: a computer program for analyzing channel cross sections
Patrick Deenihan; Thomas E. Lisle
1988-01-01
DEBRIS is a menu-driven, interactive computer program written in FORTRAN 77 for recording and plotting survey data and for computing hydraulic variables and depths of scour and fill. It was developed for use with the USDA Forest Service's Data General computer system, with the AOS/VS operating system. By using menus, the operator does not need to know any...
DEBRIS: A computer program for analyzing channel cross sections
Patrick Deenihan; Thomas E. Lisle
1988-01-01
DEBRIS is a menu-driven, interactive computer program written in FORTRAN 77 for recording and plotting survey data and for computing hydraulic variables and depths of scour and fill. It was developed for use with the USDA Forest Service's Data General computer system, with the AOS/VS operating system. By using menus, the operator does not need to know any...
A direct-to-drive neural data acquisition system.
Kinney, Justin P; Bernstein, Jacob G; Meyer, Andrew J; Barber, Jessica B; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T; Kopell, Nancy J; Boyden, Edward S
2015-01-01
Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future.
A direct-to-drive neural data acquisition system
Kinney, Justin P.; Bernstein, Jacob G.; Meyer, Andrew J.; Barber, Jessica B.; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T.; Kopell, Nancy J.; Boyden, Edward S.
2015-01-01
Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future. PMID:26388740
Generation and physical characteristics of the ERTS MSS system corrected computer compatible tapes
NASA Technical Reports Server (NTRS)
Thomas, V. L.
1973-01-01
The generation and format of the ERTS system corrected multispectral scanner computer compatible tapes are discussed. The discussion includes spacecraft sensors, scene characteristics, data transmission, and conversion of data to computer compatible tapes at the NASA Data Processing Facility. Geometric and radiometric corrections, tape formats, and the physical characteristics of the tapes are also included.
ERA 1103 UNIVAC 2 Calculating Machine
1955-09-21
The new 10-by 10-Foot Supersonic Wind Tunnel at the Lewis Flight Propulsion Laboratory included high tech data acquisition and analysis systems. The reliable gathering of pressure, speed, temperature, and other data from test runs in the facilities was critical to the research process. Throughout the 1940s and early 1950s female employees, known as computers, recorded all test data and performed initial calculations by hand. The introduction of punch card computers in the late 1940s gradually reduced the number of hands-on calculations. In the mid-1950s new computational machines were installed in the office building of the 10-by 10-Foot tunnel. The new systems included this UNIVAC 1103 vacuum tube computer—the lab’s first centralized computer system. The programming was done on paper tape and fed into the machine. The 10-by 10 computer center also included the Lewis-designed Computer Automated Digital Encoder (CADDE) and Digital Automated Multiple Pressure Recorder (DAMPR) systems which converted test data to binary-coded decimal numbers and recorded test pressures automatically, respectively. The systems primarily served the 10-by 10, but were also applied to the other large facilities. Engineering Research Associates (ERA) developed the initial UNIVAC computer for the Navy in the late 1940s. In 1952 the company designed a commercial version, the UNIVAC 1103. The 1103 was the first computer designed by Seymour Cray and the first commercially successful computer.
AGIS: Integration of new technologies used in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria
2017-10-01
The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified storage protocols declaration required for PanDA Pilot site movers. The improvements of the information model and general updates are also shown; in particular we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
A service-oriented data access control model
NASA Astrophysics Data System (ADS)
Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali
2017-01-01
The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. Faced with complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models and building on the mandatory access control model, the paper proposes a service-oriented access control model. By regarding system services as the subject and database data as the object, the model defines access levels and access identification for subject and object, and ensures that system services access databases securely.
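A hedged sketch of the model outlined above: system services act as subjects, database data act as objects, and each side carries an access level and an access identification; a read is allowed only when the service's level and identification dominate the data's. The level names, category field, and dominance rule are assumptions, not the paper's exact definitions.

```python
# Hedged sketch of a mandatory-style, service-oriented access check.

LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_read(subject, obj):
    """Allow a read when the service's level dominates and the category matches."""
    return (LEVELS[subject["level"]] >= LEVELS[obj["level"]]
            and obj["category"] in subject["categories"])

billing_service = {"level": "confidential", "categories": {"finance"}}
payroll_table   = {"level": "internal", "category": "finance"}
patient_table   = {"level": "confidential", "category": "medical"}

print(can_read(billing_service, payroll_table))   # True
print(can_read(billing_service, patient_table))   # False: wrong access identification
```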
Natural resources information system.
NASA Technical Reports Server (NTRS)
Leachtenauer, J. C.; Woll, A. M.
1972-01-01
A computer-based Natural Resources Information System was developed for the Bureaus of Indian Affairs and Land Management. The system stores, processes and displays data useful to the land manager in the decision making process. Emphasis is placed on the use of remote sensing as a data source. Data input consists of maps, imagery overlays, and on-site data. Maps and overlays are entered using a digitizer and stored as irregular polygons, lines and points. Processing functions include set intersection, union and difference and area, length and value computations. Data output consists of computer tabulations and overlays prepared on a drum plotter.
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.
EMRlog method for computer security for electronic medical records with logic and data mining.
Martínez Monterrubio, Sergio Mauricio; Frausto Solis, Juan; Monroy Borja, Raúl
2015-01-01
The proper functioning of a hospital computer system is arduous work for managers and staff. However, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of all or part of the hospital data. This paper presents a new method named EMRlog for computer security systems in hospitals. EMRlog is focused on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information such as databases, applications, and medical records. Firstly, a syntactic verification step is applied by using predicate logic. Then data mining techniques are used to detect which security policies have really been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed, achieving a safer computer system.
EMRlog Method for Computer Security for Electronic Medical Records with Logic and Data Mining
Frausto Solis, Juan; Monroy Borja, Raúl
2015-01-01
The proper functioning of a hospital computer system is arduous work for managers and staff. However, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of all or part of the hospital data. This paper presents a new method named EMRlog for computer security systems in hospitals. EMRlog is focused on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information such as databases, applications, and medical records. Firstly, a syntactic verification step is applied by using predicate logic. Then data mining techniques are used to detect which security policies have really been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed, achieving a safer computer system. PMID:26495300
Controlling data transfers from an origin compute node to a target compute node
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2011-06-21
Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication.
Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.
ERIC Educational Resources Information Center
Skopec, Eric Wm.
Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…
DIALOG: An executive computer program for linking independent programs
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Watson, D. A.
1973-01-01
A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System are described at Johnson Space Center.
Argonne Out Loud: Computation, Big Data, and the Future of Cities
Catlett, Charlie
2018-01-16
Charlie Catlett, a Senior Computer Scientist at Argonne and Director of the Urban Center for Computation and Data at the Computation Institute of the University of Chicago and Argonne, talks about how he and his colleagues are using high-performance computing, data analytics, and embedded systems to better understand and design cities.
Data Storage and Transfer | High-Performance Computing | NREL
High-Performance Computing (HPC) systems. WinSCP for Windows File Transfers: use to transfer files from a local computer to a remote computer. Robinhood for File Management: use this tool to manage your data files on Peregrine.
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
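A toy sketch of the core parallel idea above: each processing module computes its contribution to the multi-dimensional image from its own subset of two-dimensional projection data, and the contributions are accumulated. The real apparatus uses source/target caches, a coefficient table, and a switching circuit; here plain processes and arrays stand in for them, and the "contribution" is a trivial back-smearing rather than a real reconstruction algorithm.

```python
# Toy sketch: parallel processing modules each produce a partial image
# contribution which is summed into the final image. Purely illustrative.

from multiprocessing import Pool
import numpy as np

N = 8   # assumed image size for illustration

def module_contribution(projection):
    """One processing module: smear a 1-D projection back across the image."""
    return np.tile(np.asarray(projection, dtype=float) / N, (N, 1))

if __name__ == "__main__":
    projections = [np.random.rand(N) for _ in range(4)]   # 2-D data sets (one row each)
    with Pool(4) as pool:                                  # modules configured in parallel
        parts = pool.map(module_contribution, projections)
    image = np.sum(parts, axis=0)                          # combine contributions
    print(image.shape)    # (8, 8)
```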
Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems
NASA Astrophysics Data System (ADS)
Rossiter, B. N.; Heather, M. A.
2004-08-01
Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world, but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data on to idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods like sketches, which arise from graph theory as only non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to be also natural. Natural computing methods, including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation, have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.
Computer Operating System Maintenance.
1982-06-01
The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access
Design of Remote GPRS-based Gas Data Monitoring System
NASA Astrophysics Data System (ADS)
Yan, Xiyue; Yang, Jianhua; Lu, Wei
2018-01-01
In order to solve the problem of remote data transmission from gas flowmeters and to realize unattended operation in the field, an unattended remote monitoring system based on GPRS for gas data is designed in this paper. The slave computer of this system adopts an embedded microprocessor to read data from the gas flowmeter through the RS-232 bus and transfers it to the host computer through a DTU. On the host computer, a VB program dynamically binds the Winsock control to receive and parse data. By using dynamic data exchange, the Kingview configuration software realizes history trend curves, real-time trend curves, alarms, printing, web browsing and other functions.
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
Parallel file system with metadata distributed across partitioned key-value store c
Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron
2017-09-19
Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
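A toy sketch of the partitioned metadata store described above: metadata for each sub-file is keyed (for example by logical offset), the key is hashed to pick a partition, and each partition lives on a different compute node. The real system uses MDHIM over a message-passing interface; plain dictionaries and a SHA-1 hash stand in for the per-node partitions here.

```python
# Hedged sketch of hashing sub-file metadata keys across node-local partitions.

import hashlib

N_PARTITIONS = 4
partitions = [dict() for _ in range(N_PARTITIONS)]   # one partition per compute node

def partition_for(key):
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_PARTITIONS

def put_metadata(key, value):
    partitions[partition_for(key)][key] = value       # "send" to the owning node

def get_metadata(key):
    return partitions[partition_for(key)].get(key)

put_metadata("shared.file/offset/0",       {"node": 3, "length": 1048576})
put_metadata("shared.file/offset/1048576", {"node": 7, "length": 524288})
print(get_metadata("shared.file/offset/0"))   # {'node': 3, 'length': 1048576}
```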
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2011 CFR
2011-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2013 CFR
2013-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2012 CFR
2012-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2014 CFR
2014-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2010 CFR
2010-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
Computer systems for annotation of single molecule fragments
Schwartz, David Charles; Severin, Jessica
2016-07-19
There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.
Computers as an Instrument for Data Analysis. Technical Report No. 11.
ERIC Educational Resources Information Center
Muller, Mervin E.
A review of statistical data analysis involving computers as a multi-dimensional problem provides the perspective for consideration of the use of computers in statistical analysis and the problems associated with large data files. An overall description of STATJOB, a particular system for doing statistical data analysis on a digital computer,…
76 FR 4456 - Privacy Act of 1974; Report of Modified or Altered System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
..., computer systems analysis and computer programming services. The contractors promptly return data entry... Use Disclosures of Data in the System The Privacy Act allows us to disclose information without an...) for which the information was collected. Any such compatible use of data is known as a ``routine use...
A Macintosh based data system for array spectrometers (Poster)
NASA Astrophysics Data System (ADS)
Bregman, J.; Moss, N.
An interactive data acquisition and reduction system has been assembled by combining a Macintosh computer with an instrument controller (an Apple II computer) via an RS-232 interface. The data system provides flexibility for operating different linear array spectrometers. The standard Macintosh interface is used to provide ease of operation and to allow transferring the reduced data to commercial graphics software.
Automated attendance accounting system
NASA Technical Reports Server (NTRS)
Chapman, C. P. (Inventor)
1973-01-01
An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressable buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as a temporary storage. Data from the terminals is applied to the data buffers on a digit by digit basis for transfer via a multiplexer to the computer.
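A toy sketch of the terminal-to-computer path the patent abstract describes: decimal digits keyed at a terminal are converted to binary values, accumulated digit by digit in a per-terminal buffer, and then collected by a multiplexer for the computer. The buffer layout and round-robin multiplexer are illustrative assumptions.

```python
# Hedged sketch of digit-by-digit buffering and multiplexing of terminal data.

buffers = {terminal_id: [] for terminal_id in range(4)}   # one buffer per data terminal

def key_digit(terminal_id, decimal_char):
    """Convert one keyed decimal digit to its binary value and buffer it."""
    buffers[terminal_id].append(int(decimal_char))         # e.g. '7' -> 0b0111

def multiplex():
    """Round-robin scan of the buffers, assembling each terminal's data word."""
    words = {}
    for terminal_id, digits in buffers.items():
        if digits:
            value = 0
            for d in digits:                                # digit-by-digit assembly
                value = value * 10 + d
            words[terminal_id] = value
            digits.clear()
    return words

for ch in "1234":
    key_digit(0, ch)
key_digit(2, "7")
print(multiplex())    # {0: 1234, 2: 7}
```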
Gardner, William; Morton, Suzanne; Byron, Sepheen C; Tinoco, Aldo; Canan, Benjamin D; Leonhart, Karen; Kong, Vivian; Scholle, Sarah Hudson
2014-01-01
Objective: To determine whether quality measures based on computer-extracted EHR data can reproduce findings based on data manually extracted by reviewers. Data Sources: We studied 12 measures of care indicated for adolescent well-care visits for 597 patients in three pediatric health systems. Study Design: Observational study. Data Collection/Extraction Methods: Manual reviewers collected quality data from the EHR. Site personnel programmed their EHR systems to extract the same data from structured fields in the EHR according to national health IT standards. Principal Findings: Overall performance measured via computer-extracted data was 21.9 percent, compared with 53.2 percent for manual data. Agreement measures were high for immunizations. Otherwise, agreement between computer extraction and manual review was modest (Kappa = 0.36) because computer-extracted data frequently missed care events (sensitivity = 39.5 percent). Measure validity varied by health care domain and setting. A limitation of our findings is that we studied only three domains and three sites. Conclusions: The accuracy of computer-extracted EHR quality reporting depends on the use of structured data fields, with the highest agreement found for measures and in the setting that had the greatest concentration of structured fields. We need to improve documentation of care, data extraction, and adaptation of EHR systems to practice workflow. PMID:24471935
Surveillance of industrial processes with correlated parameters
White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.
1996-12-17
A system and method for surveillance of an industrial process are disclosed. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.
The data base management system alternative for computing in the human services.
Sircar, S; Schkade, L L; Schoech, D
1983-01-01
The traditional incremental approach to computerization presents substantial problems as systems develop and grow. The Data Base Management System approach to computerization was developed to overcome the problems resulting from implementing computer applications one at a time. The authors describe the applications approach and the alternative Data Base Management System (DBMS) approach through their developmental history, discuss the technology of DBMS components, and consider the implications of choosing the DBMS alternative. Human service managers need an understanding of the DBMS alternative and its applicability to their agency data processing needs. The basis for a conscious selection of computing alternatives is outlined.
A personal computer-based, multitasking data acquisition system
NASA Technical Reports Server (NTRS)
Bailey, Steven A.
1990-01-01
A multitasking, data acquisition system was written to simultaneously collect meteorological radar and telemetry data from two sources. This system is based on the personal computer architecture. Data is collected via two asynchronous serial ports and is deposited to disk. The system is written in both the C programming language and assembler. It consists of three parts: a multitasking kernel for data collection, a shell with pull down windows as user interface, and a graphics processor for editing data and creating coded messages. An explanation of both system principles and program structure is presented.
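A hedged sketch of the same acquisition pattern in Python follows (the original system was written in C and assembler); it reads two asynchronous serial streams concurrently with threads and appends each line to its own file. Port names, baud rates, and file names are assumptions, and the pyserial package is required.

```python
# Hedged sketch: concurrent collection from two asynchronous serial ports,
# each stream appended to its own file. Not the original implementation.
import threading
import serial  # pyserial

def collect(port, baud, outfile, max_lines=1000):
    with serial.Serial(port, baud, timeout=1) as link, open(outfile, "ab") as out:
        for _ in range(max_lines):
            line = link.readline()      # returns b"" when the read times out
            if line:
                out.write(line)

threads = [threading.Thread(target=collect, args=("COM1", 9600, "radar.dat")),
           threading.Thread(target=collect, args=("COM2", 4800, "telemetry.dat"))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```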
Today's Personal Computers: Products for Every Need--Part II.
ERIC Educational Resources Information Center
Personal Computing, 1981
1981-01-01
Looks at microcomputers manufactured by Altos Computer Systems, Cromemco, Exidy, Intelligent Systems, Intertec Data Systems, Mattel, Nippon Electronics, Northstar, Personal Micro Computers, and Sinclair. (Part I of this article, examining other computers, appeared in the May 1981 issue.) Journal availability: Hayden Publishing Company, 50 Essex…
Pan Air Geometry Management System (PAGMS): A data-base management system for PAN AIR geometry data
NASA Technical Reports Server (NTRS)
Hall, J. F.
1981-01-01
A data-base management system called PAGMS was developed to facilitate the data transfer in applications computer programs that create, modify, plot or otherwise manipulate PAN AIR type geometry data in preparation for input to the PAN AIR system of computer programs. PAGMS is composed of a series of FORTRAN callable subroutines which can be accessed directly from applications programs. Currently only a NOS version of PAGMS has been developed.
Pinthong, Watthanai; Muangruen, Panya
2016-01-01
Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experiment data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, the HPC is expensive and difficult to access. Other means, such as cloud computing services and grid computing systems, were developed to allow researchers to acquire the power of HPC without a need to purchase and maintain one. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software. PMID:27547555
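As a minimal sketch of how such a workload might be partitioned for a grid, the following Python snippet splits a FASTA file of reads into fixed-size chunks so that each chunk can be dispatched as a separate BLAST work unit; file names and chunk size are assumptions, and the BOINC plumbing itself is omitted.

```python
# Minimal sketch: split a FASTA file of sequencing reads into fixed-size
# chunks so each chunk can become one grid work unit. File names are assumed.

def split_fasta(path, reads_per_chunk=100_000):
    chunk, count, part = [], 0, 0
    with open(path) as fasta:
        for line in fasta:
            if line.startswith(">") and count == reads_per_chunk:
                yield part, chunk               # emit a full chunk before starting the next read
                chunk, count, part = [], 0, part + 1
            if line.startswith(">"):
                count += 1
            chunk.append(line)
    if chunk:
        yield part, chunk

for part, lines in split_fasta("reads.fasta"):
    with open(f"workunit_{part:04d}.fasta", "w") as out:
        out.writelines(lines)
```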
BESIII Physical Analysis on Hadoop Platform
NASA Astrophysics Data System (ADS)
Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing
2014-06-01
In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. Jobs running on a traditional cluster with a Data-to-Computing structure have to read large volumes of data over the network to the computing nodes for analysis, making I/O latency a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability, and high fault tolerance, and it is well suited to dealing with Big Data. This paper brings the idea of using the MapReduce model to BESIII physics analysis and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis but also reduces the cost of system building. Moreover, this paper establishes an event pre-selection system based on the event-level metadata (TAGs) database to optimize the data analysis procedure.
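The following plain-Python sketch illustrates the MapReduce idea described above (it is not Hadoop code): the map step keeps only events whose TAG-level metadata pass a pre-selection cut, and the reduce step accumulates a per-run count. Field names and cuts are hypothetical.

```python
# Conceptual MapReduce sketch for TAG-based event pre-selection.
# Event fields and cut values are hypothetical.
from functools import reduce
from collections import Counter

events = [
    {"run": 101, "n_tracks": 4, "total_energy": 3.1},
    {"run": 101, "n_tracks": 2, "total_energy": 0.4},
    {"run": 102, "n_tracks": 5, "total_energy": 2.7},
]

def map_step(event):
    # emit (run, 1) only for events passing the TAG-level cuts
    if event["n_tracks"] >= 3 and event["total_energy"] > 1.0:
        return [(event["run"], 1)]
    return []

def reduce_step(acc, pair):
    acc[pair[0]] += pair[1]
    return acc

pairs = [p for e in events for p in map_step(e)]
print(reduce(reduce_step, pairs, Counter()))   # Counter({101: 1, 102: 1})
```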
Medical Signal-Conditioning and Data-Interface System
NASA Technical Reports Server (NTRS)
Braun, Jeffrey; Jacobus, Charles; Booth, Scott; Suarez, Michael; Smith, Derek; Hartnagle, Jeffrey; LePrell, Glenn
2006-01-01
A general-purpose portable, wearable electronic signal-conditioning and data-interface system is being developed for medical applications. The system can acquire multiple physiological signals (e.g., electrocardiographic, electroencephalographic, and electromyographic signals) from sensors on the wearer's body, digitize those signals that are received in analog form, preprocess the resulting data, and transmit the data to one or more remote location(s) via a radiocommunication link and/or the Internet. The system includes a computer running data-object-oriented software that can be programmed to configure the system to accept almost any analog or digital input signals from medical devices. The computing hardware and software implement a general-purpose data-routing-and-encapsulation architecture that supports tagging of input data and routing the data in a standardized way through the Internet and other modern packet-switching networks to one or more computer(s) for review by physicians. The architecture supports multiple-site buffering of data for redundancy and reliability, and supports both real-time and slower-than-real-time collection, routing, and viewing of signal data. Routing and viewing stations support insertion of automated analysis routines to aid in encoding, analysis, viewing, and diagnosis.
Computation of Flow Through Water-Control Structures Using Program DAMFLO.2
Sanders, Curtis L.; Feaster, Toby D.
2004-01-01
As part of its mission to collect, analyze, and store streamflow data, the U.S. Geological Survey computes flow through several dam structures throughout the country. Flows are computed using hydraulic equations that describe flow through sluice and Tainter gates, crest gates, lock gates, spillways, locks, pumps, and siphons, which are calibrated using flow measurements. The program DAMFLO.2 was written to compute, tabulate, and plot flow through dam structures using data that describe the physical properties of dams and various hydraulic parameters and ratings that use time-varying data, such as lake elevations or gate openings. The program uses electronic computer files of time-varying data, such as lake elevation or gate openings, retrieved from the U.S. Geological Survey Automated Data Processing System. Computed time-varying flow data from DAMFLO.2 are output in flat files, which can be entered into the Automated Data Processing System database. All computations are made in units of feet and seconds. DAMFLO.2 uses the procedures and language developed by the SAS Institute Inc.
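For illustration only (DAMFLO.2 itself is written in SAS), the following Python snippet evaluates a textbook free-flow sluice-gate equation, Q = Cd·A·sqrt(2gH), in the feet-and-seconds units mentioned above; the gate dimensions, head, and discharge coefficient are assumed values.

```python
# Illustrative computation only, not DAMFLO.2: free-flow discharge through
# a sluice gate from a textbook orifice equation, Q = Cd * A * sqrt(2*g*H).
import math

def sluice_gate_flow(gate_opening_ft, gate_width_ft, head_ft, cd=0.6):
    g = 32.2                                   # ft/s^2, consistent with feet-and-seconds units
    area = gate_opening_ft * gate_width_ft     # ft^2
    return cd * area * math.sqrt(2 * g * head_ft)   # ft^3/s

print(round(sluice_gate_flow(gate_opening_ft=2.0, gate_width_ft=10.0, head_ft=15.0), 1))
```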
Systems and methods for biometric identification using the acoustic properties of the ear canal
Bouchard, Ann Marie; Osbourn, Gordon Cecil
1998-01-01
The present invention teaches systems and methods for verifying or recognizing a person's identity based on measurements of the acoustic response of the individual's ear canal. The system comprises an acoustic emission device, which emits an acoustic source signal s(t), designated by a computer, into the ear canal of an individual, and an acoustic response detection device, which detects the acoustic response signal f(t). A computer digitizes the response (detected) signal f(t) and stores the data. Computer-implemented algorithms analyze the response signal f(t) to produce ear-canal feature data. The ear-canal feature data obtained during enrollment is stored on the computer, or some other recording medium, to compare the enrollment data with ear-canal feature data produced in a subsequent access attempt, to determine if the individual has previously been enrolled. The system can also be adapted for remote access applications.
Systems and methods for biometric identification using the acoustic properties of the ear canal
Bouchard, A.M.; Osbourn, G.C.
1998-07-28
The present invention teaches systems and methods for verifying or recognizing a person's identity based on measurements of the acoustic response of the individual's ear canal. The system comprises an acoustic emission device, which emits an acoustic source signal s(t), designated by a computer, into the ear canal of an individual, and an acoustic response detection device, which detects the acoustic response signal f(t). A computer digitizes the response (detected) signal f(t) and stores the data. Computer-implemented algorithms analyze the response signal f(t) to produce ear-canal feature data. The ear-canal feature data obtained during enrollment is stored on the computer, or some other recording medium, to compare the enrollment data with ear-canal feature data produced in a subsequent access attempt, to determine if the individual has previously been enrolled. The system can also be adapted for remote access applications. 5 figs.
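A hypothetical sketch of the enrollment/verification comparison step is shown below: it derives a spectral feature vector from the detected response f(t) and compares it with the stored enrollment vector. The feature choice, similarity measure, and threshold are assumptions, not the patented algorithm.

```python
# Hypothetical sketch of the enrollment/verification comparison step.
# The feature extraction and threshold are assumptions, not the patent's method.
import numpy as np

def ear_canal_features(response, n_bins=32):
    spectrum = np.abs(np.fft.rfft(response))[:n_bins]
    return spectrum / np.linalg.norm(spectrum)      # unit-norm magnitude spectrum

def verify(enrolled, attempt, threshold=0.95):
    return float(np.dot(enrolled, attempt)) >= threshold   # cosine similarity test

rng = np.random.default_rng(1)
enrolled_signal = rng.normal(size=1024)                     # stand-in for the enrolled response f(t)
attempt_signal = enrolled_signal + 0.05 * rng.normal(size=1024)
print(verify(ear_canal_features(enrolled_signal), ear_canal_features(attempt_signal)))
```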
Laboratory data base for isomer-specific determination of polychlorinated biphenyls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, T.R.; Campbell, R.D.; Stalling, D.L.
1984-07-01
A computer-assisted technique for quantitative determination of polychlorinated biphenyl isomers is described. PCB isomers were identified by use of a retention index system with n-alkyl trichloroacetates as retention index marker compounds. A laboratory data base system was developed to aid in editing and quantitation of data generated from capillary gas chromatographic data. Data base management was provided by computer programs written in DSM-11 (Digital Standard MUMPS) for the PDP-11 family of computers. 13 references, 4 figures, 2 tables.
A system for the input and storage of data in the Besm-6 digital computer
NASA Technical Reports Server (NTRS)
Schmidt, K.; Blenke, L.
1975-01-01
Computer programs used for the decoding and storage of large volumes of data on the BESM-6 computer are described. The following factors are discussed: the programming control language allows the programs to be run as part of a modular programming system used in data processing; data control is executed in a hierarchically built file on magnetic tape with sequential index storage; and the programs are not dependent on the structure of the data.
Middleware for big data processing: test results
NASA Astrophysics Data System (ADS)
Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.
2017-12-01
Dealing with large volumes of data is resource-consuming work which is more and more often delegated not only to a single computer but also to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when some particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware which works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system for a given workload. In this paper we introduce such a middleware appliance. Tests show that this middleware is well-suited for typical HPC and big data workloads and its performance is comparable with well-known alternatives.
Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application
NASA Astrophysics Data System (ADS)
Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.
2013-12-01
The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni Time-Series analysis of aerosol absorption optical depth (388nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and local system to avoid data transfer delays. The 3-, 6-, 12-, and 24-month data were used for analysis on the Cloud and local system respectively, and the processing times for the analysis were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of: computing, storage, data requests, and data transfer in/out. The Cloud computing cost is calculated based on the hourly rate, and the storage cost is calculated based on the rate of Gigabytes per month. Cost for incoming data transfer is free, and for data transfer out, the cost is based on the rate in Gigabytes. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating cost. The results showed that the Cloud platform had a 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
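A back-of-the-envelope sketch of the cost categories described above follows; all rates and quantities are placeholders rather than the study's figures.

```python
# Back-of-the-envelope sketch of the cloud cost categories: compute hours,
# storage, and outbound transfer (inbound transfer is free). Rates are placeholders.
def monthly_cloud_cost(compute_hours, storage_gb, transfer_out_gb,
                       hourly_rate=0.25, storage_rate_gb_month=0.10,
                       transfer_out_rate_gb=0.09):
    compute = compute_hours * hourly_rate
    storage = storage_gb * storage_rate_gb_month
    transfer = transfer_out_gb * transfer_out_rate_gb
    return compute + storage + transfer

print(monthly_cloud_cost(compute_hours=720, storage_gb=500, transfer_out_gb=200))
```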
NASA Technical Reports Server (NTRS)
1972-01-01
The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.
Study of ephemeris accuracy of the minor planets. [using computer based data systems
NASA Technical Reports Server (NTRS)
Brooks, D. R.; Cunningham, L. E.
1974-01-01
The current state of minor planet ephemerides was assessed, and the means for providing and updating these ephemerides for use by both the mission planner and the astronomer were developed. A system of obtaining data for all the numbered minor planets was planned, and computer programs for its initial mechanization were developed. The computer based system furnishes the osculating elements for all of the numbered minor planets at an adopted date of October 10, 1972, and at every 400 day interval over the years of interest. It also furnishes the perturbations in the rectangular coordinates relative to the osculating elements at every 4 day interval. Another computer program was designed and developed to integrate the perturbed motion of a group of 50 minor planets simultaneously. Sampled data resulting from the operation of the computer based systems are presented.
Geoinformation web-system for processing and visualization of large archives of geo-referenced data
NASA Astrophysics Data System (ADS)
Gordov, E. P.; Okladnikov, I. G.; Titov, A. G.; Shulgina, T. M.
2010-12-01
Developed working model of information-computational system aimed at scientific research in area of climate change is presented. The system will allow processing and analysis of large archives of geophysical data obtained both from observations and modeling. Accumulated experience of developing information-computational web-systems providing computational processing and visualization of large archives of geo-referenced data was used during the implementation (Gordov et al, 2007; Okladnikov et al, 2008; Titov et al, 2009). Functional capabilities of the system comprise a set of procedures for mathematical and statistical analysis, processing and visualization of data. At present five archives of data are available for processing: 1st and 2nd editions of NCEP/NCAR Reanalysis, ECMWF ERA-40 Reanalysis, JMA/CRIEPI JRA-25 Reanalysis, and NOAA-CIRES XX Century Global Reanalysis Version I. To provide data processing functionality a computational modular kernel and class library providing data access for computational modules were developed. Currently a set of computational modules for climate change indices approved by WMO is available. Also a special module providing visualization of results and writing to Encapsulated Postscript, GeoTIFF and ESRI shape files was developed. As a technological basis for representation of cartographical information in Internet the GeoServer software conforming to OpenGIS standards is used. Integration of GIS-functionality with web-portal software to provide a basis for web-portal’s development as a part of geoinformation web-system is performed. Such geoinformation web-system is a next step in development of applied information-telecommunication systems offering to specialists from various scientific fields unique opportunities of performing reliable analysis of heterogeneous geophysical data using approved computational algorithms. It will allow a wide range of researchers to work with geophysical data without specific programming knowledge and to concentrate on solving their specific tasks. The system would be of special importance for education in climate change domain. This work is partially supported by RFBR grant #10-07-00547, SB RAS Basic Program Projects 4.31.1.5 and 4.31.2.7, SB RAS Integration Projects 4 and 9.
Data management in engineering
NASA Technical Reports Server (NTRS)
Browne, J. C.
1976-01-01
An introduction to computer based data management is presented with an orientation toward the needs of engineering application. The characteristics and structure of data management systems are discussed. A link to familiar engineering applications of computing is established through a discussion of data structure and data access procedures. An example data management system for a hypothetical engineering application is presented.
Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.
1983-01-01
The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.
White, Timothy C.; Sauter, Edward A.; Stewart, Duff C.
2014-01-01
Intermagnet is an international oversight group which exists to establish a global network for geomagnetic observatories. This group establishes data standards and standard operating procedures for members and prospective members. Intermagnet has proposed a new One-Second Data Standard, for that emerging geomagnetic product. The standard specifies that all data collected must have a time stamp accuracy of ±10 milliseconds of the top-of-the-second Coordinated Universal Time. Therefore, the U.S. Geological Survey Geomagnetism Program has designed and executed several tests on its current data collection system, the Personal Computer Data Collection Platform. Tests are designed to measure the time shifts introduced by individual components within the data collection system, as well as to measure the time shift introduced by the entire Personal Computer Data Collection Platform. Additional testing designed for Intermagnet will be used to validate further such measurements. Current results of the measurements showed a 5.0–19.9 millisecond lag for the vertical channel (Z) of the Personal Computer Data Collection Platform and a 13.0–25.8 millisecond lag for horizontal channels (H and D) of the collection system. These measurements represent a dynamically changing delay introduced within the U.S. Geological Survey Personal Computer Data Collection Platform.
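As a simple illustration of the lag figures quoted above, the following sketch estimates the offset of recorded sample time stamps from the top-of-second reference in milliseconds; the sample values are made up, and the real tests measure hardware-level delays rather than software time stamps.

```python
# Illustrative sketch: lag of recorded sample time stamps relative to the
# top-of-second reference, in milliseconds. Sample values are invented.
def lag_ms(sample_timestamps):
    """Each timestamp is seconds since the epoch; lag is the offset past the whole second."""
    return [round((t - int(t)) * 1000.0, 1) for t in sample_timestamps]

print(lag_ms([1000.0132, 1001.0129, 1002.0141]))   # [13.2, 12.9, 14.1] milliseconds
```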
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
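The following simplified illustration shows syndrome-based error detection with a Hamming(7,4) code over GF(2); it is not the symbol-oriented chip-kill code of the patent, but it shows the zero/non-zero syndrome logic described above.

```python
# Simplified illustration of syndrome-based error detection (Hamming(7,4)
# over GF(2)): a zero syndrome means no error, a non-zero syndrome points
# at the flipped bit. Not the patented symbol-oriented chip-kill code.
import numpy as np

# Parity-check matrix H: column j is the 3-bit binary representation of j+1,
# so a single-bit error at position j+1 yields a syndrome equal to that column.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    return H @ word % 2

clean = np.array([0, 1, 1, 0, 0, 1, 1])       # a valid Hamming(7,4) codeword
corrupted = clean.copy()
corrupted[4] ^= 1                              # flip the bit at position 5 (1-based)

print(syndrome(clean))       # [0 0 0] -> no error
print(syndrome(corrupted))   # [1 0 1] -> binary 5, identifying the flipped bit
```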
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., computational, and systems biology data can better inform risk assessment. This draft document is available for...
NASA Technical Reports Server (NTRS)
STACK S. H.
1981-01-01
A computer-aided design system has recently been developed specifically for the small research group environment. The system is implemented on a Prime 400 minicomputer linked with a CDC 6600 computer. The goal was to assign the minicomputer specific tasks, such as data input and graphics, thereby reserving the large mainframe computer for time-consuming analysis codes. The basic structure of the design system consists of GEMPAK, a computer code that generates detailed configuration geometry from a minimum of input; interface programs that reformat GEMPAK geometry for input to the analysis codes; and utility programs that simplify computer access and data interpretation. The working system has had a large positive impact on the quantity and quality of research performed by the originating group. This paper describes the system, the major factors that contributed to its particular form, and presents examples of its application.
Improved Interactive Medical-Imaging System
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Twombly, Ian A.; Senger, Steven
2003-01-01
An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views; for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-06-30
Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
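A minimal sketch of the files-to-objects idea follows, assuming boto3 and an S3-compatible object store; the bucket name and paths are placeholders, and the PLFS-specific log-structured handling is omitted.

```python
# Minimal sketch of the "archived files to cloud objects" idea, assuming an
# S3-compatible store via boto3. Bucket name and paths are placeholders.
import os
import boto3

def archive_checkpoints(checkpoint_dir, bucket="my-hpc-archive"):
    s3 = boto3.client("s3")
    for name in os.listdir(checkpoint_dir):
        path = os.path.join(checkpoint_dir, name)
        with open(path, "rb") as f:
            # one object per archived file; the key preserves the file name
            s3.put_object(Bucket=bucket, Key=f"checkpoints/{name}", Body=f.read())

archive_checkpoints("/scratch/job123/checkpoints")
```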
Towards a Multi-Mission, Airborne Science Data System Environment
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Hardman, S.; Law, E.; Freeborn, D.; Kay-Im, E.; Lau, G.; Oswald, J.
2011-12-01
NASA earth science instruments are increasingly relying on airborne missions. However, traditionally, there has been limited common infrastructure support available to principal investigators in the area of science data systems. As a result, each investigator has been required to develop their own computing infrastructures for the science data system. Typically there is little software reuse and many projects lack sufficient resources to provide a robust infrastructure to capture, process, distribute and archive the observations acquired from airborne flights. At NASA's Jet Propulsion Laboratory (JPL), we have been developing a multi-mission data system infrastructure for airborne instruments called the Airborne Cloud Computing Environment (ACCE). ACCE encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis in order to improve the quality, cost effectiveness, and capabilities to enable new scientific discovery and research in earth observation. This includes improving data system interoperability across each instrument. A principal characteristic is being able to provide an agile infrastructure that is architected to allow for a variety of configurations of the infrastructure from locally installed compute and storage services to provisioning those services via the "cloud" from cloud computer vendors such as Amazon.com. Investigators often have different needs that require a flexible configuration. The data system infrastructure is built on the Apache's Object Oriented Data Technology (OODT) suite of components which has been used for a number of spaceborne missions and provides a rich set of open source software components and services for constructing science processing and data management systems. In 2010, a partnership was formed between the ACCE team and the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to support the data processing and data management needs. A principal goal is to provide support for the Fourier Transform Spectrometer (FTS) instrument which will produce over 700,000 soundings over the life of their three-year mission. The cost to purchase and operate a cluster-based system in order to generate Level 2 Full Physics products from this data was prohibitive. Through an evaluation of cloud computing solutions, Amazon's Elastic Compute Cloud (EC2) was selected for the CARVE deployment. As the ACCE infrastructure is developed and extended to form an infrastructure for airborne missions, the experience of working with CARVE has provided a number of lessons learned and has proven to be important in reinforcing the unique aspects of airborne missions and the importance of the ACCE infrastructure in developing a cost effective, flexible multi-mission capability that leverages emerging capabilities in cloud computing, workflow management, and distributed computing.
On-line computer system for use with low-energy nuclear physics experiments is reported
NASA Technical Reports Server (NTRS)
Gemmell, D. S.
1969-01-01
Computer program handles data from low-energy nuclear physics experiments which utilize the ND-160 pulse-height analyzer and the PHYLIS computing system. The program allows experimenters to choose from about 50 different basic data-handling functions and to prescribe the order in which these functions will be performed.
The Radio Frequency Health Node Wireless Sensor System
NASA Technical Reports Server (NTRS)
Valencia, J. Emilio; Stanley, Priscilla C.; Mackey, Paul J.
2009-01-01
The Radio Frequency Health Node (RFHN) wireless sensor system differs from other wireless sensor systems in ways originally intended to enhance utility as an instrumentation system for a spacecraft. The RFHN can also be adapted for use in terrestrial applications in which there are requirements for operational flexibility and integrability into higher-level instrumentation and data acquisition systems. As shown in the figure, the heart of the system is the RFHN, which is a unit that passes commands and data between (1) one or more commercially available wireless sensor units (optionally, also including wired sensor units) and (2) command and data interfaces with a local control computer that may be part of the spacecraft or other engineering system in which the wireless sensor system is installed. In turn, the local control computer can be in radio or wire communication with a remote control computer that may be part of a higher-level system. The remote control computer, acting via the local control computer and the RFHN, can not only monitor readout data from the sensor units but can also remotely configure (program or reprogram) the RFHN and the sensor units during operation. In a spacecraft application, the RFHN and the sensor units can also be configured more nearly directly, prior to launch, via a serial interface that includes an umbilical cable between the spacecraft and ground support equipment. In either case, the RFHN wireless sensor system has the flexibility to be configured, as required, with different numbers and types of sensors for different applications. The RFHN can be used to effect real-time transfer of data from, and commands to, the wireless sensor units. It can also store data for later retrieval by an external computer. The RFHN communicates with the wireless sensor units via a radio transceiver module. The modular design of the RFHN makes it possible to add radio transceiver modules as needed to accommodate additional sets of wireless sensor units. The RFHN includes a core module that performs generic computer functions, including management of power and input, output, processing, and storage of data. In a typical application, the processing capabilities in the RFHN are utilized to perform preprocessing, trending, and fusion of sensor data. The core module also serves as the unit through which the remote control computer configures the sensor units and the rest of the RFHN.
ERIC Educational Resources Information Center
Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.
This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…
Integrating computer programs for engineering analysis and design
NASA Technical Reports Server (NTRS)
Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.
1983-01-01
The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.
The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform
NASA Astrophysics Data System (ADS)
Xie, Qingyun
2016-06-01
This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-23
... Data and Computer Software AGENCY: Defense Acquisition Regulations System; Department of Defense (DoD... in Technical Data, and Subpart 227.72, Rights in Computer Software and Computer Software... are associated with rights in technical data and computer software. DoD needs this information to...
Analyzing high energy physics data using database computing: Preliminary report
NASA Technical Reports Server (NTRS)
Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry
1991-01-01
A proof of concept system is described for analyzing high energy physics (HEP) data using data base computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting SuperCollider (SSC) lab. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year during proton colliding beam collisions. Each 'event' consists of a set of vectors with a total length of approx. one megabyte. This represents an increase of approx. 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is completed, and can produce analysis of HEP experimental data approx. an order of magnitude faster than current production software on data sets of approx. 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.
Digital data storage systems, computers, and data verification methods
Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.
2005-12-27
Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
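A minimal sketch of the two-hash comparison follows, assuming the monitored portion of the database can be serialized to bytes at each moment in time; the row data are illustrative.

```python
# Minimal sketch of the two-hash comparison: hash the monitored portion of a
# dynamic database at two moments in time and compare the digests.
import hashlib

def portion_hash(rows):
    digest = hashlib.sha256()
    for row in sorted(rows):                 # fixed ordering so equal content hashes equally
        digest.update(repr(row).encode())
    return digest.hexdigest()

initial = portion_hash([("id1", "alpha"), ("id2", "beta")])
later   = portion_hash([("id1", "alpha"), ("id2", "beta-modified")])
print("unchanged" if initial == later else "portion was modified")
```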
High-Level Data-Abstraction System
NASA Technical Reports Server (NTRS)
Fishwick, P. A.
1986-01-01
Communication with data-base processor flexible and efficient. High Level Data Abstraction (HILDA) system is three-layer system supporting data-abstraction features of Intel data-base processor (DBP). Purpose of HILDA establishment of flexible method of efficiently communicating with DBP. Power of HILDA lies in its extensibility with regard to syntax and semantic changes. HILDA's high-level query language readily modified. Offers powerful potential to computer sites where DBP attached to DEC VAX-series computer. HILDA system written in Pascal and FORTRAN 77 for interactive execution.
Data Integration in Computer Distributed Systems
NASA Astrophysics Data System (ADS)
Kwiecień, Błażej
In this article the author analyzes the problem of data integration in distributed computer systems. The exchange of information between the different levels of the integrated enterprise process pyramid is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always uniform because of differences in network protocols, communication media, system response times, etc.
System and Method for Providing a Climate Data Persistence Service
NASA Technical Reports Server (NTRS)
Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)
2018-01-01
A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.
System and Method for Monitoring Distributed Asset Data
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2015-01-01
A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
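A conceptual sketch of the per-asset regression idea follows: fit a regression of output against operating condition, then flag records whose residual departs from that asset's norm. The data and the 3-sigma rule are illustrative choices, not the patented method.

```python
# Conceptual sketch: per-asset regression of output vs. operating condition,
# with residual-based anomaly flagging. Data and the 3-sigma rule are illustrative.
import numpy as np

def flag_anomalies(condition, output, n_sigma=3.0):
    slope, intercept = np.polyfit(condition, output, 1)   # simple performance model
    residuals = output - (slope * condition + intercept)
    limit = n_sigma * residuals.std()
    return np.abs(residuals) > limit

rng = np.random.default_rng(7)
load = rng.uniform(0, 100, size=200)
temp = 0.8 * load + 20 + rng.normal(0, 1.0, size=200)
temp[50] += 15                                            # inject one anomalous record
print(np.nonzero(flag_anomalies(load, temp))[0])          # -> [50]
```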
Non-volatile memory for checkpoint storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumrich, Matthias A.; Chen, Dong; Cipolla, Thomas M.
A system, method and computer program product for supporting system initiated checkpoints in high performance parallel computing systems and storing of checkpoint data to a non-volatile memory storage device. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card.
A computer system for processing data from routine pulmonary function tests.
Pack, A I; McCusker, R; Moran, F
1977-01-01
In larger pulmonary function laboratories there is a need for computerised techniques of data processing. A flexible computer system, which is used routinely, is described. The system processes data from a relatively large range of tests. Two types of output are produced--one for laboratory purposes, and one for return to the referring physician. The system adds an automatic interpretative report for each set of results. In developing the interpretative system it has been necessary to utilise a number of arbitrary definitions. The present terminology for reporting pulmonary function tests has limitations. The computer interpretation system affords the opportunity to take account of known interaction between measurements of function and different pathological states. PMID:329462
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
NASA Technical Reports Server (NTRS)
Bechtel, R. D.; Mateos, M. A.; Lincoln, K. A.
1988-01-01
Briefly described are the essential features of a computer program designed to interface a personal computer with the fast, digital data acquisition system of a time-of-flight mass spectrometer. The instrumentation was developed to provide a time-resolved analysis of individual vapor pulses produced by the incidence of a pulsed laser beam on an ablative material. The high repetition rate spectrometer coupled to a fast transient recorder captures complete mass spectra every 20 to 35 microsecs, thereby providing the time resolution needed for the study of this sort of transient event. The program enables the computer to record the large amount of data generated by the system in short time intervals, and it provides the operator the immediate option of presenting the spectral data in several different formats. Furthermore, the system does this with a high degree of automation, including the tasks of mass labeling the spectra and logging pertinent instrumental parameters.
Main control computer security model of closed network systems protection against cyber attacks
NASA Astrophysics Data System (ADS)
Seymen, Bilal
2014-06-01
The model brings data input/output under control in closed network systems, maintains the system securely, and controls the flow of information through a Main Control Computer, which also protects the network traffic against cyber-attacks. The network can be controlled single-handedly thanks to a design that enables network users to enter data into the system or extract data from it securely, with the aim of minimizing security gaps. Moreover, data input/output records can be kept by means of the user account assigned to each user, and retroactive tracking can be carried out if requested. Because the measures that would otherwise need to be taken for each computer on the network to ensure cyber security are costly, the model is intended to provide a cost-effective working environment in which only the Main Control Computer requires updated hardware.
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications, as well as infrastructure, as services over the Internet. A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring changes in system status, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design the Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation, as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
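A hedged sketch of the monitoring loop alone is shown below (the analysis and reconfiguration steps are application-specific); it samples CPU and memory with the psutil package and hands each reading to a callback.

```python
# Hedged sketch of a run-time monitoring loop: sample CPU and memory with
# psutil and pass each reading to a callback. Analysis/adaptation is omitted.
import time
import psutil

def run_time_monitor(report, interval_s=5, samples=3):
    for _ in range(samples):
        report({
            "cpu_percent": psutil.cpu_percent(interval=None),
            "memory_percent": psutil.virtual_memory().percent,
        })
        time.sleep(interval_s)

run_time_monitor(print, interval_s=1)
```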
Holo-Chidi video concentrator card
NASA Astrophysics Data System (ADS)
Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.
2001-12-01
The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi is designed at the MIT Media Laboratory for real-time computation of computer generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards - the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and for higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram computing Processor cards, converting the digital data to analog form to feed the acousto-optic-modulators of the Media lab's Mark-II holographic display system. The Video Concentrator card is made of: a High-Speed I/O (HSIO) interface whence data is transferred from the hologram computing Processor cards, a set of FIFOs and video RAM used as buffer for data for the hololines being displayed, a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port, a co-processor which controls display data formatting, and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 Megabytes of computed holographic data can flow from the Processor Cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame buffering capacity to hold up to thirty-two 36-Megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port. Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The Card also generates the control signals used by the scanning mirrors of Mark-II. In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.
Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermi Research Alliance; Northern Illinois University
2015-07-15
Proton computed tomography (pCT) offers an alternative to x-ray imaging with potential for three-dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second-generation proton computed tomography system with a goal of demonstrating the feasibility of three-dimensional imaging within clinically realistic imaging times. The second-generation pCT system is comprised of a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. The schematic layout of the pCT system is shown. The data acquisition sends the proton scattering information to an offline computing farm. Major innovations of the second generation pCT project involve an increased data acquisition rate (MHz range) and development of three-dimensional imaging algorithms. The Fermilab Particle Physics Division and Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics and detector mounting system.
Description of data base management systems activities
NASA Technical Reports Server (NTRS)
1983-01-01
One of the major responsibilities of the JPL Computing and Information Services Office is to develop and maintain a JPL plan for providing computing services to the JPL management and administrative community that will lead to improved productivity. The CISO plan to accomplish this objective has been titled 'Management and Administrative Support Systems' (MASS). The MASS plan is based on the continued use of JPL's IBM 3032 Computer system for administrative computing and for the MASS functions. The current candidate administrative Data Base Management Systems required to support the MASS include ADABASE, Cullinane IDMS and TOTAL. Previous uses of administrative Data Base Systems have been applied to specific local functions rather than in a centralized manner with elements common to the many user groups. Limited capacity data base systems have been installed in microprocessor based office automation systems in a few Project and Management Offices using Ashton-Tate dBASE II. These experiences plus some other localized in house DBMS uses have provided an excellent background for developing user and system requirements for a single DBMS to support the MASS program.
User guide to a command and control system; a part of a prelaunch wind monitoring program
NASA Technical Reports Server (NTRS)
Cowgill, G. R.
1976-01-01
A set of programs called the Command and Control System (CCS) is described; the document is intended as a user manual for the operation of CCS by the personnel supporting the wind monitoring portion of the launch mission. Wind data obtained by tracking balloons are sent electronically over telephone lines to other locations. Steering commands for the on-board computer are computed by a system called ADDJUST, which relays this data. Data are received and automatically stored in a microprocessor, then transferred via a real-time program to the UNIVAC 1100/40 computer. At this point the data are available for use by the Command and Control System.
Bringing MapReduce Closer To Data With Active Drives
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.
2017-12-01
Moving computation closer to the data location has been a much theorized improvement to computation for decades. The increase in processor performance, the decrease in processor size and power requirement combined with the increase in data intensive computing has created a push to move computation as close to data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, have been proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk allowing for computations to be performed on them without the need to transfer the data to the computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general purpose computing with comparable results to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive based approach significantly reduces the amount of data leaving the drive when performing several common algorithms: subsetting and gridding. We will show that an Active Drive based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS while at the same time keeping comparable compute speeds as Hadoop.
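The following plain-Python sketch illustrates the gridding example: the map step assigns each observation to a grid cell and the reduce step averages the values per cell; in the Active Drive setting only the small per-cell averages would need to leave the drive. The point data and cell size are invented.

```python
# Plain-Python sketch of MapReduce-style gridding: map each observation to a
# grid cell, then reduce by averaging the values per cell. Data are invented.
from collections import defaultdict

points = [(34.2, -118.1, 21.0), (34.7, -118.9, 23.5), (35.1, -117.2, 19.0)]  # lat, lon, value

def map_to_cell(lat, lon, value, cell_deg=1.0):
    return (int(lat // cell_deg), int(lon // cell_deg)), value

def reduce_cells(mapped):
    sums = defaultdict(lambda: [0.0, 0])
    for cell, value in mapped:
        sums[cell][0] += value
        sums[cell][1] += 1
    return {cell: total / count for cell, (total, count) in sums.items()}

print(reduce_cells(map_to_cell(*p) for p in points))
```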
Proceedings: Computer Science and Data Systems Technical Symposium, volume 1
NASA Technical Reports Server (NTRS)
Larsen, Ronald L.; Wallgren, Kenneth
1985-01-01
Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form are included for topics in three categories: computer science, data systems and space station applications.
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
A Cloud-based Infrastructure and Architecture for Environmental System Research
NASA Astrophysics Data System (ADS)
Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.
2016-12-01
The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community, and to provide unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data, and even data submission workflows in a straightforward fashion. The infrastructure minimizes disruption to current project-based data submission workflows, for better acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. The infrastructure also eliminates the scalability problems of current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.
AGIS: The ATLAS Grid Information System
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration
2014-06-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
A system for automatic analysis of blood pressure data for digital computer entry
NASA Technical Reports Server (NTRS)
Miller, R. L.
1972-01-01
Operation of automatic blood pressure data system is described. Analog blood pressure signal is analyzed by three separate circuits, systolic, diastolic, and cycle defect. Digital computer output is displayed on teletype paper tape punch and video screen. Illustration of system is included.
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
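As a hedged illustration of the point-correspondence step in the pipeline above, the sketch below matches feature points between a stereo pair by nearest neighbour on simple descriptors; the descriptors, the distance threshold, and the synthetic data are assumptions, and the paper's non-iterative matching rules and motion-parameter computation are not reproduced.

    # Hedged sketch of the point-correspondence step: match feature points
    # between a stereo pair by nearest neighbour on simple descriptors.
    # Descriptors and thresholds are toy assumptions, not the paper's method.
    import numpy as np

    def match_points(desc_left, desc_right, max_dist=0.5):
        """Return index pairs (i, j) matching left features to right features."""
        matches = []
        for i, d in enumerate(desc_left):
            dists = np.linalg.norm(desc_right - d, axis=1)
            j = int(np.argmin(dists))
            if dists[j] < max_dist:
                matches.append((i, j))
        return matches

    rng = np.random.default_rng(0)
    left = rng.random((5, 8))                          # 5 features, 8-d descriptors
    right = left + rng.normal(0, 0.01, left.shape)     # slightly perturbed copies
    print(match_points(left, right))                   # expect [(0, 0), (1, 1), ...]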
NASA Technical Reports Server (NTRS)
Grantham, C.
1979-01-01
The Interactive Software Invocation System (ISIS), an interactive data management system, was developed to act as a buffer between the user and the host computer system. ISIS provides the user with a powerful system for developing software or systems in the interactive environment. The user is protected from the idiosyncrasies of the host computer system by providing such a complete range of capabilities that the user should have no need for direct access to the host computer. These capabilities are divided into four areas: desk top calculator, data editor, file manager, and tool invoker.
ERIC Educational Resources Information Center
Palme, Jacob
The four papers contained in this document provide: (1) a survey of computer based mail and conference systems; (2) an evaluation of systems for both individually addressed mail and group addressing through conferences and distribution lists; (3) a discussion of various methods of structuring the text data in existing systems; and (4) a…
GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data
NASA Astrophysics Data System (ADS)
Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.
2016-12-01
Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is built on the latest virtualized computing infrastructures and a distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that apply to other domains with spatial properties. We tested the performance of the platform with a taxi trajectory analysis. Results suggest that GISpark achieves excellent run-time performance in spatiotemporal big data applications.
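GISpark's own API is not shown in the abstract; the following generic PySpark sketch only illustrates the Spark-based style of computation it builds on, using a grid-cell aggregation of taxi trajectory points. The input path and the column names (lon, lat) are assumptions.

    # Generic PySpark sketch of grid-cell aggregation over taxi trajectory points.
    # The input path and column names are assumptions; this illustrates the
    # Spark-based style of computation, not GISpark's actual API.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("trajectory-grid").getOrCreate()

    points = spark.read.csv("hdfs:///data/taxi_points.csv", header=True, inferSchema=True)

    # Bin each GPS point into a 0.01-degree grid cell and count points per cell.
    cells = (points
             .withColumn("cell_x", F.floor(F.col("lon") / 0.01))
             .withColumn("cell_y", F.floor(F.col("lat") / 0.01))
             .groupBy("cell_x", "cell_y")
             .count()
             .orderBy(F.desc("count")))

    cells.show(10)
    spark.stop()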
48 CFR 27.404-2 - Limited rights data and restricted computer software.
Code of Federal Regulations, 2011 CFR
2011-10-01
... restricted computer software. 27.404-2 Section 27.404-2 Federal Acquisition Regulations System FEDERAL... Copyrights 27.404-2 Limited rights data and restricted computer software. (a) General. The basic clause at 52... restricted computer software by withholding the data from the Government and instead delivering form, fit...
48 CFR 27.404-2 - Limited rights data and restricted computer software.
Code of Federal Regulations, 2014 CFR
2014-10-01
... restricted computer software. 27.404-2 Section 27.404-2 Federal Acquisition Regulations System FEDERAL... Copyrights 27.404-2 Limited rights data and restricted computer software. (a) General. The basic clause at 52... restricted computer software by withholding the data from the Government and instead delivering form, fit...
48 CFR 27.404-2 - Limited rights data and restricted computer software.
Code of Federal Regulations, 2012 CFR
2012-10-01
... restricted computer software. 27.404-2 Section 27.404-2 Federal Acquisition Regulations System FEDERAL... Copyrights 27.404-2 Limited rights data and restricted computer software. (a) General. The basic clause at 52... restricted computer software by withholding the data from the Government and instead delivering form, fit...
48 CFR 27.404-2 - Limited rights data and restricted computer software.
Code of Federal Regulations, 2013 CFR
2013-10-01
... restricted computer software. 27.404-2 Section 27.404-2 Federal Acquisition Regulations System FEDERAL... Copyrights 27.404-2 Limited rights data and restricted computer software. (a) General. The basic clause at 52... restricted computer software by withholding the data from the Government and instead delivering form, fit...
48 CFR 27.404-2 - Limited rights data and restricted computer software.
Code of Federal Regulations, 2010 CFR
2010-10-01
... restricted computer software. 27.404-2 Section 27.404-2 Federal Acquisition Regulations System FEDERAL... Copyrights 27.404-2 Limited rights data and restricted computer software. (a) General. The basic clause at 52... restricted computer software by withholding the data from the Government and instead delivering form, fit...
Optimum spaceborne computer system design by simulation
NASA Technical Reports Server (NTRS)
Williams, T.; Kerner, H.; Weatherbee, J. E.; Taylor, D. S.; Hodges, B.
1973-01-01
A deterministic simulator is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Its use as a tool to study and determine the minimum computer system configuration necessary to satisfy the on-board computational requirements of a typical mission is presented. The paper describes how the computer system configuration is determined in order to satisfy the data processing demand of the various shuttle booster subsystems. The configuration developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources.
A case study for cloud based high throughput analysis of NGS data using the globus genomics system
Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; ...
2015-01-01
Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.
A case study for cloud based high throughput analysis of NGS data using the globus genomics system
Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha
2014-01-01
Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research. PMID:26925205
ERIC Educational Resources Information Center
Mitchell, Eugene E., Ed.
The simulation of a sampled-data system is described that uses a full parallel hybrid computer. The sampled data system simulated illustrates the proportional-integral-derivative (PID) discrete control of a continuous second-order process representing a stirred-tank. The stirred-tank is simulated using continuous analog components, while PID…
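As a software-only companion to the hybrid simulation described above, here is a minimal sketch of a positional discrete PID loop driving a crudely discretized second-order process; the gains, sample period, and process parameters are illustrative assumptions, and the analog mechanization of the stirred tank is not modeled.

    # Minimal sketch of a positional discrete PID controller driving a simple
    # discretized second-order process. Gains, sample period, and the process
    # model are illustrative; the paper's analog/hybrid mechanization is not shown.
    Kp, Ki, Kd = 2.0, 1.0, 0.1      # controller gains (assumed)
    dt = 0.05                        # sample period in seconds (assumed)

    def simulate(setpoint=1.0, steps=200):
        y, ydot = 0.0, 0.0           # process output and its derivative
        integral, prev_err = 0.0, 0.0
        for _ in range(steps):
            err = setpoint - y
            integral += err * dt
            derivative = (err - prev_err) / dt
            u = Kp * err + Ki * integral + Kd * derivative
            prev_err = err
            # crude Euler step of a second-order process:
            #   y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u
            wn, zeta = 1.0, 0.7
            yddot = wn * wn * (u - y) - 2.0 * zeta * wn * ydot
            ydot += yddot * dt
            y += ydot * dt
        return y

    print(round(simulate(), 3))   # output approaches the setpoint for these gains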
SANs and Large Scale Data Migration at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Salmon, Ellen M.
2004-01-01
Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.
Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces
NASA Technical Reports Server (NTRS)
Ellman, Alvin; Carlton, Magdi
1993-01-01
The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and monitoring all multi-mission spacecraft tracking activities in real time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme of the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.
High speed television camera system processes photographic film data for digital computer analysis
NASA Technical Reports Server (NTRS)
Habbal, N. A.
1970-01-01
Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.
Human face recognition using eigenface in cloud computing environment
NASA Astrophysics Data System (ADS)
Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.
2018-02-01
Recognizing a single face does not take long to process, but an attendance or security system for a company with many faces to recognize can take a long time. Cloud computing is a computing service performed not on a local device but on Internet-connected data center infrastructure. Cloud computing also provides a scalability solution, since resources can be increased when larger data processing is required. This research applies the eigenface method; training data are collected using the REST concept to provide resources, and the server then processes the data through the existing stages. After the research and development of this application, it can be concluded that face recognition can be performed by implementing eigenface and applying the REST concept as an endpoint for giving or receiving the information used as a resource in forming the model for face recognition.
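A minimal eigenface sketch follows, assuming flattened grayscale images and toy data: the training faces are mean-centered, the leading principal components (eigenfaces) are taken from an SVD, and a probe is classified by nearest neighbour in the reduced space. The REST/cloud layer described in the abstract is not shown.

    # Minimal eigenface sketch with NumPy: project faces onto the leading
    # principal components of the training set and classify a probe image by
    # nearest neighbour in that subspace. Data here are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    train = rng.random((10, 64 * 64))          # 10 flattened 64x64 training faces
    labels = list(range(10))

    mean_face = train.mean(axis=0)
    centered = train - mean_face

    # SVD gives the eigenfaces (principal directions) without forming the
    # full covariance matrix explicitly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:5]                        # keep 5 components

    train_weights = centered @ eigenfaces.T    # each face as a 5-d weight vector

    def recognize(probe):
        w = (probe - mean_face) @ eigenfaces.T
        dists = np.linalg.norm(train_weights - w, axis=1)
        return labels[int(np.argmin(dists))]

    print(recognize(train[3]))                 # matches the corresponding face -> 3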
Information management system study results. Volume 2: IMS study results appendixes
NASA Technical Reports Server (NTRS)
1971-01-01
Computer systems program specifications are presented for the modular space station information management system. These are the computer program contract end item, data bus system, data bus breadboard, and display interface adapter specifications. The performance, design, tests, and qualification requirements are established for the implementation of the information management system. For Vol. 1, see N72-19972.
A computer-based time study system for timber harvesting operations
Jingxin Wang; Joe McNeel; John Baumgras
2003-01-01
A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on the MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, data transfer interface, and data storage...
Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni
2006-08-01
Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
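The halftone construction of the paper is not reproduced here; the sketch below shows only the classic (2, 2) visual cryptography scheme that such methods extend, in which each secret pixel is expanded into identical or complementary subpixel pairs so that stacking the shares reveals the secret while each share alone is random.

    # Classic (2, 2) visual cryptography sketch (not the paper's halftone method):
    # each secret pixel becomes a 1x2 subpixel pair in each share; stacking the
    # shares (logical OR) reveals the secret, while either share alone is random.
    import numpy as np

    rng = np.random.default_rng(1)
    secret = np.array([[0, 1, 1, 0],
                       [1, 0, 0, 1]], dtype=np.uint8)   # 1 = black pixel

    h, w = secret.shape
    share1 = np.zeros((h, 2 * w), dtype=np.uint8)
    share2 = np.zeros((h, 2 * w), dtype=np.uint8)

    for i in range(h):
        for j in range(w):
            pattern = rng.integers(0, 2)                # pick [1,0] or [0,1] at random
            pair = np.array([1, 0]) if pattern else np.array([0, 1])
            share1[i, 2*j:2*j+2] = pair
            # white pixel: identical pairs (stack stays half black)
            # black pixel: complementary pairs (stack becomes fully black)
            share2[i, 2*j:2*j+2] = pair if secret[i, j] == 0 else 1 - pair

    stacked = share1 | share2
    print(stacked)   # black secret pixels appear as fully black 1x2 blocks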
Managing geometric information with a data base management system
NASA Technical Reports Server (NTRS)
Dube, R. P.
1984-01-01
The strategies for managing computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. The research on integrated programs for aerospace-vehicle design (IPAD) focuses on the use of data base management system (DBMS) technology to manage engineering/manufacturing data. The objective of IPAD is to develop a computer-based engineering complex which automates the storage, management, protection, and retrieval of engineering data. In particular, this facility must manage geometry information as well as associated data. The approach taken on the IPAD project to achieve this objective is discussed. Geometry management in current systems and the approach taken in the early IPAD prototypes are examined.
Electro-optical processing of phased array data
NASA Technical Reports Server (NTRS)
Casasent, D.
1973-01-01
An on-line spatial light modulator for application as the input transducer for a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with real time radar data with the input data returns recorded on the input crystal, processed by the optical system, and the output plane pattern digitized, thresholded, and outputted to a display and storage in the computer memory. The correlation of theoretical and experimental results is discussed.
Dynamic reduction of dimensions of a document vector in a document search and retrieval system
Jiao, Yu; Potok, Thomas E.
2011-05-03
The method and system of the invention involves processing each new document (20) coming into the system into a document vector (16), and creating a document vector with reduced dimensionality (17) for comparison with the data model (15) without recomputing the data model (15). These operations are carried out by a first computer (11) while a second computer (12) updates the data model (18), which can be comprised of an initial large group of documents (19) and is premised on computing an initial data model (13, 14, 15) to provide a reference point for determining document vectors from documents processed from the data stream (20).
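A rough sketch of the general idea, not the patented method: a reduced basis is computed once from an initial corpus (here via a truncated SVD of a toy term-document matrix) and then reused to project each new document vector, so incoming documents receive reduced-dimensionality representations without recomputing the data model. The vocabulary and tokenization are assumptions.

    # Sketch of projecting new document vectors onto a precomputed reduced basis
    # (truncated SVD of an initial term-document matrix), so incoming documents
    # get low-dimensional vectors without recomputing the model. The vocabulary
    # and corpus are toy placeholders, not the patented system.
    import numpy as np

    vocab = ["data", "model", "vector", "search", "document"]

    def doc_vector(text):
        words = text.lower().split()
        return np.array([words.count(t) for t in vocab], dtype=float)

    # "Initial data model": SVD basis computed once from an initial corpus.
    corpus = ["data model data", "vector search document", "document data search"]
    X = np.stack([doc_vector(d) for d in corpus])           # docs x terms
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:2]                                          # keep 2 dimensions

    def reduce_new_document(text):
        """Project a new document without recomputing the SVD."""
        return doc_vector(text) @ basis.T

    print(reduce_new_document("new document about data search"))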
A Computer Support System for the Entry and Analysis of Questionnaire Data.
ERIC Educational Resources Information Center
Shale, Douglas G.; Milinusic, Tomislav O.
This paper describes a computer support system that eliminated many of the problems associated with the usual methods of transcribing and analyzing questionnaire data. The system was created to support the course evaluation system at Athabasca University, a distance education university in Canada. The courses evaluated were all home study courses,…
Computer-aided system for diabetes care in Berlin, G.D.R.
Thoelke, H; Meusel, K; Ratzmann, K P
1990-01-01
In the Centre of Diabetes and Metabolic Disorders of Berlin, G.D.R., a computer-aided care system has been used since 1974, aiming at relieving physicians and medical staff from routine tasks and rendering possible epidemiological research on an unselected diabetes population of a defined area. The basis of the system is the data bank on diabetics (DB), where at present data from approximately 55,000 patients are stored. DB is used as a diabetes register of Berlin. On the basis of standardised criteria of diagnosis and therapy of diabetes mellitus in our dispensary care system, DB facilitates representative epidemiological analyses of the diabetic population, e.g. prevalence, incidence, duration of diabetes, and modes of treatment. The availability of general data on the population or the selection of specified groups of patients serves the management of the care system. Also, it supports the computer-aided recall of type II diabetics, treated either with diet alone or with diet and oral drugs. In this way, the standardised evaluation of treatment strategies in large populations of diabetics is possible on the basis of uniform metabolic criteria (blood glucose plus urinary glucose). The system consists of a main computer in the data processing unit and of personal computers in the diabetes centre which can be used either individually or as terminals to the main computer. During 14 years of experience, the computer-aided out-patient care of type II diabetics has proved efficient in a big-city area with a large population.
Ahamed, Nizam U; Sundaraj, Kenneth; Poo, Tarn S
2013-03-01
This article describes the design of a robust, inexpensive, easy-to-use, small, and portable online electromyography acquisition system for monitoring electromyography signals during rehabilitation. This single-channel (one-muscle) system was connected via the universal serial bus port to a programmable Windows operating system handheld tablet personal computer for storage and analysis of the data by the end user. The raw electromyography signals were amplified in order to convert them to an observable scale. The inherent 50 Hz noise (Malaysia) from power-line electromagnetic interference was then eliminated using a single hybrid IC notch filter. The signals were sampled by a signal processing module and converted into 24-bit digital data. An algorithm was developed and programmed to transmit the digital data to the computer, where it was reassembled and displayed using software. Finally, the device was furnished with a graphical user interface to display the online muscle strength streaming signal on the handheld tablet personal computer. This battery-operated system was tested on the biceps brachii muscles of 20 healthy subjects, and the results were compared to those obtained with a commercial single-channel (one-muscle) electromyography acquisition system. For activities involving muscle contractions, the results obtained using the developed device were comparable (across various statistical parameters) to those obtained from the commercially available physiological signal monitoring system, for both male and female subjects. In addition, the key advantage of this developed system over conventional desktop personal computer-based acquisition systems is its portability, due to the use of a tablet personal computer on which the results are accessible graphically as well as stored in text (comma-separated value) form.
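The notch filter in the paper is an analog hybrid IC; as a purely software analogue, the sketch below removes 50 Hz interference from a sampled EMG-like signal with a digital IIR notch filter from SciPy. The sampling rate, quality factor, and synthetic signal are assumptions.

    # Software counterpart of the 50 Hz power-line notch filtering described:
    # a digital IIR notch applied to a sampled EMG-like signal. The sampling rate,
    # quality factor, and synthetic signal are assumptions for illustration.
    import numpy as np
    from scipy import signal

    fs = 1000.0                                              # assumed sampling rate (Hz)
    t = np.arange(0, 1.0, 1.0 / fs)
    emg = np.random.default_rng(0).normal(0, 0.2, t.size)    # stand-in for EMG
    contaminated = emg + 0.5 * np.sin(2 * np.pi * 50.0 * t)  # add 50 Hz interference

    b, a = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)           # notch at 50 Hz
    cleaned = signal.filtfilt(b, a, contaminated)

    # The 50 Hz component should be strongly attenuated after filtering.
    print(np.std(contaminated), np.std(cleaned))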
48 CFR 227.7207 - Contractor data repositories.
Code of Federal Regulations, 2010 CFR
2010-10-01
... repositories. 227.7207 Section 227.7207 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to...
JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.
Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J
2010-04-01
The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.
Proceedings: Computer Science and Data Systems Technical Symposium, volume 2
NASA Technical Reports Server (NTRS)
Larsen, Ronald L.; Wallgren, Kenneth
1985-01-01
Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form, along with abstracts, are included for topics in three categories: computer science, data systems, and space station applications.
Sharing digital micrographs and other data files between computers.
Entwistle, A
2004-01-01
It ought to be easy to exchange digital micrographs and other computer data files with a colleague, even on another continent. In practice, this often is not the case. The advantages and disadvantages of various methods that are available for exchanging data files between computers are discussed. When possible, data should be transferred through computer networking. When data are to be exchanged locally between computers with similar operating systems, the use of a local area network is recommended. For computers in commercial or academic environments that have dissimilar operating systems or are more widely spaced, the use of FTP is recommended. Failing this, posting the data on a website and transferring by hypertext transfer protocol is suggested. If peer-to-peer exchange between computers in domestic environments is needed, the use of messenger services such as Microsoft Messenger or Yahoo Messenger is the method of choice. When it is not possible to transfer the data files over the internet, single-use, writable CD ROMs are the best media for transferring data. If for some reason this is not possible, DVD-R/RW, DVD+R/RW, 100 MB ZIP disks and USB flash media are potentially useful media for exchanging data files.
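As one concrete instance of the FTP route recommended above, a minimal sketch with Python's standard ftplib follows; the hostname, credentials, and filename are placeholders.

    # Minimal sketch of transferring a micrograph over FTP with Python's standard
    # ftplib, as one concrete instance of the file-exchange route the abstract
    # recommends. Hostname, credentials, and filename are placeholders.
    from ftplib import FTP

    def upload(filename, host="ftp.example.org", user="anonymous", password="guest"):
        with FTP(host) as ftp:
            ftp.login(user=user, passwd=password)
            with open(filename, "rb") as fh:
                ftp.storbinary(f"STOR {filename}", fh)

    upload("micrograph_001.tif")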
The symbolic computation and automatic analysis of trajectories
NASA Technical Reports Server (NTRS)
Grossman, Robert
1991-01-01
Research was generally done on computation of trajectories of dynamical systems, especially control systems. Algorithms were further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear control systems. An initial design was completed of the system architecture for software to analyze nonlinear control systems using data base computing.
Computer program and user documentation medical data tape retrieval system
NASA Technical Reports Server (NTRS)
Anderson, J.
1971-01-01
This volume provides several levels of documentation for the program module of the NASA medical directorate mini-computer storage and retrieval system. A biomedical information system overview describes some of the reasons for the development of the mini-computer storage and retrieval system. It briefly outlines all of the program modules which constitute the system.
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2010 CFR
2010-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2013 CFR
2013-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2014 CFR
2014-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2011 CFR
2011-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2012 CFR
2012-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S
We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed in a distributed environment composed of both wireless sensor networks for data collection and wired computer networks for data processing in support of homeland security defense. We present the system framework, formulate the analytical problems, and develop approximate or exact solutions for the subtasks: (i) a sensor deployment strategy based on a two-dimensional genetic algorithm to achieve maximum coverage with cost constraints; (ii) a data routing scheme to achieve maximum signal strength with minimum path loss, high energy efficiency, and effective fault tolerance; (iii) a network mapping method to assign computing modules to network nodes for high-performance distributed data processing; and (iv) a binary decision fusion rule that derives threshold bounds to improve the system hit rate and false alarm rate. These component solutions are implemented and evaluated through either experiments or simulations in various application scenarios. The extensive results demonstrate that these component solutions imbue the integrated system with the desirable and useful quality of intelligence in decision making.
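For subtask (iv), a sketch of a k-out-of-n binary decision fusion rule follows, assuming independent sensors with identical per-sensor hit and false-alarm probabilities; scanning the vote threshold k exposes the trade-off that the threshold bounds in the abstract formalize, though the derivation itself is not reproduced.

    # Sketch of k-out-of-n binary decision fusion: the fusion center declares a
    # detection when at least k of n independent sensors report one. Per-sensor
    # hit and false-alarm probabilities are assumed identical for illustration.
    from math import comb

    def fused_rate(p, n, k):
        """Probability that at least k of n independent sensors fire, each with prob p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n = 7
    p_hit, p_false = 0.8, 0.05       # assumed per-sensor rates
    for k in range(1, n + 1):
        print(k, round(fused_rate(p_hit, n, k), 4), round(fused_rate(p_false, n, k), 6))
    # Scanning k shows the trade-off a threshold bound would formalize: larger k
    # lowers the system false-alarm rate but also lowers the system hit rate.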
An Intelligent Terminal for Access to a Medical Database
Womble, M. E.; Wilson, S. D.; Keiser, H. N.; Tworek, M. L.
1978-01-01
Very powerful data base management systems (DBMS) now exist which allow medical personnel access to patient record data bases. DBMS's make it easy to retrieve either complete or abbreviated records of patients with similar characteristics. In addition, statistics on data base records are immediately accessible. However, the price of this power is a large computer with the inherent problems of access, response time, and reliability. If a general purpose, time-shared computer is used to get this power, the response time to a request can be either rapid or slow, depending upon loading by other users. Furthermore, if the computer is accessed via dial-up telephone lines, there is competition with other users for telephone ports. If either the DBMS or the host machine is replaced, the medical users, who are typically not sophisticated in computer usage, are forced to learn the new system. Microcomputers, because of their low cost and adaptability, lend themselves to a solution of these problems. A microprocessor-based intelligent terminal has been designed and implemented at the USAF School of Aerospace Medicine to provide a transparent interface between the user and his data base. The intelligent terminal system includes multiple microprocessors, floppy disks, a CRT terminal, and a printer. Users interact with the system at the CRT terminal using menu selection (framing). The system translates the menu selection into the query language of the DBMS and handles all actual communication with the DBMS and its host computer, including telephone dialing and sign on procedures, as well as the actual data base query and response. Retrieved information is stored locally for CRT display, hard copy production, and/or permanent retention. Microprocessor-based communication units provide security for sensitive medical data through encryption/decryption algorithms and high reliability error detection transmission schemes. Highly modular software design permits adaptation to a different DBMS and/or host computer with only minor localized software changes. Importantly, this portability is completely transparent to system users. Although the terminal system is independent of the host computer and its DBMS, it has been linked to a UNIVAC 1108 computer supporting MRI's SYSTEM 2000 DBMS.
Computer assisted performance tests of the Lyman Alpha Coronagraph
NASA Technical Reports Server (NTRS)
Parkinson, W. H.; Kohl, J. L.
1979-01-01
Preflight calibration and performance tests of the Lyman Alpha Coronagraph rocket instrument in the laboratory, with the experiment in its flight configuration and illumination levels near those expected during flight, were successfully carried out using a pulse code modulation telemetry system simulator interfaced in real time to a PDP 11/10 computer system. Post-acquisition data reduction programs developed and implemented on the same computer system aided in the interpretation of test and calibration data.
36 CFR 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2014 CFR
2014-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
36 CFR 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2011 CFR
2011-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
36 CFR 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2012 CFR
2012-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
36 CFR 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2010 CFR
2010-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
A data acquisition and storage system for the ion auxiliary propulsion system cyclic thruster test
NASA Technical Reports Server (NTRS)
Hamley, John A.
1989-01-01
A nine-track tape drive interfaced to a standard personal computer was used to transport data from a remote test site to the NASA Lewis mainframe computer for analysis. The Cyclic Ground Test of the Ion Auxiliary Propulsion System (IAPS), which successfully achieved its goal of 2557 cycles and 7057 hr of thrusting beam-on time, generated several megabytes of test data over many months of continuous testing. A flight-like controller and power supply were used to control the thruster and acquire data. Thruster data was converted to RS232 format and transmitted to a personal computer, which stored the raw digital data on the nine-track tape. The tape format was such that, with minor modifications, mainframe flight data analysis software could be used to analyze the Cyclic Ground Test data. The personal computer also converted the digital data to engineering units and displayed real time thruster parameters. Hardcopy data was printed at a rate dependent on thruster operating conditions. The tape drive provided a convenient means to transport the data to the mainframe for analysis, and avoided a development effort for new data analysis software for the Cyclic test. This paper describes the data system, interfacing and software requirements.
Central Data Processing System (CDPS) user's manual: Solar heating and cooling program
NASA Technical Reports Server (NTRS)
1976-01-01
The software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple sites is presented. The instrumentation data associated with these systems is collected, processed, and presented in a form which supported continuity of performance evaluation across all applications. The CDPS consisted of three major elements: communication interface computer, central data processing computer, and performance evaluation data base. Users of the performance data base were identified, and procedures for operation, and guidelines for software maintenance were outlined. The manual also defined the output capabilities of the CDPS in support of external users of the system.
NASA's Information Power Grid: Large Scale Distributed Computing and Data Management
NASA Technical Reports Server (NTRS)
Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)
2001-01-01
Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
The application of a computer data acquisition system to a new high temperature tribometer
NASA Technical Reports Server (NTRS)
Bonham, Charles D.; Dellacorte, Christopher
1991-01-01
The two data acquisition computer programs are described which were developed for a high temperature friction and wear test apparatus, a tribometer. The raw data produced by the tribometer and the methods used to sample that data are explained. In addition, the instrumentation and computer hardware and software are presented. Also shown is how computer data acquisition was applied to increase convenience and productivity on a high temperature tribometer.
The application of a computer data acquisition system for a new high temperature tribometer
NASA Technical Reports Server (NTRS)
Bonham, Charles D.; Dellacorte, Christopher
1990-01-01
The two data acquisition computer programs are described which were developed for a high temperature friction and wear test apparatus, a tribometer. The raw data produced by the tribometer and the methods used to sample that data are explained. In addition, the instrumentation and computer hardware and software are presented. Also shown is how computer data acquisition was applied to increase convenience and productivity on a high temperature tribometer.
Welcome to health information science and systems.
Zhang, Yanchun
2013-01-01
Health Information Science and Systems is an exciting, new, multidisciplinary journal that aims to use technologies in computer science to assist in disease diagnosis, treatment, prediction and monitoring through the modeling, design, development, visualization, integration and management of health related information. These computer-science technologies include information systems, web technologies, data mining, image processing, user interaction and interfaces, and sensors and wireless networking, and are applicable to a wide range of health related information including medical data, biomedical data, bioinformatics data, and public health data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Sameer; Mamidala, Amith R.; Ratterman, Joseph D.
A system and method for enhancing barrier collective synchronization on a computer system comprises a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program being executed by a processor. The system includes providing a plurality of communicators for storing state information for a barrier algorithm. Each communicator designates a master core in a multi-processor environment of the computer system. The system allocates or designates one counter for each of a plurality of threads. The system configures a table with a number of entries equal to the maximum number of threads. The system sets a table entry with an ID associated with a communicator when a process thread initiates a collective. The system determines an allocated or designated counter by searching entries in the table.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blocksome, Michael; Kumar, Sameer; Mamidala, Amith R.
A system and method for enhancing barrier collective synchronization on a computer system comprises a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program being executed by a processor. The system includes providing a plurality of communicators for storing state information for a barrier algorithm. Each communicator designates a master core in a multi-processor environment of the computer system. The system allocates or designates one counter for each of a plurality of threads. The system configures a table with a number of entries equal to the maximum number of threads. The system sets a table entry with an ID associated with a communicator when a process thread initiates a collective. The system determines an allocated or designated counter by searching entries in the table.
Kumar, Sameer; Mamidala, Amith R.; Ratterman, Joseph D.; Blocksome, Michael; Miller, Douglas
2013-09-03
A system and method for enhancing barrier collective synchronization on a computer system comprises a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program being executed by a processor. The system includes providing a plurality of communicators for storing state information for a barrier algorithm. Each communicator designates a master core in a multi-processor environment of the computer system. The system allocates or designates one counter for each of a plurality of threads. The system configures a table with a number of entries equal to the maximum number of threads. The system sets a table entry with an ID associated with a communicator when a process thread initiates a collective. The system determines an allocated or designated counter by searching entries in the table.
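A simplified sketch of the bookkeeping these records describe, written with ordinary Python threads rather than the patented hardware or messaging layer: a fixed-size table maps a communicator ID to a designated counter, a thread entering the barrier looks up (or allocates) that counter, and the barrier opens when the counter reaches the number of participating threads. The table size, lookup policy, and synchronization primitives are assumptions.

    # Simplified sketch of the described bookkeeping: a fixed-size table maps a
    # communicator ID to a designated counter; each thread entering the barrier
    # finds its communicator's counter and increments it, and the barrier opens
    # when the counter reaches the number of participating threads. This is an
    # illustration in plain Python threads, not the patented implementation.
    import threading

    MAX_THREADS = 8
    table = [None] * MAX_THREADS            # entry i: (communicator_id, counter)
    table_lock = threading.Lock()

    def get_counter(comm_id):
        """Find or allocate the counter designated for this communicator."""
        with table_lock:
            for entry in table:
                if entry is not None and entry[0] == comm_id:
                    return entry[1]
            for i, entry in enumerate(table):
                if entry is None:
                    counter = {"count": 0, "cv": threading.Condition()}
                    table[i] = (comm_id, counter)
                    return counter
        raise RuntimeError("table full")

    def barrier(comm_id, nthreads):
        c = get_counter(comm_id)
        with c["cv"]:
            c["count"] += 1
            if c["count"] >= nthreads:
                c["cv"].notify_all()
            else:
                c["cv"].wait_for(lambda: c["count"] >= nthreads)

    threads = [threading.Thread(target=barrier, args=("comm0", 4)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("all threads passed the barrier")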
Coal-seismic computer programs in BASIC: Part I; Store, plot, and edit array data
Hasbrouck, Wilfred P.
1979-01-01
Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desk-top, stand-alone computer. Programs for this computer are written in an extended BASIC language specially augmented for acceptance by the Tektronix 4051 Graphic System. This report presents five computer programs used to store, plot, and edit array data for the line, cross, and triangle arrays commonly employed in our coal-seismic investigations. * Use of brand names in this report is for descriptive purposes only and does not constitute endorsement by the U.S. Geological Survey.
NASA Astrophysics Data System (ADS)
Kashansky, Vladislav V.; Kaftannikov, Igor L.
2018-02-01
Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Many scientific computing projects sometimes exceed the available resource pool limits, requiring extra scalability and sustainability. In this paper we share our experience and findings on combining the power of SLURM, BOINC and GlusterFS as a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.
Jäger, G; Hagemeier, J H; Schneider, P; Heber, E
1978-01-01
A report about an electronic data processing system for gynaecology. The developed data document design and data flowchart are shown. The accumulated data allowed a detailed interpretation record. For all clinically treated patients the computer printed out a final gynaecological epicrisis. The system improves the flow of information, and the typewriting workload of the medical staff has been reduced.
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data. PMID:22163811
Design and development of a run-time monitor for multi-core architectures in cloud computing.
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.
Mobile healthcare information management utilizing Cloud Computing and Android OS.
Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias
2010-01-01
Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-23
... the essential bus. The disabled equipment could include the autopilot, anti-skid system, hydraulic indicator, spoiler system, pilot primary flight display, audio panel, or the 1 air data computer. This... system, pilot primary flight display, audio panel, or the 1 air data computer. This failure could lead to...
Anomalous event diagnosis for environmental satellite systems
NASA Technical Reports Server (NTRS)
Ramsay, Bruce H.
1993-01-01
The National Oceanic and Atmospheric Administration's (NOAA) National Environmental Satellite, Data, and Information Service (NESDIS) is responsible for the operation of the NOAA geostationary and polar orbiting satellites. NESDIS provides a wide array of operational meteorological and oceanographic products and services and operates various computer and communication systems on a 24-hour, seven days per week schedule. The Anomaly Reporting System contains a database of anomalous events regarding the operations of the Geostationary Operational Environmental Satellite (GOES), communication, or computer systems that have degraded or caused the loss of GOES imagery. Data is currently entered manually via an automated query user interface. There are 21 possible symptoms (e.g., No Data), and 73 possible causes (e.g., Sectorizer - World Weather Building) of an anomalous event. The determination of an event's cause(s) is made by the on-duty computer operator, who enters the event in a paper based daily log, and by the analyst entering the data into the reporting system. The determination of the event's cause(s) impacts both the operational status of these systems, and the performance evaluation of the on-site computer and communication operations contractor.
Computational approaches to vision
NASA Technical Reports Server (NTRS)
Barrow, H. G.; Tenenbaum, J. M.
1986-01-01
Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.
Architecture for hospital information integration
NASA Astrophysics Data System (ADS)
Chimiak, William J.; Janariz, Daniel L.; Martinez, Ralph
1999-07-01
The integration of hospital information systems (HIS) is ongoing. Data storage systems, data networks and computers improve, data bases grow and health-care applications increase. Some computer operating systems continue to evolve and some fade. Health care delivery now depends on this computer-assisted environment. As a result, harmonization of the various hospital information systems becomes increasingly difficult. The purpose of this paper is to present an architecture for HIS integration that is computer-language-neutral and computer-hardware-neutral for the informatics applications. The proposed architecture builds upon the work done at the University of Arizona on middleware, the work of the National Electrical Manufacturers Association, and the American College of Radiology. It is a fresh approach that allows applications engineers to access medical data easily and thus concentrate on the application techniques in which they are expert, without struggling with medical information syntaxes. The HIS can be modeled using a hierarchy of information sub-systems, thus facilitating its understanding. The architecture includes the resulting information model along with a strict but intuitive application programming interface, managed by CORBA. The CORBA requirement facilitates interoperability. It should also reduce software and hardware development times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
1996-05-01
The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown into an enterprise-wide information system which is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. For people, NWIS provides source information to the enterprise person data repository for select contractors and visitors; generates and tracks unique usernames and Unix user IDs for every individual granted cyber access; and tracks accounts for centrally managed computing resources, monitoring and controlling the reauthorization of the accounts in accordance with the DOE-mandated interval. For computing devices, NWIS generates unique names for all computing devices registered in the system; tracks the manufacturer, make, model, Sandia property number, vendor serial number, operating system and operating system version, owner, device location, amount of memory, amount of disk space, and level of support provided for each machine; tracks the hardware address for network cards; tracks the IP address registered to computing devices along with the canonical and alias names for each address; updates the Dynamic Domain Name Service (DDNS) for canonical and alias names; creates the configuration files for DHCP to control the DHCP ranges and allow access to only properly registered computers; tracks and monitors classified security plans for stand-alone computers; tracks the configuration requirements used to set up each machine; tracks the roles people have on machines (system administrator, administrative access, user, etc.); allows systems administrators to track changes made on a machine (both hardware and software); and generates an adjustment history of changes on selected fields.
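As a toy illustration of two of the listed functions, unique device naming and DHCP configuration generation, the Python sketch below uses a hypothetical `DeviceRegistry` class; the name format, field names, and ISC-DHCP-style host block are assumptions, not NWIS's actual scheme.

```python
import itertools

class DeviceRegistry:
    """Toy registry: generates unique device names and DHCP host entries.
    The name format and fields are illustrative, not NWIS's actual scheme."""

    def __init__(self, prefix="dev"):
        self.prefix = prefix
        self._seq = itertools.count(1)
        self.devices = {}

    def register(self, mac, ip, owner):
        name = f"{self.prefix}{next(self._seq):05d}"   # unique canonical name
        self.devices[name] = {"mac": mac, "ip": ip, "owner": owner}
        return name

    def dhcp_entry(self, name):
        d = self.devices[name]
        # ISC-DHCP-style host block (format assumed for illustration)
        return (f"host {name} {{\n"
                f"  hardware ethernet {d['mac']};\n"
                f"  fixed-address {d['ip']};\n"
                f"}}")

reg = DeviceRegistry()
n = reg.register("00:11:22:33:44:55", "10.0.0.17", "jdoe")
print(reg.dhcp_entry(n))
```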
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
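A minimal sketch of the kind of range-check pass such QA codes perform, written in Python rather than the original implementation; the `RANGE_LIMITS` values and field names are illustrative, not the HMS criteria.

```python
# Range-check QA pass over meteorological records.
# Limits and field names are assumptions for illustration only.
RANGE_LIMITS = {"temp_c": (-40.0, 50.0), "wind_mps": (0.0, 60.0)}

def qa_check(records):
    """Return (record index, field, value) for every missing or out-of-range value."""
    problems = []
    for i, rec in enumerate(records):
        for field, (lo, hi) in RANGE_LIMITS.items():
            value = rec.get(field)
            if value is None or not (lo <= value <= hi):
                problems.append((i, field, value))
    return problems

records = [{"temp_c": 21.5, "wind_mps": 3.2},
           {"temp_c": 120.0, "wind_mps": 4.1}]   # bad temperature
for flagged in qa_check(records):
    print("flagged:", flagged)
```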
Computational System For Rapid CFD Analysis In Engineering
NASA Technical Reports Server (NTRS)
Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.
1995-01-01
Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.
The engineering design integration (EDIN) system. [digital computer program complex
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.
1974-01-01
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
Integrated Computer System of Management in Logistics
NASA Astrophysics Data System (ADS)
Chwesiuk, Krzysztof
2011-06-01
This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.
Optical memories in digital computing
NASA Technical Reports Server (NTRS)
Alford, C. O.; Gaylord, T. K.
1979-01-01
High-capacity optical memories with relatively high data-transfer rates and multiport simultaneous-access capability may serve as the basis for new computer architectures. Several computer structures that might profitably use such memories are: (a) a simultaneous record-access system, (b) a simultaneously-shared memory computer system, and (c) a parallel digital processing structure.
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
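For hosts without queueing software, the abstract's simple first-in-first-out scheme can be illustrated with the following Python sketch (the original ADTT scripts were UNIX shell and Perl); the class name and the one-job-at-a-time policy are assumptions for the example.

```python
import collections
import subprocess

class SimpleFifoQueue:
    """First-in-first-out dispatch for hosts without queueing software.
    The job commands and single-slot policy are illustrative only."""

    def __init__(self):
        self.pending = collections.deque()

    def submit(self, cmd):
        self.pending.append(cmd)

    def run_all(self):
        while self.pending:
            cmd = self.pending.popleft()        # oldest job runs first
            subprocess.run(cmd, shell=True, check=False)

q = SimpleFifoQueue()
q.submit("echo solve case_aoa_0")
q.submit("echo solve case_aoa_5")
q.run_all()
```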
de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D
2004-03-01
Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements, our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods, and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.
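As a small illustration of XML-described measurement data (shown here with Python's standard library rather than the Java packages the authors used), the element and attribute names below are invented for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical document describing one sparsely sampled MRI measurement.
doc = """<measurement id="scan042">
  <dimensions kx="128" ky="96"/>
  <reconstruction method="bayesian" iterations="50"/>
</measurement>"""

root = ET.fromstring(doc)
dims = root.find("dimensions").attrib            # {'kx': '128', 'ky': '96'}
method = root.find("reconstruction").get("method")
print(root.get("id"), dims, method)
```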
Acquisition of gamma camera and physiological data by computer.
Hack, S N; Chang, M; Line, B R; Cooper, J A; Robeson, G H
1986-11-01
We have designed, implemented, and tested a new Research Data Acquisition System (RDAS) that permits a general purpose digital computer to acquire signals from both gamma camera sources and physiological signal sources concurrently. This system overcomes the limited multi-source, high speed data acquisition capabilities found in most clinically oriented nuclear medicine computers. The RDAS can simultaneously input signals from up to four gamma camera sources with a throughput of 200 kHz per source and from up to eight physiological signal sources with an aggregate throughput of 50 kHz. Rigorous testing has found the RDAS to exhibit acceptable linearity and timing characteristics. In addition, flood images obtained by this system were compared with flood images acquired by a commercial nuclear medicine computer system. National Electrical Manufacturers Association performance standards of the flood images were found to be comparable.
36 CFR § 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2013 CFR
2013-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
Computer Plotting Data Points in the Engine Research Building
1956-09-21
A female computer plotting compressor data in the Engine Research Building at the NACA’s Lewis Flight Propulsion Laboratory. The Computing Section was introduced during World War II to relieve short-handed research engineers of some of the tedious data-taking work. The computers made the initial computations and plotted the data graphically. The researcher then analyzed the data and either summarized the findings in a report or made modifications or ran the test again. With the introduction of mechanical computer systems in the 1950s the female computers learned how to encode the punch cards. As the data processing capabilities increased, fewer female computers were needed. Many left on their own to start families, while others earned mathematical degrees and moved into advanced positions.
Systems Analysis, Machineable Circulation Data and Library Users and Non-Users.
ERIC Educational Resources Information Center
Lubans, John, Jr.
A study to be made with computer-based circulation data of the non-use and use of a large academic library is discussed. A search of the literature reveals that computer-based circulation systems can be, but have not been, utilized to provide data bases for systematic analyses of library users and resources. The data gathered in the circulation…
High-Speed Recording of Test Data on Hard Disks
NASA Technical Reports Server (NTRS)
Lagarde, Paul M., Jr.; Newnan, Bruce
2003-01-01
Disk Recording System (DRS) is a systems-integration computer program for a direct-to-disk (DTD) high-speed data acquisition system (HDAS) that records rocket-engine test data. The HDAS consists partly of equipment originally designed for recording the data on tapes. The tape recorders were replaced with hard-disk drives, necessitating the development of DRS to provide an operating environment that ties two computers, a set of five DTD recorders, and signal-processing circuits from the original tape-recording version of the HDAS into one working system. DRS includes three subsystems: (1) one that generates a graphical user interface (GUI), on one of the computers, that serves as a main control panel; (2) one that generates a GUI, on the other computer, that serves as a remote control panel; and (3) a data-processing subsystem that performs tasks on the DTD recorders according to instructions sent from the main control panel. The software affords capabilities for dynamic configuration to record single or multiple channels from a remote source, remote starting and stopping of the recorders, indexing to prevent overwriting of data, and production of filtered frequency data from an original time-series data file.
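The last capability, producing filtered frequency data from a time-series file, can be sketched with NumPy; the sample rate, band limits, and test signal below are assumptions for illustration, not DRS parameters.

```python
import numpy as np

def frequency_content(samples, rate_hz, f_lo, f_hi):
    """Return (frequencies, magnitudes) restricted to a band of interest."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[keep], np.abs(spectrum[keep])

rate = 1000.0                                   # 1 kHz sample rate (assumed)
t = np.arange(0, 1.0, 1.0 / rate)
signal = np.sin(2 * np.pi * 60 * t) + 0.2 * np.random.randn(t.size)
freqs, mags = frequency_content(signal, rate, 40.0, 80.0)
print(freqs[np.argmax(mags)])                   # peak near 60 Hz expected
```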
NASA Astrophysics Data System (ADS)
Caplan, B.; Morrison, A.; Moore, J. C.; Berkowitz, A. R.
2017-12-01
Understanding water is central to understanding environmental challenges. Scientists use "big data" and computational models to develop knowledge about the structure and function of complex systems, and to make predictions about changes in climate, weather, hydrology, and ecology. Large environmental systems-related data sets and simulation models are difficult for high school teachers and students to access and make sense of. Comp Hydro, a collaboration across four states and multiple school districts, integrates computational thinking and data-related science practices into water systems instruction to enhance development of scientific model-based reasoning, through curriculum, assessment and teacher professional development. Comp Hydro addresses the need for 1) teaching materials for using data and physical models of hydrological phenomena, 2) building teachers' and students' comfort or familiarity with data analysis and modeling, and 3) infusing the computational knowledge and practices necessary to model and visualize hydrologic processes into instruction. Comp Hydro teams in Baltimore, MD and Fort Collins, CO are integrating teaching about surface water systems into high school courses focusing on flooding (MD) and surface water reservoirs (CO). This interactive session will highlight the successes and challenges of our physical and simulation models in helping teachers and students develop proficiency with computational thinking about surface water. We also will share insights from comparing teacher-led vs. project-led development of curriculum and our simulations.
AGIS: Evolution of Distributed Computing information system for ATLAS
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.
2015-12-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
NASA Technical Reports Server (NTRS)
1973-01-01
A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.
The development of a specialized processor for a space-based multispectral earth imager
NASA Astrophysics Data System (ADS)
Khedr, Mostafa E.
2008-10-01
This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite multispectral earth imager. This computer system is intended for satellites with resolution in the range of one meter and 12-bit precision. The design is based mostly on general off-the-shelf components such as field-programmable gate arrays (FPGAs), plus custom-designed software for interfacing with a PC and test equipment. The designed system was successfully manufactured and is now fully functioning in orbit.
Real-time operation without a real-time operating system for instrument control and data acquisition
NASA Astrophysics Data System (ADS)
Klein, Randolf; Poglitsch, Albrecht; Fumi, Fabio; Geis, Norbert; Hamidouche, Murad; Hoenle, Rainer; Looney, Leslie; Raab, Walfried; Viehhauser, Werner
2004-09-01
We are building the Field-Imaging Far-Infrared Line Spectrometer (FIFI LS) for the US-German airborne observatory SOFIA. The detector read-out system is driven by a clock signal at a certain frequency. This signal has to be provided and all other sub-systems have to work synchronously to this clock. The data generated by the instrument has to be received by a computer in a timely manner. Usually these requirements are met with a real-time operating system (RTOS). In this presentation we want to show how we meet these demands differently, avoiding the stiffness of an RTOS. Digital I/O-cards with a large buffer separate the asynchronous working computers and the synchronous working instrument. The advantage is that the data processing computers do not need to process the data in real-time. It is sufficient that the computer can process the incoming data stream on average. But since the data is read in synchronously, problems of relating commands and responses (data) have to be solved: The data is arriving at a fixed rate. The receiving I/O-card buffers the data in its buffer until the computer can access it. To relate the data to commands sent previously, the data is tagged by counters in the read-out electronics. These counters count the system's heartbeat and signals derived from that. The heartbeat and control signals synchronous with the heartbeat are sent by an I/O-card working as pattern generator. Its buffer gets continuously programmed with a pattern which is clocked out on the control lines. A counter in the I/O-card keeps track of the amount of pattern words clocked out. By reading this counter, the computer knows the state of the instrument or knows the meaning of the data that will arrive with a certain time-tag.
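A minimal Python sketch of the time-tag matching idea described above: commands are remembered under the heartbeat count at which they take effect, and buffered data frames carrying the same counter value are later matched to them. The dictionary-based bookkeeping and field names are assumptions for illustration.

```python
# Commands are tagged with the heartbeat count at which they take effect;
# data frames arrive later carrying the same counter value, so the two can
# be related without real-time handshaking. Field names are illustrative.
pending_commands = {}        # heartbeat count -> command description

def issue_command(heartbeat, description):
    pending_commands[heartbeat] = description

def on_data_frame(frame):
    """Associate a buffered data frame with the command that produced it."""
    cmd = pending_commands.pop(frame["tag"], "no matching command")
    print(f"frame tag={frame['tag']} value={frame['value']} <- {cmd}")

issue_command(1024, "move grating to position 7")
on_data_frame({"tag": 1024, "value": 7})
```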
The Control Point Library Building System. [for Landsat MSS and RBV geometric image correction
NASA Technical Reports Server (NTRS)
Niblack, W.
1981-01-01
The Earth Resources Observation System (EROS) Data Center in Sioux Falls, South Dakota distributes precision corrected Landsat MSS and RBV data. These data are derived from master data tapes produced by the Master Data Processor (MDP), NASA's system for computing and applying corrections to the data. Included in the MDP is the Control Point Library Building System (CPLBS), an interactive, menu-driven system which permits a user to build and maintain libraries of control points. The control points are required to achieve the high geometric accuracy desired in the output MSS and RBV data. This paper describes the processing performed by CPLBS, the accuracy of the system, and the host computer and special image viewing equipment employed.
NASA Technical Reports Server (NTRS)
Southall, J. W.
1979-01-01
The engineering-specified requirements for integrated information processing by means of the Integrated Programs for Aerospace-Vehicle Design (IPAD) system are presented. A data model is described and is based on the design process of a typical aerospace vehicle. General data management requirements are specified for data storage, retrieval, generation, communication, and maintenance. Information management requirements are specified for a two-component data model. In the general portion, data sets are managed as entities, and in the specific portion, data elements and the relationships between elements are managed by the system, allowing user access to individual elements for the purpose of query. Computer program management requirements are specified for support of a computer program library, control of computer programs, and installation of computer programs into IPAD.
System balance analysis for vector computers
NASA Technical Reports Server (NTRS)
Knight, J. C.; Poole, W. G., Jr.; Voight, R. G.
1975-01-01
The availability of vector processors capable of sustaining computing rates of 10 to the 8th power arithmetic results per second raised the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that even under ideal conditions, the processors will frequently be waiting for problem data.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.
AOIPS data base management systems support for GARP data sets
NASA Technical Reports Server (NTRS)
Gary, J. P.
1977-01-01
A data base management system is identified, developed to provide flexible access to data sets produced by GARP during its data systems tests. The content and coverage of the data base are defined and a computer-aided, interactive information storage and retrieval system, implemented to facilitate access to user specified data subsets, is described. The computer programs developed to provide the capability were implemented on the highly interactive, minicomputer-based AOIPS and are referred to as the data retrieval system (DRS). Implemented as a user interactive but menu guided system, the DRS permits users to inventory the data tape library and create duplicate or subset data sets based on a user selected window defined by time and latitude/longitude boundaries. The DRS permits users to select, display, or produce formatted hard copy of individual data items contained within the data records.
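A rough sketch of the windowed subsetting the DRS performs, selecting records inside a user-defined time and latitude/longitude box; the record fields and example values are invented for illustration.

```python
def in_window(rec, t0, t1, lat_min, lat_max, lon_min, lon_max):
    """True if a record falls inside the user-selected time/lat-lon window."""
    return (t0 <= rec["time"] <= t1
            and lat_min <= rec["lat"] <= lat_max
            and lon_min <= rec["lon"] <= lon_max)

records = [
    {"time": 120, "lat": 38.2, "lon": -75.4, "value": 9.1},
    {"time": 480, "lat": 51.0, "lon": 3.5, "value": 7.7},
]
subset = [r for r in records
          if in_window(r, 0, 300, 30.0, 45.0, -80.0, -70.0)]
print(subset)   # only the first record survives the window
```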
Data systems and computer science space data systems: Onboard networking and testbeds
NASA Technical Reports Server (NTRS)
Dalton, Dan
1991-01-01
The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: justification; technology challenges; program description; and state-of-the-art assessment.
Computer aided systems human engineering: A hypermedia tool
NASA Technical Reports Server (NTRS)
Boff, Kenneth R.; Monk, Donald L.; Cody, William J.
1992-01-01
The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.
Distributed Computer Networks in Support of Complex Group Practices
Wess, Bernard P.
1978-01-01
The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.
ERIC Educational Resources Information Center
Letmanyi, Helen
Developed to identify and qualitatively assess computer system evaluation techniques for use during acquisition of general purpose computer systems, this document presents several criteria for comparison and selection. An introduction discusses the automatic data processing (ADP) acquisition process and the need to plan for uncertainty through…
NASA Technical Reports Server (NTRS)
Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.
1999-01-01
This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, thereby ultimately achieving a reliable spacecraft data communications system. The sensitivity analysis approach of the memory system on ProSEDS (Propulsive Small Expendable Deployer System), as a part of its data communication system, will be investigated. Also, general issues and possible approaches to a reliable spacecraft on-board interconnection network and processor array will be shown. Performance issues of spacecraft on-board computing systems such as sensitivity, throughput, delay, and reliability will be introduced and discussed.
Commercial Digital/ADP Equipment in the Ocean Environment. Volume 2. User Appendices
1978-12-15
is that the LINDA system uses a minicomputer with time-sharing system software which allows several terminals to be operated at the same time... Acquisition System (ODAS) consists of sensors, computer hardware and computer software. Certain sensors are interfaced to the computers for real time... on USNS KANE, USNS BENT, and USNS WILKES. Commercial automatic data processing equipment used in ODAS includes (Item / Model): Computer / PDP-9; Tape...
Gooding, Thomas Michael; McCarthy, Patrick Joseph
2010-03-02
A data collector for a massively parallel computer system obtains call-return stack traceback data for multiple nodes by retrieving partial call-return stack traceback data from each node, grouping the nodes in subsets according to the partial traceback data, and obtaining further call-return stack traceback data from a representative node or nodes of each subset. Preferably, the partial data is a respective instruction address from each node, nodes having identical instruction address being grouped together in the same subset. Preferably, a single node of each subset is chosen and full stack traceback data is retrieved from the call-return stack within the chosen node.
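The grouping step can be sketched directly from the description: nodes reporting the same instruction address fall into one subset, and a single representative per subset is queried for the full stack. The Python below is only an illustration of that idea, with made-up node IDs and addresses.

```python
from collections import defaultdict

def group_by_partial_traceback(partial):
    """partial: {node_id: instruction_address}. Nodes reporting the same
    address are grouped; one representative per group is chosen for the
    more expensive full call-return stack retrieval."""
    groups = defaultdict(list)
    for node, address in partial.items():
        groups[address].append(node)
    return {addr: nodes[0] for addr, nodes in groups.items()}  # representatives

partial = {0: 0x4006F0, 1: 0x4006F0, 2: 0x400812, 3: 0x4006F0}
print(group_by_partial_traceback(partial))
# full tracebacks would now be requested only from nodes 0 and 2
```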
Enabling Earth Science Through Cloud Computing
NASA Technical Reports Server (NTRS)
Hardman, Sean; Riofrio, Andres; Shams, Khawaja; Freeborn, Dana; Springer, Paul; Chafin, Brian
2012-01-01
Cloud Computing holds tremendous potential for missions across the National Aeronautics and Space Administration. Several flight missions are already benefiting from an investment in cloud computing for mission critical pipelines and services through faster processing time, higher availability, and drastically lower costs available on cloud systems. However, these processes do not currently extend to general scientific algorithms relevant to earth science missions. The members of the Airborne Cloud Computing Environment task at the Jet Propulsion Laboratory have worked closely with the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to integrate cloud computing into their science data processing pipeline. This paper details the efforts involved in deploying a science data system for the CARVE mission, evaluating and integrating cloud computing solutions with the system and porting their science algorithms for execution in a cloud environment.
Enhanced Data Authentication System v. 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Maikael A.; Tolsch, Brandon Jeffrey; Schwartz, Steven Robert
EDAS is a system, comprising hardware and software, that plugs into an existing data stream and branches all data for transmission to a secondary observer computer. The EDAS Junction Box, which inserts into the data stream, has Java software that forms these data into packets, digitally signs, encrypts, and sends these packets to a safeguards inspector computer. Further, a second Java program running on the secondary observer computer receives data from the EDAS Junction Box to decrypt, authenticate, and store incoming packets. Also, a stand-alone Java program is used to configure the EDAS Junction Box.
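The abstract's sign-then-encrypt packet handling can be sketched as follows; this is a Python illustration using the `cryptography` package (the actual EDAS software is Java), and the key handling, the Ed25519/Fernet choices, and the packet layout are assumptions made for the example.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()      # hypothetical junction-box key
channel_key = Fernet.generate_key()             # hypothetical shared channel key

def package(raw: bytes) -> bytes:
    """Sign the raw data, then encrypt signature + payload for transmission."""
    signature = signing_key.sign(raw)
    return Fernet(channel_key).encrypt(signature + raw)

def unpackage(blob: bytes, verify_key) -> bytes:
    """Decrypt, verify the signature (raises if tampered), and return the data."""
    plaintext = Fernet(channel_key).decrypt(blob)
    signature, raw = plaintext[:64], plaintext[64:]   # Ed25519 signatures are 64 bytes
    verify_key.verify(signature, raw)
    return raw

packet = package(b"detector frame 0017")
print(unpackage(packet, signing_key.public_key()))
```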
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
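A toy map/reduce accumulation in Python's standard `multiprocessing`, standing in for the Hadoop MapReduce pipeline the paper describes: each map task produces one block's echo contribution and the reduce step adds contributions coherently. The random-number echo model and chunk sizes are placeholders, not the SAR simulation itself.

```python
from functools import reduce
from multiprocessing import Pool
import numpy as np

def simulate_chunk(chunk_id):
    """Map step: each worker produces the echo contribution of one block
    of scatterers (random numbers stand in for the SAR echo model)."""
    rng = np.random.default_rng(chunk_id)
    return rng.standard_normal(1024) + 1j * rng.standard_normal(1024)

def accumulate(a, b):
    """Reduce step: raw-data contributions add coherently."""
    return a + b

if __name__ == "__main__":
    with Pool(4) as pool:
        partials = pool.map(simulate_chunk, range(16))
    raw_line = reduce(accumulate, partials)
    print(raw_line.shape, abs(raw_line).mean())
```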
Hydrologic data-verification management program plan
Alexander, C.W.
1982-01-01
Data verification refers to the performance of quality control on hydrologic data that have been retrieved from the field and are being prepared for dissemination to water-data users. Water-data users now have access to computerized data files containing unpublished, unverified hydrologic data. Therefore, it is necessary to develop techniques and systems whereby the computer can perform some data-verification functions before the data are stored in user-accessible files. Computerized data-verification routines can be developed for this purpose. A single, unified concept, describing a master data-verification program using multiple special-purpose subroutines and a screen file containing verification criteria, can probably be adapted to any type and size of computer-processing system. Some traditional manual-verification procedures can be adapted for computerized verification, but new procedures can also be developed that would take advantage of the powerful statistical tools and data-handling procedures available to the computer. Prototype data-verification systems should be developed for all three data-processing environments as soon as possible. The WATSTORE system probably affords the greatest opportunity for long-range research and testing of new verification subroutines. (USGS)
Systems and methods for predicting materials properties
Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano
2007-11-06
Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.
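A minimal example of deducing a relationship from reference data and applying it to a new material of interest; here a least-squares line fitted with NumPy stands in for whatever equations, correspondences, or algorithmic procedures the patent covers, and the numbers are invented.

```python
import numpy as np

# Reference data: input properties x and corresponding output property y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])        # roughly y = 2x

# Deduce a relationship (a least-squares line) from the reference data.
A = np.vstack([x, np.ones_like(x)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply the deduced relationship to a new material's input property.
x_new = 3.6
print(slope * x_new + intercept)
```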
NASA Astrophysics Data System (ADS)
McFall, Steve
1994-03-01
With the increase in business automation and the widespread availability and low cost of computer systems, law enforcement agencies have seen a corresponding increase in criminal acts involving computers. The examination of computer evidence is a new field of forensic science with numerous opportunities for research and development. Research is needed to develop new software utilities to examine computer storage media, expert systems capable of finding criminal activity in large amounts of data, and to find methods of recovering data from chemically and physically damaged computer storage media. In addition, defeating encryption and password protection of computer files is also a topic requiring more research and development.
Natural Resource Information System. Volume 2: System operating procedures and instructions
NASA Technical Reports Server (NTRS)
1972-01-01
A total computer software system description is provided for the prototype Natural Resource Information System designed to store, process, and display data of maximum usefulness to land management decision making. Program modules are described, as are the computer file design, file updating methods, digitizing process, and paper tape conversion to magnetic tape. Operating instructions for the system, data output, printed output, and graphic output are also discussed.
Computer-aided personal interviewing. A new technique for data collection in epidemiologic surveys.
Birkett, N J
1988-03-01
Most epidemiologic studies involve the collection of data directly from selected respondents. Traditionally, interviewers are provided with the interview in booklet form on paper and answers are recorded therein. On receipt at the study office, the interview results are coded, transcribed, and keypunched for analysis. The author's team has developed a method of personal interviewing which uses a structured interview stored on a lap-sized computer. Responses are entered into the computer and are subject to immediate error-checking and correction. All skip-patterns are automatic. Data entry to the final data-base involves no manual data transcription. A pilot evaluation with a preliminary version of the system using tape-recorded interviews in a test/re-test methodology revealed a slightly higher error rate, probably related to weaknesses in the pilot system and the training process. Computer interviews tended to be longer but other features of the interview process were not affected by computer. The author's team has now completed 2,505 interviews using this system in a community-based blood pressure survey. It has been well accepted by both interviewers and respondents. Failure to complete an interview on the computer was uncommon (5 per cent) and well-handled by paper back-up questionnaires. The results show that computer-aided personal interviewing in the home is feasible but that further evaluation is needed to establish the impact of this methodology on overall data quality.
Comparison of existing digital image analysis systems for the analysis of Thematic Mapper data
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1984-01-01
Most existing image analysis systems were designed with the Landsat Multi-Spectral Scanner in mind, leaving open the question of whether or not these systems could adequately process Thematic Mapper data. In this report, both hardware and software systems have been evaluated for compatibility with TM data. Lack of spectral analysis capability was not found to be a problem, though techniques for spatial filtering and texture varied. Computer processing speed and data storage of currently existing mini-computer based systems may be less than adequate. Upgrading to more powerful hardware may be required for many TM applications.
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149
Optimization of tomographic reconstruction workflows on geographically distributed resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
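The three-stage performance model (transfer, queue wait, computation) lends itself to a simple estimator like the Python sketch below, which can then be used to pick the fastest site for a job; the linear cost expressions and the site parameters are placeholders, not the paper's fitted models.

```python
def estimated_workflow_time(data_gb, link_gbps, queue_wait_s, voxels, voxels_per_s):
    """Sum the three modeled stages: transfer, queue wait, and reconstruction.
    The linear cost models are placeholders for the paper's fitted models."""
    transfer_s = data_gb * 8.0 / link_gbps
    compute_s = voxels / voxels_per_s
    return transfer_s + queue_wait_s + compute_s

# Compare two hypothetical sites for the same reconstruction job.
sites = {"clusterA": (10.0, 300.0, 2.0e8),      # (Gbps, queue wait s, voxels/s)
         "clusterB": (1.0, 30.0, 5.0e8)}
for site, (gbps, wait, rate) in sites.items():
    print(site, round(estimated_workflow_time(200.0, gbps, wait, 1.2e11, rate), 1), "s")
```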
Adaptable radiation monitoring system and method
Archer, Daniel E [Livermore, CA; Beauchamp, Brock R [San Ramon, CA; Mauger, G Joseph [Livermore, CA; Nelson, Karl E [Livermore, CA; Mercer, Michael B [Manteca, CA; Pletcher, David C [Sacramento, CA; Riot, Vincent J [Berkeley, CA; Schek, James L [Tracy, CA; Knapp, David A [Livermore, CA
2006-06-20
A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
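A sketch of trigger detection on short time bins, roughly in the spirit of the description above: gross counts in each ~100 ms bin are compared against a Poisson threshold over the expected background. The rates, bin width, and 5-sigma threshold are assumptions for illustration, not the system's actual algorithm.

```python
import numpy as np

def trigger(counts_per_bin, background_rate, bin_s=0.1, n_sigma=5.0):
    """Flag bins whose gross count exceeds the expected background by
    n_sigma, assuming Poisson statistics. Thresholds are illustrative."""
    expected = background_rate * bin_s
    threshold = expected + n_sigma * np.sqrt(expected)
    return np.where(counts_per_bin > threshold)[0]

rng = np.random.default_rng(0)
counts = rng.poisson(50.0 * 0.1, size=600)      # 60 s of 100 ms bins
counts[123] += 40                               # a source passing the detector
print(trigger(counts, background_rate=50.0))    # bin 123 flagged (with high probability)
```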
Reactor Operations Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M.M.
1989-01-01
The Reactor Operations Monitoring System (ROMS) is a VME based, parallel processor data acquisition and safety action system designed by the Equipment Engineering Section and Reactor Engineering Department of the Savannah River Site. The ROMS will be analyzing over 8 million signal samples per minute. Sixty-eight microprocessors are used in the ROMS in order to achieve real-time data analysis. The ROMS is composed of multiple computer subsystems. Four redundant computer subsystems monitor 600 temperatures with 2400 thermocouples. Two computer subsystems share the monitoring of 600 reactor coolant flows. Additional computer subsystems are dedicated to monitoring 400 signals from assorted process sensors. Data from these computer subsystems are transferred to two redundant process display computer subsystems which present process information to reactor operators and to reactor control computers. The ROMS is also designed to carry out safety functions based on its analysis of process data. The safety functions include initiating a reactor scram (shutdown), the injection of neutron poison, and the loadshed of selected equipment. A complete development Reactor Operations Monitoring System has been built. It is located in the Program Development Center at the Savannah River Site and is currently being used by the Reactor Engineering Department in software development. The Equipment Engineering Section is designing and fabricating the process interface hardware. Upon proof of hardware and design concept, orders will be placed for the final five systems located in the three reactor areas, the reactor training simulator, and the hardware maintenance center.
The 'Biologically-Inspired Computing' Column
NASA Technical Reports Server (NTRS)
Hinchey, Mike
2006-01-01
The field of Biology changed dramatically in 1953, with the determination by Francis Crick and James Dewey Watson of the double helix structure of DNA. This discovery changed Biology for ever, allowing the sequencing of the human genome, and the emergence of a "new Biology" focused on DNA, genes, proteins, data, and search. Computational Biology and Bioinformatics heavily rely on computing to facilitate research into life and development. Simultaneously, an understanding of the biology of living organisms indicates a parallel with computing systems: molecules in living cells interact, grow, and transform according to the "program" dictated by DNA. Moreover, paradigms of Computing are emerging based on modelling and developing computer-based systems exploiting ideas that are observed in nature. This includes building into computer systems self-management and self-governance mechanisms that are inspired by the human body's autonomic nervous system, modelling evolutionary systems analogous to colonies of ants or other insects, and developing highly-efficient and highly-complex distributed systems from large numbers of (often quite simple) largely homogeneous components to reflect the behaviour of flocks of birds, swarms of bees, herds of animals, or schools of fish. This new field of "Biologically-Inspired Computing", often known in other incarnations by other names, such as: Autonomic Computing, Pervasive Computing, Organic Computing, Biomimetics, and Artificial Life, amongst others, is poised at the intersection of Computer Science, Engineering, Mathematics, and the Life Sciences. Successes have been reported in the fields of drug discovery, data communications, computer animation, control and command, exploration systems for space, undersea, and harsh environments, to name but a few, and augur much promise for future progress.
Coal-seismic, desktop computer programs in BASIC; Part 7, Display and compute shear-pair seismograms
Hasbrouck, W.P.
1983-01-01
Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desk-top, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report discusses and presents five computer programs used to display and compute shear-pair seismograms.
1980-05-01
Comparison of Building Loads Analysis and System Thermodynamics (BLAST) computer program... A dental clinic and a battalion headquarters and classroom building were... Building and HVAC System Data; Computer Simulation; Comparison of Actual and Simulated Results; Analysis and Findings.
Data, Analysis, and Visualization | Computational Science | NREL
Data management, data analysis, and scientific visualization. At NREL, our data management, data analysis, and scientific visualization capabilities help move research forward, spanning approaches to image analysis and computer vision, and systems, software, and tools for data management and big data.
O'Reilly, Robert; Fedorko, Steve; Nicholson, Nigel
1983-01-01
This paper describes a structured interview process for medical school admissions supported by an Apple II computer system which provides feedback to interviewers and the College admissions committee. Presented are the rationale for the system, the preliminary results of analysis of some of the interview data, and a brief description of the computer program and output. The present data show that the structured interview yields very high interrater reliability coefficients, is acceptable to the medical school faculty, and results in quantitative data useful in the admission process. The system continues in development at this time, a second year of data will be shortly available, and further refinements are being made to the computer program to enhance its utilization and exportability.
Applications of massively parallel computers in telemetry processing
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon
1994-01-01
Telemetry processing refers to the reconstruction of full-resolution raw instrumentation data with the artifacts of space and ground recording and transmission removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology to provide level-zero processing for spaceflights that adhere to the recommendations of the Consultative Committee on Space Data Systems (CCSDS). The workload characteristics of level-zero processing are used to identify processing requirements in high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operation System (EDOS).
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics become urgent because many research activities are constrained by the inability of software or tool that even could not complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may be not processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions to employ massive parallelism and hardware resources to achieve scalability and high performance for data intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environment to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high-performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise to achieve scalability and high performance by exploiting task and data levels of parallelism that are not supported by the conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data as proved by our prior works, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.
A secure file manager for UNIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVries, R.G.
1990-12-31
The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) operation which would satisfy rigorous security requirements; (2) online space management in an environment where total data demands would be many times the actual online capacity; (3) making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.
Monitoring system including an electronic sensor platform and an interrogation transceiver
Kinzel, Robert L.; Sheets, Larry R.
2003-09-23
A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT), and a general-purpose host computer. The ESP functions as a remote data collector for a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT relays signals between one or more ESPs and the host computer. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has an extremely low power consumption, which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has robust housing of the ESP suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESPs may be controlled at a single monitoring site.
ALMA Correlator Real-Time Data Processor
NASA Astrophysics Data System (ADS)
Pisano, J.; Amestica, R.; Perez, J.
2005-10-01
The design of a real-time Linux application utilizing the Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams, each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non-real-time external computers. The designed computer system, the Correlator Data Processor (CDP), consists of a cluster of 17 SMP computers (16 compute nodes plus a master controller node), all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them with other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals, providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intranet for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware, which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
[Experiences with an anesthesia protocol written by computer].
Karliczek, G F; Brenken, U; van den Broeke, J J; Mooi, B; de Geus, A F; Wiersma, G; Oosterhaven, S
1988-04-01
Since December 1983, we have used a computer system for charting and data logging in cardiac and thoracic anesthesia. These computers, designed as stand-alone units, were developed at our hospital based on Motorola 6809 microprocessor systems. All measurements derived from anesthetic monitoring, ventilator, and heart-lung machine are automatically sampled at regular intervals and stored for later data management. Laboratory results are automatically received from the hospital computer system. The user communicates with the system via a terminal and a keyboard; this also facilitates the entering of all comments, medications, infusions, and fluid losses. All data are continuously displayed on an A3 format anesthetic chart using a multi-pen, flat-bed plotter. The operation of the system has proved to be simple and needs less time than charting by hand, while the result, the display on the chart, is far clearer and more complete than any handwritten document. Up to now 3,200 operations (corresponding to 12,500 anesthetic hours) have been documented. The failure rate of the system, defined as an interruption of the documentation for more than 30 min, is 2.1%. Further development of the system is discussed. A data base for processing the stored data has been developed and is being tested at present.
NASA Technical Reports Server (NTRS)
Jansen, B. J., Jr.
1998-01-01
The features of the data acquisition and control systems of the NASA Langley Research Center's Jet Noise Laboratory are presented. The Jet Noise Laboratory is a facility that simulates realistic mixed flow turbofan jet engine nozzle exhaust systems in simulated flight. The system is capable of acquiring data for a complete take-off assessment of noise and nozzle performance. This paper describes the development of an integrated system to control and measure the behavior of model jet nozzles featuring dual independent high pressure combusting air streams with wind tunnel flow. The acquisition and control system is capable of simultaneous measurement of forces, moments, static and dynamic model pressures and temperatures, and jet noise. The design concepts for the coordination of the control computers and multiple data acquisition computers and instruments are discussed. The control system design and implementation are explained, describing the features, equipment, and the experiences of using a primarily Personal Computer based system. Areas for future development are examined.
Computer graphics for management: An abstract of capabilities and applications of the EIS system
NASA Technical Reports Server (NTRS)
Solem, B. J.
1975-01-01
The Executive Information Services (EIS) system, developed as a computer-based, time-sharing tool for making and implementing management decisions and including computer graphics capabilities, is described. The following resources are available through the EIS languages: centralized corporate/government data base, customized and working data bases, report writing, general computational capability, specialized routines, modeling/programming capability, and graphics. Nearly all EIS graphs can be created by a single, on-line instruction. A large number of options are available, such as selection of graphic form, line control, shading, placement on the page, multiple images on a page, control of scaling and labeling, plotting of cumulative data sets, optional grid lines, and stack charts. The following are examples of areas in which the EIS system may be used: research, estimating services, planning, budgeting, performance measurement, and national computer hook-up negotiations.
NASA Technical Reports Server (NTRS)
1974-01-01
Computer program listings as well as graphical and tabulated data needed by the analyst to perform a BRAVO analysis were examined. Graphical aid which can be used to determine the earth coverage of satellites in synchronous equatorial orbits was described. A listing for satellite synthesis computer program as well as a sample printout for the DSCS-11 satellite program and a listing of the symbols used in the program were included. The APL language listing for the payload program cost estimating computer program was given. This language is compatible with many of the time sharing remote terminals computers used in the United States. Data on the intelsat communications network was studied. Costs for telecommunications systems leasing, line of sight microwave relay communications systems, submarine telephone cables, and terrestrial power generation systems were also described.
An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing
NASA Astrophysics Data System (ADS)
Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.
2015-07-01
Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing cloud users with an effective way to access and analyse these massive spatiotemporal data from web clients has become an urgent issue. In this paper, we propose a new scalable, interactive, and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed based on an open source distributed file system. In it, massive remote sensing data are stored as public data, while the intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in the IPython Notebook web page through the web browser to process data, and the scripts are submitted to the IPython kernel to be executed. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built by Docker makes the greatest use of the host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation for I/O, CPU, and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write complex data processing code on the web directly, so they can design their own data processing algorithms.
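To make the usage model concrete, the following is a minimal sketch of the kind of script a user might run in the IPython Notebook described above. It relies only on GDAL and NumPy, which the abstract lists as deployed in the Docker container; the file name "scene.tif" and the single-band layout are illustrative assumptions rather than part of the platform.

```python
# Minimal sketch: read a raster with GDAL and summarize it with NumPy.
# "scene.tif" is a hypothetical single-band GeoTIFF stored on the platform.
import numpy as np
from osgeo import gdal

dataset = gdal.Open("scene.tif")                 # open the raster from cloud storage
band = dataset.GetRasterBand(1)                  # first (and only) band
pixels = band.ReadAsArray().astype(np.float64)   # load as a NumPy array

nodata = band.GetNoDataValue()
if nodata is not None:
    pixels = np.ma.masked_equal(pixels, nodata)  # ignore no-data pixels

print("rows x cols:", pixels.shape)
print("mean value :", float(pixels.mean()))
```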
High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing
NASA Astrophysics Data System (ADS)
Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.
2015-12-01
Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences in deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of its Level-2 full physics data products. We will explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.
Computer-Guided Diagnosis of Learning Disabilities: A Prototype.
ERIC Educational Resources Information Center
Colbourn, Marlene Jones
A computer based diagnostic system to assist educators in the assessment of learning disabled children aged 8 to 10 years in the area of reading is described and evaluated. The system is intended to guide the diagnosis of reading problems through step by step analysis of available data and requests for additional data. The system provides a…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-17
... include the autopilot, anti-skid system, hydraulic indicator, spoiler system, pilot primary flight display, audio panel, or the 1 air data computer. This failure could lead to a significant increase in pilot...
Launch Processing System. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Byrne, F.; Doolittle, G. V.; Hockenberger, R. W.
1976-01-01
This paper presents a functional description of the Launch Processing System, which provides automatic ground checkout and control of the Space Shuttle launch site and airborne systems, with emphasis placed on the Checkout, Control, and Monitor Subsystem. Hardware and software modular design concepts for the distributed computer system are reviewed relative to performing system tests, launch operations control, and status monitoring during ground operations. The communication network design, which uses a Common Data Buffer interface to all computers to allow computer-to-computer communication, is discussed in detail.
ISSYS: An integrated synergistic Synthesis System
NASA Technical Reports Server (NTRS)
Dovi, A. R.
1980-01-01
Integrated Synergistic Synthesis System (ISSYS), an integrated system of computer codes in which the sequence of program execution and data flow is controlled by the user, is discussed. The commands available to exert such control, the ISSYS major function and rules, and the computer codes currently available in the system are described. Computational sequences frequently used in the aircraft structural analysis and synthesis are defined. External computer codes utilized by the ISSYS system are documented. A bibliography on the programs is included.
Geiger, Linda H.
1983-01-01
The report is an update of U.S. Geological Survey Open-File Report 77-703, which described a retrieval program for an administrative index of active data-collection sites in Florida. Extensive changes to the Findex system have been made since 1977, making the previous report obsolete. The data base and the computer programs available in the Findex system are documented in this report. This system serves a vital need in the administration of the many and diverse water-data collection activities. District offices with extensive data-collection activities will benefit from the documentation of the system. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data collection activity. Entries include information such as identification number, station name, location, type of site, county, frequency of data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. The index is updated routinely. (USGS)
MIT CSAIL and Lincoln Laboratory Task Force Report
2016-08-01
The projects have been very diverse, spanning several areas of CSAIL concentration, including robotics, big data analytics, wireless communications, and computing architectures... machine learning systems and algorithms, such as recommender systems, and "Big Data" analytics. Advanced computing architectures broadly refer to...
Living Color Frame System: PC graphics tool for data visualization
NASA Technical Reports Server (NTRS)
Truong, Long V.
1993-01-01
Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.
1983-07-01
Distributed Computing Systems: impact on software quality. Topics include "C3I Application", "Space Systems Network", "Need for Distributed Database Management", and "Adaptive Routing"... data reduction, buffering, encryption, and error detection and correction functions. Examples of such data streams include imagery data and video.
Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin
2015-01-15
Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin
2014-01-01
Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
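The following is a minimal sketch of the GPU-CPU co-processing idea reported above, using CuPy as a Python stand-in for the Matlab GPU-enabled functions the authors used; the array sizes and the FFT workload are illustrative assumptions, and an Nvidia GPU with CuPy installed is required.

```python
# Minimal sketch: run the same FFT workload on the CPU (NumPy) and GPU (CuPy)
# and compare timing and numerical agreement. CuPy stands in for Matlab gpuArray.
import time
import numpy as np
import cupy as cp

frames = np.random.rand(16, 1024, 1024).astype(np.float32)  # simulated image stack

t0 = time.perf_counter()
cpu_result = np.fft.fft2(frames)                 # CPU reference computation
cpu_time = time.perf_counter() - t0

gpu_frames = cp.asarray(frames)                  # copy the stack to GPU memory
t0 = time.perf_counter()
gpu_result = cp.fft.fft2(gpu_frames)
cp.cuda.Stream.null.synchronize()                # wait for the GPU kernels to finish
gpu_time = time.perf_counter() - t0

max_err = float(cp.max(cp.abs(gpu_result - cp.asarray(cpu_result))))
print(f"CPU {cpu_time:.2f}s  GPU {gpu_time:.2f}s  max abs diff {max_err:.2e}")
```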
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
Fifteen papers on computer centers and data processing management presented at the Association for Educational Data Systems (AEDS) 1976 convention are included in this document. The first two papers review the recent controversy for proposed licensing of data processors, and they are followed by a description of the Institute for Certification of…
Airborne Cloud Computing Environment (ACCE)
NASA Technical Reports Server (NTRS)
Hardman, Sean; Freeborn, Dana; Crichton, Dan; Law, Emily; Kay-Im, Liz
2011-01-01
Airborne Cloud Computing Environment (ACCE) is JPL's internal investment to improve the return on airborne missions by improving the development performance of the data system and the return on the captured science data. The investment is to develop a common science data system capability for airborne instruments that encompasses the end-to-end lifecycle, covering planning, provisioning of data system capabilities, and support for scientific analysis, in order to improve the quality, cost effectiveness, and capabilities that enable new scientific discovery and research in earth observation.
Work-a-day world of NPRDS: what makes it tick
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Nuclear Plant Reliability Data System (NPRDS) is a computer-based data bank of reliability information on safety-related nuclear-power-plant systems and components. Until January 1982, the system was administered by the American Nuclear Society 58.20 Subcommittee. The data base was maintained by Southwest Research Institute in San Antonio, Texas. In October 1982, it was decided that the Institute of Nuclear Power Operations (INPO) would maintain the data base on its own computer. The transition is currently in progress.
Applied Information Systems Research Program Workshop
NASA Technical Reports Server (NTRS)
Bredekamp, Joe
1991-01-01
Viewgraphs on Applied Information Systems Research Program Workshop are presented. Topics covered include: the Earth Observing System Data and Information System; the planetary data system; Astrophysics Data System project review; OAET Computer Science and Data Systems Programs; the Center of Excellence in Space Data and Information Sciences; and CASIS background.
Ogata, Y; Nishizawa, K
1995-10-01
An automated smear counting and data processing system for a life science laboratory was developed to facilitate routine surveys and eliminate human errors by using a notebook computer. This system was composed of a personal computer, a liquid scintillation counter and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an interface of RS-232C. The software for the computer evaluated the surface density of each radioisotope and printed out that value along with other items as a report. The software was programmed in Pascal language. This system was successfully applied to routine surveys for contamination in our facility.
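The following is a minimal sketch of the data path this abstract describes: a count arrives from a counter over an RS-232 serial link and is converted to a surface density. The port name, message format, counting time, efficiency, and wipe area are illustrative assumptions, and the pyserial package stands in for the original Pascal program.

```python
# Minimal sketch: read one smear-sample count over RS-232 and convert it to a
# surface density. All constants and the message format are assumptions.
import serial  # pyserial

EFFICIENCY = 0.30        # assumed counting efficiency (counts per decay)
WIPE_AREA_CM2 = 100.0    # assumed smeared area per sample
COUNT_TIME_S = 60.0      # assumed counting time per sample

with serial.Serial("/dev/ttyUSB0", 9600, timeout=5) as port:
    line = port.readline().decode("ascii").strip()   # e.g. "SAMPLE01 1234" (assumed)
    sample_id, counts = line.split()
    cps = float(counts) / COUNT_TIME_S                # counts per second
    activity_bq = cps / EFFICIENCY                    # decays per second
    density = activity_bq / WIPE_AREA_CM2             # Bq per square centimetre
    print(f"{sample_id}: {density:.3g} Bq/cm^2")
```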
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
NASA Astrophysics Data System (ADS)
Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.
2015-05-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to the integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Computer-aided design and computer science technology
NASA Technical Reports Server (NTRS)
Fulton, R. E.; Voigt, S. J.
1976-01-01
A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron
2015-02-03
Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.
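The following is a minimal sketch of the idea of splitting a file at semantically meaningful boundaries and keeping a user-supplied description with each sub-file. The per-timestep layout, file names, and JSON sidecar format are illustrative assumptions, not the patented implementation or its data formatting library.

```python
# Minimal sketch: write one sub-file per timestep of a simulation array and
# store the user-supplied semantic description alongside each sub-file.
import json
import numpy as np

def write_subfiles(data, semantics, prefix="checkpoint"):
    """data: array shaped (timesteps, ...); semantics: dict describing the data."""
    for t in range(data.shape[0]):
        subfile = f"{prefix}.t{t:04d}.npy"        # one sub-file per semantic unit
        np.save(subfile, data[t])
        sidecar = {"subfile": subfile, "timestep": t, **semantics}
        with open(subfile + ".json", "w") as f:   # semantic info stored with the sub-file
            json.dump(sidecar, f, indent=2)

# Example: four timesteps of a 2-D temperature field (illustrative data).
field = np.random.rand(4, 128, 128).astype(np.float32)
write_subfiles(field, {"variable": "temperature", "units": "K", "layout": "row-major (y, x)"})
```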
Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.
2016-01-01
Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131
Automatic computation of transfer functions
Atcitty, Stanley; Watson, Luke Dale
2015-04-14
Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
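The following is a minimal sketch of deriving a transfer function from matrix data that describe a physical system, using SciPy's state-space-to-transfer-function conversion on a first-order RC circuit. The circuit and component values are illustrative assumptions and do not reflect the patented netlist format.

```python
# Minimal sketch: transfer function of an RC low-pass filter from its
# state-space matrices. Component values are illustrative assumptions.
import numpy as np
from scipy.signal import ss2tf

R, C = 1.0e3, 1.0e-6                  # 1 kOhm, 1 uF -> cutoff near 159 Hz
A = np.array([[-1.0 / (R * C)]])      # d/dt v_out = (v_in - v_out) / (R * C)
B = np.array([[1.0 / (R * C)]])
Cout = np.array([[1.0]])              # output is the capacitor voltage
D = np.array([[0.0]])

num, den = ss2tf(A, B, Cout, D)       # H(s) = num(s) / den(s)
print("numerator  :", num)            # [[0. 1000.]]  i.e. 1/(R*C)
print("denominator:", den)            # [1. 1000.]    i.e. s + 1/(R*C)
```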
Use of cloud computing technology in natural hazard assessment and emergency management
NASA Astrophysics Data System (ADS)
Webley, P. W.; Dehn, J.
2015-12-01
During a natural hazard event, the most up-to-date data need to be in the hands of those on the front line. Decision support system tools can be developed to provide access to pre-made outputs to quickly assess the hazard and potential risk. However, with the ever-growing availability of new satellite data, as well as ground and airborne data generated in real time, there is a need to analyze the large volumes of data in an easy-to-access and effective environment. With the growth in the use of cloud computing, where the analysis and visualization system can grow with the needs of the user, these facilities can be used to provide this real-time analysis. Think of a central command center uploading the data to the cloud compute system and then researchers in the field connecting to a web-based tool to view the newly acquired data. New data can be added by any user and then viewed instantly by anyone else in the organization through the cloud computing interface. This provides the ideal tool for collaborative data analysis, hazard assessment, and decision making. We present the rationale for developing a cloud computing system and illustrate how this tool can be developed for use in real-time environments. Users would have access to an interactive online image analysis tool without the need for specific remote sensing software on their local system, thereby increasing their understanding of the ongoing hazard and helping to mitigate its impact on the surrounding region.
Computer-Assisted Pregnancy Management
Haug, Peter J.; Hebertson, Richard M.; Heywood, Reed E.; Larkin, Ronald; Swapp, Craig; Waterfall, Brian; Warner, Homer R.
1987-01-01
A computer system under development for the management of pregnancy is described. This system exploits expert systems tools in the HELP Hospital Information System to direct the collection of clinical data and to generate medical decisions aimed at enhancing and standardizing prenatal care.
NASA Astrophysics Data System (ADS)
Manabe, Yoshitsugu; Imura, Masataka; Tsuchiya, Masanobu; Yasumuro, Yoshihiro; Chihara, Kunihiro
2003-01-01
Wearable 3D measurement makes it possible to acquire 3D information about an object or an environment using a wearable computer. In Japan we can already send voice and sound as well as pictures by mobile phone, and it is becoming easy to capture and send short movie clips as well. At the same time, computers have become compact and high-performance and can easily connect to the Internet by wireless LAN. In the near future we will be able to use wearable computers always and everywhere, and so we will be able to send three-dimensional data measured by a wearable computer as a new kind of data. This paper proposes a method and system for measuring three-dimensional data of an object with a wearable computer. The method uses slit light projection for 3D measurement and the user's motion instead of a scanning system.
Software For Monitoring A Computer Network
NASA Technical Reports Server (NTRS)
Lee, Young H.
1992-01-01
SNMAT is rule-based expert-system computer program designed to assist personnel in monitoring status of computer network and identifying defective computers, workstations, and other components of network. Also assists in training network operators. Network for SNMAT located at Space Flight Operations Center (SFOC) at NASA's Jet Propulsion Laboratory. Intended to serve as data-reduction system providing windows, menus, and graphs, enabling users to focus on relevant information. SNMAT expected to be adaptable to other computer networks; for example in management of repair, maintenance, and security, or in administration of planning systems, billing systems, or archives.
An Information and Technical Manual for the Computer-Assisted Teacher Training System (CATTS).
ERIC Educational Resources Information Center
Semmel, Melvyn I.; And Others
The manual presents technical information on the computer assisted teacher training system (CATTS) which aims at developing a versatile and economical computer based teacher training system with the capability of providing immediate analysis and feedback of data relevant to teacher pupil transactions in a classroom setting. The physical…
Satellite on-board processing for earth resources data
NASA Technical Reports Server (NTRS)
Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.
1975-01-01
Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for data bulk reduction are presented along with a candidate data format. Computational requirements required to implement the data analysis algorithms are included along with a review of computer architectures and organizations. Computer architectures capable of handling the algorithm computational requirements are suggested and the environmental effects of an on-board processor discussed. By relating performance parameters to the system requirements of each of the user requirements the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.
1974-01-01
The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) Control Loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for CMG functional linearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.
NASA Astrophysics Data System (ADS)
Noumaru, Junichi; Kawai, Jun A.; Schubert, Kiaina; Yagi, Masafumi; Takata, Tadafumi; Winegar, Tom; Scanlon, Tim; Nishida, Takuhiro; Fox, Camron; Hayasaka, James; Forester, Jason; Uchida, Kenji; Nakamura, Isamu; Tom, Richard; Koura, Norikazu; Yamamoto, Tadahiro; Tanoue, Toshiya; Yamada, Toru
2008-07-01
Subaru Telescope has recently replaced most of the equipment of Subaru Telescope Network II with new equipment that includes a 124 TB RAID system for the data archive. Switching the data storage from tape to RAID enables users to access the data faster. STN-III dropped some important components of STN-II, such as the supercomputers, the development and testing subsystem for the Subaru Observation Control System, and the data processing subsystem. On the other hand, we invested in more computers for the remote operation system. Thanks to IT innovations, our LAN as well as the network between Hilo and the summit were upgraded to gigabit networks at a cost similar to or even lower than that of the previous system. As a result of redesigning the computer system with a stronger focus on observatory operation, we greatly reduced the total cost of computer rental, purchase, and maintenance.
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next-generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Existing missions, such as OCO-2, may also require fast turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences in deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.
ERIC Educational Resources Information Center
Longenecker, Herbert E., Jr.; Babb, Jeffry; Waguespack, Leslie J.; Janicki, Thomas N.; Feinstein, David
2015-01-01
The evolution of computing education spans a spectrum from "computer science" ("CS") grounded in the theory of computing, to "information systems" ("IS"), grounded in the organizational application of data processing. This paper reports on a project focusing on a particular slice of that spectrum commonly…
Case Studies of Auditing in a Computer-Based Systems Environment.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC.
In response to a growing need for effective and efficient means for auditing computer-based systems, a number of studies dealing primarily with batch-processing type computer operations have been conducted to explore the impact of computers on auditing activities in the Federal Government. This report first presents some statistical data on…
omniClassifier: a Desktop Grid Computing System for Big Data Prediction Modeling
Phan, John H.; Kothari, Sonal; Wang, May D.
2016-01-01
Robust prediction models are important for numerous science, engineering, and biomedical applications. However, best-practice procedures for optimizing prediction models can be computationally complex, especially when choosing models from among hundreds or thousands of parameter choices. Computational complexity has further increased with the growth of data in these fields, concurrent with the era of “Big Data”. Grid computing is a potential solution to the computational challenges of Big Data. Desktop grid computing, which uses idle CPU cycles of commodity desktop machines, coupled with commercial cloud computing resources can enable research labs to gain easier and more cost effective access to vast computing resources. We have developed omniClassifier, a multi-purpose prediction modeling application that provides researchers with a tool for conducting machine learning research within the guidelines of recommended best-practices. omniClassifier is implemented as a desktop grid computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) middleware. In addition to describing implementation details, we use various gene expression datasets to demonstrate the potential scalability of omniClassifier for efficient and robust Big Data prediction modeling. A prototype of omniClassifier can be accessed at http://omniclassifier.bme.gatech.edu/. PMID:27532062
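The following is a minimal sketch of the best-practice model optimization the abstract refers to: a nested cross-validation in which the parameter search never sees the data used to estimate generalization performance. The synthetic dataset, classifier, and parameter grid are illustrative assumptions and are unrelated to the omniClassifier/BOINC implementation.

```python
# Minimal sketch: nested cross-validation with scikit-learn. The inner loop
# selects parameters; the outer loop estimates performance on unseen folds.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Illustrative stand-in for a gene expression dataset: 200 samples, 500 features.
X, y = make_classification(n_samples=200, n_features=500, n_informative=20, random_state=0)

param_grid = {"C": [0.01, 0.1, 1, 10], "gamma": ["scale", 1e-3, 1e-4]}
inner_search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)   # model selection loop
outer_scores = cross_val_score(inner_search, X, y, cv=5)           # performance estimate

print("outer-fold accuracies:", outer_scores.round(3))
print("mean accuracy        :", outer_scores.mean().round(3))
```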
Integrated command, control, communications and computation system functional architecture
NASA Technical Reports Server (NTRS)
Cooley, C. G.; Gilbert, L. E.
1981-01-01
The functional architecture for an integrated command, control, communications, and computation system applicable to the command and control portion of the NASA End-to-End Data System is described, including the downlink data processing and analysis functions required to support the uplink processes. The functional architecture is composed of four elements: (1) the functional hierarchy, which provides the decomposition and allocation of the command and control functions to the system elements; (2) the key system features, which summarize the major system capabilities; (3) the operational activity threads, which illustrate the interrelationship between the system elements; and (4) the interfaces, which illustrate those elements that originate or generate data and those elements that use the data. The interfaces also provide a description of the data and the data utilization and access techniques.
Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed
NASA Technical Reports Server (NTRS)
Mackin, Michael A.
1995-01-01
This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
Are Handheld Computers Dependable? A New Data Collection System for Classroom-Based Observations
ERIC Educational Resources Information Center
Adiguzel, Tufan; Vannest, Kimberly J.; Parker, Richard I.
2009-01-01
Very little research exists on the dependability of handheld computers used in public school classrooms. This study addresses four dependability criteria--reliability, maintainability, availability, and safety--to evaluate a data collection tool on a handheld computer. Data were collected from five sources: (1) time-use estimations by 19 special…
Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing
NASA Astrophysics Data System (ADS)
Shi, X.
2017-10-01
Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds for utilizing a cluster of Intel's many-integrated-core (MIC) processors or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
A Simple Technique for Securing Data at Rest Stored in a Computing Cloud
NASA Astrophysics Data System (ADS)
Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai
"Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those application and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing Infrastructure. Our simple technique implemented with Open Source software solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical - a scanning application that validates external firewall implementations.
Bio and health informatics meets cloud : BioVLab as an example.
Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun
2013-01-01
The exponential increase in genomic data brought by the advent of next- and third-generation sequencing (NGS) technologies, and the dramatic drop in sequencing cost, have turned the biological and medical sciences into data-driven sciences. This revolutionary paradigm shift comes with challenges in terms of data transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, which makes it a suitable workbench to address these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing for handling ever-increasing biological data. As data increase in size, many research organizations are starting to experience a lack of computing power, which becomes a major hurdle to achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security, and extensibility of features. We then discuss issues and limitations of current cloud systems and conclude with a suggestion of a biological cloud environment concept, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.
A new method to acquire 3-D images of a dental cast
NASA Astrophysics Data System (ADS)
Li, Zhongke; Yi, Yaxing; Zhu, Zhen; Li, Hua; Qin, Yongyuan
2006-01-01
This paper introduces our newly developed method for acquiring three-dimensional images of a dental cast. A rotatable table, a laser knife, a mirror, a CCD camera, and a personal computer make up a three-dimensional data acquisition system. A dental cast is placed on the table and the mirror is installed beside it; a linear laser is projected onto the dental cast, and the CCD camera, mounted above the cast, photographs the cast and its reflection in the mirror. While the table rotates, the camera records the shape of the laser streak projected on the dental cast and transmits the data to the computer. After the table has rotated one full revolution, the computer processes the data and calculates the three-dimensional coordinates of the cast's surface. In the data processing procedure, artificial neural networks are employed to calibrate lens distortion and map coordinates from the screen coordinate system to the world coordinate system. From the three-dimensional coordinates, the computer reconstructs a stereo image of the dental cast, which is essential for computer-aided diagnosis and treatment planning in orthodontics. In comparison with other systems in service, for example laser-beam three-dimensional scanning systems, this three-dimensional data acquisition system is characterized by: (a) speed, taking only 1 minute to scan a dental cast; (b) compactness, with simple machinery; and (c) no blind zone, because the mirror is introduced to reduce the blind zone.
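The following is a minimal sketch of the laser-plane triangulation at the heart of such a system, using an idealized pinhole camera in place of the paper's neural-network calibration. The camera intrinsics, laser-plane parameters, and pixel coordinates are illustrative assumptions.

```python
# Minimal sketch: recover the 3-D point where a camera ray meets the laser plane.
# Pinhole intrinsics and plane parameters below are illustrative assumptions.
import numpy as np

FX = FY = 1200.0            # focal length in pixels
CX, CY = 640.0, 480.0       # principal point (image centre)
PLANE_N = np.array([1.0, 0.0, 0.5])   # laser-plane normal in the camera frame
PLANE_D = 100.0                        # plane equation: n . X = d (millimetres)

def pixel_to_point(u, v):
    """Intersect the viewing ray of pixel (u, v) with the laser plane."""
    ray = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])  # direction through the pixel
    t = PLANE_D / PLANE_N.dot(ray)                       # ray parameter at the plane
    return t * ray                                       # 3-D point in camera frame (mm)

# A few pixels along the detected laser streak (coordinates are illustrative).
for u, v in [(700, 300), (702, 360), (705, 420)]:
    x, y, z = pixel_to_point(u, v)
    print(f"pixel ({u},{v}) -> ({x:7.2f}, {y:7.2f}, {z:7.2f}) mm")
```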
On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.
Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N
2016-04-01
An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on the integration and interpretation of data from different sources and formats. The availability of massive amounts of data and of the computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practice, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address machine learning techniques for supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merging of methods and concepts from nanotechnology and Big Data analysis.
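Since the review frames diagnosis as a classification task, the following sketch illustrates a supervised classifier evaluated by cross-validation. The data are synthetic stand-ins for sensor-derived features, and scikit-learn is assumed as the toolkit; this is not code from the paper.

```python
# Sketch only: diagnosis framed as a supervised classification problem.
# Features and labels are synthetic stand-ins for measurements from
# nanotech-based sensing; scikit-learn is one common choice of toolkit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                 # 200 samples, 12 sensor-derived features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # 0/1 label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)      # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```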
NASA Technical Reports Server (NTRS)
Grove, R. D.; Mayhew, S. C.
1973-01-01
A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. The parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system that uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. The control and display capabilities of this system, as used by the program, are described. The report also documents the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo (simulated) flight data.
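A minimal output-error style sketch of the estimation idea follows; it is not Langley program C1123. It fits a single stability parameter of a toy first-order model to simulated flight data by minimizing the negative log-likelihood, which for Gaussian measurement noise reduces to a sum of squared residuals.

```python
# Minimal output-error sketch (not Langley program C1123): estimate a stability
# parameter by maximizing the likelihood of measured responses, i.e. minimizing
# the sum of squared residuals under Gaussian measurement noise.
import numpy as np
from scipy.optimize import minimize_scalar

dt, n = 0.01, 500
a_true = -2.0                                   # "true" stability parameter (illustrative)

def simulate(a):                                # first-order response x' = a*x + u, with step input
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + 1.0)
    return x

rng = np.random.default_rng(1)
measured = simulate(a_true) + rng.normal(scale=0.01, size=n)    # pseudo flight data

neg_log_like = lambda a: np.sum((measured - simulate(a)) ** 2)  # proportional to -log L
a_hat = minimize_scalar(neg_log_like, bounds=(-10.0, 0.0), method="bounded").x
print(f"estimated parameter: {a_hat:.3f}")
```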
Improvements to information management systems simulator
NASA Technical Reports Server (NTRS)
Bilek, R. W.
1972-01-01
The work performed to augment and improve the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capability for simulating computer system configurations, the data processing loads imposed on those configurations, and the executive software that controls system operations. Through these simulations, NASA has an extremely cost-effective capability for the design and analysis of computer-based data management systems.
The microcomputer in the dental office: a new diagnostic aid.
van der Stelt, P F
1985-06-01
The first computer applications in the dental office were based upon standard accountancy procedures. Recently, more and more computer applications have become available to meet the specific requirements of dental practice. This implies not only business procedures, but also facilities to store patient records in the system and retrieve them easily. Another development concerns the automatic calculation of diagnostic data such as those provided in cephalometric analysis. Furthermore, growth and surgical results in the craniofacial area can be predicted by computerized extrapolation. Computers have been useful in obtaining the patient's anamnestic data objectively and for the making of decisions based on such data. Computer-aided instruction systems have been developed for undergraduate students to bridge the gap between textbook and patient interaction without the risks inherent in the latter. Radiology will undergo substantial changes as a result of the application of electronic imaging devices instead of the conventional radiographic films. Computer-assisted electronic imaging will enable image processing, image enhancement, pattern recognition and data transmission for consultation and storage purposes. Image processing techniques will increase image quality whilst still allowing low-dose systems. Standardization of software and system configuration and the development of 'user friendly' programs is the major concern for the near future.
A new taxonomy for distributed computer systems based upon operating system structure
NASA Technical Reports Server (NTRS)
Foudriat, E. C.
1985-01-01
Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources themselves are too diversified to provide a consistent classification, the structure upon which resources are built and shared is examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers), and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into classes that are relevant to the client rather than to the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.
CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak
NASA Astrophysics Data System (ADS)
Vanderlaan, J. F.; Cummings, J. W.
1993-10-01
The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is the MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software are chronicled, including observations of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and faster computer I/O is also expected to increase data rates.
An Analysis for Capital Expenditure Decisions at a Naval Regional Medical Center.
1981-12-01
Equipment priority rankings, Service versus Equipment Review Committee: 1. Portable defibrillator and cardioscope / Computed tomographic scanner; 2. ECG cart / Automated blood cell counter; 3. Gas system sterilizer / Gas system sterilizer; 4. Automated blood cell counter / Portable defibrillator and cardioscope; 5. Computed tomographic scanner / ECG cart. ... (dictating and automated typing) systems. e. Filing equipment. f. Automatic data processing equipment, including data communications equipment. g. ...
Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A
2017-11-01
Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
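As a hedged illustration of the relational storage step described above, the sketch below uses SQLite as a local stand-in for the Cloud database; the table and column names are assumptions, not the paper's schema, and the repeat sequence is simplified.

```python
# Illustrative only: the kind of relational schema a Cloud back end might use
# for sequence-based STR alleles (table and column names are assumptions).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE str_allele (
                 sample_id TEXT, locus TEXT, repeat_number REAL,
                 sequence TEXT, read_depth INTEGER)""")
con.execute("INSERT INTO str_allele VALUES (?, ?, ?, ?, ?)",
            ("sample-001", "D3S1358", 15, "TCTA" * 15, 1250))   # simplified repeat motif
con.commit()
for row in con.execute("SELECT locus, repeat_number, read_depth FROM str_allele"):
    print(row)
```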
SCUT: clinical data organization for physicians using pen computers.
Wormuth, D. W.
1992-01-01
The role of computers in assisting physicians with patient care is advancing rapidly. One of the significant obstacles to the efficient use of computers in patient care has been the unavailability of reasonably configured portable computers. Lightweight portable computers are becoming more attractive as physician data-management devices, but still pose a significant problem for bedside use. The advent of computers designed to accept pen input and having no keyboard presents a usable platform for physicians to perform clinical computing at the bedside. This paper describes a prototype system to maintain an electronic "scut" sheet. SCUT makes use of pen input and background rule checking to enhance patient care. GO Corporation's PenPoint Operating System is used to implement the SCUT project. PMID:1483012
NASA Astrophysics Data System (ADS)
Darema, F.
2016-12-01
InfoSymbiotics/DDDAS embodies the power of Dynamic Data Driven Applications Systems (DDDAS), a concept in which an executing application model is dynamically integrated, in a feedback loop, with real-time data-acquisition and control components, as well as other data sources of the application system. Such new computational approaches to modeling, simulation, and instrumentation enable advanced capabilities: enhancing the accuracy of the application model; speeding up the computation to allow faster and more comprehensive models of a system and to create decision support systems with the accuracy of full-scale simulations; and, by having the executing application control the instrumentation processes, managing application data more efficiently and addressing the challenge of how to architect and dynamically manage large sets of heterogeneous sensors and controllers. This is an advance over the static and ad hoc approaches of today; with DDDAS these sets of resources can be managed adaptively and in optimized ways. Large-Scale-Dynamic-Data encompasses the next wave of Big Data, namely the dynamic data arising from ubiquitous sensing and control in engineered, natural, and societal systems, instrumented through multitudes of heterogeneous sensors and controllers. The opportunities and challenges at these "large scales" relate not only to data size but to heterogeneity in data, data collection modalities, fidelities, and timescales, ranging from real-time data to archival data. In tandem with this dimension of dynamic data, there is an extended view of Big Computing, which includes the collective computing performed by networked assemblies of multitudes of sensors and controllers, ranging from the high end to the real time, seamlessly integrated and unified, and comprising Large-Scale-Big-Computing. InfoSymbiotics/DDDAS has transformative impact in many application domains, ranging from the nano-scale to the terra-scale and the extra-terra-scale. The talk will address opportunities for new capabilities together with corresponding research challenges, with illustrative examples from several application areas, including environmental sciences, geosciences, and space sciences.
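The toy sketch below illustrates the DDDAS feedback loop in miniature: an executing model assimilates streaming measurements, and its state in turn selects which sensor to poll next. All names and numeric values are illustrative assumptions, not from the talk.

```python
# Toy sketch of the DDDAS feedback loop (not from the talk): an executing model
# is corrected by streaming sensor data, and the model state in turn decides
# which sensor to poll next.
import random

random.seed(0)
state = 20.0                                    # model estimate of a field value
gain = 0.3                                      # blending weight for assimilation

def read_sensor(sensor_id: int) -> float:       # hypothetical instrumentation interface
    return 25.0 + random.gauss(0.0, 0.5) + sensor_id * 0.1

for step in range(10):
    sensor_id = 0 if state < 24.0 else 1        # model output steers the instrumentation
    measurement = read_sensor(sensor_id)
    state = (1.0 - gain) * state + gain * measurement   # measurement assimilated into the model
    print(f"step {step}: polled sensor {sensor_id}, state = {state:.2f}")
```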
VMEbus based computer and real-time UNIX as infrastructure of DAQ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yasu, Y.; Fujii, H.; Nomachi, M.
1994-12-31
This paper describes the infrastructure the authors have constructed for a data acquisition system (DAQ). It reports recent developments concerning an HP VME board computer running LynxOS (HP742rt/HP-RT) and an Alpha/OSF1 system with a VMEbus adapter. The paper also reports the current status of a Benchmark Suite for Data Acquisition (DAQBENCH) for measuring not only the performance of VME/CAMAC access but also that of context switching, inter-process communication, and related operations on various computers, including workstation-based systems and VME board computers.
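DAQBENCH itself is not reproduced here, but the hedged sketch below shows one measurement of the same kind: the round-trip latency of inter-process communication (which also exercises context switching) between two processes connected by a pipe.

```python
# Not DAQBENCH: a minimal analogue of one of its measurements, the round-trip
# latency of inter-process communication between two processes over a pipe.
import time
from multiprocessing import Pipe, Process

def echo(conn):
    while True:
        msg = conn.recv()
        if msg is None:          # sentinel terminates the echo process
            break
        conn.send(msg)

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=echo, args=(child,))
    p.start()
    n = 10_000
    t0 = time.perf_counter()
    for _ in range(n):
        parent.send(b"x")
        parent.recv()            # each round trip forces at least two context switches
    elapsed = time.perf_counter() - t0
    parent.send(None)
    p.join()
    print(f"mean IPC round trip: {elapsed / n * 1e6:.1f} us")
```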
CAD/CAE Integration Enhanced by New CAD Services Standard
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2002-01-01
A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.
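The sketch below is a schematic analogue only, not the OMG CAD Services API and not CORBA code: it shows the underlying idea of one programming interface, written against once by CAE tools, with a separate adapter per CAD system. All class and method names are assumptions.

```python
# Schematic analogue only (not the CAD Services standard itself): one common
# interface for CAE-side code, with per-CAD-system adapters behind it.
from abc import ABC, abstractmethod

class CadSession(ABC):
    @abstractmethod
    def open_model(self, path: str) -> str: ...
    @abstractmethod
    def mass_properties(self, model_id: str) -> dict: ...

class VendorACadSession(CadSession):            # one adapter per CAD system
    def open_model(self, path: str) -> str:
        return f"A:{path}"                      # placeholder model handle
    def mass_properties(self, model_id: str) -> dict:
        return {"mass": 12.3, "volume": 4.5}    # placeholder values

def analyze(session: CadSession, path: str) -> dict:
    """CAE-side code: written once against the common interface."""
    return session.mass_properties(session.open_model(path))

print(analyze(VendorACadSession(), "wing_rib.prt"))
```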
Computer-Assisted Monitoring Of A Complex System
NASA Technical Reports Server (NTRS)
Beil, Bob J.; Mickelson, Eric M.; Sterritt, John M.; Costantino, Rob W.; Houvener, Bob C.; Super, Mike A.
1995-01-01
The Propulsion System Advisor (PSA) computer-based system assists engineers and technicians in analyzing masses of sensory data indicative of the operating conditions of the space shuttle propulsion system during pre-launch and launch activities. It is designed solely for monitoring and does not perform any control functions. Although PSA was developed for a highly specialized application, it serves as a prototype for noncontrolling, computer-based subsystems that monitor other complex systems such as electric-power-distribution networks and factories.
Developing the Next Generation of Science Data System Engineers
NASA Technical Reports Server (NTRS)
Moses, John F.; Behnke, Jeanne; Durachka, Christopher D.
2016-01-01
At Goddard, engineers and scientists with a range of experience in science data systems are needed to employ new technologies and develop advances in capabilities for supporting new Earth and Space science research. Engineers with extensive experience in science data, software engineering, and computer-information architectures are needed to lead and perform these activities. The increasing types and complexity of instrument data and emerging computer technologies, coupled with the current shortage of computer engineers with backgrounds in science, have led to the need to develop a career path for science data systems engineers and architects. The current career path, for undergraduate students studying disciplines such as computer engineering or the physical sciences, generally begins with serving on a development team, where they can work in depth on existing Goddard data systems or serve with a specific NASA science team. There they begin to understand the data, infuse technologies, and learn the architectures of science data systems. From here the typical career involves peer mentoring, on-the-job training, or graduate-level studies in analytics, computational science, and applied science and mathematics. At the most senior level, engineers become subject matter experts and system architecture experts, leading discipline-specific data centers and large software development projects. They are recognized as subject matter experts in a science domain, have project management expertise, lead standards efforts, and lead international projects. A long career development path remains necessary not only because of the breadth of knowledge required across physical sciences and engineering disciplines, but also because of the diversity of instrument data being developed today both by NASA and international partner agencies, and because multidiscipline science and practitioner communities expect to have access to all types of observational data. This paper describes an approach to defining career-path guidance for college-bound high school and undergraduate engineering students and for junior and senior engineers from various disciplines.
AGIS: The ATLAS Grid Information System
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander
2012-12-01
ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate the configuration and status information about the resources, services, and topology of the whole ATLAS Grid that is needed by ATLAS Distributed Computing applications and services.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation time, making this a data-intensive and computing-intensive problem. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of the huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computation and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward, covering the programming model, HDFS configuration, and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4x speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work shows that the proposed cloud algorithm is capable of addressing the computing-intensive and data-intensive issues in SAR raw data simulation and is easily extended to large-scale computing to achieve higher acceleration. PMID:28075343
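A minimal sketch of the MapReduce decomposition follows: scatterers map to partial echo contributions keyed by pulse index, and the reduce step performs the irregular accumulation. The echo model is a placeholder and the Hadoop/HDFS plumbing is omitted; this is not the authors' code.

```python
# Sketch only (not the authors' implementation): the MapReduce decomposition of
# raw-data accumulation. Each scatterer maps to partial echo contributions
# keyed by pulse index; the reduce step sums contributions per pulse.
from collections import defaultdict

scatterers = [(1.0, 2.0, 0.8), (3.5, -1.0, 1.2), (0.2, 0.4, 0.5)]   # (x, y, reflectivity)
num_pulses = 8

def map_scatterer(x, y, sigma):
    for pulse in range(num_pulses):
        yield pulse, sigma / (1.0 + (x - pulse) ** 2 + y ** 2)      # placeholder echo model

def reduce_by_pulse(pairs):
    raw = defaultdict(float)
    for pulse, contrib in pairs:
        raw[pulse] += contrib                                        # irregular accumulation
    return dict(raw)

pairs = (kv for s in scatterers for kv in map_scatterer(*s))
print(reduce_by_pulse(pairs))
```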
NASA Technical Reports Server (NTRS)
Milner, E. J.; Krosel, S. M.
1977-01-01
Techniques are presented for determining the elements of the A, B, C, and D state variable matrices for systems simulated on an EAI Pacer 100 hybrid computer. An automated procedure systematically generates disturbance data necessary to linearize the simulation model and stores these data on a floppy disk. A separate digital program verifies this data, calculates the elements of the system matrices, and prints these matrices appropriately labeled. The partial derivatives forming the elements of the state variable matrices are approximated by finite difference calculations.
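A compact illustration of the finite-difference step is sketched below, with a toy nonlinear model standing in for the hybrid-computer simulation; central differences approximate A = ∂f/∂x, B = ∂f/∂u, C = ∂g/∂x, D = ∂g/∂u at an operating point. The model and step size are illustrative assumptions.

```python
# Sketch of finite-difference linearization, using a toy nonlinear model in
# place of the hybrid-computer simulation described above.
import numpy as np

def f(x, u):                      # state derivatives of a toy 2-state, 1-input system
    return np.array([x[1], -np.sin(x[0]) - 0.2 * x[1] + u[0]])

def g(x, u):                      # output equation
    return np.array([x[0] + 0.1 * u[0]])

def num_jacobian(func, v, eps=1e-6):
    """Central-difference Jacobian of func(v) with respect to v."""
    base = np.asarray(func(v))
    J = np.zeros((base.size, v.size))
    for i in range(v.size):
        dv = np.zeros_like(v); dv[i] = eps
        J[:, i] = (np.asarray(func(v + dv)) - np.asarray(func(v - dv))) / (2 * eps)
    return J

x0, u0 = np.array([0.1, 0.0]), np.array([0.0])    # operating point
A = num_jacobian(lambda x: f(x, u0), x0)          # df/dx
B = num_jacobian(lambda u: f(x0, u), u0)          # df/du
C = num_jacobian(lambda x: g(x, u0), x0)          # dg/dx
D = num_jacobian(lambda u: g(x0, u), u0)          # dg/du
print("A =\n", A, "\nB =\n", B, "\nC =\n", C, "\nD =\n", D)
```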
Belle II grid computing: An overview of the distributed data management system.
NASA Astrophysics Data System (ADS)
Bansal, Vikas; Schram, Malachi; Belle Collaboration, II
2017-01-01
The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab⁻¹ of e+e− collision data, about 50 times larger than the data set of the Belle experiment. The computing requirements of Belle II are comparable to those of a Run I LHC experiment. Computing at this scale requires efficient use of the compute grids in North America, Asia and Europe and will take advantage of upgrades to the high-speed global network. We present the architecture of data flow and data handling as a part of the Belle II computing infrastructure.
ACToR: Aggregated Computational Toxicology Resource ...
We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.