Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...
41 CFR 105-64.110 - When may GSA establish computer matching programs?
Code of Federal Regulations, 2013 CFR
2013-07-01
... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...
41 CFR 105-64.110 - When may GSA establish computer matching programs?
Code of Federal Regulations, 2012 CFR
2012-01-01
... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...
41 CFR 105-64.110 - When may GSA establish computer matching programs?
Code of Federal Regulations, 2014 CFR
2014-01-01
... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...
41 CFR 105-64.110 - When may GSA establish computer matching programs?
Code of Federal Regulations, 2010 CFR
2010-07-01
... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...
41 CFR 105-64.110 - When may GSA establish computer matching programs?
Code of Federal Regulations, 2011 CFR
2011-01-01
... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...
AEC Experiment Establishes Computer Link Between California and Paris
demonstrated that a terminal in Paris could search a computer in California and display the resulting... The feasibility of a worldwide information retrieval system which would tie a computer base of information to terminals on the
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
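As a rough illustration of how such a coverage model composes, the sketch below assumes, as is common in the fault-tolerance literature but not stated in the abstract, that coverage is the product of detection, isolation, and recovery probabilities, with bounds obtained from per-stage intervals:

```python
# Illustrative sketch only: composes a coverage probability from detection,
# isolation, and recovery probabilities, a common fault-tolerance convention;
# the report's exact formulation is not reproduced here.

def coverage(p_detect: float, p_isolate: float, p_recover: float) -> float:
    """Probability that a fault is successfully detected, isolated, and recovered from."""
    return p_detect * p_isolate * p_recover

def coverage_bounds(stage_bounds):
    """Upper and lower bounds on coverage from per-stage probability intervals.

    stage_bounds: iterable of (low, high) pairs, one per stage.
    """
    lo = hi = 1.0
    for low, high in stage_bounds:
        lo *= low
        hi *= high
    return lo, hi

# Example intervals (invented): detection, isolation, recovery
print(coverage_bounds([(0.95, 0.99), (0.90, 0.98), (0.92, 0.97)]))
```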
A new security model for collaborative environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Deborah; Lorch, Markus; Thompson, Mary
Prevalent authentication and authorization models for distributed systems provide for the protection of computer systems and resources from unauthorized use. The rules and policies that drive the access decisions in such systems are typically configured up front and require trust establishment before the systems can be used. This approach does not work well for computer software that moderates human-to-human interaction. This work proposes a new model for trust establishment and management in computer systems supporting collaborative work. The model supports the dynamic addition of new users to a collaboration with very little initial trust placed in their identity and supports the incremental building of trust relationships through endorsements from established collaborators. It also recognizes the strength of a user's authentication when making trust decisions. By mimicking the way humans build trust naturally, the model can support a wide variety of usage scenarios. Its particular strength lies in the support for ad-hoc and dynamic collaborations and the ubiquitous access to a Computer Supported Collaboration Workspace (CSCW) system from locations with varying levels of trust and security.
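The endorsement mechanism lends itself to a brief illustration. The sketch below is a hypothetical reading of the model, not the authors' implementation: a newcomer starts with very little trust, and each endorsement raises it in proportion to the endorser's own trust and the strength of the newcomer's authentication.

```python
# Hypothetical sketch of incremental trust building via endorsements;
# the weighting scheme here is an assumption, not the paper's model.

class Collaborator:
    def __init__(self, name: str, auth_strength: float, trust: float = 0.05):
        self.name = name
        self.auth_strength = auth_strength  # e.g. 0.3 password, 0.9 PKI certificate
        self.trust = trust                  # newcomers start near zero

    def endorse(self, newcomer: "Collaborator", weight: float = 0.2) -> None:
        # An endorsement counts more when the endorser is trusted and the
        # newcomer's identity was strongly authenticated.
        gain = weight * self.trust * newcomer.auth_strength
        newcomer.trust = min(1.0, newcomer.trust + gain)

alice = Collaborator("alice", auth_strength=0.9, trust=0.9)  # established member
bob = Collaborator("bob", auth_strength=0.6)                 # new member
alice.endorse(bob)
print(f"bob's trust after endorsement: {bob.trust:.3f}")
```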
47 CFR 73.151 - Field strength measurements to establish performance of directional antennas.
Code of Federal Regulations, 2010 CFR
2010-10-01
... verified either by field strength measurement or by computer modeling and sampling system verification. (a... specifically identified by the Commission. (c) Computer modeling and sample system verification of modeled... performance verified by computer modeling and sample system verification. (1) A matrix of impedance...
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
22 CFR 1101.4 - Reports on new systems of records; computer matching programs.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 2 2012-04-01 2009-04-01 true Reports on new systems of records; computer matching programs. 1101.4 Section 1101.4 Foreign Relations INTERNATIONAL BOUNDARY AND WATER COMMISSION... records; computer matching programs. (a) Before establishing any new systems of records, or making any...
22 CFR 1101.4 - Reports on new systems of records; computer matching programs.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 2 2014-04-01 2014-04-01 false Reports on new systems of records; computer matching programs. 1101.4 Section 1101.4 Foreign Relations INTERNATIONAL BOUNDARY AND WATER COMMISSION... records; computer matching programs. (a) Before establishing any new systems of records, or making any...
22 CFR 1101.4 - Reports on new systems of records; computer matching programs.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 2 2013-04-01 2009-04-01 true Reports on new systems of records; computer matching programs. 1101.4 Section 1101.4 Foreign Relations INTERNATIONAL BOUNDARY AND WATER COMMISSION... records; computer matching programs. (a) Before establishing any new systems of records, or making any...
22 CFR 1101.4 - Reports on new systems of records; computer matching programs.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Reports on new systems of records; computer matching programs. 1101.4 Section 1101.4 Foreign Relations INTERNATIONAL BOUNDARY AND WATER COMMISSION... records; computer matching programs. (a) Before establishing any new systems of records, or making any...
ERIC Educational Resources Information Center
Longenecker, Herbert E., Jr.; Babb, Jeffry; Waguespack, Leslie J.; Janicki, Thomas N.; Feinstein, David
2015-01-01
The evolution of computing education spans a spectrum from "computer science" ("CS") grounded in the theory of computing, to "information systems" ("IS"), grounded in the organizational application of data processing. This paper reports on a project focusing on a particular slice of that spectrum commonly…
ERIC Educational Resources Information Center
Nee, John G.; Kare, Audhut P.
1987-01-01
Explores several concepts in computer-aided design/computer-aided manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)
Telesoftware. CET Information Sheet No. 3.
ERIC Educational Resources Information Center
Council for Educational Technology, London (England).
Telesoftware provides the transmission of computer programs from one computer to another by either broadcast radio or television via telephone lines and offers a national electronic system for the distribution of computer programs. Telephone based telesoftware can be based on any viewdata system or locally established telephone lines between…
Research Trends in Computational Linguistics. Conference Report.
ERIC Educational Resources Information Center
Center for Applied Linguistics, Washington, DC.
This document contains the reports summarizing the main discussion held during the March 1972 Computational Linguistics Conference. The first report, "Computational Linguistics and Linguistics," helps to establish definitions and an understanding of the scope of computational linguistics. "Integrated Computer Systems for Language" and…
Key Issues in Instructional Computer Graphics.
ERIC Educational Resources Information Center
Wozny, Michael J.
1981-01-01
Addresses key issues facing universities which plan to establish instructional computer graphics facilities, including computer-aided design/computer aided manufacturing systems, role in curriculum, hardware, software, writing instructional software, faculty involvement, operations, and research. Thirty-seven references and two appendices are…
MIRADS-2 Implementation Manual
NASA Technical Reports Server (NTRS)
1975-01-01
The Marshall Information Retrieval and Display System (MIRADS), which is a data base management system designed to provide the user with a set of generalized file capabilities, is presented. The system provides a wide variety of ways to process the contents of the data base and includes capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process, which include (1) structuring, (2) creating, (3) defining, and (4) implementing the data base for use by MIRADS, are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. The MIRADS Library must be established as a cataloged mass storage file as the first step in MIRADS implementation. The procedure for establishing the MIRADS Library is given. The system is currently operational for the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.
Applicability of computational systems biology in toxicology.
Kongsbak, Kristine; Hadrup, Niels; Audouze, Karine; Vinggaard, Anne Marie
2014-07-01
Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally. This is possible due to the existence of comprehensive databases containing information on networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of the specific chemical of interest can be fed into these networks to obtain additional information that can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research. © 2014 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
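As a toy illustration of the workflow described above, the sketch below feeds experimentally determined targets into a protein-protein interaction network and collects nearby disease associations; the proteins, edges, and disease links are invented placeholders, and a small networkx graph stands in for the comprehensive databases the abstract mentions.

```python
# Toy sketch of hypothesis generation over a protein-protein interaction
# network; all nodes, edges, and disease links below are invented examples.
import networkx as nx

ppi = nx.Graph()
ppi.add_edges_from([("P1", "P2"), ("P2", "P3"), ("P1", "P4")])

protein_disease = {"P3": {"disease_A"}, "P4": {"disease_B"}}

def candidate_diseases(targets, max_hops=2):
    """Collect diseases linked to proteins within max_hops of any target."""
    diseases = set()
    for t in targets:
        reachable = nx.single_source_shortest_path_length(ppi, t, cutoff=max_hops)
        for protein in reachable:
            diseases |= protein_disease.get(protein, set())
    return diseases

# Experimentally determined targets of a chemical of interest (invented):
print(candidate_diseases({"P1"}))  # -> {'disease_A', 'disease_B'}
```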
32 CFR Appendix E to Part 806b - Privacy Impact Assessment
Code of Federal Regulations, 2011 CFR
2011-07-01
... Systems Development System Privacy. Rapid advancements in computer technology make it possible to store...-503, The Computer Matching and Privacy Act of 1988. 13 13 http://www.defenselink.mil/privacy/1975OMB_PAGuide/jun1989.pdf. (2) Public Law 100-235, The Computer Security Act of 1987, 14 which establishes...
32 CFR Appendix E to Part 806b - Privacy Impact Assessment
Code of Federal Regulations, 2014 CFR
2014-07-01
... Systems Development System Privacy. Rapid advancements in computer technology make it possible to store...-503, The Computer Matching and Privacy Act of 1988. 13 13 http://www.defenselink.mil/privacy/1975OMB_PAGuide/jun1989.pdf. (2) Public Law 100-235, The Computer Security Act of 1987, 14 which establishes...
32 CFR Appendix E to Part 806b - Privacy Impact Assessment
Code of Federal Regulations, 2012 CFR
2012-07-01
... Systems Development System Privacy. Rapid advancements in computer technology make it possible to store...-503, The Computer Matching and Privacy Act of 1988. 13 13 http://www.defenselink.mil/privacy/1975OMB_PAGuide/jun1989.pdf. (2) Public Law 100-235, The Computer Security Act of 1987, 14 which establishes...
32 CFR Appendix E to Part 806b - Privacy Impact Assessment
Code of Federal Regulations, 2013 CFR
2013-07-01
... Systems Development System Privacy. Rapid advancements in computer technology make it possible to store...-503, The Computer Matching and Privacy Act of 1988. 13 13 http://www.defenselink.mil/privacy/1975OMB_PAGuide/jun1989.pdf. (2) Public Law 100-235, The Computer Security Act of 1987, 14 which establishes...
Benefits of Exchange Between Computer Scientists and Perceptual Scientists: A Panel Discussion
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Null, Cynthia H. (Technical Monitor)
1995-01-01
We have established several major goals for this panel: 1) Introduce the computer graphics community to some specific leaders in the use of perceptual psychology relating to computer graphics; 2) Enumerate the major results that are known, and provide a set of resources for finding others; 3) Identify research areas where knowledge of perceptual psychology can help computer system designers improve their systems; and 4) Provide advice to researchers on how they can establish collaborations in their own research programs. We believe this will be a very important panel. In addition to generating lively discussion, we hope to point out some of the fundamental issues that occur at the boundary between computer science and perception, and possibly help researchers avoid some of the common pitfalls.
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools. Its tasks include validating the distributed production system by performing functionality, reliability and scale tests; helping sites to commission, configure and optimize the networking and storage through scale-testing data transfers and data processing; and improving the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
NASA Technical Reports Server (NTRS)
Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.
1986-01-01
Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2011 CFR
2011-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2013 CFR
2013-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2012 CFR
2012-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2014 CFR
2014-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
48 CFR 212.7003 - Technical data and computer software.
Code of Federal Regulations, 2010 CFR
2010-10-01
... computer software. 212.7003 Section 212.7003 Federal Acquisition Regulations System DEFENSE ACQUISITION... data and computer software. For purposes of establishing delivery requirements and license rights for technical data under 227.7102 and for computer software under 227.7202, there shall be a rebuttable...
Managing Computer Systems Development: Understanding the Human and Technological Imperatives.
1985-06-01
for their organization's use? How can they predict the impact of future systems on their management control capabilities? Of equal importance is the...commercial organizations discovered that there was only a limited capability of interaction between various types of computers. These organizations were...Viewed together, these three interrelated subsystems, EDP, MIS, and DSS, establish the framework of an overall systems capability known as a Computer
Information management system study results. Volume 2: IMS study results appendixes
NASA Technical Reports Server (NTRS)
1971-01-01
Computer systems program specifications are presented for the modular space station information management system. These are the computer program contract end item, data bus system, data bus breadboard, and display interface adapter specifications. The performance, design, tests, and qualification requirements are established for the implementation of the information management system. For Vol. 1, see N72-19972.
An Undergraduate Course on Operating Systems Principles.
ERIC Educational Resources Information Center
National Academy of Engineering, Washington, DC. Commission on Education.
This report is from Task Force VIII of the COSINE Committee of the Commission on Education of the National Academy of Engineering. The task force was established to formulate subject matter for an elective undergraduate subject on computer operating systems principles for students whose major interest is in the engineering of computer systems and…
An Innovative Improvement of Engineering Learning System Using Computational Fluid Dynamics Concept
ERIC Educational Resources Information Center
Hung, T. C.; Wang, S. K.; Tai, S. W.; Hung, C. T.
2007-01-01
An innovative concept of an electronic learning system has been established in an attempt to achieve a technology that provides engineering students with an instructive and affordable framework for learning engineering-related courses. This system utilizes an existing Computational Fluid Dynamics (CFD) package, Active Server Pages programming,…
32 CFR 806b.35 - Balancing protection.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...
32 CFR 806b.35 - Balancing protection.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...
32 CFR 806b.35 - Balancing protection.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...
32 CFR 806b.35 - Balancing protection.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...
32 CFR 806b.35 - Balancing protection.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...
Park, Joo Hyun; Son, Ji Young; Kim, Sun
2012-09-01
The purpose of this study was to establish an e-learning system to support learning in medical education and identify solutions for improving the system. A learning management system (LMS) and computer-based test (CBT) system were established to support e-learning for medical students. A survey of 219 first- and second-year medical students was administered. The questionnaire included 9 forced-choice questions about the usability of the system and 2 open-ended questions about necessary improvements to the system. The LMS consisted of a class management, class evaluation, and class attendance system. The CBT system consisted of a test management, item bank, and authoring tool system. The results of the survey showed a high level of satisfaction in all system usability items except for stability. Further, the advantages of the e-learning system were ensuring information accessibility, providing constant feedback, and designing an intuitive interface. Necessary improvements to the system were stability, user control, readability, and diverse device usage. Based on the findings, suggestions are made for developing an e-learning system that improves usability for medical students and supports learning effectively.
Broadcasting a message in a parallel computer
Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN
2011-08-02
Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
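For intuition, on a two-dimensional mesh a Hamiltonian path can be constructed as a simple serpentine walk over the rows, and the broadcast then amounts to forwarding the message hop by hop along that path. The sketch below illustrates the idea only; it is not the patent's actual path construction or network interface.

```python
# Illustrative only: a serpentine (boustrophedon) Hamiltonian path over a
# 2D mesh of compute nodes, and a broadcast that forwards along the path.

def serpentine_path(rows: int, cols: int):
    """Visit every (row, col) node exactly once, snaking row by row."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

def broadcast(message: str, rows: int, cols: int):
    """Logical root (first node on the path) forwards to each successor."""
    path = serpentine_path(rows, cols)
    inbox = {}
    for node in path:          # each hop is one point-to-point send
        inbox[node] = message
    return inbox

print(serpentine_path(2, 3))   # [(0,0), (0,1), (0,2), (1,2), (1,1), (1,0)]
```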
Computing Quantitative Characteristics of Finite-State Real-Time Systems
1994-05-04
Current methods for verifying real-time systems are essentially decision procedures that establish whether the system model satisfies a given...specification. We present a general method for computing quantitative information about finite-state real-time systems. We have developed algorithms that...our technique can be extended to a more general representation of real-time systems, namely, timed transition graphs. The algorithms presented in this
SUMC fault tolerant computer system
NASA Technical Reports Server (NTRS)
1980-01-01
The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost-effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and the memory address expansion is also described.
PACE 2: Pricing and Cost Estimating Handbook
NASA Technical Reports Server (NTRS)
Stewart, R. D.; Shepherd, T.
1977-01-01
An automatic data processing system to be used for the preparation of industrial engineering type manhour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers since all mathematical and clerical functions are automatic once basic inputs are derived.
49 CFR 383.73 - State procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... endorsement knowledge tests; (iv) Allow only a group-specific passenger (P) and school bus (S) endorsement and... must verify the name, date of birth, and Social Security Number provided by the applicant with the...-domiciled CDL. (n) Computer system controls. The State must establish computer system controls that will: (1...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... the entire information system with respect to computer security, prohibition and detection of any.... Safeguards: --Computer-stored information is protected in accordance with the Agency's security requirements..., loaner car agreement, cash incentives agreement (includes social security number for mandatory tax...
A Course on Reconfigurable Processors
ERIC Educational Resources Information Center
Shoufan, Abdulhadi; Huss, Sorin A.
2010-01-01
Reconfigurable computing is an established field in computer science. Teaching this field to computer science students demands special attention due to limited student experience in electronics and digital system design. This article presents a compact course on reconfigurable processors, which was offered at the Technische Universitat Darmstadt,…
Some system considerations in configuring a digital flight control - navigation system
NASA Technical Reports Server (NTRS)
Boone, J. H.; Flynn, G. R.
1976-01-01
A trade study was conducted with the objective of providing a technical guideline for selection of the most appropriate computer technology for the automatic flight control system of a civil subsonic jet transport. The trade study considers aspects of using either an analog, incremental-type special purpose computer or a general purpose computer to perform critical autopilot computation functions. It also considers aspects of integrating noncritical autopilot and autothrottle modes into the computer performing the critical autoland functions, as compared to federating the noncritical modes into either a separate computer or an R-Nav computer. The study is accomplished by establishing the relative advantages and/or risks associated with each of the computer configurations.
National resource for computation in chemistry, phase I: evaluation and recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-05-01
The National Resource for Computation in Chemistry (NRCC) was inaugurated at the Lawrence Berkeley Laboratory (LBL) in October 1977, with joint funding by the Department of Energy (DOE) and the National Science Foundation (NSF). The chief activities of the NRCC include: assembling a staff of eight postdoctoral computational chemists, establishing an office complex at LBL, purchasing a midi-computer and graphics display system, administering grants of computer time, conducting nine workshops in selected areas of computational chemistry, compiling a library of computer programs with adaptations and improvements, initiating a software distribution system, providing user assistance and consultation on request. This report presents assessments and recommendations of an Ad Hoc Review Committee appointed by the DOE and NSF in January 1980. The recommendations are that NRCC should: (1) not fund grants for computing time or research but leave that to the relevant agencies, (2) continue the Workshop Program in a mode similar to Phase I, (3) abandon in-house program development and establish instead a competitive external postdoctoral program in chemistry software development administered by the Policy Board and Director, and (4) not attempt a software distribution system (leaving that function to the QCPE). Furthermore, (5) DOE should continue to make its computational facilities available to outside users (at normal cost rates) and should find some way to allow the chemical community to gain occasional access to a CRAY-level computer.
NASA Technical Reports Server (NTRS)
Mckay, C. W.; Bown, R. L.
1985-01-01
The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.
ERIC Educational Resources Information Center
Data Research Associates, Inc., St. Louis, MO.
The topic of open systems as it relates to the needs of libraries to establish interoperability between dissimilar computer systems can be clarified by an understanding of the background and evolution of the issue. The International Standards Organization developed a model to link dissimilar computers, and this model has evolved into consensus…
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.
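The performability idea can be made concrete with a small numerical sketch: weight the performance level (reward) of each degraded configuration by the probability of occupying it. The numbers below are invented, and the referenced framework is considerably more general (time-varying models, phased missions).

```python
# Sketch of a performability calculation with invented numbers: expected
# reward over system states, where each state is a degraded configuration.
import numpy as np

# State occupancy probabilities at some mission time t (e.g. 3, 2, 1, 0
# working processors), and the performance level delivered in each state.
occupancy = np.array([0.90, 0.07, 0.02, 0.01])
reward    = np.array([1.00, 0.60, 0.25, 0.00])  # fraction of full throughput

performability = float(occupancy @ reward)
print(f"expected performance level: {performability:.3f}")
```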
49 CFR 383.73 - State procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... endorsement knowledge tests; (iv) Allow only a group-specific passenger (P) and school bus (S) endorsement and... verification. (1) Prior to issuing a CLP or a CDL to a person the State must verify the name, date of birth... of issuance of the CLP or CDL. (n) Computer system controls. The State must establish computer system...
49 CFR 383.73 - State procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... endorsement knowledge tests; (iv) Allow only a group-specific passenger (P) and school bus (S) endorsement and... verification. (1) Prior to issuing a CLP or a CDL to a person the State must verify the name, date of birth... of issuance of the CLP or CDL. (n) Computer system controls. The State must establish computer system...
Black hole based quantum computing in labs and in the sky
NASA Astrophysics Data System (ADS)
Dvali, Gia; Panchenko, Mischa
2016-08-01
Analyzing some well-established facts, we give a model-independent parameterization of black hole quantum computing in terms of a set of macro and micro quantities and their relations. These include the relations between the extraordinarily small energy gap of black hole qubits and important time-scales of information processing, such as the scrambling time and Page's time. We then show, confirming and extending previous results, that other systems of nature with identical quantum informatics features are attractive Bose-Einstein systems at the critical point of a quantum phase transition. Here we establish a complete isomorphy between the quantum computational properties of these two systems. In particular, we show that the quantum hair of a critical condensate is strikingly similar to the quantum hair of a black hole. Irrespective of whether one takes the similarity between the two systems as a remarkable coincidence or as a sign of a deeper underlying connection, the following is evident. Black holes are not unique in their way of quantum information processing, and we can manufacture black hole based quantum computers in labs by taking advantage of quantum criticality.
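For orientation, two of the time-scales mentioned above have standard order-of-magnitude expressions, quoted here as textbook estimates rather than as this paper's specific parameterization:

```latex
% Standard order-of-magnitude estimates (not this paper's parameterization):
% the scrambling time of a black hole with entropy S and Hawking temperature
% T_H grows only logarithmically with S, while the Page time of a black hole
% of mass M is when roughly half its entropy has been radiated.
t_{\mathrm{scr}} \sim \frac{\hbar}{k_B T_H}\,\ln S ,
\qquad
t_{\mathrm{Page}} \sim \frac{G^2 M^3}{\hbar c^4}
```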
Central Computational Facility CCF communications subsystem options
NASA Technical Reports Server (NTRS)
Hennigan, K. B.
1979-01-01
A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.
Mobile Computer-Assisted-Instruction in Rural New Mexico.
ERIC Educational Resources Information Center
Gittinger, Jack D., Jr.
The University of New Mexico's three-year Computer Assisted Instruction Project established one mobile and five permanent laboratories offering remedial and vocational instruction in winter, 1984-85. Each laboratory has a Degem learning system with minicomputer, teacher terminal, and 32 student terminals. A Digital PDP-11 host computer runs the…
İnal, Tolga; Ataç, Gökçe
2014-01-01
PURPOSE We aimed to determine the radiation doses delivered to patients undergoing general examinations using computed or digital radiography systems in Turkey. MATERIALS AND METHODS Radiographs of 20 patients undergoing posteroanterior chest X-ray and of 20 patients undergoing anteroposterior kidney-ureter-bladder radiography were evaluated in five X-ray rooms at four local hospitals in the Ankara region. Currently, almost all radiology departments in Turkey have switched from conventional radiography systems to computed radiography or digital radiography systems. Patient dose was measured for both systems. The results were compared with published diagnostic reference levels (DRLs) from the European Union and International Atomic Energy Agency. RESULTS The average entrance surface doses (ESDs) for chest examinations exceeded established international DRLs at two of the X-ray rooms in a hospital with computed radiography. All of the other ESD measurements were approximately equal to or below the DRLs for both examinations in all of the remaining hospitals. Improper adjustment of the exposure parameters, uncalibrated automatic exposure control systems, and failure of the technologists to choose exposure parameters properly were problems we noticed during the study. CONCLUSION This study is an initial attempt at establishing local DRL values for digital radiography systems, and will provide a benchmark so that the authorities can establish reference dose levels for diagnostic radiology in Turkey. PMID:24317331
Enhancing Security by System-Level Virtualization in Cloud Computing Environments
NASA Astrophysics Data System (ADS)
Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei
Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient for almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the varieties of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.
Laboratory for Computer Science Progress Report 19, 1 July 1981-30 June 1982.
1984-05-01
Contents include: Multiprocessor Architectures; TRIX Operating System; VLSI Tools; Systematic Program Development. ...exploring distributed operating systems and the architecture of single-user powerful computers that are interconnected by communication networks. The...to now. In particular, we expect to experiment with languages, operating systems, and applications that establish the feasibility of distributed
Automatic summary generating technology of vegetable traceability for information sharing
NASA Astrophysics Data System (ADS)
Zhenxuan, Zhang; Minjing, Peng
2017-06-01
To address the problems of excessive data entry and the consequent high data-collection costs that farmers face in vegetable traceability applications, an automatic summary generating technology for sharing vegetable traceability information is proposed. The proposed technology is an effective way for farmers to share real-time vegetable planting information on social networking platforms to enhance their brands and attract more customers. In this research, the factors influencing vegetable traceability for customers were analyzed to establish the sub-indicators and target indicators, and a computing model was proposed based on the collected parameter values of the planted vegetables and standard legal systems on food safety. The proposed standard parameter model involves five steps: accessing the database, establishing target indicators, establishing sub-indicators, establishing a standard reference model, and computing scores of indicators. With the standards for food safety and the traceability system established and optimized, the proposed technology could be adopted by more and more farmers and customers.
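The five-step computing model can be pictured as a small scoring pipeline. The sketch below is an invented illustration: the indicator names, reference limits, and weights are assumptions, not values from the paper.

```python
# Invented illustration of the five-step scoring idea: compare collected
# planting parameters against standard reference limits, score each
# sub-indicator, and aggregate into a target indicator.

REFERENCE_LIMITS = {            # step 4: standard reference model (invented)
    "pesticide_residue_mg_kg": 0.5,
    "heavy_metal_mg_kg": 0.2,
}
WEIGHTS = {"pesticide_residue_mg_kg": 0.6, "heavy_metal_mg_kg": 0.4}

def sub_indicator_score(name: str, measured: float) -> float:
    """Step 5a: 1.0 at zero, 0.0 at the legal limit, clipped below zero."""
    limit = REFERENCE_LIMITS[name]
    return max(0.0, 1.0 - measured / limit)

def target_indicator(sample: dict) -> float:
    """Step 5b: weighted aggregate across sub-indicators."""
    return sum(WEIGHTS[k] * sub_indicator_score(k, v) for k, v in sample.items())

# Steps 1-3 (database access, indicator setup) are elided; invented sample:
sample = {"pesticide_residue_mg_kg": 0.1, "heavy_metal_mg_kg": 0.05}
print(f"traceability score: {target_indicator(sample):.2f}")
```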
NASA Astrophysics Data System (ADS)
Brandic, Ivona; Music, Dejan; Dustdar, Schahram
Nowadays, novel computing paradigms such as Cloud Computing are gaining more and more importance. In the case of Cloud Computing, users pay for the usage of the computing power provided as a service. Beforehand, they can negotiate specific functional and non-functional requirements relevant for the application execution. However, providing computing power as a service poses different research challenges. On the one hand, dynamic, versatile, and adaptable services are required, which can cope with system failures and environmental changes. On the other hand, human interaction with the system should be minimized. In this chapter we present the first results in establishing adaptable, versatile, and dynamic services considering negotiation bootstrapping and service mediation, achieved in the context of the Foundations of Self-Governing ICT Infrastructures (FoSII) project. We discuss novel meta-negotiation and SLA mapping solutions for Cloud services, bridging the gap between current QoS models and Cloud middleware and representing important prerequisites for the establishment of autonomic Cloud services.
The Association between Students' Use of an Electronic Voting System and their Learning Outcomes
ERIC Educational Resources Information Center
Kennedy, G. E.; Cutts, Q. I.
2005-01-01
This paper reports on the use of an electronic voting system (EVS) in a first-year computing science subject. Previous investigations suggest that students' use of an EVS would be positively associated with their learning outcomes. However, no research has established this relationship empirically. This study sought to establish whether there was…
Using a Computer-based Messaging System at a High School To Increase School/Home Communication.
ERIC Educational Resources Information Center
Burden, Mitzi K.
Minimal communication between school and home was found to contribute to low performance by students at McDuffie High School (South Carolina). This report describes the experience of establishing a computer-based telephone messaging system in the high school and involving parents, teachers, and students in its use. Additional strategies employed…
Data management in engineering
NASA Technical Reports Server (NTRS)
Browne, J. C.
1976-01-01
An introduction to computer based data management is presented with an orientation toward the needs of engineering application. The characteristics and structure of data management systems are discussed. A link to familiar engineering applications of computing is established through a discussion of data structure and data access procedures. An example data management system for a hypothetical engineering application is presented.
Wearable computer technology for dismounted applications
NASA Astrophysics Data System (ADS)
Daniels, Reginald
2010-04-01
Small computing devices which rival the compact size of traditional personal digital assistants (PDA) have recently established a market niche. These computing devices are small enough to be considered unobtrusive for humans to wear. The computing devices are also powerful enough to run full multi-tasking general purpose operating systems. This paper will explore the wearable computer information system for dismounted applications recently fielded for ground-based US Air Force use. The environments that the information systems are used in will be reviewed, as well as a description of the net-centric, ground-based warrior. The paper will conclude with a discussion regarding the importance of intuitive, usable, and unobtrusive operator interfaces for dismounted operators.
NASA Technical Reports Server (NTRS)
Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.
1992-01-01
Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
NASA Technical Reports Server (NTRS)
Dodson, D. W.; Shields, N. L., Jr.
1979-01-01
Individual Spacelab experiments are responsible for developing their CRT display formats and interactive command scenarios for payload crew monitoring and control of experiment operations via the Spacelab Data Display System (DDS). In order to enhance crew training and flight operations, it was important to establish some standardization of the crew/experiment interface among different experiments by providing standard methods and techniques for data presentation and experiment commanding via the DDS. In order to establish optimum usage guidelines for the Spacelab DDS, the capabilities and limitations of the hardware and Experiment Computer Operating System design had to be considered. Since the operating system software and hardware design had already been established, the Display and Command Usage Guidelines were constrained to the capabilities of the existing system design. Empirical evaluations were conducted on a DDS simulator to determine optimum operator/system interface utilization of the system capabilities. Display parameters such as information location, display density, data organization, status presentation and dynamic update effects were evaluated in terms of response times and error rates.
ERIC Educational Resources Information Center
Logan, Keri
2007-01-01
It has been well established in the literature that girls are turning their backs on computing courses at all levels of the education system. One reason given for this is that the computer learning environment is not conducive to girls, and it is often suggested that they would benefit from learning computing in a single-sex environment. The…
NASA Technical Reports Server (NTRS)
Rushby, John
1991-01-01
The formal specification and mechanically checked verification for a model of fault-masking and transient-recovery among the replicated computers of digital flight-control systems are presented. The verification establishes, subject to certain carefully stated assumptions, that faults among the component computers are masked so that commands sent to the actuators are the same as those that would be sent by a single computer that suffers no failures.
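The key property can be pictured with a simple majority voter: as long as a majority of the replicated computers are non-faulty, the voted command equals the command a fault-free computer would produce. Below is a minimal sketch of that voting step, offered as an illustration of the property rather than the verified model or its proof.

```python
# Minimal majority-vote sketch of the fault-masking property: the voted
# command equals the fault-free command whenever a majority of replicas agree.
from collections import Counter

def voted_command(replica_outputs):
    """Return the majority value among replicated computer outputs."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return value

# Two healthy replicas mask one faulty one:
print(voted_command([42, 42, 17]))  # -> 42
```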
The engineering design integration (EDIN) system. [digital computer program complex
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.
1974-01-01
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
White, Timothy C.; Sauter, Edward A.; Stewart, Duff C.
2014-01-01
Intermagnet is an international oversight group which exists to establish a global network of geomagnetic observatories. This group establishes data standards and standard operating procedures for members and prospective members. Intermagnet has proposed a new one-second data standard for that emerging geomagnetic product. The standard specifies that all data collected must have a time stamp accuracy of ±10 milliseconds of the top-of-the-second Coordinated Universal Time. Therefore, the U.S. Geological Survey Geomagnetism Program has designed and executed several tests on its current data collection system, the Personal Computer Data Collection Platform. Tests are designed to measure the time shifts introduced by individual components within the data collection system, as well as to measure the time shift introduced by the entire Personal Computer Data Collection Platform. Additional testing designed for Intermagnet will be used to further validate such measurements. Current results of the measurements showed a 5.0–19.9 millisecond lag for the vertical channel (Z) of the Personal Computer Data Collection Platform and a 13.0–25.8 millisecond lag for the horizontal channels (H and D) of the collection system. These measurements represent a dynamically changing delay introduced within the U.S. Geological Survey Personal Computer Data Collection Platform.
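One generic way to measure such channel lags, offered purely as an illustration and not as the USGS test procedure, is to cross-correlate the recorded channel against a reference signal sampled at a known rate:

```python
# Generic illustration of lag estimation by cross-correlation; this is not
# the USGS test procedure, just a common technique for the same measurement.
import numpy as np

def estimate_lag_ms(reference: np.ndarray, recorded: np.ndarray, fs_hz: float) -> float:
    """Lag (ms) of `recorded` relative to `reference`, both sampled at fs_hz."""
    corr = np.correlate(recorded - recorded.mean(),
                        reference - reference.mean(), mode="full")
    lag_samples = corr.argmax() - (len(reference) - 1)
    return 1000.0 * lag_samples / fs_hz

fs = 1000.0                             # 1 kHz sampling
t = np.arange(0, 1, 1 / fs)
ref = np.sin(2 * np.pi * 5 * t)
rec = np.roll(ref, 13)                  # simulate a 13 ms acquisition delay
print(f"estimated lag: {estimate_lag_ms(ref, rec, fs):.1f} ms")  # ~13 ms
```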
ERIC Educational Resources Information Center
Hecquet, Ignace; And Others
Principles are outlined that are used as a basis for the system of pricing the services of the Computer Centre. The system illustrates the use of a management method to secure better utilization of university resources. Departments decide how to use the appropriations granted to them and establish a system of internal prices that reflect the cost…
Davidson, R W
1985-01-01
The increasing need to communicate and exchange data can be met by personal microcomputers. The need to transfer information stored on one type of personal computer to another type is often encountered when integrating multiple sources of information stored on different, incompatible computers in medical research and practice. A practical example is demonstrated with two relatively inexpensive, commonly used computers, the IBM PC jr. and the Apple IIe. The basic input/output (I/O) serial-communication interface chips of the two computers are joined together using a null connector and cable to form a communications link. Using the BASIC (Beginner's All-purpose Symbolic Instruction Code) computer language and the Disk Operating System (DOS), the communications handshaking protocol and file transfer are established between the two computers.
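A present-day analogue of this null-connector transfer can be sketched with the pyserial package; the port name, baud rate, chunk size, and one-byte ACK handshake below are assumptions for illustration, not the article's BASIC protocol.

```python
# Modern analogue of the article's serial-link transfer, sketched with the
# pyserial package. Port name, baud rate, and the ACK handshake are
# invented for illustration, not the original BASIC protocol.
import serial  # pip install pyserial

def send_file(path: str, port: str = "/dev/ttyUSB0") -> None:
    with serial.Serial(port, baudrate=9600, timeout=5) as link, open(path, "rb") as f:
        for chunk in iter(lambda: f.read(64), b""):
            link.write(chunk)
            if link.read(1) != b"\x06":   # wait for ACK from the far side
                raise IOError("receiver did not acknowledge chunk")

# send_file("results.dat")  # run with a real serial link attached
```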
ADDJUST - An automated system for steering Centaur launch vehicles in measured winds
NASA Technical Reports Server (NTRS)
Swanson, D. C.
1977-01-01
ADDJUST (Automatic Determination and Dissemination of Just-Updated Steering Terms) is an automated computer and communication system designed to provide Atlas/Centaur and Titan/Centaur launch vehicles with booster-phase steering data on launch day. Wind soundings are first obtained, from which a smoothed wind velocity vs. altitude relationship is established. Designing for conditions at the end of the boost phase, with initial pitch and yaw maneuvers followed by zero total angle of attack through the filtered wind, establishes the required vehicle attitude as a function of altitude. Polynomial coefficients for pitch and yaw attitude vs. altitude are determined and transmitted for validation and loading into the Centaur airborne computer. The system has enabled 14 consecutive launches without a flight wind delay.
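The polynomial-coefficient step lends itself to a short numerical sketch. The following is a minimal illustration using numpy's least-squares polynomial fit; the attitude profile, polynomial order, and tolerance are invented for the example and are not ADDJUST values.

    import numpy as np

    # Hypothetical smoothed pitch-attitude commands (deg) vs. altitude (m),
    # standing in for the profile derived from the filtered wind.
    altitude = np.linspace(0.0, 12000.0, 25)
    pitch_cmd = 80.0 - 0.004 * altitude + 3.0 * np.sin(altitude / 4000.0)

    # Fit low-order polynomial coefficients suitable for an airborne computer.
    coeffs = np.polyfit(altitude, pitch_cmd, deg=5)

    # Validation: the reconstructed polynomial must track the commands within
    # a tolerance before the coefficients are transmitted and loaded.
    fit = np.polyval(coeffs, altitude)
    assert np.max(np.abs(fit - pitch_cmd)) < 0.5   # tolerance is illustrative
    print(coeffs)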
23 CFR 771.117 - Categorical exclusions.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., computer-aided dispatching systems, radio communications systems, dynamic message signs, and security... effects can be assessed; and Federal-aid system revisions which establish classes of highways on the Federal-aid highway system. (2) Approval of utility installations along or across a transportation...
Reliability model of a monopropellant auxiliary propulsion system
NASA Technical Reports Server (NTRS)
Greenberg, J. S.
1971-01-01
A mathematical model and associated computer code have been developed which compute the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve; thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed to and utilized by the reliability model, which establishes the probability of successfully accomplishing the orbit corrections.
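A minimal sketch of such a reliability computation, assuming exponential failure laws with separate dormant and firing failure rates; both the rates and the burn schedule are illustrative, not the paper's data.

    import math

    # Illustrative data: (start_time_h, burn_duration_h) per orbit correction.
    burns = [(100.0, 0.20), (1300.0, 0.15), (2600.0, 0.15), (4100.0, 0.10)]
    LAMBDA_DORMANT = 1.0e-6    # failures/hour while coasting (assumed)
    LAMBDA_FIRING = 2.0e-4     # failures/hour while thrusting (assumed)

    def reliability_through(n):
        """P(survive to and complete the first n corrections), assuming
        exponential failure laws and serial (non-redundant) hardware."""
        r, t_prev = 1.0, 0.0
        for start, burn in burns[:n]:
            r *= math.exp(-LAMBDA_DORMANT * (start - t_prev))  # coast phase
            r *= math.exp(-LAMBDA_FIRING * burn)               # burn phase
            t_prev = start + burn
        return r

    for n in range(1, len(burns) + 1):
        print(f"P(first {n} corrections) = {reliability_through(n):.6f}")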
Computer-aided design of large-scale integrated circuits - A concept
NASA Technical Reports Server (NTRS)
Schansman, T. T.
1971-01-01
Circuit design and the mask development sequence are improved by using a general-purpose computer with interactive graphics capability, establishing an efficient two-way communications link between the design engineer and the system. The interactive graphics capability places the design engineer in direct control of circuit development.
Design of on-board parallel computer on nano-satellite
NASA Astrophysics Data System (ADS)
You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li
2007-11-01
This paper presents a scheme for an on-board parallel computer system designed for a nano-satellite. Driven by the development requirements that a nano-satellite have small volume, low weight, low power consumption, and on-board intelligence, the scheme abandons the traditional single-computer and dual-computer architectures in an effort to improve dependability, capability, and intelligence simultaneously. Following an integrated design approach, it employs a shared-memory parallel computer as the main structure; connects the telemetry, attitude control, and payload systems through an intelligent bus; provides management of static tasks and dynamic task scheduling, including saving and restoring on-site status, in accordance with the parallel algorithms; and establishes mechanisms for fault diagnosis, recovery, and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability, and intelligence, flexible management of hardware resources, a sound software system, and good extensibility, which fully satisfies the concept and direction of integrated electronics design.
Task allocation model for minimization of completion time in distributed computer systems
NASA Astrophysics Data System (ADS)
Wang, Jai-Ping; Steidley, Carl W.
1993-08-01
A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design, and it is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or does not consider precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
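A greedy list-scheduling heuristic gives a concrete, if simplified, instance of such a task allocation; the sketch below honors precedence but ignores the inter-module communication costs the paper's models include. Module names and execution times are illustrative.

    # Modules: execution times and precedence (module -> set of predecessors).
    exec_time = {"a": 4, "b": 3, "c": 2, "d": 6, "e": 1}
    preds = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}, "e": {"d"}}
    N_PROCESSORS = 2

    def list_schedule():
        """Greedy list scheduling: place each ready module on the processor
        where it can start earliest, honoring precedence constraints."""
        finish = {}                        # module -> finish time
        proc_free = [0.0] * N_PROCESSORS   # next free time per processor
        remaining = set(exec_time)
        while remaining:
            ready = [m for m in remaining if preds[m] <= finish.keys()]
            m = min(ready, key=lambda x: exec_time[x])   # simple priority rule
            earliest = max([finish[p] for p in preds[m]], default=0.0)
            p = min(range(N_PROCESSORS),
                    key=lambda i: max(proc_free[i], earliest))
            start = max(proc_free[p], earliest)
            finish[m] = start + exec_time[m]
            proc_free[p] = finish[m]
            remaining.remove(m)
        return max(finish.values())        # task completion time

    print("completion time:", list_schedule())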
High-Level Data-Abstraction System
NASA Technical Reports Server (NTRS)
Fishwick, P. A.
1986-01-01
Communication with the data-base processor is flexible and efficient. The High-Level Data Abstraction (HILDA) system is a three-layer system supporting the data-abstraction features of the Intel data-base processor (DBP). The purpose of HILDA is the establishment of a flexible method of communicating efficiently with the DBP. The power of HILDA lies in its extensibility with regard to syntax and semantic changes; its high-level query language is readily modified. It offers powerful potential to computer sites where a DBP is attached to a DEC VAX-series computer. The HILDA system is written in Pascal and FORTRAN 77 for interactive execution.
NASA Technical Reports Server (NTRS)
Hoadley, A. W.; Porter, A. J.
1990-01-01
This paper presents data on a preliminary analysis of the thermal dynamic characteristics of the Airborne Information Management System (AIMS), which is a continuing design project at NASA Dryden. The analysis established the methods which will be applied to the actual AIMS boards as they become available. The paper also describes the AIMS liquid cooling system design and presents a thermodynamic computer model of the AIMS cooling system, together with an experimental validation of this model.
Establishing a Cloud Computing Success Model for Hospitals in Taiwan.
Lian, Jiunn-Woei
2017-01-01
The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.
Reconfigurable vision system for real-time applications
NASA Astrophysics Data System (ADS)
Torres-Huitzil, Cesar; Arias-Estrada, Miguel
2002-03-01
Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
NASA Technical Reports Server (NTRS)
Taylor, N. L.
1983-01-01
In response to a need for improved computer-generated plots that are acceptable to the Langley publication process, the LaRC Graphics Output System has been modified to encompass the publication requirements, and a guideline has been established. This guideline deals only with the publication requirements of computer-generated plots. This report explains the capability that authors of NASA technical reports can use to obtain publication-quality computer-generated plots for the Langley publication process. The rules applied in developing this guideline and examples illustrating the rules are included.
Simulating chemistry using quantum computers.
Kassal, Ivan; Whitfield, James D; Perdomo-Ortiz, Alejandro; Yung, Man-Hong; Aspuru-Guzik, Alán
2011-01-01
The difficulty of simulating quantum systems, well known to quantum chemists, prompted the idea of quantum computation. One can avoid the steep scaling associated with the exact simulation of increasingly large quantum systems on conventional computers, by mapping the quantum system to another, more controllable one. In this review, we discuss to what extent the ideas in quantum computation, now a well-established field, have been applied to chemical problems. We describe algorithms that achieve significant advantages for the electronic-structure problem, the simulation of chemical dynamics, protein folding, and other tasks. Although theory is still ahead of experiment, we outline recent advances that have led to the first chemical calculations on small quantum information processors.
DOT National Transportation Integrated Search
1973-02-01
The volume presents the models used to analyze basic features of the system, establish feasibility of techniques, and evaluate system performance. The models use analytical expressions and computer simulations to represent the relationship between sy...
Planetary Waves and Mesoscale Disturbances in the Middle and Upper Atmosphere
1998-05-14
processing of ionogram records led us to begin designing a computer-controlled system to collect, store, display, and scale the ionograms in digital...circuit board "L-154". L-154 passed signals from the receiver and the control system to the computer in order to collect information...the main purpose of the PSMOS project is the establishment of a ground-based mesopause observing system for the investigation of planetary scale
Dynamic resource allocation scheme for distributed heterogeneous computer systems
NASA Technical Reports Server (NTRS)
Liu, Howard T. (Inventor); Silvester, John A. (Inventor)
1991-01-01
This invention relates to resource allocation in computer systems and, more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating jobs queued up for busy nodes to idle, or less busy, nodes. In accordance with the algorithm (SIDA for short), load sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., those over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
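A minimal sketch of the receiver-initiated transfer condition described above, with invented thresholds and timer values; the patent's SIDA algorithm also weighs the local service rate ratio, which is omitted here.

    HIGH, LOW = 5, 2        # queue-length thresholds (illustrative)
    WAKEUP_PERIOD = 10.0    # idle seconds before a node solicits work (assumed)

    class Node:
        def __init__(self, name):
            self.name, self.queue, self.idle_since = name, [], 0.0

        def wants_work(self, now):
            """Dual-mode trigger: lightly burdened after finishing a job,
            or idle longer than the wakeup period."""
            below = len(self.queue) < LOW
            timed_out = (not self.queue
                         and (now - self.idle_since) > WAKEUP_PERIOD)
            return below or timed_out

    def solicit(requester, nodes, now):
        """Receiver-initiated transfer: pull one job from the most heavily
        burdened node whose queue exceeds the high threshold."""
        donors = [n for n in nodes
                  if n is not requester and len(n.queue) > HIGH]
        if donors and requester.wants_work(now):
            donor = max(donors, key=lambda n: len(n.queue))
            requester.queue.append(donor.queue.pop(0))

    a, b = Node("a"), Node("b")
    b.queue = list(range(8))            # b is heavily loaded
    solicit(a, [a, b], now=20.0)
    print(len(a.queue), len(b.queue))   # 1 7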
Micro computed tomography (CT) scanned anatomical gateway to insect pest bioinformatics
USDA-ARS?s Scientific Manuscript database
An international collaboration to establish an interactive Digital Video Library for a Systems Biology Approach to study the Asian citrus Psyllid and psyllid genomics/proteomics interactions is demonstrated. Advances in micro-CT, digital computed tomography (CT) scan uses X-rays to make detailed pic...
Computer-Based Training Starter Kit.
ERIC Educational Resources Information Center
Federal Interagency Group for Computer-Based Training, Washington, DC.
Intended for use by training professionals with little or no background in the application of automated data processing (ADP) systems, processes, or procurement requirements, this reference manual provides guidelines for establishing a computer based training (CBT) program within a federal agency of the United States government. The manual covers:…
NASA Technical Reports Server (NTRS)
Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.
1990-01-01
A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.
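The voting step can be illustrated in a few lines; this is a generic majority vote, not one of the paper's three analyzed schemes.

    from collections import Counter

    def vote(frame_outputs):
        """Majority vote over replicated processor outputs for one frame.
        With 2f+1 replicas, up to f faulty values are outvoted."""
        value, count = Counter(frame_outputs).most_common(1)[0]
        if count <= len(frame_outputs) // 2:
            raise RuntimeError("no majority: fault assumption violated")
        return value

    print(vote([42, 42, 7]))   # one transiently faulty replica is masked -> 42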
NASA Technical Reports Server (NTRS)
Goltz, G.; Kaiser, L. M.; Weiner, H.
1977-01-01
A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.
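A toy version of the performance-analysis loop, simulating an hourly solar array/battery energy balance; the capacity, insolation model, and load below are illustrative, not DSPA values.

    import math

    CAPACITY_WH = 1200.0    # battery capacity (illustrative)
    ARRAY_PEAK_W = 60.0     # array output at solar noon (illustrative)
    LOAD_W = 10.0           # lamp/electronics load (illustrative)

    def worst_state_of_charge(days=30, soc=CAPACITY_WH):
        """Hourly energy balance; returns the deepest battery discharge."""
        worst = soc
        for hour in range(days * 24):
            h = hour % 24
            sun = max(0.0, math.sin(math.pi * (h - 6) / 12))  # daylight half-sine
            soc = min(soc + ARRAY_PEAK_W * sun - LOAD_W, CAPACITY_WH)
            worst = min(worst, soc)
        return worst

    print("worst-case state of charge (Wh):", round(worst_state_of_charge(), 1))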
A Systematic Model for Evaluating Professorial Publications
ERIC Educational Resources Information Center
Yoda, Koji
1977-01-01
A model for reviewing both quality and quantity of professorial publications establishes a variety of criteria that ideal publications should meet, provides for the assignment of relative weight to each criterion, and establishes a rating system for computing a raw score for each set of faculty publications being reviewed. (LBH)
How Emerging Technologies are Changing the Rules of Spacecraft Ground Support
NASA Technical Reports Server (NTRS)
Boland, Dillard; Steger, Warren; Weidow, David; Yakstis, Lou
1996-01-01
As part of its effort to develop the flight dynamics distributed system (FDDS), NASA established a program for continually monitoring developments in computer and software technologies and for assessing their significance for constructing and operating spacecraft ground data systems. In this context, technology trends in the computing industry are reviewed, and their significance for the spacecraft ground support industry is explored. The technologies considered are hardware, object computing, the Internet, automation, and software development. The ways in which these technologies have affected the industry are considered.
Computational modeling in the optimization of corrosion control to reduce lead in drinking water
An international “proof-of-concept” research project (UK, US, CA) will present its findings during this presentation. An established computational modeling system developed in the UK is being calibrated and validated in U.S. and Canadian case studies. It predicts LCR survey resul...
The Use of Computer Simulation Techniques in Educational Planning.
ERIC Educational Resources Information Center
Wilson, Charles Z.
Computer simulations provide powerful models for establishing goals, guidelines, and constraints in educational planning. They are dynamic models that allow planners to examine logical descriptions of organizational behavior over time as well as permitting consideration of the large and complex systems required to provide realistic descriptions of…
SpecialNet. A National Computer-Based Communications Network.
ERIC Educational Resources Information Center
Morin, Alfred J.
1986-01-01
"SpecialNet," a computer-based communications network for educators at all administrative levels, has been established and is managed by National Systems Management, Inc. Users can send and receive electronic mail, share information on electronic bulletin boards, participate in electronic conferences, and send reports and other documents to each…
Symbolizing as a Constructive Activity in a Computer Microworld.
ERIC Educational Resources Information Center
Steffe, Leslie P.; Olive, John
1996-01-01
Describes how two 10-year-olds developed drawings and numeral systems to symbolize their mental operations while dividing unit bars into thirds and fourths using TIMA: Bars, a computer microworld, as a medium for enacting mathematical actions. The symbolic nature of their partitioning operations was crucial in establishing more conventional…
ERIC Educational Resources Information Center
Mizell, Al P.; Centini, Barry M.
The role of telecommunications in establishing the electronic classroom in distance education is illustrated. Using a computer-based doctoral program and the UNIX operating system as an example, how a personal computer and modem may be combined with a telephone line for instructional delivery is described. A number of issues must be addressed in…
A characterization of workflow management systems for extreme-scale applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
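The core bookkeeping any workflow management system performs, executing a DAG of dependent tasks in order, can be sketched briefly; the task names are illustrative.

    from collections import deque

    # A workflow as a DAG: task -> downstream tasks (names are illustrative).
    edges = {"fetch": ["clean"], "clean": ["analyze_a", "analyze_b"],
             "analyze_a": ["report"], "analyze_b": ["report"], "report": []}
    actions = {t: (lambda t=t: print("running", t)) for t in edges}

    def run_workflow():
        """Execute tasks in topological order (Kahn's algorithm)."""
        indeg = {t: 0 for t in edges}
        for outs in edges.values():
            for o in outs:
                indeg[o] += 1
        ready = deque(t for t, d in indeg.items() if d == 0)
        while ready:
            t = ready.popleft()
            actions[t]()                  # dispatch the task
            for o in edges[t]:
                indeg[o] -= 1
                if indeg[o] == 0:
                    ready.append(o)       # all dependencies satisfied

    run_workflow()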
Natural Resource Information System. Volume 1: Overall description
NASA Technical Reports Server (NTRS)
1972-01-01
A prototype computer-based Natural Resource Information System was designed which could store, process, and display data of maximum usefulness to land management decision making. The system includes graphic input and display, the use of remote sensing as a data source, and it is useful at multiple management levels. A survey established current decision making processes and functions, information requirements, and data collection and processing procedures. The applications of remote sensing data and processing requirements were established. Processing software was constructed and a data base established using high-altitude imagery and map coverage of selected areas of SE Arizona. Finally a demonstration of system processing functions was conducted utilizing material from the data base.
Sigint Application for Polymorphous Computing Architecture (PCA): Wideband DF
2006-08-01
Polymorphous Computing Architecture (PCA) program as stated by Robert Graybill is to develop the computing foundation for agile systems by establishing...the ubiquitous MUSIC algorithm relies upon an underlying narrowband signal model [8]. In this case, narrowband means that the signal bandwidth is less than...a wideband DF algorithm is needed to compensate for this model inadequacy. Among the various wideband DF techniques available, the coherent signal
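The narrowband MUSIC algorithm mentioned in the excerpt can be sketched for a uniform linear array; the array size, noise level, and source angles below are illustrative, not the report's scenario.

    import numpy as np

    def music_spectrum(snapshots, n_sources, n_grid=361):
        """Narrowband MUSIC pseudospectrum for a uniform linear array with
        half-wavelength spacing; `snapshots` is (n_sensors, n_samples)."""
        m = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]
        _, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
        En = eigvecs[:, : m - n_sources]          # noise subspace
        angles = np.linspace(-90.0, 90.0, n_grid)
        k = np.arange(m)[:, None]
        A = np.exp(1j * np.pi * k * np.sin(np.radians(angles)))
        return angles, 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

    # Synthetic check: two narrowband sources at -20 and +30 degrees.
    rng = np.random.default_rng(0)
    m, n = 8, 500
    A_true = np.exp(1j * np.pi * np.outer(np.arange(m),
                                          np.sin(np.radians([-20.0, 30.0]))))
    S = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
    X = A_true @ S + 0.1 * (rng.standard_normal((m, n))
                            + 1j * rng.standard_normal((m, n)))
    angles, p = music_spectrum(X, n_sources=2)
    peaks = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1]]
    peaks.sort(key=lambda i: p[i], reverse=True)
    print(sorted(angles[i] for i in peaks[:2]))   # near [-20.0, 30.0]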
Digital avionics design and reliability analyzer
NASA Technical Reports Server (NTRS)
1981-01-01
The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionics computer designs that are developed. It has been established that hardware emulation at the gate level will be utilized. The primary benefit of emulation to reliability analysis is that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur, permitting controlled and accelerated testing of system reaction to hardware failures. A trade study led to the decision to specify a two-machine system consisting of an emulation computer connected to a general-purpose computer. Potential computers to serve as the emulation computer are also evaluated.
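Gate-level emulation with direct fault insertion can be illustrated on a toy netlist; the half-adder circuit and the stuck-at-0 fault site are invented for the example.

    # Gate-level emulation with fault insertion: evaluate a tiny netlist
    # fault-free, then with a stuck-at-0 fault on an internal net.
    NETLIST = [                       # (gate_type, output_net, input_nets)
        ("XOR", "sum", ("a", "b")),
        ("AND", "carry", ("a", "b")),
    ]
    GATES = {"XOR": lambda x, y: x ^ y, "AND": lambda x, y: x & y}

    def emulate(inputs, stuck_at=None):
        """Evaluate the netlist; `stuck_at=(net, value)` injects a fault."""
        nets = dict(inputs)
        for gate, out, ins in NETLIST:
            nets[out] = GATES[gate](*(nets[i] for i in ins))
            if stuck_at and out == stuck_at[0]:
                nets[out] = stuck_at[1]      # direct insertion of the fault
        return nets

    good = emulate({"a": 1, "b": 1})
    faulty = emulate({"a": 1, "b": 1}, stuck_at=("carry", 0))
    print(good["carry"], faulty["carry"])    # 1 0 -> fault effect observable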
On Roles of Models in Information Systems
NASA Astrophysics Data System (ADS)
Sølvberg, Arne
The increasing penetration of computers into all aspects of human activity makes it desirable that the interplay among software, data and the domains where computers are applied is made more transparent. An approach to this end is to explicitly relate the modeling concepts of the domains, e.g., natural science, technology and business, to the modeling concepts of software and data. This may make it simpler to build comprehensible integrated models of the interactions between computers and non-computers, e.g., interaction among computers, people, physical processes, biological processes, and administrative processes. This chapter contains an analysis of various facets of the modeling environment for information systems engineering. The lack of satisfactory conceptual modeling tools seems to be central to the unsatisfactory state-of-the-art in establishing information systems. The chapter contains a proposal for defining a concept of information that is relevant to information systems engineering.
NASA Technical Reports Server (NTRS)
Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.
1989-01-01
The findings of a preliminary investigation by Southwest Research Institute (SwRI) in simulation host computer concepts is presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
System integration of pattern recognition, adaptive aided, upper limb prostheses
NASA Technical Reports Server (NTRS)
Lyman, J.; Freedy, A.; Solomonow, M.
1975-01-01
The requirements for successful integration of a computer aided control system for multi degree of freedom artificial arms are discussed. Specifications are established for a system which shares control between a human amputee and an automatic control subsystem. The approach integrates the following subsystems: (1) myoelectric pattern recognition, (2) adaptive computer aiding; (3) local reflex control; (4) prosthetic sensory feedback; and (5) externally energized arm with the functions of prehension, wrist rotation, elbow extension and flexion and humeral rotation.
A Patient Record-Filing System for Family Practice
Levitt, Cheryl
1988-01-01
The efficient storage and easy retrieval of quality records are a central concern of good family practice. Many physicians starting out in practice have difficulty choosing a practical and lasting system for storing their records. Some who have established practices are installing computers in their offices and finding that their filing systems are worn, outdated, and incompatible with computerized systems. This article describes a new filing system installed simultaneously with a new computer system in a family-practice teaching centre. The approach adopted solved all identifiable problems and is applicable in family practices of all sizes.
Arthur, J.K.; Taylor, R.E.
1986-01-01
As part of the Gulf Coast Regional Aquifer System Analysis (GC RASA) study, data from 184 geophysical well logs were used to define the geohydrologic framework of the Mississippi embayment aquifer system in Mississippi for flow model simulation. Five major aquifers of Eocene and Paleocene age were defined within this aquifer system in Mississippi. A computer data storage system was established to assimilate the information obtained from the geophysical logs. Computer programs were developed to manipulate the data to construct geologic sections and structure maps. Data from the storage system will be input to a five-layer, three-dimensional, finite-difference digital computer model that is used to simulate the flow dynamics in the five major aquifers of the Mississippi embayment aquifer system.
Computer tools for systems engineering at LaRC
NASA Technical Reports Server (NTRS)
Walters, J. Milam
1994-01-01
The Systems Engineering Office (SEO) has been established to provide life-cycle systems engineering support to Langley Research Center projects. Over the last two years, the computing market has been reviewed for tools which could enhance the effectiveness and efficiency of activities directed toward this mission. A group of interrelated applications has been procured or is under development, including a requirements management tool, a system design and simulation tool, and a project and engineering data base. This paper will review the current configuration of these tools and provide information on future milestones and directions.
NASA Astrophysics Data System (ADS)
Zhao, Ben; Garbacki, Paweł; Gkantsidis, Christos; Iamnitchi, Adriana; Voulgaris, Spyros
After a decade of intensive investigation, peer-to-peer computing has established itself as an accepted research field in the general area of distributed systems. Peer-to-peer computing can be seen as the democratization of computing, overthrowing traditional hierarchical designs favored in client-server systems, largely brought about by last-mile network improvements which have made individual PCs first-class citizens in the network community. Much of the early focus in peer-to-peer systems was on best-effort file sharing applications. In recent years, however, research has focused on peer-to-peer systems that provide operational properties and functionality similar to those shown by more traditional distributed systems. These properties include stronger consistency, reliability, and security guarantees suitable to supporting traditional applications such as databases.
Computer ethics: A capstone course
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, T.G.; Abunawass, A.M.
1994-12-31
This paper presents a capstone course on computer ethics required for all computer science majors in our program. The course was designed to encourage students to evaluate their own personal value systems in terms of the established values in computer science as represented by the ACM Code of Ethics. The structure, activities, and topics of the course as well as assessment of the students are presented. Observations on various course components and student evaluations of the course are also presented.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu
2015-04-01
From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
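A Monte Carlo estimate conveys the flavor of the computation, though the paper's algorithm is exact and also accounts for packet unreliability, which this sketch omits; the capacities, probabilities, demand, and threshold are illustrative.

    import random

    # Two disjoint minimal paths; each branch has multi-state capacities
    # (units/ms) with probabilities. All numbers are illustrative.
    PATHS = [
        [{4: 0.7, 2: 0.2, 0: 0.1}, {4: 0.8, 2: 0.15, 0: 0.05}],
        [{3: 0.75, 1: 0.15, 0: 0.1}, {3: 0.85, 1: 0.1, 0: 0.05}],
    ]
    DEMAND, TIME_LIMIT = 20, 8     # units to send, time threshold in ms

    def sample_capacity(branch):
        r, acc = random.random(), 0.0
        for cap, prob in branch.items():
            acc += prob
            if r <= acc:
                return cap
        return 0

    def spare_reliability(trials=100_000):
        """Estimate P{demand deliverable within the threshold}, with each
        path throttled by its bottleneck branch and both used in parallel."""
        ok = 0
        for _ in range(trials):
            rates = [min(sample_capacity(b) for b in path) for path in PATHS]
            total = sum(rates)
            if total > 0 and DEMAND / total <= TIME_LIMIT:
                ok += 1
        return ok / trials

    print("estimated reliability:", spare_reliability())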
Multimedia courseware in an open-systems environment: a DoD strategy
NASA Astrophysics Data System (ADS)
Welsch, Lawrence A.
1991-03-01
The federal government is about to invest billions of dollars to develop multimedia training materials for delivery on computer-based interactive training systems. Acquisition of a variety of computers and peripheral devices hosting various operating systems and suites of authoring system software will be necessary to facilitate the development of this courseware. There is no single source that will satisfy all needs. Although high-performance, low-cost interactive training hardware is available, the products have proprietary software interfaces. Because the interfaces are proprietary, expensive reprogramming is usually required to adapt such software products to other platforms. This costly reprogramming could be eliminated by adopting standard software interfaces.

DoD's Portable Courseware Project (PORTCO) is typical of projects worldwide that require standard software interfaces. This paper articulates the strategy whereby PORTCO leverages the open-systems movement and the new realities of information technology. These realities encompass changes in the pace at which new technology becomes available, changes in organizational goals and philosophy, new roles of vendors and users, changes in the procurement process, and acceleration toward open system environments. The PORTCO strategy is applicable to all projects and systems that require open systems to achieve mission objectives.

The federal goal is to facilitate the creation of an environment in which high-quality portable courseware is available as commercial off-the-shelf products and is competitively supplied by a variety of vendors. In order to achieve this goal, a system architecture incorporating standards to meet the users' needs must be established. The Request for Architecture (RFA) developed cooperatively by DoD and the National Institute of Standards and Technology (NIST) will generate the PORTCO systems architecture. This architecture must freely integrate the courseware and authoring software from the lower levels of machine architecture and systems service implementation. In addition, the systems architecture will establish how the application-specific technologies relate to other technologies. Further, a computer-based interactive training applications profile must be developed. This profile, along with the systems architecture derived as a result of the RFA, provides the basis for identifying the needed standards. NIST will then accelerate the development of these standards using, but not restricted to, existing standards activities within established standards forums.

The federal multimedia courseware effort has adopted the Interactive Multimedia Association (IMA) Recommended Practices for Interactive Video Portability as the baseline for the migration of computer-based interactive training systems to an open systems environment based upon international standards. The PORTCO strategy includes an evolutionary migration to a standards-based Open System Environment (OSE). An important aspect of this migration strategy is to move to open systems via stepwise evolution rather than via quantum leaps.

Another area of concern is that of infrastructure issues, such as maintaining and supporting the technologies required for computer-based interactive training. The federal multimedia initiative will use the RFA-based architecture to differentiate between those technologies that can be maintained and supported by existing infrastructure mechanisms and those that require new mechanisms. Existing infrastructure mechanisms will be used; where infrastructure mechanisms do not exist, the approach will be to place high priority on establishing the appropriate mechanisms. Establishing an infrastructure mechanism is a nontrivial task requiring sustained investment of resources.
Beach Profile Analysis System (BPAS). Volume III. BPAS User’s Guide: Analysis Module SURVY1.
1982-06-01
extrapolated using the two seawardmost points. Before computing volume changes, common bonds are established relative to the landward and seaward extent...Cyber 176 or equivalent computer. Such features include the 10-character, 60-bit word size, the FORTRAN-callable sort routine (interfacing with the NOS
Beach Profile Analysis System (BPAS). Volume IV. BPAS User’s Guide: Analysis Module SURVY2.
1982-06-01
feet NSL), the shoreline position can be extrapolated using the two seawardmost points. Before computing volume changes, common bonds are established...computer. Such features include the 10-character, 60-bit word size, the FORTRAN-callable sort routine (interfacing with the NOS or NOS/BE operating
13 CFR 306.4 - Purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-01-01
... faculty, staff, libraries, laboratories and computer systems that can address local economic problems and opportunities. With Investment Assistance, institutions of higher education establish and operate research...
13 CFR 306.4 - Purpose and scope.
Code of Federal Regulations, 2014 CFR
2014-01-01
... faculty, staff, libraries, laboratories and computer systems that can address local economic problems and opportunities. With Investment Assistance, institutions of higher education establish and operate research...
13 CFR 306.4 - Purpose and scope.
Code of Federal Regulations, 2011 CFR
2011-01-01
... faculty, staff, libraries, laboratories and computer systems that can address local economic problems and opportunities. With Investment Assistance, institutions of higher education establish and operate research...
13 CFR 306.4 - Purpose and scope.
Code of Federal Regulations, 2012 CFR
2012-01-01
... faculty, staff, libraries, laboratories and computer systems that can address local economic problems and opportunities. With Investment Assistance, institutions of higher education establish and operate research...
Horton, John J.
2006-04-11
A system and method of maintaining communication between a computer and a server, the server being in communication with the computer via xDSL service or dial-up modem service, with xDSL service being the default mode of communication, the method including sending a request to the server via xDSL service to which the server should respond and determining if a response has been received. If no response has been received, displaying on the computer a message (i) indicating that xDSL service has failed and (ii) offering to establish communication between the computer and the server via the dial-up modem, and thereafter changing the default mode of communication between the computer and the server to dial-up modem service. In a preferred embodiment, an xDSL service provider monitors dial-up modem communications and determines if the computer dialing in normally establishes communication with the server via xDSL service. The xDSL service provider can thus quickly and easily detect xDSL failures.
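A minimal sketch of the probe-and-fallback logic, assuming a hypothetical keep-alive endpoint; the patent's actual mechanism involves driving the dial-up modem and updating the default communication mode, which is only noted in comments here.

    import socket

    SERVER = ("server.example.net", 7)     # hypothetical keep-alive endpoint

    def xdsl_alive(timeout=3.0):
        """Probe the server over the default (xDSL) route."""
        try:
            with socket.create_connection(SERVER, timeout=timeout) as s:
                s.sendall(b"ping")
                return bool(s.recv(4))
        except OSError:
            return False

    def ensure_link():
        if xdsl_alive():
            return "xdsl"
        print("xDSL service has failed; offering dial-up fallback...")
        # A real client would now drive the modem and, per the patent,
        # change the default mode of communication to dial-up service.
        return "dialup"

    print("active link:", ensure_link())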
Consolidation of cloud computing in ATLAS
NASA Astrophysics Data System (ADS)
Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration
2017-10-01
Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.
Teaching Engineering Design in a Laboratory Setting
ERIC Educational Resources Information Center
Hummon, Norman P.; Bullen, A. G. R.
1974-01-01
Discusses the establishment of an environmental systems laboratory at the University of Pittsburgh with the support of the Sloan Foundation. Indicates that the "real world" can be brought into the laboratory by simulating on computers, software systems, and data bases. (CC)
75 FR 68849 - Privacy Act of 1974: System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
... processing of personal information is conducted within established FAA computer security regulations. A risk... SECURITY CLASSIFICATION: Sensitive, unclassified SYSTEM LOCATION: Federal Aviation Administration (FAA... Enforcement Centers of the Drug Abatement Division; Office of Security and Hazardous Materials; Flight...
The Application of Large-Scale Hypermedia Information Systems to Training.
ERIC Educational Resources Information Center
Crowder, Richard; And Others
1995-01-01
Discusses the use of hypermedia in electronic information systems that support maintenance operations in large-scale industrial plants. Findings show that after establishing an information system, the same resource base can be used to train personnel how to use the computer system and how to perform operational and maintenance tasks. (Author/JMV)
NASA Technical Reports Server (NTRS)
Pepe, J. T.
1972-01-01
A functional design of a software executive system for the space shuttle avionics computer is presented. Three primary functions of the executive are emphasized in the design: task management, I/O management, and configuration management. The executive system organization is based on the applications software and configuration requirements established during the Phase B definition of the Space Shuttle program. Although the primary features of the executive system architecture were derived from Phase B requirements, it was specified for implementation on the IBM 4 Pi EP aerospace computer and is expected to be incorporated into a breadboard data management computer system at the NASA Manned Spacecraft Center's Information Systems Division. The executive system was structured for internal operation on the IBM 4 Pi EP system, with its external configuration and applications software assumed to be characteristic of the centralized quad-redundant avionics systems defined in Phase B.
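The task-management function of such an executive is often realized as a cyclic executive; a minimal sketch with invented rate groups and frame timing follows (this is a generic pattern, not the documented 4 Pi EP design).

    import time

    def guidance():  pass     # placeholder application tasks
    def io_poll():   pass
    def telemetry(): pass

    RATE_GROUPS = {1: [guidance, io_poll],   # run every minor frame
                   4: [telemetry]}           # run every 4th minor frame

    def executive(minor_frame_s=0.025, frames=8):
        """Dispatch rate-grouped tasks and enforce the frame schedule."""
        for frame in range(frames):
            t0 = time.monotonic()
            for period, tasks in RATE_GROUPS.items():
                if frame % period == 0:
                    for task in tasks:
                        task()               # task-management dispatch
            # Sleep out the remainder of the minor frame.
            time.sleep(max(0.0, minor_frame_s - (time.monotonic() - t0)))

    executive()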
Increasing the Interaction with Distant Learners on an Interactive Telecommunications System.
ERIC Educational Resources Information Center
Schlenker, Jon
1994-01-01
Suggests a variety of ways to increase interaction with distance learners on an interactive telecommunications system, based on experiences at the University of Maine at Augusta. Highlights include establishing the proper environment; telephone systems; voice mail; fax; electronic mail; computer conferencing; postal mail; printed materials; and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas
2012-07-14
The UCoMS research cluster has spearheaded three research areas since August 2004, including wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts on pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators in their respective areas of expertise cooperatively on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.
Operation plan for the data 100/LARS terminal system
NASA Technical Reports Server (NTRS)
Bowen, A. J., Jr.
1980-01-01
The Data 100/LARS terminal system provides an interface for processing on the IBM 3031 computer system at Purdue University's Laboratory for Applications of Remote Sensing. The environment in which the system is operated and supported is discussed. The general support responsibilities, procedural mechanisms, and training established for the benefit of the system users are defined.
Analysis and design of hospital management information system based on UML
NASA Astrophysics Data System (ADS)
Ma, Lin; Zhao, Huifang; You, Shi Jun; Ge, Wenyong
2018-05-01
With the rapid development of computer technology, computer information management systems have been utilized in many industries. A Hospital Information System (HIS) helps provide data for directors, lightens the workload of medical workers, and improves their efficiency. Based on an HIS demand analysis and system design, this paper focuses on utilizing Unified Modeling Language (UML) models to establish the use case diagram, class diagram, sequence diagram, and collaboration diagram, satisfying the demands of daily patient visits, inpatient care, drug management, and other relevant operations. Finally, the paper summarizes the problems of the system and offers an outlook for the HIS.
Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 2: Concept document
NASA Technical Reports Server (NTRS)
1989-01-01
The Simulation Computer System (SCS) concept document describes the SCS and establishes requirements for its functional performance, including interface, logistics, and qualification requirements. The SCS is the computation, communications, and display segment of the Marshall Space Flight Center (MSFC) Payload Training Complex (PTC). The PTC is the MSFC facility that will train onboard and ground operations personnel to operate the payloads and experiments on board the international Space Station Freedom. The concept document identifies the requirements to be satisfied by the system implementation, provides the operational basis for allocating requirements to system components, and enables the systems organization to assess whether the completed system complies with those requirements.
NASA Technical Reports Server (NTRS)
Fernandez, J. P.; Mills, D.
1991-01-01
A Vibroacoustic Payload Environment Prediction System (VAPEPS) Management Center was established at JPL. The center utilizes the VAPEPS software package to manage a data base of Space Shuttle and expendable launch vehicle payload flight and ground test data. Remote terminal access over telephone lines to the computer system where the program resides was established to provide the payload community a convenient means of querying the global VAPEPS data base. This guide describes the functions of the VAPEPS Management Center and contains instructions for utilizing the resources of the center.
NASA Astrophysics Data System (ADS)
Delgado, Francisco
2017-12-01
Quantum information is an emergent area merging physics, mathematics, computer science, and engineering. To reach its technological goals, it requires adequate approaches to understand how to combine physical restrictions, computational approaches, and technological requirements to obtain functional universal quantum information processing. This work presents the modeling and analysis of a certain general type of Hamiltonian representing several physical systems used in quantum information, and establishes a dynamics reduction in a natural grammar for bipartite processing based on entangled states.
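A concrete instance of such a bipartite Hamiltonian, with illustrative Heisenberg-Ising parameters (not the paper's general family), can be evolved numerically to show the entanglement such dynamics generate.

    import numpy as np
    from scipy.linalg import expm

    # A two-qubit (bipartite) Hamiltonian of Heisenberg-Ising type:
    # H = J Z(x)Z + b (X(x)I + I(x)X), with illustrative parameters.
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    J, b = 1.0, 0.5
    H = J * np.kron(Z, Z) + b * (np.kron(X, I) + np.kron(I, X))

    # Evolve |00> for unit time and measure the entanglement generated
    # (entropy of the reduced single-qubit state).
    psi = expm(-1j * H) @ np.array([1, 0, 0, 0], dtype=complex)
    M = psi.reshape(2, 2)
    rho_a = M @ M.conj().T                     # partial trace over qubit B
    evals = np.linalg.eigvalsh(rho_a)
    entropy = -sum(v * np.log2(v) for v in evals if v > 1e-12)
    print("entanglement entropy:", round(float(entropy), 3))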
Augmenting Trust Establishment in Dynamic Systems with Social Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagesse, Brent J; Kumar, Mohan; Venkatesh, Svetha
2010-01-01
Social networking has recently flourished in popularity through the use of social websites. Pervasive computing resources have allowed people to stay well-connected to each other through access to social networking resources. We take the position that utilizing information produced by relationships within social networks can assist in the establishment of trust for other pervasive computing applications. Furthermore, we describe how such a system can augment a sensor infrastructure used for event observation with information from mobile sensors (i.e., mobile phones with cameras) controlled by potentially untrusted third parties. Pervasive computing systems are invisible systems, oriented around the user. As a result, many future pervasive systems are likely to include a social aspect to the system. The social communities that are developed in these systems can augment existing trust mechanisms with information about pre-trusted entities or entities to initially consider when beginning to establish trust. An example of such a system is the Collaborative Virtual Observation (CoVO) system, which fuses sensor information from disparate sources in soft real-time to recreate a scene that provides observation of an event that has recently transpired. To accomplish this, CoVO must efficiently access services whilst protecting the data from corruption from unknown remote nodes. CoVO combines dynamic service composition with virtual observation to utilize existing infrastructure with third-party services available in the environment. Since these services are not under the control of the system, they may be unreliable or malicious. When an event of interest occurs, the given infrastructure (bus cameras, etc.) may not sufficiently cover the necessary information (be it in space, time, or sensor type). To enhance observation of the event, the infrastructure is augmented with information from sensors in the environment that the infrastructure does not control. These sensors may be unreliable, uncooperative, or even malicious. Additionally, to execute queries in soft real-time, processing must be distributed to available systems in the environment. We propose to use information from social networks to satisfy these requirements. In this paper, we present our position that knowledge gained from social activities can be used to augment trust mechanisms in pervasive computing. The system uses the social behavior of nodes to predict a subset that it wants to query for information. In this context, social behavior includes transit patterns and schedules (which can be used to determine if a queried node is likely to be reliable) and known relationships, such as a phone's address book, that can be used to determine networks of nodes that may also be able to assist in retrieving information. Neither implicit nor explicit relationships necessarily imply that the user trusts an entity, but rather they provide a starting place for establishing trust. The proposed framework utilizes social network information to assist in trust establishment when third-party sensors are used for sensing events.
76 FR 21373 - Privacy Act of 1974; Report of a New System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-15
... Information Security Management Act of 2002; the Computer Fraud and Abuse Act of 1986; the Health Insurance... 1974; the Federal Information Security Management Act of 2002; the Computer Fraud and Abuse Act of 1986... established by State law; (3) support litigation involving the Agency; (4) combat fraud, waste, and abuse in...
Beach Profile Analysis System (BPAS). Volume VII. BPAS User’s Guide: Analysis Module ELVDIS.
1982-06-01
changes, common bonds are established relative to the landward and seaward extent of the surveys on each profile line. The computed area under each pro...Cyber 176 or equivalent computer. Such features include the 10-character, 60-bit word size, the FORTRAN-callable sort routine (interfacing with the NOS
Strategy and gaps for modeling, simulation, and control of hybrid systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabiti, Cristian; Garcia, Humberto E.; Hovsapian, Rob
2015-04-01
The purpose of this report is to establish a strategy for modeling and simulation of candidate hybrid energy systems. Modeling and simulation are necessary to design, evaluate, and optimize the system's technical and economic performance. Accordingly, this report first establishes the simulation requirements for analyzing candidate hybrid systems. Simulation fidelity levels are established based on the temporal scale, real and synthetic data availability or needs, solution accuracy, and output parameters needed to evaluate case-specific figures of merit. The associated computational and co-simulation resources needed are then established, including physical models when needed, code assembly and integrated solution platforms, mathematical solvers, and data processing. The report describes the figures of merit, systems requirements, and constraints that are necessary and sufficient to characterize the grid and hybrid systems behavior and market interactions. Loss of Load Probability (LOLP) and the Effective Cost of Energy (ECE), as opposed to the standard Levelized Cost of Electricity (LCOE), are introduced as technical and economic indices for integrated energy system evaluations. Financial assessment methods are subsequently introduced for evaluation of non-traditional, hybrid energy systems. Algorithms for coupled and iterative evaluation of the technical and economic performance are subsequently discussed. The report further defines modeling objectives, computational tools, solution approaches, and real-time data collection and processing (in some cases using real test units) that will be required to model, co-simulate, and optimize: (a) energy system components (e.g., power generation unit, chemical process, electricity management unit), (b) system domains (e.g., thermal, electrical, or chemical energy generation, conversion, and transport), and (c) systems control modules. Co-simulation of complex, tightly coupled, dynamic energy systems requires multiple simulation tools, potentially developed in several programming languages and resolved on separate time scales. Whereas further investigation and development of hybrid concepts will provide a more complete understanding of the joint computational and physical modeling needs, this report highlights areas in which co-simulation capabilities are warranted. The current development status, quality assurance, availability, and maintainability of simulation tools currently available for hybrid systems modeling are presented. Existing gaps in the modeling and simulation toolsets and development needs are subsequently discussed. This effort will feed into a broader roadmap activity for designing, developing, and demonstrating hybrid energy systems.
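The LOLP index introduced above can be illustrated with a Monte Carlo sketch; the unit sizes, forced-outage rates, and load model are invented for the example, not values from the report.

    import random

    # Monte Carlo Loss of Load Probability for a toy generating fleet.
    UNITS = [(400, 0.05), (300, 0.08), (200, 0.04), (150, 0.06)]  # (MW, FOR)

    def lolp(hours=100_000):
        """Fraction of simulated hours in which available generation,
        after random forced outages, cannot cover the load."""
        shortfalls = 0
        for _ in range(hours):
            available = sum(mw for mw, f in UNITS if random.random() > f)
            load = random.gauss(700, 80)     # stand-in hourly load (MW)
            if available < load:
                shortfalls += 1
        return shortfalls / hours

    print("LOLP =", lolp())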
Proposal for a Security Management in Cloud Computing for Health Care
Haufe, Knut; Dzombeta, Srdan; Brandis, Knud
2014-01-01
Cloud computing is currently one of the most popular themes of information systems research. Given the nature of the information they process, health care organizations especially need to assess and treat the specific risks of cloud computing in their information security management system (ISMS). Therefore, in this paper we propose a framework that includes the most important security processes regarding cloud computing in the health care sector. Starting from a framework of general information security management processes derived from standards of the ISO 27000 family, the most important information security processes for health care organizations using cloud computing are identified, considering the main risks of cloud computing and the type of information processed. The identified processes will help a health care organization using cloud computing to focus on the most important ISMS processes and to establish and operate them at an appropriate level of maturity given limited resources. PMID:24701137
Analysis and Preliminary Design of an Advanced Technology Transport Flight Control System
NASA Technical Reports Server (NTRS)
Frazzini, R.; Vaughn, D.
1975-01-01
The analysis and preliminary design of an advanced technology transport aircraft flight control system using avionics and flight control concepts appropriate to the 1980-1985 time period are discussed. Specifically, the techniques and requirements of the flight control system were established, a number of candidate configurations were defined, and an evaluation of these configurations was performed to establish a recommended approach. Candidate configurations based on redundant integration of various sensor types, computational methods, servo actuator arrangements, and data-transfer techniques were defined to the functional-module and piece-part level. Life-cycle costs for the flight control configurations, as determined in an operational environment model for 200 aircraft over a 15-year service life, were the basis of the optimum-configuration selection tradeoff. The recommended system concept is a quad digital computer configuration utilizing a small microprocessor for input/output control, a hexad skewed set of conventional sensors for body rate and body acceleration, and triple integrated actuators.
Intelligent Tutoring Systems: Formalization as Automata and Interface Design Using Neural Networks
ERIC Educational Resources Information Center
Curilem, S. Gloria; Barbosa, Andrea R.; de Azevedo, Fernando M.
2007-01-01
This article proposes a mathematical model of Intelligent Tutoring Systems (ITS), based on observations of the behaviour of these systems. One of the most important problems of pedagogical software is to establish a common language between the knowledge areas involved in their development, basically pedagogical, computing and domain areas. A…
Safety in the Chemical Laboratory
ERIC Educational Resources Information Center
Coffee, Robert D.
1972-01-01
The author discusses a system for establishing the relative potential of a chemical to release energy suddenly and to indicate release. This system is applicable to chemical storage and transportation. The system is based upon three simple tests requiring a minimum sample (1 g or 1 ml): (1) computation, (2) impact sensitivity, and (3) thermal…
Lee, Kang-Hoon; Shin, Kyung-Seop; Lim, Debora; Kim, Woo-Chan; Chung, Byung Chang; Han, Gyu-Bum; Roh, Jeongkyu; Cho, Dong-Ho; Cho, Kiho
2015-07-01
The genomes of living organisms are populated with pleomorphic repetitive elements (REs) of varying densities. Our hypothesis that genomic RE landscapes are species/strain/individual-specific was implemented into the Genome Signature Imaging system to visualize and compute the RE-based signatures of any genome. Following the occurrence profiling of 5-nucleotide REs/words, the information from top-50 frequency words was transformed into a genome-specific signature and visualized as Genome Signature Images (GSIs), using a CMYK scheme. An algorithm for computing distances among GSIs was formulated using the GSIs' variables (word identity, frequency, and frequency order). The utility of the GSI-distance computation system was demonstrated with control genomes. GSI-based computation of genome-relatedness among 1766 microbes (117 archaea and 1649 bacteria) identified their clustering patterns; although the majority paralleled the established classification, some did not. The Genome Signature Imaging system, with its visualization and distance computation functions, enables genome-scale evolutionary studies involving numerous genomes with varying sizes. Copyright © 2015 Elsevier Inc. All rights reserved.
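The profiling step described above lends itself to a compact illustration. The following Python sketch counts all 5-nucleotide words, keeps the 50 most frequent as a signature, and compares two signatures with a toy rank-based distance; the function names, the distance formula, and the penalty for missing words are illustrative assumptions, not the paper's CMYK imaging scheme or its exact distance algorithm.

    from collections import Counter

    def genome_signature(seq, k=5, top=50):
        """Return the `top` most frequent k-mers as (word, count) pairs."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        return counts.most_common(top)

    def signature_distance(sig_a, sig_b):
        # Toy distance over word identity and frequency order: shared words
        # contribute their rank difference; missing words take a max penalty.
        rank_a = {w: r for r, (w, _) in enumerate(sig_a)}
        rank_b = {w: r for r, (w, _) in enumerate(sig_b)}
        miss = max(len(sig_a), len(sig_b))
        words = set(rank_a) | set(rank_b)
        return sum(abs(rank_a.get(w, miss) - rank_b.get(w, miss)) for w in words)

    sig1 = genome_signature("ACGTACGTGGGACGTTTACGT" * 100)
    sig2 = genome_signature("ACGTTGCAGGCATTACGGATC" * 100)
    print(signature_distance(sig1, sig2))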
Application of digital computer APU modeling techniques to control system design.
NASA Technical Reports Server (NTRS)
Bailey, D. A.; Burriss, W. L.
1973-01-01
Study of the required controls for an H2-O2 auxiliary power unit (APU) technology program for the Space Shuttle. A steady-state system digital computer program was prepared and used to optimize the initial system design. Analytical models of each system component were included. The program was used to solve a nineteen-dimensional problem, and then time-dependent differential equations were added to the computer program to simulate the transient APU system and its control. Some system parameters were considered quasi-steady-state, and others were treated as differential variables. The dynamic control analysis proceeded from initial ideal control modeling (which considered one control function and assumed the others to be ideal), stepwise through the system (adding control functions), until all of the control functions and their interactions were considered. In this way, the adequacy of the final control design over the required wide range of APU operating conditions was established.
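The split between quasi-steady-state parameters and differential variables that the abstract describes can be illustrated with a minimal modern sketch: algebraic component relations are evaluated inside the derivative function of a single integrated state. The flow and torque relations below are invented placeholders, not the APU model.

    from scipy.integrate import solve_ivp

    def apu(t, y):
        (speed,) = y                      # differential variable: shaft speed
        flow = 0.5 + 0.5 * speed          # quasi-steady valve/flow relation
        torque = flow - 0.8 * speed**2    # quasi-steady net shaft torque
        inertia = 1.0
        return [torque / inertia]

    sol = solve_ivp(apu, (0.0, 20.0), [0.2], max_step=0.05)
    print(sol.y[0][-1])                   # settles near the torque balance (~1.16)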
User’s Guide for SHIPINT - A Computer Program to Compute Two Ship Interaction in Waves
1996-08-01
Contractor report, Defence Research Establishment Atlantic, Halifax, Nova Scotia, Canada B3J 2X4. Contents: Summary; 1 Introduction; 2 Coordinate Systems and Two Ship Motions; 3 Flow Chart; 4 Input Data File Description (4.1 shipint.in, 4.2 paneLa.in).
NASA Technical Reports Server (NTRS)
Palusinski, O. A.; Allgyer, T. T.
1979-01-01
The elimination of Ampholine from the system by establishing the pH gradient with simple ampholytes is proposed. A mathematical model was exercised at the level of the two-component system by using values for mobilities, diffusion coefficients, and dissociation constants representative of glutamic acid and histidine. The constants assumed in the calculations are reported. The predictions of the model and computer simulations of isoelectric focusing experiments are of direct importance for obtaining Ampholine-free, stable pH gradients.
NASA Technical Reports Server (NTRS)
1975-01-01
The SATIL 2 computer program was developed to assist with the programmatic evaluation of alternative approaches to establishing and maintaining a specified mix of operational sensors on spacecraft in an operational SEASAT system. The program computes the probability distributions of events (i.e., number of launch attempts, number of spacecraft purchased, etc.), annual recurring cost, and present value of recurring cost. This is accomplished for the specific task of placing a desired mix of sensors in orbit in an optimal fashion in order to satisfy a specified sensor demand function. Flow charts are shown, and printouts of the programs are given.
NASA Technical Reports Server (NTRS)
Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)
1990-01-01
This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balance of the loads. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective computers are by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
Contextuality as a Resource for Models of Quantum Computation with Qubits
NASA Astrophysics Data System (ADS)
Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert
2017-09-01
A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1973-01-01
A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
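For a sense of what such an occlusion computation involves, here is a minimal 2-D sketch: a target is occluded if the sight line from the robot to the target passes through any obstacle, idealized here as circles. The geometry and names are illustrative; the report's two occlusion algorithms are not reproduced.

    import math

    def occluded(robot, target, obstacles):
        """obstacles: iterable of (cx, cy, radius) circles."""
        rx, ry = robot
        tx, ty = target
        dx, dy = tx - rx, ty - ry
        seg_len2 = dx * dx + dy * dy
        for cx, cy, r in obstacles:
            # parameter of the closest point on the sight segment to the circle center
            t = ((cx - rx) * dx + (cy - ry) * dy) / seg_len2
            t = max(0.0, min(1.0, t))          # clamp to the segment
            px, py = rx + t * dx, ry + t * dy  # closest point on the segment
            if math.hypot(px - cx, py - cy) < r:
                return True
        return False

    print(occluded((0, 0), (10, 0), [(5, 0.5, 1.0)]))  # True: the circle blocks the view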
A FPGA-based architecture for real-time image matching
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo
2013-10-01
Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images of the same scene taken from different viewpoints or at different times. However, its large computational complexity has been a challenge for most embedded systems. This paper proposes a single-FPGA image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction, and BRIEF matching. It optimizes the FPGA architecture for SIFT feature detection to reduce resource utilization, and implements BRIEF description and matching on the FPGA as well. The proposed system can perform image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demands of most real-life computer vision applications.
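As a software illustration of the describe-and-match stages (the paper implements them in FPGA logic; the SIFT detector is omitted here), the sketch below builds BRIEF-style binary descriptors from random intensity comparisons and matches them by Hamming distance. The 256-bit descriptor size is standard for BRIEF, but the sampling pattern and test data are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    PAIRS = rng.integers(-8, 9, size=(256, 4))  # (dy1, dx1, dy2, dx2) offsets

    def brief(image, keypoint):
        y, x = keypoint
        bits = [image[y + a, x + b] < image[y + c, x + d] for a, b, c, d in PAIRS]
        return np.packbits(bits)                # 256 bits -> 32 bytes

    def match(desc, candidates):
        # index of the candidate descriptor with the smallest Hamming distance
        dists = [np.unpackbits(desc ^ c).sum() for c in candidates]
        return int(np.argmin(dists))

    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    d = brief(img, (32, 32))
    print(match(d, [brief(img, (20, 20)), brief(img, (32, 32))]))  # -> 1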
2009-03-01
Also known as SubSeven, this is one of the best known, most widely distributed backdoor programs on the... engineering the spread of viruses, worms, backdoors and other malware. The Sub7 Trojan establishes a server on the victim computer that
Computational structures technology and UVA Center for CST
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1992-01-01
Rapid advances in computer hardware have had a profound effect on various engineering and mechanics disciplines, including the materials, structures, and dynamics disciplines. A new technology, computational structures technology (CST), has recently emerged as an insightful blend between material modeling, structural and dynamic analysis and synthesis on the one hand, and other disciplines such as computer science, numerical analysis, and approximation theory, on the other hand. CST is an outgrowth of finite element methods developed over the last three decades. The focus of this presentation is on some aspects of CST which can impact future airframes and propulsion systems, as well as on the newly established University of Virginia (UVA) Center for CST. The background and goals for CST are described along with the motivations for developing CST, and a brief discussion is made on computational material modeling. We look at the future in terms of technical needs, computing environment, and research directions. The newly established UVA Center for CST is described. One of the research projects of the Center is described, and a brief summary of the presentation is given.
Undecidability and Irreducibility Conditions for Open-Ended Evolution and Emergence.
Hernández-Orozco, Santiago; Hernández-Quiroz, Francisco; Zenil, Hector
2018-01-01
Is undecidability a requirement for open-ended evolution (OEE)? Using methods derived from algorithmic complexity theory, we propose robust computational definitions of open-ended evolution and the adaptability of computable dynamical systems. Within this framework, we show that decidability imposes absolute limits on the stable growth of complexity in computable dynamical systems. Conversely, systems that exhibit (strong) open-ended evolution must be undecidable, establishing undecidability as a requirement for such systems. Complexity is assessed in terms of three measures: sophistication, coarse sophistication, and busy beaver logical depth. These three complexity measures assign low complexity values to random (incompressible) objects. As time grows, the stated complexity measures allow for the existence of complex states during the evolution of a computable dynamical system. We show, however, that finding these states involves undecidable computations. We conjecture that for similar complexity measures that assign low complexity values, decidability imposes comparable limits on the stable growth of complexity, and that such behavior is necessary for nontrivial evolutionary systems. We show that the undecidability of adapted states imposes novel and unpredictable behavior on the individuals or populations being modeled. Such behavior is irreducible. Finally, we offer an example of a system, first proposed by Chaitin, that exhibits strong OEE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems
Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua
2017-01-01
Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential to provide low-latency communication from sensing data sources to users. For objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which fog services are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943
ERIC Educational Resources Information Center
Malone, Bobby G.; Nelson, Jacquelyn S.; Nelson, C. Van
The implementation of a plus/minus system of grading to replace the traditional A through F grading system for graduate students was studied at a midsize Midwestern university. Decimal equivalents were established to enable the computation of grade point averages (GPAs) that reflected the dispersion of grades through the plus/minus system. A…
Optical mass memory system (AMM-13). AMM/DBMS interface control document
NASA Technical Reports Server (NTRS)
Bailey, G. A.
1980-01-01
The baseline for external interfaces of a 10 to the 13th power bit, optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; AMM-13, Data Base Management System, NASA End-to-End Data System computer interconnect; data/control input and output interfaces; test input data source; file management; and facilities interface.
NASA Technical Reports Server (NTRS)
Thomas, V. C.
1986-01-01
A Vibroacoustic Data Base Management Center has been established at the Jet Propulsion Laboratory (JPL). The center utilizes the Vibroacoustic Payload Environment Prediction System (VAPEPS) software package to manage a data base of shuttle and expendable launch vehicle flight and ground test data. Remote terminal access over telephone lines to a dedicated VAPEPS computer system has been established to provide the payload community a convenient means of querying the global VAPEPS data base. This guide describes the functions of the JPL Data Base Management Center and contains instructions for utilizing the resources of the center.
Cogeneration technology alternatives study. Volume 6: Computer data
NASA Technical Reports Server (NTRS)
1980-01-01
The potential technical capabilities of energy conversion systems in the 1985 - 2000 time period were defined with emphasis on systems using coal, coal-derived fuels or alternate fuels. Industrial process data developed for the large energy consuming industries serve as a framework for the cogeneration applications. Ground rules for the study were established and other necessary equipment (balance-of-plant) was defined. This combination of technical information, energy conversion system data ground rules, industrial process information and balance-of-plant characteristics was analyzed to evaluate energy consumption, capital and operating costs and emissions. Data in the form of computer printouts developed for 3000 energy conversion system-industrial process combinations are presented.
Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela
2016-01-01
In this study, bio-inspired computing is exploited for solving systems of nonlinear equations, using variants of genetic algorithms (GAs) as a global search method hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for the system of nonlinear equations in the mean-square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the viable SQP algorithm. Twelve versions of the memetic approach GA-SQP are designed by taking different sets of reproduction routines in the optimization process. Performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion, and chemical equilibrium. Comparative studies of the results in terms of accuracy, convergence, and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. Accuracy and convergence of the memetic computing GA-SQP are found to be better in each case of the simulation study, and the effectiveness of the scheme is further established through statistics based on different performance indices for accuracy and complexity.
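A minimal sketch of the memetic scheme, under stated assumptions: a crude GA (truncation selection plus Gaussian mutation) searches globally for roots of a small nonlinear system whose fitness is the mean-squared residual, and SciPy's SQP-type local solver (SLSQP) refines the best candidate. The GA settings and the example system are illustrative; none of the paper's twelve reproduction variants is reproduced.

    import numpy as np
    from scipy.optimize import minimize

    def residuals(x):   # example system: x0^2 + x1 = 3,  x0 + x1^2 = 5
        return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

    def fitness(x):     # mean-squared error of the equation residuals
        return float(np.mean(residuals(x) ** 2))

    rng = np.random.default_rng(1)
    pop = rng.uniform(-5, 5, size=(40, 2))
    for _ in range(100):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[:20]]                   # truncation selection
        children = parents + rng.normal(0, 0.3, parents.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmin([fitness(p) for p in pop])]
    refined = minimize(fitness, best, method="SLSQP")            # SQP local refinement
    print(refined.x, fitness(refined.x))                         # near the exact root (1, 2)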
Natural Resource Information System, design analysis
NASA Technical Reports Server (NTRS)
1972-01-01
The computer-based system stores, processes, and displays map data relating to natural resources. The system was designed on the basis of requirements established in a user survey and an analysis of decision flow. The design analysis effort is described, and the rationale behind major design decisions, including map processing, cell vs. polygon, choice of classification systems, mapping accuracy, system hardware, and software language is summarized.
A Vote for Election Science as an Academic Discipline
ERIC Educational Resources Information Center
Foster, Andrea L.
2006-01-01
This article presents the suggestion of Merle S. King, chairman of the department of computer science and information systems at Kennesaw State University and also a director of Kennesaw State's Center for Elections Systems, which has helped establish a uniform statewide voting system in Georgia. On the last day of the conference sponsored by the…
Plouff, Donald
2000-01-01
Gravity observations are directly made or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and consequently to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively-prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; style of quality control to document and validate computer applications; lack of dependence on proprietary software except source code compilation; method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. Computer source codes provided are written in the Fortran scientific language. In order for the programs to operate, they first must be converted (compiled) into an executable form on the user's computer. Although program testing was done in a UNIX (tradename of American Telephone and Telegraph Company) computer environment, it is anticipated that only a system-dependent date-and-time function may need to be changed for adaptation to other computer platforms that accept standard Fortran code.
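One step in the reduction chain, removal of linear instrument drift using repeat base-station readings, can be sketched compactly. The assumption of strictly linear drift and the numbers below are illustrative; the report's iterative procedure, datum ties, and terrain corrections are not reproduced.

    def drift_corrected(readings, base_value):
        """readings: list of (time_hr, station_id, meter_reading);
        the first and last readings must reoccupy the base station."""
        t0, _, r0 = readings[0]
        t1, _, r1 = readings[-1]
        rate = (r1 - r0) / (t1 - t0)          # meter drift per hour
        out = {}
        for t, station, r in readings[1:-1]:
            corrected = r - rate * (t - t0)   # remove accumulated drift
            out[station] = base_value + (corrected - r0)  # tie to the base datum
        return out

    loop = [(0.0, "BASE", 1532.10), (1.2, "G-101", 1540.55),
            (2.5, "G-102", 1528.30), (3.0, "BASE", 1532.22)]
    print(drift_corrected(loop, base_value=979600.00))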
Hyperswitch Network For Hypercube Computer
NASA Technical Reports Server (NTRS)
Chow, Edward; Madan, Herbert; Peterson, John
1989-01-01
Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.
The Mesa Arizona Pupil Tracking System
NASA Technical Reports Server (NTRS)
Wright, D. L.
1973-01-01
A computer-based Pupil Tracking/Teacher Monitoring System was designed for Mesa Public Schools, Mesa, Arizona. The established objectives of the system were to: (1) facilitate the economical collection and storage of student performance data necessary to objectively evaluate the relative effectiveness of teachers, instructional methods, materials, and applied concepts; and (2) identify, on a daily basis, those students requiring special attention in specific subject areas. The system encompasses computer hardware/software and integrated curricula progression/administration devices. It provides daily evaluation and monitoring of performance as students progress at class or individualized rates. In the process, it notifies the student and collects information necessary to validate or invalidate subject presentation devices, methods, materials, and measurement devices in terms of direct benefit to the students. The system utilizes a small-scale computer (e.g., IBM 1130) to assure low-cost replicability, and may be used for many subjects of instruction.
Establishing Information Security Systems via Optical Imaging
2015-08-11
SLM, spatial light modulator; BSC, non - polarizing beam splitter cube; CCD, charge-coupled device. In computational ghost imaging, a series of...Laser Object Computer Fig. 5. A schematic setup for the proposed method using holography: BSC, Beam splitter cube; CCD, Charge-coupled device. The...interference between reference and object beams . (a) (e) (d) (c) (b) Distribution Code A: Approved for public release, distribution is unlimited
Computing, Information and Communications Technology (CICT) Website
NASA Technical Reports Server (NTRS)
Hardman, John; Tu, Eugene (Technical Monitor)
2002-01-01
The Computing, Information and Communications Technology Program (CICT) was established in 2001 to ensure NASA's continuing leadership in emerging technologies. It is a coordinated, Agency-wide effort to develop and deploy key enabling technologies for a broad range of mission-critical tasks. The NASA CICT program is designed to address Agency-specific computing, information, and communications technology requirements beyond the projected capabilities of commercially available solutions. The areas of technical focus have been chosen for their impact on NASA's missions, their national importance, and the technical challenge they provide to the Program. In order to meet its objectives, the CICT Program is organized into the following four technology-focused projects: 1) Computing, Networking and Information Systems (CNIS); 2) Intelligent Systems (IS); 3) Space Communications (SC); 4) Information Technology Strategic Research (ITSR).
Energy Efficient Engine (E3) controls and accessories detail design report
NASA Technical Reports Server (NTRS)
Beitler, R. S.; Lavash, J. P.
1982-01-01
An Energy Efficient Engine program has been established by NASA to develop technology for improving the energy efficiency of future commercial transport aircraft engines. As part of this program, a new turbofan engine was designed. This report describes the fuel and control system for this engine. The system design is based on many of the proven concepts and component designs used on the General Electric CF6 family of engines. One significant difference is the incorporation of digital electronic computation in place of the hydromechanical computation currently used.
Deadbeat Predictive Controllers
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1997-01-01
Several new computational algorithms are presented to compute the deadbeat predictive control law. The first algorithm makes use of a multi-step-ahead output prediction to compute the control law without explicitly calculating the controllability matrix. The system identification must be performed first, and then the predictive control law is designed. The second algorithm uses the input and output data directly to compute the feedback law; it combines the system identification and the predictive control law into one formulation. The third algorithm uses an observable-canonical-form realization to design the predictive controller. The relationship between all three algorithms is established through the use of the state-space representation. All algorithms are applicable to multi-input, multi-output systems with disturbance inputs. In addition to the feedback terms, feedforward terms may also be added for disturbance inputs if they are measurable. Although the feedforward terms do not influence the stability of the closed-loop feedback law, they enhance the performance of the controlled system.
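The state-space relationship mentioned above can be made concrete. For a known discrete model (A, B, C), stack the p-step output predictions as Y = O x + T U and choose the input sequence that zeroes them, U = -pinv(T) O x. The model below is an invented example; the paper's identification-based and input-output forms are not shown.

    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([[0.05], [0.1]])
    C = np.array([[1.0, 0.0]])
    p = 4                                     # prediction horizon

    # O stacks C A^(i+1); T is the Toeplitz matrix of Markov parameters C A^(i-j) B
    O = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(p)])
    T = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1):
            T[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

    x = np.array([[1.0], [0.5]])              # current state
    U = -np.linalg.pinv(T) @ O @ x            # inputs zeroing the next p outputs
    print(U.ravel())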
BESIII Physical Analysis on Hadoop Platform
NASA Astrophysics Data System (ADS)
Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing
2014-06-01
In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. Jobs running on a traditional cluster with a data-to-computing structure have to read large volumes of data via the network to the computing nodes for analysis, making I/O latency a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability, and high fault tolerance, and it can benefit us in dealing with Big Data. This paper introduces the idea of using the MapReduce model for BESIII physics analysis and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis but also reduces the cost of system building. Moreover, this paper establishes an event pre-selection system based on the event-level metadata (TAGs) database to optimize the data analysis procedure.
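The TAG-based pre-selection fits naturally into a map-only job. Below is an illustrative Hadoop Streaming mapper in Python: it forwards only events whose TAG variables pass a cut, so that full reconstruction runs on candidates alone. The tab-separated TAG fields and the cut itself are invented placeholders, not the BESIII schema.

    # mapper.py -- run as a Hadoop Streaming map-only job; an identity
    # reducer (or no reducer at all) collects the surviving events.
    import sys

    for line in sys.stdin:
        event_id, n_tracks, total_energy = line.rstrip("\n").split("\t")
        # keep only events whose TAGs pass the physics pre-selection cut
        if int(n_tracks) >= 2 and float(total_energy) > 1.0:
            print(f"{event_id}\t{n_tracks}\t{total_energy}")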
Computational neuropharmacology: dynamical approaches in drug discovery.
Aradi, Ildiko; Erdi, Péter
2006-05-01
Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.
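As a minimal example of the kind of dynamical model the authors advocate, the sketch below integrates the FitzHugh-Nagumo neuron with forward Euler; a drug effect could be explored by perturbing a parameter and observing the change in firing. The parameter values are textbook defaults, not drawn from any pharmacological study.

    import numpy as np

    a, b, tau, I = 0.7, 0.8, 12.5, 0.5     # classic FitzHugh-Nagumo parameters
    dt, steps = 0.01, 20000
    v, w = -1.0, 1.0
    trace = np.empty(steps)
    for i in range(steps):
        dv = v - v**3 / 3 - w + I          # fast membrane-potential variable
        dw = (v + a - b * w) / tau         # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        trace[i] = v
    print(trace.max(), trace.min())        # sustained oscillation indicates tonic spiking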
Addressing Failures in Exascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snir, Marc; Wisniewski, Robert; Abraham, Jacob
2014-01-01
We present here a report produced by a workshop on 'Addressing failures in exascale computing' held in Park City, Utah, 4-11 August 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels in a computing system, discuss existing knowledge on resilience across the various hardware and software layers of an exascale system, and build on those results, examining potential solutions from both a hardware and software perspective and focusing on a combined approach. The workshop brought together participants with expertise in applications, system software, and hardware; they came from industry, government, and academia, and their interests ranged from theory to implementation. The combination allowed broad and comprehensive discussions and led to this document, which summarizes and builds on those discussions.
Human-Centered Design of Human-Computer-Human Dialogs in Aerospace Systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1998-01-01
A series of ongoing research programs at Georgia Tech established a need for a simulation support tool for aircraft computer-based aids. This led to the design and development of the Georgia Tech Electronic Flight Instrument Research Tool (GT-EFIRT). GT-EFIRT is a part-task flight simulator specifically designed to study aircraft display design and single-pilot interaction. The simulator, using commercially available graphics and Unix workstations, replicates to a high level of fidelity the Electronic Flight Instrument System (EFIS), Flight Management Computer (FMC), and Auto Flight Director System (AFDS) of the Boeing 757/767 aircraft. The simulator can be configured to present information using conventional-looking B757/767 displays or next-generation Primary Flight Displays (PFD) such as found on the Beech Starship and MD-11.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability introduced by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, the primary and secondary tradeoffs between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
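The LOLP computation whose cost motivates those simplifications is, in its standard form, a convolution over unit outage states. The sketch below builds the capacity-outage probability table for a few two-state units and sums the probability that available capacity falls short of load; the unit data are illustrative.

    from collections import defaultdict

    units = [(100, 0.02), (100, 0.02), (50, 0.05)]   # (capacity MW, forced outage rate)

    table = {0: 1.0}                       # P(total capacity on outage = c)
    for cap, q in units:
        nxt = defaultdict(float)
        for out, p in table.items():
            nxt[out] += p * (1 - q)        # unit available
            nxt[out + cap] += p * q        # unit on forced outage
        table = dict(nxt)

    total = sum(c for c, _ in units)
    load = 180.0
    lolp = sum(p for out, p in table.items() if total - out < load)
    print(f"LOLP = {lolp:.6f}")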
Human Systems Center Products and Progress.
1993-10-01
and (CASHE:PVS). CASHE:PVS version 1.0 is a CD-ROM-based hypermedia ergonomic information base... As a precursor to developing collaborative design... computer-generated image to determine if the activity is physically possible. Expert system... Crew System Ergonomics Information Analysis Center known as... facility issues relative to dentistry; the scope includes technical... Federal Drug Administration, and Centers for Disease Control to establish
Hu, Jian; Xu, Xiang-yang; Song, En-min; Tan, Hong-bao; Wang, Yi-ning
2009-09-01
To establish a new visual educational system of virtual reality for clinical dentistry based on World Wide Web (WWW) pages, in order to provide more three-dimensional multimedia resources to dental students and an online three-dimensional consulting system for patients. Based on computer graphics and three-dimensional webpage technologies, the software packages 3Dsmax and Webmax were adopted in the system development. In the Windows environment, the architecture of the whole system was established step by step, including three-dimensional model construction, three-dimensional scene setup, transplanting the three-dimensional scene into the webpage, reediting the virtual scene, realization of interactions within the webpage, initial testing, and necessary adjustment. Five cases of three-dimensional interactive webpages for clinical dentistry were completed. The three-dimensional interactive webpages are accessible through a web browser on a personal computer, and users can interact with them by rotating, panning, and zooming the virtual scene. It is technically feasible to implement a visual educational system of virtual reality for clinical dentistry based on WWW webpages. Information related to clinical dentistry can be transmitted properly, visually, and interactively through three-dimensional webpages.
Scripting for Construction of a Transactive Memory System in Multidisciplinary CSCL Environments
ERIC Educational Resources Information Center
Noroozi, Omid; Biemans, Harm J. A.; Weinberger, Armin; Mulder, Martin; Chizari, Mohammad
2013-01-01
Establishing a Transactive Memory System (TMS) is essential for groups of learners, when they are multidisciplinary and collaborate online. Environments for Computer-Supported Collaborative Learning (CSCL) could be designed to facilitate the TMS. This study investigates how various aspects of a TMS (i.e., specialization, coordination, and trust)…
Low-Cost Terminal Alternative for Learning Center Managers. Final Report.
ERIC Educational Resources Information Center
Nix, C. Jerome; And Others
This study established the feasibility of replacing high performance and relatively expensive computer terminals with less expensive ones adequate for supporting specific tasks of Advanced Instructional System (AIS) at Lowry AFB, Colorado. Surveys of user requirements and available devices were conducted and the results used in a system analysis.…
48 CFR 239.7102-2 - Compromising emanations-TEMPEST or other standard.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-TEMPEST or other standard. 239.7102-2 Section 239.7102-2 Federal Acquisition Regulations System DEFENSE... INFORMATION TECHNOLOGY Security and Privacy for Computer Systems 239.7102-2 Compromising emanations—TEMPEST or....e., an established National TEMPEST standard (e.g., NACSEM 5100, NACSIM 5100A) or a standard used by...
48 CFR 239.7102-2 - Compromising emanations-TEMPEST or other standard.
Code of Federal Regulations, 2014 CFR
2014-10-01
...-TEMPEST or other standard. 239.7102-2 Section 239.7102-2 Federal Acquisition Regulations System DEFENSE... INFORMATION TECHNOLOGY Security and Privacy for Computer Systems 239.7102-2 Compromising emanations—TEMPEST or....e., an established National TEMPEST standard (e.g., NACSEM 5100, NACSIM 5100A) or a standard used by...
48 CFR 239.7102-2 - Compromising emanations-TEMPEST or other standard.
Code of Federal Regulations, 2011 CFR
2011-10-01
...-TEMPEST or other standard. 239.7102-2 Section 239.7102-2 Federal Acquisition Regulations System DEFENSE... INFORMATION TECHNOLOGY Security and Privacy for Computer Systems 239.7102-2 Compromising emanations—TEMPEST or....e., an established National TEMPEST standard (e.g., NACSEM 5100, NACSIM 5100A) or a standard used by...
48 CFR 239.7102-2 - Compromising emanations-TEMPEST or other standard.
Code of Federal Regulations, 2012 CFR
2012-10-01
...-TEMPEST or other standard. 239.7102-2 Section 239.7102-2 Federal Acquisition Regulations System DEFENSE... INFORMATION TECHNOLOGY Security and Privacy for Computer Systems 239.7102-2 Compromising emanations—TEMPEST or....e., an established National TEMPEST standard (e.g., NACSEM 5100, NACSIM 5100A) or a standard used by...
48 CFR 239.7102-2 - Compromising emanations-TEMPEST or other standard.
Code of Federal Regulations, 2013 CFR
2013-10-01
...-TEMPEST or other standard. 239.7102-2 Section 239.7102-2 Federal Acquisition Regulations System DEFENSE... INFORMATION TECHNOLOGY Security and Privacy for Computer Systems 239.7102-2 Compromising emanations—TEMPEST or....e., an established National TEMPEST standard (e.g., NACSEM 5100, NACSIM 5100A) or a standard used by...
Implementation of Project Based Learning in Mechatronic Lab Course at Bandung State Polytechnic
ERIC Educational Resources Information Center
Basjaruddin, Noor Cholis; Rakhman, Edi
2016-01-01
Mechatronics is a multidisciplinary field that includes a combination of mechanics, electronics, control systems, and computer science. The main objective of mechatronics learning is to establish a comprehensive mindset in the development of mechatronic systems. Project Based Learning (PBL) is an appropriate method for use in the learning process of…
Beach Profile Analysis Systems (BPAS). Volume VI. BPAS User’s Guide: Analysis Module VOLCTR.
1982-06-01
the two seawardmost points. Before computing volume changes, common bonds are established relative to the landward and seaward extent of the surveys on...bit word size, the FORTRAN-callable sort routine (interfacing with the NOS or NOSME operating system SORTMRG utility), and the utility subroutines and
GIS Facility and Services at the Ronald Greeley Center for Planetary Studies
NASA Astrophysics Data System (ADS)
Nelson, D. M.; Williams, D. A.
2017-06-01
At the RGCPS, we established a Geographic Information Systems (GIS) computer laboratory, where we instruct researchers how to use GIS and image processing software. Seminars demonstrate viewing, integrating, and digitally mapping planetary data.
Multiple Approaches to Design Education
ERIC Educational Resources Information Center
Fox, Richard L.; And Others
1974-01-01
Discusses implementation of Sloan Foundation projects at the Case Western School of Engineering, including the development of a computer assisted mechanical structural design course, the establishment of a complex systems laboratory, and personnel views of industry-university design projects. (CC)
A finite element solution algorithm for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1974-01-01
A finite element solution algorithm is established for the two-dimensional Navier-Stokes equations governing the steady-state kinematics and thermodynamics of a variable viscosity, compressible multiple-species fluid. For an incompressible fluid, the motion may be transient as well. The primitive dependent variables are replaced by a vorticity-streamfunction description valid in domains spanned by rectangular, cylindrical and spherical coordinate systems. Use of derived variables provides a uniformly elliptic partial differential equation description for the Navier-Stokes system, for which the finite element algorithm is established. Explicit non-linearity is accepted by the theory, since no pseudo-variational principles are employed, and there is no requirement for either computational mesh or solution domain closure regularity. Boundary condition constraints on the normal flux and tangential distribution of all computational variables, as well as velocity, are routinely piecewise enforceable on domain closure segments arbitrarily oriented with respect to a global reference frame.
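For the common special case of two-dimensional, constant-viscosity incompressible flow, the vorticity-streamfunction description referred to above takes the familiar textbook form (this reduction is not the paper's full variable-viscosity, multiple-species system):

    \nabla^2 \psi = -\omega, \qquad
    \frac{\partial \omega}{\partial t} + u\,\frac{\partial \omega}{\partial x} + v\,\frac{\partial \omega}{\partial y} = \nu\,\nabla^2 \omega, \qquad
    u = \frac{\partial \psi}{\partial y}, \quad v = -\frac{\partial \psi}{\partial x},

so that the Poisson equation for \psi supplies the uniformly elliptic structure the algorithm exploits.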
Recent Advances in X-ray Cone-beam Computed Laminography.
O'Brien, Neil S; Boardman, Richard P; Sinclair, Ian; Blumensath, Thomas
2016-10-06
X-ray computed tomography is an established volume imaging technique used routinely in medical diagnosis, industrial non-destructive testing, and a wide range of scientific fields. Traditionally, computed tomography uses scanning geometries with a single axis of rotation together with reconstruction algorithms specifically designed for this setup. Recently there has, however, been increasing interest in more complex scanning geometries. These include so-called X-ray computed laminography systems capable of imaging specimens with large lateral dimensions or large aspect ratios, neither of which is well suited to conventional CT scanning procedures. Developments throughout this field have thus been rapid, including the introduction of novel system trajectories, the application and refinement of various reconstruction methods, and the use of recently developed computational hardware and software techniques to accelerate reconstruction times. Here we examine the advances made in the last several years and consider their impact on the state of the art.
High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations
NASA Technical Reports Server (NTRS)
Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.
2003-01-01
Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, B.
The Energy Research program may be on the verge of abdicating an important role it has traditionally played in the development and use of state-of-the-art computer systems. The lack of easy access to Class VI systems, coupled with the easy availability of local, user-friendly systems, is conspiring to drive many investigators away from forefront research in computational science and from the use of state-of-the-art computers for more discipline-oriented problem solving. The survey conducted under the auspices of this contract clearly demonstrates a significant suppressed demand for actual Class VI hours totaling the full capacity of one such system. The current usage is about a factor of 15 below this level. There is also a need for about 50% more capacity in the current mini/midi availability. Meeting the needs of the ER community for this level of computing power and capacity is most probably best achieved through the establishment of a central Class VI capability at some site, linked through a nationwide network to the various ER laboratories and universities and interfaced with the local user-friendly systems at those remote sites.
The impact of computer usage on the perceptions of hospital secretaries.
Foner, C; Nour, M; Luo, X; Kim, J
1991-09-01
This study explored the perceptions of hospital unit secretaries regarding computer usage. Specifically, six attitudinal variables: performance, resistance, interpersonal relations, satisfaction, challenge, and work overload were examined. The study had two major findings: (1) hospital unit secretaries have positive perceptions of job performance, satisfaction, and challenge as a result of using the PHAMIS computer system and (2) hospital unit secretaries do not feel resistant to the system, overloaded with work, or inclined to increase their interpersonal interaction with coworkers. These two findings might appear contradictory on the surface, but in fact are consistent with overall positive perceptions about the PHAMIS system. The study also considered the impact of two independent variables--age and number of years at work--on the responses of subjects. The analysis indicated that together these two variables explained some variations in the values of at least two of the dependent variables--resistance and challenge. The authors therefore concluded that the installation of the hospital computer system has established a favorable working environment for those whose work is affected by it. The dramatic expansion of computer systems in nonprofit institutions as well as in profit-oriented institutions has made people more familiar with computer technology. This trend can account for the overall positive perception of the unit secretaries toward the new computer system. Moreover, training programs and the support of top management for the system may also have enhanced the positive attitude of the users.
Sign use and cognition in automated scientific discovery: are computers only special kinds of signs?
NASA Astrophysics Data System (ADS)
Giza, Piotr
2018-04-01
James Fetzer criticizes the computational paradigm, prevailing in cognitive science, by questioning what he takes to be its most elementary ingredient: that cognition is computation across representations. He argues that if cognition is taken to be a purposive, meaningful, algorithmic problem solving activity, then computers are incapable of cognition. Instead, they appear to be signs of a special kind that can facilitate computation. He proposes the conception of minds as semiotic systems as an alternative paradigm for understanding mental phenomena, one that seems to overcome the difficulties of computationalism. Now, I argue that with computer systems dealing with scientific discovery, the matter is not so simple as that. The alleged superiority of humans using signs to stand for something other over computers being merely "physical symbol systems" or "automatic formal systems" is only easy to establish in everyday life, but becomes far from obvious when scientific discovery is at stake. In science, as opposed to everyday life, the meaning of symbols is, apart from very low-level experimental investigations, defined implicitly by the way the symbols are used in explanatory theories or experimental laws relevant to the field, and in consequence, human and machine discoverers are much more on a par. Moreover, the great practical success of the genetic programming method and recent attempts to apply it to automatic generation of cognitive theories seem to show that computer systems are capable of very efficient problem solving activity in science, which is neither purposive nor meaningful, nor algorithmic. This, I think, undermines Fetzer's argument that computer systems are incapable of cognition because computation across representations is bound to be a purposive, meaningful, algorithmic problem solving activity.
Establishment and outcomes of a model primary care pharmacy service system.
Carmichael, Jannet M; Alvarez, Autumn; Chaput, Ryan; DiMaggio, Jennifer; Magallon, Heather; Mambourg, Scott
2004-03-01
The establishment and outcomes of a model primary care pharmacy service system are described. A primary care pharmacy practice model was established at a government health care facility in March 1996. The original objective was to establish a primary pharmacy practice model that would demonstrate improved patient outcomes and maximize the pharmacist's contributions to drug therapy. Since its inception, many improvements have been realized and supported by advanced computer and automated systems, expanded disease state management practices, and unique practitioner and administrative support. Many outcomes studies have been performed on the pharmacist-initiated and -managed clinics, leading to improved patient care and conveying the quality-conscious and cost-effective role pharmacists can play as independent practitioners in this environment. These activities demonstrate cutting-edge leadership in health-system pharmacy. Redesign has been used to improve consistent access to a medication expert and has significantly improved the quality of patient care while easing physicians' workload without increasing health care costs. A system using pharmacists as independent practitioners to promote primary care has achieved high-quality and cost-effective patient care.
A Pilot Study of the Naming Transaction Shell
1991-06-01
effective computer-based instructional design. AIDA will take established theories of knowledge, learning, and instruction and incorporate the theories...felt that anyone could learn to use the system both in design and delivery modes. Traditional course development (non-computer instruction) for the...students were studying and learning the material in the text. This often resulted in wasted effort in the simulator. By ensuring that the students knew the
Norton, James J. S.; Lee, Dong Sup; Lee, Jung Woo; Lee, Woosik; Kwon, Ohjin; Won, Phillip; Jung, Sung-Young; Cheng, Huanyu; Jeong, Jae-Woong; Akce, Abdullah; Umunna, Stephen; Na, Ilyoun; Kwon, Yong Ho; Wang, Xiao-Qi; Liu, ZhuangJian; Paik, Ungyu; Huang, Yonggang; Bretl, Timothy; Yeo, Woon-Hong; Rogers, John A.
2015-01-01
Recent advances in electrodes for noninvasive recording of electroencephalograms expand opportunities for collecting such data for diagnosis of neurological disorders and brain–computer interfaces. Existing technologies, however, cannot be used effectively in continuous, uninterrupted modes for more than a few days due to irritation and irreversible degradation in the electrical and mechanical properties of the skin interface. Here we introduce a soft, foldable collection of electrodes in open, fractal mesh geometries that can mount directly and chronically on the complex surface topology of the auricle and the mastoid, to provide high-fidelity and long-term capture of electroencephalograms in ways that avoid any significant thermal, electrical, or mechanical loading of the skin. Experimental and computational studies establish the fundamental aspects of the bending and stretching mechanics that enable this type of intimate integration on the highly irregular and textured surfaces of the auricle. Cell-level tests and thermal imaging studies establish the biocompatibility and wearability of such systems, with examples of high-quality measurements over periods of 2 wk with devices that remain mounted throughout daily activities including vigorous exercise, swimming, sleeping, and bathing. Demonstrations include a text speller with a steady-state visually evoked potential-based brain–computer interface and elicitation of an event-related potential (P300 wave). PMID:25775550
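As a rough illustration of how an SSVEP speller of the kind demonstrated here classifies a selection, the sketch below compares spectral power at candidate flicker frequencies. The sampling rate, frequencies, and synthetic signal are assumptions for illustration, not parameters from the study.

import numpy as np

def ssvep_pick(eeg, fs, stim_freqs):
    """Pick the stimulus frequency with the most spectral power.

    eeg        -- 1-D EEG segment (one channel, or an average of channels)
    fs         -- sampling rate in Hz
    stim_freqs -- candidate flicker frequencies in Hz
    """
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    # Sum power in a narrow band around each candidate frequency.
    scores = [spectrum[(freqs > f - 0.25) & (freqs < f + 0.25)].sum()
              for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]

# Synthetic 4-second segment: a 12 Hz SSVEP buried in noise.
fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 12 * t) + np.random.randn(t.size)
print(ssvep_pick(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # expected: 12.0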
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.
2015-01-01
A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft x 11 ft test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement in frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed, including an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.
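The fluctuating-pressure comparison described here reduces, at each sensor location, to simple statistics over the pressure time history. A minimal sketch, with a synthetic signal standing in for the computed or measured data:

import numpy as np
from scipy import signal

def pressure_rms(p):
    """RMS of the fluctuating part of a surface-pressure time history."""
    return np.sqrt(np.mean((p - p.mean()) ** 2))

# Synthetic time history standing in for one of the 619 sensor locations.
fs = 10_000.0                       # samples per second (assumed)
t = np.arange(0, 1.5, 1 / fs)       # 1.5 physical seconds, as in the study
p = 2000 * np.sin(2 * np.pi * 85 * t) + 500 * np.random.randn(t.size)

print("RMS:", pressure_rms(p))
# Power spectral density for the frequency/power comparison.
freqs, psd = signal.welch(p - p.mean(), fs=fs, nperseg=4096)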
Stochastic Stability of Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
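A standard mean-square stability test for a discrete-time Markov jump linear system x_{k+1} = A_{theta_k} x_k (see, e.g., Costa, Fragoso, and Marques) checks the spectral radius of an augmented matrix built from the mode dynamics and the transition probabilities. A numerical sketch with illustrative matrices, not the paper's actual controller:

import numpy as np
from scipy.linalg import block_diag

def mean_square_stable(A_list, P):
    """True if x_{k+1} = A_{theta_k} x_k is mean-square stable.

    A_list -- mode matrices A_1..A_N (each n x n, real)
    P      -- N x N Markov transition matrix, P[i, j] = Pr(j | i)
    """
    n = A_list[0].shape[0]
    # Augmented second-moment operator: (P^T kron I) * diag(A_i kron A_i).
    big = np.kron(P.T, np.eye(n * n)) @ block_diag(*[np.kron(A, A) for A in A_list])
    return np.max(np.abs(np.linalg.eigvals(big))) < 1.0

# Two modes: a nominal closed loop and a degraded (upset) mode.
A1 = np.array([[0.9, 0.1], [0.0, 0.8]])    # nominal
A2 = np.array([[1.1, 0.0], [0.2, 1.0]])    # unstable upset mode
P = np.array([[0.98, 0.02],                # upsets are rare and short
              [0.70, 0.30]])
print(mean_square_stable([A1, A2], P))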
Incremental update of electrostatic interactions in adaptively restrained particle simulations.
Edorh, Semeho Prince A; Redon, Stéphane
2018-04-06
The computation of long-range potentials is one of the demanding tasks in Molecular Dynamics. During the last decades, an inventive panoply of methods was developed to reduce the CPU time of this task. In this work, we propose a fast method dedicated to the computation of the electrostatic potential in adaptively restrained systems. We exploit the fact that, in such systems, only some particles are allowed to move at each timestep. We developed an incremental algorithm derived from a multigrid-based alternative to traditional Fourier-based methods. Our algorithm was implemented inside LAMMPS, a popular molecular dynamics simulation package. We evaluated the method on different systems. We showed that the new algorithm's computational complexity scales with the number of active particles in the simulated system, and is able to outperform the well-established Particle-Particle Particle-Mesh (P3M) method for adaptively restrained simulations. © 2018 Wiley Periodicals, Inc.
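The incremental idea, taken apart from the multigrid machinery, can be illustrated on a direct-sum Coulomb toy model: when only a few particles move per timestep, the total energy can be corrected by the terms involving those particles instead of being recomputed from scratch. A sketch under that simplifying assumption (no periodic boundaries, no mesh):

import numpy as np

def coulomb_energy(q, pos):
    """O(N^2) reference: total pairwise Coulomb energy (Gaussian units)."""
    n = len(q)
    return sum(q[i] * q[j] / np.linalg.norm(pos[i] - pos[j])
               for i in range(n) for j in range(i + 1, n))

def move_particle(q, pos, energy, i, new_pos):
    """Correct the total energy for one active particle's move: O(N)."""
    for j in range(len(q)):
        if j != i:
            energy += q[i] * q[j] * (1.0 / np.linalg.norm(new_pos - pos[j])
                                     - 1.0 / np.linalg.norm(pos[i] - pos[j]))
    pos[i] = new_pos
    return energy

rng = np.random.default_rng(0)
q = rng.choice([-1.0, 1.0], size=200)
pos = rng.uniform(0, 10, size=(200, 3))
E = coulomb_energy(q, pos)
# Only a handful of "active" particles move this timestep; applying the
# moves sequentially keeps the running energy exact.
for i in rng.choice(200, size=5, replace=False):
    E = move_particle(q, pos, E, i, pos[i] + rng.normal(0, 0.05, 3))
print(np.isclose(E, coulomb_energy(q, pos)))  # True: incremental == full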
Forensic Carving of Network Packets and Associated Data Structures
2011-01-01
establishment of prior connection activity and services used; identification of other systems present on the system's LAN or WLAN; geolocation of the host computer system; and cross-drive analysis. We show that network... Finally, our work in geolocation was assisted by geolocation databases created by companies such as Google (Google Mobile, 2011) and Skyhook
Novel schemes for measurement-based quantum computation.
Gross, D; Eisert, J
2007-06-01
We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state-based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In these computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.
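The simplest instance of measurement-based computation on a cluster-type resource is one-bit teleportation: entangle the input with a |+> qubit via CZ, measure the input in the X basis, and the output appears on the second qubit up to a byproduct operator determined by the random outcome. A small simulation of that textbook step (illustrative only, not the finitely correlated states of the paper):

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

def one_way_step(psi, outcome):
    """Measure qubit 1 of CZ(psi (x) |+>) in the X basis; return qubit 2.

    outcome -- 0 for |+>, 1 for |-> on the measured qubit
    """
    state = CZ @ np.kron(psi, plus)
    basis = plus if outcome == 0 else minus
    # Project qubit 1 onto the chosen X eigenstate, keep qubit 2.
    out = basis.conj() @ state.reshape(2, 2)
    return out / np.linalg.norm(out)

psi = np.array([0.6, 0.8j])                       # arbitrary input qubit
for m in (0, 1):
    result = one_way_step(psi, m)
    expected = (X if m else np.eye(2)) @ H @ psi  # X^m H |psi>
    overlap = abs(np.vdot(expected, result))      # 1 up to global phase
    print(m, np.isclose(overlap, 1.0))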
Design and implementation of spatial knowledge grid for integrated spatial analysis
NASA Astrophysics Data System (ADS)
Liu, Xiangnan; Guan, Li; Wang, Ping
2006-10-01
Supported by the spatial information grid (SIG), the spatial knowledge grid (SKG) for integrated spatial analysis utilizes middleware technology to construct the SIG computation environment and spatial information service system, develops spatial-entity-oriented spatial data organization technology, and carries out deep computation of spatial structure and spatial process patterns on the basis of the Grid GIS infrastructure, the spatial data grid, and the spatial information grid (in its specialized definition). At the same time, it realizes complex spatial pattern expression and spatial function process simulation by taking the spatial intelligent agent as the core of active spatial computation. Moreover, through the establishment of an interactive, immersive virtual geographical environment, complex spatial modeling, networked cooperative work, and knowledge-driven spatial community decision making are achieved. The framework of SKG is discussed systematically in this paper, and its implementation flow and key technologies are presented with examples of overlay analysis.
[HYGIENIC REGULATION OF THE USE OF ELECTRONIC EDUCATIONAL RESOURCES IN THE MODERN SCHOOL].
Stepanova, M I; Aleksandrova, I E; Sazanyuk, Z I; Voronova, B Z; Lashneva, L P; Shumkova, T V; Berezina, N O
2015-01-01
We studied the effect of academic work with a notebook computer and an interactive whiteboard on the functional state of schoolchildren. Using a complex of hygienic and physiological methods, we established that regulation of students' computer activity must take into account not only its duration but also its intensity. Design features of notebook computers were shown both to impede an optimal working posture in primary school children and to increase the risk of disorders of vision and the musculoskeletal system. The interactive whiteboard was found to activate performance and to produce favorable dynamics in indices of the functional state of the students, provided the optimal density of the academic study and the duration of its use were maintained. Safety regulations for schoolchildren's work with electronic resources in the educational process are determined.
A computational workflow for designing silicon donor qubits
Humble, Travis S.; Ericson, M. Nance; Jakowski, Jacek; ...
2016-09-19
Developing devices that can reliably and accurately demonstrate the principles of superposition and entanglement is an on-going challenge for the quantum computing community. Modeling and simulation offer attractive means of testing early device designs and establishing expectations for operational performance. However, the complex integrated material systems required by quantum device designs are not captured by any single existing computational modeling method. We examine the development and analysis of a multi-staged computational workflow that can be used to design and characterize silicon donor qubit systems with modeling and simulation. Our approach integrates quantum chemistry calculations with electrostatic field solvers to perform detailed simulations of a phosphorus dopant in silicon. We show how atomistic details can be synthesized into an operational model for the logical gates that define quantum computation in this particular technology. In conclusion, the resulting computational workflow realizes a design tool for silicon donor qubits that can help verify and validate current and near-term experimental devices.
Future of Assurance: Ensuring that a System is Trustworthy
NASA Astrophysics Data System (ADS)
Sadeghi, Ahmad-Reza; Verbauwhede, Ingrid; Vishik, Claire
Significant efforts are put in defining and implementing strong security measures for all components of the computing environment. It is equally important to be able to evaluate the strength and robustness of these measures and establish trust among the components of the computing environment based on parameters and attributes of these elements and best practices associated with their production and deployment. Today the inventory of techniques used for security assurance and to establish trust -- audit, security-conscious development process, cryptographic components, external evaluation -- is somewhat limited. These methods have their indisputable strengths and have contributed significantly to the advancement in the area of security assurance. However, shorter product and technology development cycles and the sheer complexity of modern digital systems and processes have begun to decrease the efficiency of these techniques. Moreover, these approaches and technologies address only some aspects of security assurance and, for the most part, evaluate assurance in a general design rather than an instance of a product. Additionally, various components of the computing environment participating in the same processes enjoy different levels of security assurance, making it difficult to ensure adequate levels of protection end-to-end. Finally, most evaluation methodologies rely on the knowledge and skill of the evaluators, making reliable assessments of trustworthiness of a system even harder to achieve. The paper outlines some issues in security assurance that apply across the board, with the focus on the trustworthiness and authenticity of hardware components, and evaluates current approaches to assurance.
Method for nonlinear optimization for gas tagging and other systems
Chen, Ting; Gross, Kenny C.; Wegerich, Stephan
1998-01-01
A method and system for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node, selecting initial locations for the remaining n-1 nodes using target gas tag compositions, generating a set of random gene pools with L nodes, and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools, using selected constraints to establish minimum-energy states that identify optimal gas tag nodes. Each energy is compared to a convergence threshold; upon identifying a gas tag node, the procedure continues to establish the next gas tag node until all remaining n nodes have been established.
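The Hopfield step in such a method assigns each candidate configuration an energy of the standard quadratic form E(s) = -1/2 s^T W s - b^T s, which the constraints shape so that low-energy states correspond to valid tag-node choices. A generic sketch of that scoring step; the weights, biases, and pool here are placeholders, not the patent's:

import numpy as np

def hopfield_energy(s, W, b):
    """Standard Hopfield energy E(s) = -1/2 s^T W s - b^T s."""
    return -0.5 * s @ W @ s - b @ s

rng = np.random.default_rng(1)
n = 8                                   # candidate tag-node slots (illustrative)
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                       # symmetric weights
np.fill_diagonal(W, 0)                  # no self-coupling
b = rng.normal(size=n)

# Score a random gene pool of candidate +/-1 configurations and keep the
# minimum-energy state, as the selection step in the method would.
pool = rng.choice([-1.0, 1.0], size=(50, n))
energies = np.array([hopfield_energy(s, W, b) for s in pool])
best = pool[np.argmin(energies)]
print(energies.min(), best)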
Method for nonlinear optimization for gas tagging and other systems
Chen, T.; Gross, K.C.; Wegerich, S.
1998-01-06
A method and system are disclosed for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node, selecting initial locations for the remaining n-1 nodes using target gas tag compositions, generating a set of random gene pools with L nodes, and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools, using selected constraints to establish minimum-energy states that identify optimal gas tag nodes. Each energy is compared to a convergence threshold; upon identifying a gas tag node, the procedure continues to establish the next gas tag node until all remaining n nodes have been established. 6 figs.
NASA Technical Reports Server (NTRS)
Daly, J. K.; Torian, J. G.
1979-01-01
An overview of studies conducted to establish the requirements for advanced subsystem analytical tools is presented. Modifications are defined for updating current computer programs used to analyze environmental control, life support, and electric power supply systems so that consumables for future advanced spacecraft may be managed.
Review of Collaborative Tools for Planning and Engineering
2007-10-01
including PDAs) and Operating Systems. In general, should support laptops, desktops, Windows OS, Mac OS, Palm OS, Windows CE, Blackberry, Sun... better), voting (to establish operating parameters), reactor design, wind tunnel simulation. Display same material on every computer, synchronisation
Laboratory services series: a master-slave manipulator maintenance program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenness, R. G.; Hicks, R. E.; Wicker, C. D.
1976-12-01
The volume of master-slave manipulator maintenance at Oak Ridge National Laboratory has necessitated the establishment of a repair facility and the organization of a specially trained group of craftsmen. Emphasis on cell containment requires the use of manipulator boots and the development of precise procedures for accomplishing the maintenance of 287 installed units. A very satisfactory computer-programmed maintenance system has been established at the Laboratory to provide an economical approach to preventive maintenance.
2011-09-01
concert with a physical attack. Additionally, the importance of preventive measures implemented by a social human network to counteract a cyber attack... integrity of the data stored on specific computers. This coordinated cyber attack would have been successful if not for the trusted social network... established by Mr. Hillar Aarelaid, head of the Estonian computer emergency response team (CERT). This social network consisted of Mr. Hillar Aarelaid
Certification in Structural Health Monitoring Systems
2011-09-01
validation [3,8]. This may be accomplished by computing the sum of squares of pure error (SSPE) and its associated squared correlation [3,8]. To compute... these values, a cross-validation sample must be established. In general, if the SSPE is high, the model does not predict well on independent data... plethora of cross-validation methods, some of which are more useful for certain models than others [3,8]. When possible, a disclosure of the SSPE
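For reference, the SSPE statistic named in this excerpt is the sum of squared prediction errors on a held-out cross-validation sample, often reported with the associated squared correlation. A minimal sketch with placeholder data:

import numpy as np

def sspe(y_true, y_pred):
    """Sum of squares of pure error on a cross-validation sample."""
    return np.sum((y_true - y_pred) ** 2)

def squared_correlation(y_true, y_pred):
    """Squared Pearson correlation between observations and predictions."""
    return np.corrcoef(y_true, y_pred)[0, 1] ** 2

# Placeholder cross-validation sample: observations vs. model predictions.
y_true = np.array([1.0, 1.4, 2.1, 2.9, 3.6])
y_pred = np.array([1.1, 1.5, 1.9, 3.0, 3.4])
print(sspe(y_true, y_pred), squared_correlation(y_true, y_pred))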
Market Survey and Analysis in Support of ASAS Computer-Based Training System Design
1988-11-01
development of a recommended strategy for incorporating CBT in the ASAS/ENSCE training system. Approach - In order to establish the state of the art and... a training system which will meet ASAS training requirements. Eleven subsystems are described in terms of their functional input to the overall... keyboard and displays used in actual operation are also used in training, maximizing the transfer effect from practice situations to actual system
A Computer-Based Nursing Diagnosis Consultant
Evans, Steven
1984-01-01
This consultant permits a nurse to enter patient signs and symptoms which are then interpreted by the system in order to relate them to well-established nursing-related dysfunctional patterns. The system attempts to confirm the pattern by soliciting additional patient information from the nurse. This process provides an educational prompt to the nurse, and the suggestions of the system also provide a clinical support tool that can be of practical value. As our testing hones the system and subtlety is added to the weighing of the evidence the nurse provides, it is expected that this tool will be a useful adjunct to computer-based nursing services in support of health care. This Nursing Diagnosis Consultant is yet another element in the COMMES family of consultants for health professionals.
Vehicle Fault Diagnose Based on Smart Sensor
NASA Astrophysics Data System (ADS)
Zhining, Li; Peng, Wang; Jianmin, Mei; Jianwei, Li; Fei, Teng
In a vehicle's traditional fault diagnosis system, we usually use a computer with an A/D card and many sensors connected to it. The disadvantage of this arrangement is that the sensors can hardly be shared with the control system and other systems, there are too many connecting lines, and electromagnetic compatibility (EMC) suffers. In this paper, a smart speed sensor, smart acoustic pressure sensor, smart oil pressure sensor, smart acceleration sensor, and smart order tracking sensor were designed to solve this problem. Over the CAN bus, these smart sensors, the fault diagnosis computer, and other computers can be connected to establish a network that monitors and controls the vehicle's diesel engine and other systems without any duplicate sensors. The hardware and software of the smart sensor system are introduced. The oil pressure, vibration, and acoustic signals are resampled at constant angle increments to eliminate the influence of rotation speed; after resampling, the signal in every working cycle can be averaged in the angle domain and subjected to further analyses such as order spectra.
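The constant-angle resampling the authors describe can be sketched as follows: integrate the (possibly varying) rotation speed to get shaft angle as a function of time, then interpolate the signal at uniform angle increments so that cycle averaging and order spectra become independent of speed. The function names and the synthetic run-up below are illustrative:

import numpy as np

def resample_by_angle(t, sig, speed_rpm, samples_per_rev=256):
    """Resample a time signal at constant shaft-angle increments.

    t         -- sample times (s)
    sig       -- vibration/pressure samples at those times
    speed_rpm -- instantaneous shaft speed at those times (rev/min)
    """
    # Shaft angle in revolutions, by trapezoidal integration of the speed.
    angle = np.concatenate(([0.0],
        np.cumsum(0.5 * (speed_rpm[1:] + speed_rpm[:-1]) / 60 * np.diff(t))))
    n_rev = int(angle[-1])
    uniform_angle = np.arange(n_rev * samples_per_rev) / samples_per_rev
    return uniform_angle, np.interp(uniform_angle, angle, sig)

# Synthetic run-up: speed ramps 600 -> 1200 rpm, signal locked to order 2.
t = np.linspace(0, 5, 50_000)
rpm = 600 + 120 * t
phase = 2 * np.pi * np.cumsum(rpm / 60 * np.gradient(t))  # shaft phase (rad)
sig = np.sin(2 * phase) + 0.1 * np.random.randn(t.size)   # 2nd order + noise

ang, sig_ang = resample_by_angle(t, sig, rpm)
# In the angle domain the 2nd order sits at a fixed 2 cycles/rev, so a
# plain FFT now yields an order spectrum independent of the speed ramp.
order_spectrum = np.abs(np.fft.rfft(sig_ang)) / sig_ang.size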
HyPEP FY06 Report: Models and Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOE report
2006-09-01
The Department of Energy envisions the next generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected the VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex, and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts, and its cost models will enable HyPEP to be well-suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. The FY-06 report includes a description of reference designs, methods used in this study, and models and computational strategies developed for the first year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory, respectively, will be benchmarked with HyPEP results in the following years.
Assessing the Impact of Educational Differences in HCI Design Practice
ERIC Educational Resources Information Center
Antunes, Pedro; Xiao, Lu; Pino, Jose A.
2014-01-01
Human-computer interaction (HCI) design generally involves collaboration from professionals in different disciplines. Trained in different design education systems, these professionals can have different conceptual understandings about design. Recognizing and identifying these differences are key issues for establishing shared design practices…
Regional Sustainability: The San Luis Basin Metrics Project
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...
Development of a Multidisciplinary Approach to Access Sustainability
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute the metrics. Moreover, individual metrics do not capture all aspects of a system that are relevan...
DOT National Transportation Integrated Search
2008-09-01
FSUTMS training is a major activity of the Systems Planning Office of the Florida Department of : Transportation (FDOT). The training aims to establish and maintain quality assurance for consistent : statewide modeling standards and provide up-to-dat...
Entropy generation method to quantify thermal comfort.
Boregowda, S C; Tiwari, S N; Chaturvedi, S K
2001-12-01
The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and entropy generation. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining the human thermal responses to different environmental conditions. The output from the simulation, which includes human thermal responses, and the input data, consisting of environmental conditions, are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values; the PMV values are generated by using the corresponding air temperatures and vapor pressures from the computer simulation in the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.
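The core quantity in this approach can be illustrated with a lumped model: heat Q flowing from the body at skin temperature T_skin to surroundings at T_env generates entropy S_gen = Q(1/T_env - 1/T_skin), which shrinks as conditions approach comfort. The numbers below are placeholders, not outputs of the KSU model:

def entropy_generation(q_watts, t_skin_k, t_env_k):
    """Entropy generation rate (W/K) for heat q flowing body -> environment.

    Lumped two-temperature model: S_gen = Q * (1/T_env - 1/T_skin).
    """
    return q_watts * (1.0 / t_env_k - 1.0 / t_skin_k)

# Illustrative sweep: ~100 W of heat loss, skin near 34 C (307.15 K).
for t_env_c in (18, 24, 30):
    s_gen = entropy_generation(100.0, 307.15, t_env_c + 273.15)
    print(f"T_env = {t_env_c} C  ->  S_gen = {s_gen:.4f} W/K")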
The Computer Aided Aircraft-design Package (CAAP)
NASA Technical Reports Server (NTRS)
Yalif, Guy U.
1994-01-01
The preliminary design of an aircraft is a complex, labor-intensive, and creative process. Since the 1970's, many computer programs have been written to help automate preliminary airplane design. Time and resource analyses have identified 'a substantial decrease in project duration with the introduction of an automated design capability'. Proof-of-concept studies have been completed which establish 'a foundation for a computer-based airframe design capability'. Unfortunately, today's design codes exist in many different languages on many, often expensive, hardware platforms. Through the use of a module-based system architecture, the Computer Aided Aircraft-design Package (CAAP) will eventually bring together many of the most useful features of existing programs. Through the use of an expert system, it will add a feature that could be described as indispensable to entry-level engineers and students: the incorporation of 'expert' knowledge into the automated design process.
Final Report. Center for Scalable Application Development Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
2014-10-26
The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.
Implementation of a Computerized Maintenance Management System
NASA Technical Reports Server (NTRS)
Shen, Yong-Hong; Askari, Bruce
1994-01-01
A premier Computerized Maintenance Management System (CMMS) has been established for the NASA Ames pressure component certification program. The CMMS takes full advantage of the latest computer technology and an SQL relational database to perform periodic services for vital pressure components. The Ames certification program is briefly described, and aspects of the CMMS implementation are discussed as they relate to the certification objectives.
Application for temperature and humidity monitoring of data center environment
NASA Astrophysics Data System (ADS)
Albert, Ş.; Truşcǎ, M. R. C.; Soran, M. L.
2015-12-01
Technology and computer science have developed rapidly in recent years. Most systems that use high technologies require special working conditions, so monitoring and control are very important. Temperature and humidity are important parameters in the operation of computer systems, both industrial and research, and maintaining them between certain values is essential to ensure proper functioning. Usually the temperature is maintained in the established range using an air-conditioning system, but the humidity is affected. In the present work we developed an application based on a board with its own firmware, called "AVR_NET_IO", using an ATmega32 microcontroller, for temperature and humidity monitoring in the INCDTIM Data Center. Temperature sensors were connected to this board to measure the temperature at different points inside and outside the Data Center. Humidity monitoring is performed using data from the integrated sensors of the air-conditioning system, thus correlating humidity with temperature variation. A software application (CM-1) was developed together with the hardware; it monitors and registers the temperature inside the Data Center and triggers an alarm when variations exceed the established limits by more than 3°C.
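A minimal sketch of the kind of monitoring loop such an application implements: poll each sensor, compare against the configured range, and raise an alarm when the reading departs from the limits by more than 3°C. The sensor-reading function, sensor names, and thresholds are hypothetical placeholders, not the CM-1 internals:

import random
import time

LIMITS = {"rack_a": (18.0, 24.0), "rack_b": (18.0, 24.0)}
ALARM_MARGIN = 3.0   # alarm when beyond the limits by more than 3 C

def read_sensor(name):
    # Placeholder for querying the AVR_NET_IO board over the network;
    # here we just simulate a reading near the normal range.
    return random.gauss(21.0, 2.5)

def check_once(alarm=print):
    for name, (low, high) in LIMITS.items():
        temp = read_sensor(name)
        if temp < low - ALARM_MARGIN or temp > high + ALARM_MARGIN:
            alarm(f"ALARM {name}: {temp:.1f} C outside "
                  f"[{low}, {high}] by more than {ALARM_MARGIN}")

for _ in range(10):          # in production this loop runs indefinitely
    check_once()
    time.sleep(0.1)          # poll interval (assumed; e.g., 60 s in practice)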
Definition and maintenance of a telemetry database dictionary
NASA Technical Reports Server (NTRS)
Knopf, William P. (Inventor)
2007-01-01
A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma-separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established, and the CSV files are ported to that computer system. This is followed by remote initiation of a database loading program. Upon completion of loading, a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
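The conversion step in this pipeline (spreadsheet workbook to CSV before loading) can be sketched with standard libraries. The directory names and per-sheet file layout below are assumptions for illustration, not the patented system's specifics:

import csv
from pathlib import Path

from openpyxl import load_workbook  # third-party: pip install openpyxl

def workbook_to_csv(xlsx_path, out_dir):
    """Write each worksheet of a telemetry workbook to its own CSV file."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    wb = load_workbook(xlsx_path, read_only=True, data_only=True)
    for ws in wb.worksheets:
        out_path = out_dir / f"{Path(xlsx_path).stem}_{ws.title}.csv"
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            for row in ws.iter_rows(values_only=True):
                writer.writerow(row)
    return sorted(out_dir.glob("*.csv"))

# Staging directory where error-checked workbooks await the loading process.
for xlsx in Path("staging/validated").glob("*.xlsx"):
    for csv_file in workbook_to_csv(xlsx, "staging/csv"):
        print("ready to load:", csv_file)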
Computational Toxicology at the US EPA | Science Inventory ...
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in America's air, water, and hazardous-waste sites. The ORD Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the EPA Science to Achieve Results (STAR) program. Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast™) and exposure (ExpoCast™), and creating virtual liver (v-Liver™) and virtual embryo (v-Embryo™) systems models. The models and underlying data are being made publicly available.
NASA Astrophysics Data System (ADS)
Bogusz, Michael
1993-01-01
The need for a systematic methodology for the analysis of aircraft electromagnetic compatibility (EMC) problems is examined. The available computer aids used in aircraft EMC analysis are assessed, and a theoretical basis is established for the complex algorithms which identify and quantify electromagnetic interactions. An overview is presented of one particularly well-established aircraft antenna-to-antenna EMC analysis code, the Aircraft Inter-Antenna Propagation with Graphics (AAPG) Version 07 software. The specific new algorithms created to compute cone geodesics and their associated path losses and to graph the physical coupling path are discussed. These algorithms are validated against basic principles. Loss computations apply the uniform geometrical theory of diffraction and are subsequently compared to measurement data. The increased modelling and analysis capabilities of the newly developed AAPG Version 09 are compared to those of Version 07. Several models of real aircraft, namely the Electronic Systems Trainer Challenger, are generated and provided as a basis for this preliminary comparative assessment. Issues such as software reliability, algorithm stability, and quality of hardcopy output are also discussed.
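The cone-geodesic computation mentioned here has a classical closed form: unroll the cone into a plane sector, where geodesics become straight lines. A sketch under that textbook assumption (not AAPG's actual code):

import math

def cone_geodesic_length(s1, s2, delta_phi, half_angle):
    """Geodesic distance between two points on a cone's surface.

    s1, s2     -- slant distances of the points from the apex
    delta_phi  -- azimuthal separation around the cone axis (radians)
    half_angle -- cone half-angle at the apex (radians)

    Unrolling the cone maps azimuth phi to sector angle phi*sin(half_angle),
    after which the geodesic is a straight chord (law of cosines). Valid
    while the unrolled angle stays below pi.
    """
    theta = delta_phi * math.sin(half_angle)
    return math.sqrt(s1 * s1 + s2 * s2 - 2 * s1 * s2 * math.cos(theta))

# Two antennas on a nose-cone-like surface, 120 degrees apart in azimuth.
print(cone_geodesic_length(2.0, 3.5, math.radians(120), math.radians(15)))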
A Higher Order Iterative Method for Computing the Drazin Inverse
Soleymani, F.; Stanimirović, Predrag S.
2013-01-01
A method with high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method could be used for finding the Drazin inverse. The application of the scheme on large sparse test matrices alongside the use in preconditioning of linear system of equations will be presented to clarify the contribution of the paper. PMID:24222747
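For orientation, the classical second-order member of this family of iterations is the Newton-Schulz scheme X <- X(2I - AX); the paper's contribution is a higher-order variant and its extension to the Drazin inverse. A sketch of the classical scheme only, not the paper's method:

import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Second-order iterative approximate inverse: X <- X (2I - A X).

    Converges quadratically once the residual ||I - A X|| < 1;
    X0 = A^T / (||A||_1 ||A||_inf) is a standard safe starting guess.
    """
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = newton_schulz_inverse(A)
print(np.allclose(X, np.linalg.inv(A)))  # True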
The NCOREL computer program for 3D nonlinear supersonic potential flow computations
NASA Technical Reports Server (NTRS)
Siclari, M. J.
1983-01-01
An innovative computational technique (NCOREL) was established for the treatment of three dimensional supersonic flows. The method is nonlinear in that it solves the nonconservative finite difference analog of the full potential equation and can predict the formation of supercritical cross flow regions, embedded and bow shocks. The method implicitly computes a conical flow at the apex (R = 0) of a spherical coordinate system and uses a fully implicit marching technique to obtain three dimensional cross flow solutions. This implies that the radial Mach number must remain supersonic. The cross flow solutions are obtained by using type dependent transonic relaxation techniques with the type dependency linked to the character of the cross flow velocity (i.e., subsonic/supersonic). The spherical coordinate system and marching on spherical surfaces is ideally suited to the computation of wing flows at low supersonic Mach numbers due to the elimination of the subsonic axial Mach number problems that exist in other marching codes that utilize Cartesian transverse marching planes.
Kominami, Yoko; Yoshida, Shigeto; Tanaka, Shinji; Sanomura, Yoji; Hirakawa, Tsubasa; Raytchev, Bisser; Tamaki, Toru; Koide, Tetsusi; Kaneda, Kazufumi; Chayama, Kazuaki
2016-03-01
It is necessary to establish cost-effective examinations and treatments for diminutive colorectal tumors that consider the treatment risk and surveillance interval after treatment. The Preservation and Incorporation of Valuable Endoscopic Innovations (PIVI) committee of the American Society for Gastrointestinal Endoscopy published a statement recommending the establishment of endoscopic techniques that practice the resect and discard strategy. The aims of this study were to evaluate whether our newly developed real-time image recognition system can predict histologic diagnoses of colorectal lesions depicted on narrow-band imaging and to satisfy some problems with the PIVI recommendations. We enrolled 41 patients who had undergone endoscopic resection of 118 colorectal lesions (45 nonneoplastic lesions and 73 neoplastic lesions). We compared the results of real-time image recognition system analysis with that of narrow-band imaging diagnosis and evaluated the correlation between image analysis and the pathological results. Concordance between the endoscopic diagnosis and diagnosis by a real-time image recognition system with a support vector machine output value was 97.5% (115/118). Accuracy between the histologic findings of diminutive colorectal lesions (polyps) and diagnosis by a real-time image recognition system with a support vector machine output value was 93.2% (sensitivity, 93.0%; specificity, 93.3%; positive predictive value (PPV), 93.0%; and negative predictive value, 93.3%). Although further investigation is necessary to establish our computer-aided diagnosis system, this real-time image recognition system may satisfy the PIVI recommendations and be useful for predicting the histology of colorectal tumors. Copyright © 2016 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
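The classification core of such a system, an SVM mapping image features to neoplastic vs. nonneoplastic, can be sketched with a standard library. The synthetic features below merely stand in for texture descriptors extracted from narrow-band images, and the class sizes echo the study's 45/73 split; nothing here reproduces the authors' actual pipeline:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Synthetic stand-ins for NBI texture features:
# class 0 = nonneoplastic (45 lesions), class 1 = neoplastic (73 lesions).
X = np.vstack([rng.normal(0.0, 1.0, (45, 16)),
               rng.normal(0.8, 1.0, (73, 16))])
y = np.array([0] * 45 + [1] * 73)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
tp = np.sum((pred == 1) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0))
print("accuracy:", (tp + tn) / len(y_te))
print("sensitivity:", tp / np.sum(y_te == 1),
      "specificity:", tn / np.sum(y_te == 0))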
Study of an engine flow diverter system for a large scale ejector powered aircraft model
NASA Technical Reports Server (NTRS)
Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.
1981-01-01
Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4 scale fighter model equipped with YJ-79 engines. Model constraints were identified and cost-effective limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.
Reference clock parameters for digital communications systems applications
NASA Technical Reports Server (NTRS)
Kartaschoff, P.
1981-01-01
The basic parameters relevant to the design of network timing systems describe the random and systematic time departures of the system elements, i.e., master (or reference) clocks, transmission links, and other clocks controlled over the links. The quantitative relations between these parameters were established and illustrated by means of numerical examples based on available measured data. The examples were limited to a simple PLL control system but the analysis can eventually be applied to more sophisticated systems at the cost of increased computational effort.
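A toy version of the PLL-controlled clock analysis: a slave clock's time error is measured over a noisy link and a first-order loop steers it toward the master. The gain, offset, and noise level are illustrative only, not the paper's measured data:

import numpy as np

def simulate_pll_clock(n_steps=10_000, gain=0.05,
                       freq_offset=1e-8, link_noise=50e-9, rng=None):
    """First-order time-locked loop: x[k+1] = x[k] + y - g * (x[k] + v[k]).

    x -- slave clock time error vs. the master (s)
    y -- fixed fractional frequency offset accumulated per step
    v -- measurement noise contributed by the transmission link
    """
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n_steps)
    for k in range(n_steps - 1):
        measured = x[k] + rng.normal(0, link_noise)   # noisy comparison
        x[k + 1] = x[k] + freq_offset - gain * measured
    return x

x = simulate_pll_clock()
# Steady state: mean error ~ freq_offset/gain; jitter is set by gain * noise.
print(f"mean {x[5000:].mean():.2e} s, std {x[5000:].std():.2e} s")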
NASA Technical Reports Server (NTRS)
1972-01-01
Laboratory simulations of three concepts, based on maximum use of available off-the-shelf hardware elements, are described. The concepts are a stereo-foveal-peripheral TV system with symmetric steroscopic split-image registration and 90 deg counter rotation; a computer assisted model control system termed the trajectory following control system; and active manipulator damping. It is concluded that the feasibility of these concepts is established.
Head-mounted display systems and the special operations soldier
NASA Astrophysics Data System (ADS)
Loyd, Rodney B.
1998-08-01
In 1997, the Boeing Company, working with DARPA under the Smart Modules program and the US Army Soldier Systems Command, embarked on an advanced research and development program to develop a wearable computer system tailored for use with soldiers of the US Special Operations Command. The 'special operations combat management system' is a rugged advanced wearable tactical computer, designed to provide the special operations soldier with enhanced situation awareness and battlefield information capabilities. Many issues must be considered during the design of wearable computers for a combat soldier, including the system weight, placement on the body with respect to other equipment, user interfaces and display system characteristics. During the initial feasibility study for the system, the operational environment was examined and potential users were interviewed to establish the proper display solution for the system. Many display system requirements resulted, such as head or helmet mounting, Night Vision Goggle compatibility, minimal visible light emissions, environmental performance and even the need for handheld or other 'off the head' type display systems. This paper will address these issues and other end user requirements for display systems for applications in the harsh and demanding environment of the Special Operations soldier.
Robotic tape library system level testing at NSA: Present and planned
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1994-01-01
In the present era of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, significant expansion of high performance networking, and the increased complexity of applications data sets, the requirement for high performance, large capacity, reliable, secure, and most of all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on ever increasing importance for our processing centers' ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries that can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be extended this year and next at the server class or mid range storage systems.
Regional sustainable environmental management: sustainability metrics research for decision makers
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...
Development of a multidisciplinary approach to assess regional sustainability
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute the metrics. Moreover, individual metrics do not capture all aspects of a system that are relev...
Computational logic: its origins and applications.
Paulson, Lawrence C
2018-02-01
Computational logic is the use of computers to establish facts in a logical formalism. Originating in nineteenth century attempts to understand the nature of mathematical reasoning, the subject now comprises a wide variety of formalisms, techniques and technologies. One strand of work follows the 'logic for computable functions (LCF) approach' pioneered by Robin Milner, where proofs can be constructed interactively or with the help of users' code (which does not compromise correctness). A refinement of LCF, called Isabelle, retains these advantages while providing flexibility in the choice of logical formalism and much stronger automation. The main application of these techniques has been to prove the correctness of hardware and software systems, but increasingly researchers have been applying them to mathematics itself.
Pricing the Computing Resources: Reading Between the Lines and Beyond
NASA Technical Reports Server (NTRS)
Nakai, Junko; Veronico, Nick (Editor); Thigpen, William W. (Technical Monitor)
2001-01-01
Distributed computing systems have the potential to increase the usefulness of existing facilities for computation without adding anything physical, but that is realized only when necessary administrative features are in place. In a distributed environment, the best match is sought between a computing job to be run and a computer to run the job (global scheduling), which is a function that has not been required by conventional systems. Viewing the computers as 'suppliers' and the users as 'consumers' of computing services, markets for computing services/resources have been examined as one of the most promising mechanisms for global scheduling. We first establish why economics can contribute to scheduling. We further define the criterion for a scheme to qualify as an application of economics. Many studies to date have claimed to have applied economics to scheduling. If their scheduling mechanisms do not utilize economics, contrary to their claims, their favorable results do not contribute to the assertion that markets provide the best framework for global scheduling. We examine the well-known scheduling schemes, which concern pricing and markets, using our criterion of what application of economics is. Our conclusion is that none of the schemes examined makes full use of economics.
Computational Science in Armenia (Invited Talk)
NASA Astrophysics Data System (ADS)
Marandjian, H.; Shoukourian, Yu.
This survey is devoted to the development of informatics and computer science in Armenia. The results in theoretical computer science (algebraic models, solutions to systems of general-form recursive equations, methods of coding theory, pattern recognition, and image processing) constitute the theoretical basis for developing problem-solving-oriented environments. Examples include a synthesizer of optimized distributed recursive programs, software tools for cluster-oriented implementations of two-dimensional cellular automata, and a grid-aware web interface with advanced service trading for linear algebra calculations. In the direction of solving scientific problems that require high-performance computing resources, completed projects include physics (parallel computing of complex quantum systems), astrophysics (the Armenian virtual laboratory), biology (a molecular dynamics study of the human red blood cell membrane), and meteorology (implementing and evaluating the Weather Research and Forecasting model for the territory of Armenia). The overview also notes that the Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia has established a scientific and educational infrastructure uniting the computing clusters of scientific and educational institutions of the country and provides the scientific community with access to local and international computational resources, which strongly supports computational science in Armenia.
77 FR 47641 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
...In accordance with the requirements of the Privacy Act of 1974, as amended (Privacy Act), the Federal Housing Finance Agency (FHFA) gives notice of and requests comments on the proposed revision of one existing system of records, the establishment of four new systems of records, and the removal of three existing systems of records notices. The revised existing system of records is ``Fraud Reporting System'' (FHFA-6). The proposed systems of records are: ``Visitor Badge, Employee and Contractor Personnel Day Pass, and Trackable Mail System'' (FHFA-17), ``Reasonable Accommodation Information System'' (FHFA-18), ``Computer Systems Activity and Access Records System'' (FHFA-19), and ``Telecommunications System'' (FHFA-20). In addition, upon the effective date of this notice, the Office of Federal Housing Enterprise Oversight systems of records notices, ``OFHEO-10 Reasonable Accommodation Information System'' (73 FR 19236 (April 9, 2008)), ``OFHEO-08 Computer Systems Activity and Access Records System'' (71 FR 6085 (February 6, 2006)), and ``OFHEO-09 Telecommunications System'' (71 FR 39123 (July 11, 2006)) will be removed.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
NASA Technical Reports Server (NTRS)
1995-01-01
The Formal Methods Specification and Verification Guidebook for Software and Computer Systems describes a set of techniques called Formal Methods (FM), and outlines their use in the specification and verification of computer systems and software. Development of increasingly complex systems has created a need for improved specification and verification techniques. NASA's Safety and Mission Quality Office has supported the investigation of techniques such as FM, which are now an accepted method for enhancing the quality of aerospace applications. The guidebook provides information for managers and practitioners who are interested in integrating FM into an existing systems development process. Information includes technical and administrative considerations that must be addressed when establishing the use of FM on a specific project. The guidebook is intended to aid decision makers in the successful application of FM to the development of high-quality systems at reasonable cost. This is the first volume of a planned two-volume set. The current volume focuses on administrative and planning considerations for the successful application of FM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Petrie, L.M.; Westfall, R.M.
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.
NASA Technical Reports Server (NTRS)
1974-01-01
The work is described which was accomplished during the investigation of the application of dry-tuned gimbal gyroscopes to strapdown navigation systems. A conventional strapdown configuration, employing analog electronics in conjunction with digital attitude and navigation computation, was examined using various levels of redundancy and both orthogonal and nonorthogonal sensor orientations. It is concluded that the cost and reliability performance constraints which had been established could not be met simultaneously with such a system. This conclusion led to the examination of an alternative system configuration which utilizes an essentially new strapdown system concept. This system employs all-digital signal processing in conjunction with the newly-developed large scale integration (LSI) electronic packaging techniques and a new two-degree-of-freedom dry tuned-gimbal instrument which is capable of providing both angular rate and acceleration information. Such a system is capable of exceeding the established performance goals.
1990-12-01
...Equipment Appropriation. ADA, a computer system developed and maintained by the Office of Aviation Policy and Plans, facilitates APS-I processing... Program Plan. The primary benefit of LLWAS, TDWR, and modified airport surveillance radar is reduced risk and expected incidence of wind shear-related...
Flexible session management in a distributed environment
NASA Astrophysics Data System (ADS)
Miller, Zach; Bradley, Dan; Tannenbaum, Todd; Sfiligoi, Igor
2010-04-01
Many secure communication libraries used by distributed systems, such as SSL, TLS, and Kerberos, fail to make a clear distinction between the authentication, session, and communication layers. In this paper we introduce CEDAR, the secure communication library used by the Condor High Throughput Computing software, and present the advantages to a distributed computing system resulting from CEDAR's separation of these layers. Regardless of the authentication method used, CEDAR establishes a secure session key, which has the flexibility to be used for multiple capabilities. We demonstrate how a layered approach to security sessions can avoid round-trips and latency inherent in network authentication. The creation of a distinct session management layer allows for optimizations to improve scalability by way of delegating sessions to other components in the system. This session delegation creates a chain of trust that reduces the overhead of establishing secure connections and enables centralized enforcement of system-wide security policies. Additionally, secure channels based upon UDP datagrams are often overlooked by existing libraries; we show how CEDAR's structure accommodates this as well. As an example of the utility of this work, we show how the use of delegated security sessions and other techniques inherent in CEDAR's architecture enables US CMS to meet their scalability requirements in deploying Condor over large-scale, wide-area grid systems.
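To make the layering concrete, here is a minimal Python sketch of the session-reuse and delegation ideas this abstract describes. It illustrates the general technique only; the class and method names are hypothetical, not CEDAR's actual API.

    import hmac, hashlib, os

    class SessionCache:
        """Hypothetical sketch: authenticate a peer once, cache a
        symmetric session key, and reuse or delegate it afterwards."""

        def __init__(self):
            self._keys = {}                      # peer id -> session key

        def establish(self, peer, authenticate):
            # The expensive step (e.g. a Kerberos- or TLS-style handshake)
            # runs only on first contact with this peer.
            if peer not in self._keys:
                authenticate(peer)
                self._keys[peer] = os.urandom(32)
            return self._keys[peer]

        def delegate(self, peer, other):
            # Hand the established session to another component, forming
            # a chain of trust with no additional round-trips.
            other._keys[peer] = self._keys[peer]

        def tag(self, peer, payload: bytes) -> bytes:
            # Authenticate an individual message with the cached key;
            # this works equally well for connectionless UDP datagrams.
            return hmac.new(self._keys[peer], payload, hashlib.sha256).digest()

The point of the separation is visible in establish(): the authentication cost is paid once per peer, after which every message, TCP or UDP, rides on the cached session.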
COMOC: Three dimensional boundary region variant, programmer's manual
NASA Technical Reports Server (NTRS)
Orzechowski, J. A.; Baker, A. J.
1974-01-01
The three-dimensional boundary region variant of the COMOC computer program system solves the partial differential equation system governing certain three-dimensional flows of a viscous, heat conducting, multiple-species, compressible fluid including combustion. The solution is established in physical variables, using a finite element algorithm for the boundary value portion of the problem description in combination with an explicit marching technique for the initial value character. The computational lattice may be arbitrarily nonregular, and boundary condition constraints are readily applied. The theoretical foundation of the algorithm, a detailed description on the construction and operation of the program, and instructions on utilization of the many features of the code are presented.
PC-assisted translation of photogrammetric papers
NASA Astrophysics Data System (ADS)
Güthner, Karlheinz; Peipe, Jürgen
A PC-based system for machine translation of photogrammetric papers from the English into the German language and vice versa is described. The computer-assisted translating process is not intended to create a perfect interpretation of a text but to produce a rough rendering of the content of a paper. Starting with the original text, a continuous data flow is effected into the translated version by means of hardware (scanner, personal computer, printer) and software (OCR, translation, word processing, DTP). An essential component of the system is a photogrammetric microdictionary which is being established at present. It is based on several sources, including e.g. the ISPRS Multilingual Dictionary.
Data Reprocessing on Worldwide Distributed Systems
NASA Astrophysics Data System (ADS)
Wicke, Daniel
The DØ experiment faces many challenges in terms of enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001) all Monte-Carlo simulations for the experiment have been produced at remote systems. For data analysis, a system of regional analysis centers (RACs) was established which supplies the associated institutes with the data. This structure, which is similar to the tiered structure foreseen for the LHC, was used in Fall 2003 to reprocess all DØ data with a much improved version of the reconstruction software. This makes DØ the first running experiment to have implemented and operated all important computing tasks of a high energy physics experiment on systems distributed worldwide.
Zhang, Wei; Ding, Dong-Sheng; Dong, Ming-Xin; Shi, Shuai; Wang, Kai; Liu, Shi-Long; Li, Yan; Zhou, Zhi-Yuan; Shi, Bao-Sen; Guo, Guang-Can
2016-11-14
Entanglement in multiple degrees of freedom has many benefits over entanglement in a single one. The former enables quantum communication with higher channel capacity and more efficient quantum information processing and is compatible with diverse quantum networks. Establishing multi-degree-of-freedom entangled memories is not only vital for high-capacity quantum communication and computing, but also promising for enhanced violations of nonlocality in quantum systems. However, there have as yet been no reports of the experimental realization of multi-degree-of-freedom entangled memories. Here we experimentally established hyper- and hybrid entanglement in multiple degrees of freedom, including path (K-vector) and orbital angular momentum, between two separated atomic ensembles by using quantum storage. The results are promising for achieving quantum communication and computing with many degrees of freedom.
Autonomous spacecraft maintenance study group
NASA Technical Reports Server (NTRS)
Marshall, M. H.; Low, G. D.
1981-01-01
A plan to incorporate autonomous spacecraft maintenance (ASM) capabilities into Air Force spacecraft by 1989 is outlined. It includes the successful operation of the spacecraft without ground operator intervention for extended periods of time. Mechanisms, along with a fault tolerant data processing system (including a nonvolatile backup memory) and an autonomous navigation capability, are needed to replace the routine servicing that is presently performed by the ground system. The state-of-the-art fault handling capabilities of various spacecraft and computers are described, and a set of conceptual design requirements needed to achieve ASM is established. An implementation plan for the near-term technology development needed for an ASM proof-of-concept demonstration by 1985, and a research agenda addressing long-range academic research for an advanced ASM system for the 1990s, are also established.
ERIC Educational Resources Information Center
Garner, Sue; Pierce, Robyn
2016-01-01
Although research shows that Computer Algebra Systems offer pedagogical opportunities, more than a decade later some teachers are reluctant to change established practices. In 2002, the University of Melbourne in Australia launched a research project to investigate implementation of a senior mathematics course in which students could use a…
New Systems to Beat Swimming Program Frustration.
ERIC Educational Resources Information Center
Simpson, Scott J.
1980-01-01
A swimming program with effective student placement has been designed in Colorado Springs. The beginner level established by the American Red Cross is further broken down to accommodate children under the age of five. Use of computer facilities will assist in accurate program enrollment/completion records. (CJ)
Key Facts about Higher Education in Washington
ERIC Educational Resources Information Center
Washington Higher Education Coordinating Board, 2011
2011-01-01
Since its establishment in the 1860s, Washington's higher education system has evolved rapidly to meet a myriad of state needs in fields as diverse as agriculture, bioscience, chemistry, environmental sciences, engineering, medicine, law, business, computer science, and architecture. Today, higher education, like other vital state functions, faces…
An integrated system for land resources supervision based on the IoT and cloud computing
NASA Astrophysics Data System (ADS)
Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie
2017-01-01
Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.
Master--slave manipulators and remote maintenance at the Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenness, R.G.; Wicker, C.D.
1975-01-01
The volume of master-slave manipulator maintenance at Oak Ridge National Laboratory has necessitated the establishment of a repair facility and the organization of a specially trained group of craftsmen. Emphasis on cell containment requires the use of manipulator boots and the development of precise procedures for accomplishing the maintenance of 283 installed units. To provide the most economical type of preventive maintenance, a very satisfactory computer-programmed maintenance system has been established at the Laboratory. (auth)
Converting laserdisc video to digital video: a demonstration project using brain animations.
Jao, C S; Hier, D B; Brint, S U
1995-01-01
Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.
Open source system OpenVPN in a function of Virtual Private Network
NASA Astrophysics Data System (ADS)
Skendzic, A.; Kovacic, B.
2017-05-01
Use of Virtual Private Networks (VPNs) can establish a high security level in network communication. VPN technology enables highly secure networking over distributed or public network infrastructure, applying its own security and management rules inside networks. It can be set up over different communication channels, such as the Internet or a separate ISP communication infrastructure. A VPN creates a secure communication channel over a public network between two endpoints (computers). OpenVPN is an open source software product under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses dedicated security protocols and 256-bit encryption, and it is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work gives a review of VPN technology with a special focus on OpenVPN, together with a comparison and the financial benefits of using open source VPN software in a business environment.
Anyonic braiding in optical lattices
Zhang, Chuanwei; Scarola, V. W.; Tewari, Sumanta; Das Sarma, S.
2007-01-01
Topological quantum states of matter, both Abelian and non-Abelian, are characterized by excitations whose wavefunctions undergo nontrivial statistical transformations as one excitation is moved (braided) around another. Topological quantum computation proposes to use the topological protection and the braiding statistics of a non-Abelian topological state to perform quantum computation. The enormous technological prospect of topological quantum computation provides new motivation for experimentally observing a topological state. Here, we explicitly work out a realistic experimental scheme to create and braid the Abelian topological excitations in the Kitaev model built on a tunable robust system, a cold atom optical lattice. We also demonstrate how to detect the key feature of these excitations: their braiding statistics. Observation of this statistics would directly establish the existence of anyons, quantum particles that are neither fermions nor bosons. In addition to establishing topological matter, the experimental scheme we develop here can also be adapted to a non-Abelian topological state, supported by the same Kitaev model but in a different parameter regime, to eventually build topologically protected quantum gates. PMID:18000038
Resonant transition-based quantum computation
NASA Astrophysics Data System (ADS)
Chiang, Chen-Fu; Hsieh, Chang-Yu
2017-05-01
In this article we assess a novel quantum computation paradigm based on the resonant transition (RT) phenomenon commonly associated with atomic and molecular systems. We thoroughly analyze the intimate connections between the RT-based quantum computation and the well-established adiabatic quantum computation (AQC). Both quantum computing frameworks encode solutions to computational problems in the spectral properties of a Hamiltonian and rely on the quantum dynamics to obtain the desired output state. We discuss how one can adapt any adiabatic quantum algorithm to a corresponding RT version and the two approaches are limited by different aspects of Hamiltonians' spectra. The RT approach provides a compelling alternative to the AQC under various circumstances. To better illustrate the usefulness of the novel framework, we analyze the time complexity of an algorithm for 3-SAT problems and discuss straightforward methods to fine tune its efficiency.
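As a reminder of the physics the paradigm leans on, the following short numpy/scipy sketch simulates a resonantly driven two-level system in the rotating frame, where a pi pulse transfers the population completely into the target state. This is a textbook illustration of the resonant-transition phenomenon, not the algorithm of the paper.

    import numpy as np
    from scipy.linalg import expm

    Omega = 0.2                                    # Rabi frequency of the drive
    H = 0.5 * Omega * np.array([[0.0, 1.0],
                                [1.0, 0.0]])       # on-resonance, rotating frame
    psi0 = np.array([1.0, 0.0], dtype=complex)     # start in the initial state
    t_pi = np.pi / Omega                           # duration of a pi pulse
    psi = expm(-1j * H * t_pi) @ psi0              # Schroedinger evolution
    print(abs(psi[1]) ** 2)                        # ~1.0: full transfer at resonance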
1984-04-01
[Table and figure residue: distributions of pilot academic degrees (Architecture, Computer Science, Math, Meteorology, Physics; Electrical and Aerospace Engineering), and the figure titles 'Pilots with Technical Degrees by Major Weapon System', 'Pilots with Technical Degrees by Category', and 'Regression'.]
1991-09-01
constant data into the gaining base’s computer records. Among the data elements to be loaded, the 1XT434 image contains the level detail effective date...the mission support effective date, and the PBR override (19:19-203). In conjunction with the 1XT434, the Mission Change Parameter Image (Constant...the gaining base (19:19-208). The level detail effective date establishes the date the MCDDFR and MCDDR "are considered by the requirements computation
1987-09-24
...Some concerns take on a rating (e.g., 'Zl') that adequately reflects how well the system provides each service (Minimum, Fair, Good); however, in specific cases, ratings such as "present" or "approved"... increased significance in the network... how well a specific approach may be expected to achieve... established thresholds, and for detecting the fact that access to a... Supportive policies include identification and authentication policies as well as...
Computer-Assisted Drug Formulation Design: Novel Approach in Drug Delivery.
Metwally, Abdelkader A; Hathout, Rania M
2015-08-03
We hypothesize that, by using several chemo/bio informatics tools and statistical computational methods, we can study and then predict the behavior of several drugs in model nanoparticulate lipid and polymeric systems. Accordingly, two different matrices comprising tripalmitin, a core component of solid lipid nanoparticles (SLN), and PLGA were first modeled using molecular dynamics simulation, and then the interaction of drugs with these systems was studied by computing the free energy of binding using the molecular docking technique. These binding energies were then correlated with the loadings of these drugs in the nanoparticles, obtained experimentally from the available literature. The obtained relations were verified experimentally in our laboratory using curcumin as a model drug. Artificial neural networks were then used to establish the effect of the drugs' molecular descriptors on the binding energies and hence on the drug loading. The results showed that the soft computing methods used can provide an accurate method for in silico prediction of drug loading in tripalmitin-based and PLGA nanoparticulate systems. These results have the prospect of being applied to other nano drug-carrier systems, and this integrated statistical and chemo/bio informatics approach offers a new toolbox to formulation science by proposing what we present as computer-assisted drug formulation design (CADFD).
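A hedged sketch of the statistical pipeline this abstract describes, with made-up numbers standing in for the docking energies, experimental loadings and molecular descriptors; scikit-learn's MLPRegressor stands in for the artificial neural network.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.neural_network import MLPRegressor

    # Hypothetical values: docked binding energies (kcal/mol) and the
    # corresponding experimental drug loadings (% w/w) from the literature.
    binding_energy = np.array([-6.2, -7.8, -5.1, -8.4, -6.9])
    drug_loading = np.array([3.1, 5.0, 2.2, 5.9, 4.0])

    # Step 1: correlate binding energy with observed loading.
    r, p = pearsonr(binding_energy, drug_loading)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")

    # Step 2: relate molecular descriptors (here logP, molecular weight,
    # H-bond donors) to the binding energies with a small neural network.
    descriptors = np.array([[2.1, 310, 1], [3.4, 420, 2], [1.0, 250, 0],
                            [4.0, 480, 3], [2.8, 365, 1]])
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                       random_state=0).fit(descriptors, binding_energy)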
Computer-aided video exposure monitoring.
Walsh, P T; Clark, R D; Flaherty, S; Gentry, S J
2000-01-01
A computer-aided video exposure monitoring system was used to record exposure information. The system comprised a handheld camcorder, portable video cassette recorder, radio-telemetry transmitter/receiver, handheld or notebook computers for remote data logging, photoionization gas/vapor detectors (PIDs), and a personal aerosol monitor. The following workplaces were surveyed using the system: dry cleaning establishments--monitoring tetrachloroethylene in the air and in breath; printing works--monitoring white spirit type solvent; tire manufacturing factory--monitoring rubber fume; and a slate quarry--monitoring respirable dust and quartz. The system based on the handheld computer, in particular, simplified the data acquisition process compared with earlier systems in use by our laboratory. The equipment is more compact and easier to operate, and allows more accurate calibration of the instrument reading on the video image. Although a variety of data display formats are possible, the best format for videos intended for educational and training purposes was the review-preview chart superimposed on the video image of the work process. Recommendations for reducing exposure by engineering or by modifying work practice were possible through use of the video exposure system in the dry cleaning and tire manufacturing applications. The slate quarry work illustrated how the technique can be used to test ventilation configurations quickly to see their effect on the worker's personal exposure.
EPA/ECLSS consumables analyses for the Spacelab 1 flight
NASA Technical Reports Server (NTRS)
Steines, G. J.; Pipher, M. D.
1976-01-01
The results of electrical power system (EPS) and environmental control/life support system (ECLSS) consumables analyses of the Spacelab 1 mission are presented. The analyses were performed to assess the capability of the orbiter systems to support the proposed mission and to establish the various non-propulsive consumables requirements. The EPS analysis was performed using the shuttle electrical power system (SEPS) analysis computer program. The ECLSS analysis was performed using the shuttle environmental consumables requirements evaluation tool (SECRET) program.
Managing computer-controlled operations
NASA Technical Reports Server (NTRS)
Plowden, J. B.
1985-01-01
A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.
AstroGrid-D: Grid technology for astronomical science
NASA Astrophysics Data System (ADS)
Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve
2011-02-01
We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines with a set of commands as well as software interfaces. It allows simple use of computer and storage facilities and the scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in Astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). We show from these examples how grid execution improves, for example, the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 is focused on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.
A collaborative institutional model for integrating computer applications in the medical curriculum.
Friedman, C. P.; Oxford, G. S.; Juliano, E. L.
1991-01-01
The introduction and promotion of information technology in an established medical curriculum with existing academic and technical support structures poses a number of challenges. The UNC School of Medicine has developed the Taskforce on Educational Applications in Medicine (TEAM) to coordinate this effort. TEAM works as a confederation of existing research and support units with interests in computers and education, along with a core of interested faculty with curricular responsibilities. Constituent units of the TEAM confederation include the medical center library, medical television studios, basic science teaching laboratories, educational development office, microcomputer and network support groups, academic affairs administration, and a subset of course directors and teaching faculty. Among our efforts have been the establishment of (1) a mini-grant program to support faculty-initiated development and implementation of computer applications in the curriculum, (2) a symposium series with visiting speakers to acquaint faculty with current developments in medical informatics and related curricular efforts at other institutions, (3) 20 computer workstations located in the multipurpose teaching labs where first- and second-year students do much of their academic work, and (4) a demonstration center for evaluation of courseware and technologically advanced delivery systems. The student workstations provide convenient access to electronic mail, University schedules and calendars, the CoSy computer conferencing system, and several software applications integral to their courses in pathology, histology, microbiology, biochemistry, and neurobiology. The progress achieved toward the primary goal has modestly exceeded our initial expectations, while the collegiality and interest expressed toward TEAM activities in the local environment stand as empirical measures of the success of the concept. PMID:1807705
NASA Technical Reports Server (NTRS)
Radespiel, Rolf; Hemsch, Michael J.
2007-01-01
The complexity of modern military systems, as well as the cost and difficulty associated with experimentally verifying system and subsystem designs, makes the use of high-fidelity simulation a future alternative for design and development. The predictive ability of simulations such as computational fluid dynamics (CFD) and computational structural mechanics (CSM) has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT 147 Symposium has been established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples of the industry perspective were to include how computational uncertainty methods are used to reduce system risk in various stages of design or development.
Computer network access to scientific information systems for minority universities
NASA Astrophysics Data System (ADS)
Thomas, Valerie L.; Wakim, Nagi T.
1993-08-01
The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.
Center for Advanced Computational Technology
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
2000-01-01
The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design, prototyping, and operation of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.
A survey to identify the clinical coding and classification systems currently in use across Europe.
de Lusignan, S; Minmagh, C; Kennedy, J; Zeimet, M; Bommezijn, H; Bryant, J
2001-01-01
This is a survey to identify what clinical coding systems are currently in use across the European Union and the states seeking membership to it. We sought to identify which systems are currently used and to what extent they are subject to local adaptation. Clinical coding should facilitate identifying key medical events in a computerised medical record and aggregating information across groups of records. The emerging new driver is its role as the enabler of the life-long computerised medical record. A prerequisite for this level of functionality is the transfer of information between different computer systems. This transfer can be facilitated either by working on the interoperability problems between disparate systems or by harmonising the underlying data. This paper examines the extent to which the latter has occurred across Europe. The survey was conducted by literature and Internet search and by requests for information via electronic mail to pan-European mailing lists of health informatics professionals. Coding systems are now a de facto part of health information systems across Europe, yet relatively few coding systems are in use: ICD-9 and ICD-10, ICPC, and Read were the most established. However, the local adaptation of these classification systems, whether on a by-country or by-software-manufacturer basis, significantly reduces the ability of the meaning coded within patients' computer records to be easily transferred from one medical record system to another. There is no longer any debate as to whether a coding or classification system should be used. Convergence of different classification systems should be encouraged, and countries and computer manufacturers within the EU should be encouraged to stop making local modifications to coding and classification systems, as this practice risks significantly slowing progress towards easy transfer of records between computer systems.
S-Cube: Enabling the Next Generation of Software Services
NASA Astrophysics Data System (ADS)
Metzger, Andreas; Pohl, Klaus
The Service Oriented Architecture (SOA) paradigm is increasingly adopted by industry for building distributed software systems. However, when designing, developing and operating innovative software services and service-based systems, several challenges exist. Those challenges include how to manage the complexity of those systems, how to establish, monitor and enforce Quality of Service (QoS) and Service Level Agreements (SLAs), as well as how to build those systems such that they can proactively adapt to dynamically changing requirements and context conditions. Developing foundational solutions for those challenges requires joint efforts of different research communities such as Business Process Management, Grid Computing, Service Oriented Computing and Software Engineering. This paper provides an overview of S-Cube, the European Network of Excellence on Software Services and Systems. S-Cube brings together researchers from leading research institutions across Europe, who join their competences to develop foundations, theories as well as methods and tools for future service-based systems.
State-Space System Realization with Input- and Output-Data Correlation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan
1997-01-01
This paper introduces a general version of the information matrix consisting of the autocorrelation and cross-correlation matrices of the shifted input and output data. Based on the concept of data correlation, a new system realization algorithm is developed to create a model directly from input and output data. The algorithm starts by computing a special type of correlation matrix derived from the information matrix. The special correlation matrix provides information on the system-observability matrix and the state-vector correlation. A system model is then developed from the observability matrix in conjunction with other algebraic manipulations. This approach leads to several different algorithms for computing system matrices for use in representing the system model. The relationship of the new algorithms with other realization algorithms in the time and frequency domains is established with matrix factorization of the information matrix. Several examples are given to illustrate the validity and usefulness of these new algorithms.
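For readers unfamiliar with realization algorithms, here is a compact numpy sketch of the classic eigensystem realization procedure, a close relative of the correlation-based algorithm introduced in the paper; it factors a block-Hankel matrix of Markov (impulse-response) parameters instead of the information matrix described above, so it is an illustration of the family of methods, not the paper's algorithm itself.

    import numpy as np

    def realize(markov, n):
        """markov: list of p-by-m Markov parameters Y1, Y2, ...
        n: desired model order.  Returns discrete-time (A, B, C)."""
        p, m = markov[0].shape
        k = len(markov) // 2
        # Shifted block-Hankel matrices built from the Markov parameters.
        H0 = np.block([[markov[i + j] for j in range(k)] for i in range(k)])
        H1 = np.block([[markov[i + j + 1] for j in range(k)] for i in range(k)])
        U, s, Vt = np.linalg.svd(H0)
        S = np.diag(np.sqrt(s[:n]))
        O = U[:, :n] @ S                  # observability-like factor
        R = S @ Vt[:n, :]                 # controllability-like factor
        A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(R)
        B, C = R[:, :m], O[:p, :]
        return A, B, C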
Verifying Stability of Dynamic Soft-Computing Systems
NASA Technical Reports Server (NTRS)
Wen, Wu; Napolitano, Marcello; Callahan, John
1997-01-01
Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their appropriateness in building intelligent systems that are flexible and robust. Although recent research has shown that certain classes of neuro-fuzzy controllers can be proven bounded and stable, these proofs are implementation dependent and difficult to apply to the design and validation process. Many practitioners adopt the trial-and-error approach for system validation or resort to exhaustive testing using prototypes. In this paper, we describe our ongoing research towards establishing the necessary theoretical foundations as well as building practical tools for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classic nonlinear control theory and recent results on its application to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root locus plots have helped conventional control design and validation.
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
To address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-losing transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
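The following numpy fragment illustrates the flavor of savings the authors exploit (illustrative only, not their offline-derived algorithm): when the transition matrix is block diagonal, the covariance time update P' = F P F^T + Q can be assembled block by block, so the known-zero blocks of F never enter a multiplication.

    import numpy as np

    def predict_blockdiag(P, Q, blocks):
        """blocks: list of (slice, F_block) pairs covering the state vector."""
        n = P.shape[0]
        Pn = np.zeros((n, n))
        for si, Fi in blocks:
            for sj, Fj in blocks:
                # (F P F^T)[i, j] = F_i P[i, j] F_j^T for block-diagonal F.
                Pn[si, sj] = Fi @ P[si, sj] @ Fj.T
        return Pn + Q

    # Example: a 4-state system split into two independent 2-state blocks.
    dt = 0.1
    Fb = np.array([[1.0, dt], [0.0, 1.0]])
    blocks = [(slice(0, 2), Fb), (slice(2, 4), Fb)]
    P, Q = np.eye(4), 0.01 * np.eye(4)
    P = predict_blockdiag(P, Q, blocks)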
NASA Technical Reports Server (NTRS)
Young, Gerald W.; Clemons, Curtis B.
2004-01-01
The focus of this Cooperative Agreement between the Computational Materials Laboratory (CML) of the Processing Science and Technology Branch of the NASA Glenn Research Center (GRC) and the Department of Theoretical and Applied Mathematics at The University of Akron was in the areas of system development of the CML workstation environment, modeling of microgravity and earth-based material processing systems, and joint activities in laboratory projects. These efforts complement each other as the majority of the modeling work involves numerical computations to support laboratory investigations. Coordination and interaction between the modelers, system analysts, and laboratory personnel are essential to providing the most effective simulations and communication of the simulation results. Toward these ends, University of Akron personnel involved in the agreement worked at the Applied Mathematics Research Laboratory (AMRL) in the Department of Theoretical and Applied Mathematics while maintaining a close relationship with the personnel of the Computational Materials Laboratory at GRC. Network communication between both sites has been established. A summary of the projects we undertook during the time period 9/1/03 - 6/30/04 is included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansmann, Ulrich H.E.
2012-07-02
This report summarizes the outcome of the international workshop From Computational Biophysics to Systems Biology (CBSB12), which was held June 3-5, 2012, at the University of Tennessee Conference Center in Knoxville, TN, and supported by DOE through the Conference Support Grant 120174. The purpose of CBSB12 was to provide a forum for the interaction between a data-mining interested systems biology community and a simulation and first-principle oriented computational biophysics/biochemistry community. CBSB12 was the sixth in a series of workshops of the same name organized in recent years, and the second that has been held in the USA. As in previous years, it gave researchers from physics, biology, and computer science an opportunity to acquaint each other with current trends in computational biophysics and systems biology, to explore venues of cooperation, and to establish together a detailed understanding of cells at a molecular level. The conference grant of $10,000 was used to cover registration fees and provide travel fellowships to selected students and postdoctoral scientists. By educating graduate students and providing a forum for young scientists to perform research into the working of cells at a molecular level, the workshop adds to DOE's mission of paving the way to exploit the abilities of living systems to capture, store and utilize energy.
Geiger, Linda H.
1983-01-01
The report is an update of U.S. Geological Survey Open-File Report 77-703, which described a retrieval program for an administrative index of active data-collection sites in Florida. Extensive changes to the Findex system have been made since 1977, making the previous report obsolete. The data base and the computer programs available in the Findex system are documented in this report. This system serves a vital need in the administration of the many and diverse water-data collection activities. District offices with extensive data-collection activities will benefit from the documentation of the system. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data collection activity. Entries include information such as identification number, station name, location, type of site, county, frequency of data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. The index is updated routinely. (USGS)
KIDLINK: A Challenging and Safe Place for Children across the World.
ERIC Educational Resources Information Center
Burleigh, Mike; Weeg, Patti
1993-01-01
Describes the activities of KIDLINK, an international electronic conferencing system that was developed to establish communication between children 10 to 15 years old around the world using the Internet and other computer networks. A list of local KIDLINK contacts in 29 countries is included. (LRW)
Using Microcomputers in School Administration. Fastback No. 248.
ERIC Educational Resources Information Center
Connors, Eugene T.; Valesky, Thomas C.
This "fastback" outlines the steps to take in computerizing school administration. After an introduction that lists the potential benefits of microcomputers in administrative offices, the booklet begins by delineating a three-step process for establishing an administrative computer system: (1) creating a district-level committee of administrators,…
Solar Wind Monitor--A School Geophysics Project
ERIC Educational Resources Information Center
Robinson, Ian
2018-01-01
Described is an established geophysics project to construct a solar wind monitor based on a nT-resolution fluxgate magnetometer. Low-cost and appropriate from school to university level, it incorporates elements of astrophysics, geophysics, electronics, programming, computer networking and signal processing. The system monitors the earth's field in…
Help for Finding Missing Children.
ERIC Educational Resources Information Center
McCormick, Kathleen
1984-01-01
Efforts to locate missing children have expanded from a federal law allowing for entry of information into an F.B.I. computer system to companion bills before Congress for establishing a national missing child clearinghouse and a Justice Department center to help in conducting searches. Private organizations are also involved. (KS)
Program Manual for Producing Weight Scaling Conversion Tables
Gary L. Tyre; Clyde A. Fasick; Frank M. Riley; Frank O. Lege
1973-01-01
Three computer programs are presented which can be applied by individual firms to establish a weight-scaling information system. The first generates volume estimates from truckload weights for any combination of veneer, sawmill, and pulpwood volumes. The second provides quality-control information by tabulating differences between estimated volumes and observed check-...
USDA-ARS?s Scientific Manuscript database
Water moves through plants under tension and in a thermodynamically metastable state, leaving the non-living vessels that transport this water vulnerable to blockage by gas embolisms. Failure to re-establish flow in embolized vessels can lead to systemic loss of hydraulic conductivity and ultimately...
Software Development Outsourcing Decision Support Tool with Neural Network Learning
2004-03-01
science, the first neuro-computer was built in 1954 by Marvin Minsky. In 1956, Dartmouth established a new research field of NN. Shortly after... This system was capable of recognizing letters and received much attention until 1969, when the Minsky and Papert paper discussed the
36 CFR 1202.30 - How does NARA safeguard its systems of records?
Code of Federal Regulations, 2012 CFR
2012-07-01
... records are protected in accordance with the Computer Security Act, OMB Circular A-11 requiring privacy... appropriate administrative, technical, and physical safeguards are established to ensure the security and confidentiality of records. In order to protect against any threats or hazards to their security or loss of...
36 CFR 1202.30 - How does NARA safeguard its systems of records?
Code of Federal Regulations, 2011 CFR
2011-07-01
... records are protected in accordance with the Computer Security Act, OMB Circular A-11 requiring privacy... appropriate administrative, technical, and physical safeguards are established to ensure the security and confidentiality of records. In order to protect against any threats or hazards to their security or loss of...
36 CFR 1202.30 - How does NARA safeguard its systems of records?
Code of Federal Regulations, 2010 CFR
2010-07-01
... records are protected in accordance with the Computer Security Act, OMB Circular A-11 requiring privacy... appropriate administrative, technical, and physical safeguards are established to ensure the security and confidentiality of records. In order to protect against any threats or hazards to their security or loss of...
36 CFR 1202.30 - How does NARA safeguard its systems of records?
Code of Federal Regulations, 2014 CFR
2014-07-01
... records are protected in accordance with the Computer Security Act, OMB Circular A-11 requiring privacy... appropriate administrative, technical, and physical safeguards are established to ensure the security and confidentiality of records. In order to protect against any threats or hazards to their security or loss of...
Identifying the Key Weaknesses in Network Security at Colleges.
ERIC Educational Resources Information Center
Olsen, Florence
2000-01-01
A new study identifies and ranks the 10 security gaps responsible for most outsider attacks on college computer networks. The list is intended to help campus system administrators establish priorities as they work to increase security. One network security expert urges that institutions utilize multiple security layers. (DB)
NASA Technical Reports Server (NTRS)
1973-01-01
Calculations, curves, and substantiating data which support the engine design characteristics of the RL-10 engines are presented. A description of the RL-10 ignition system is provided. The performance calculations of the RL-10 derivative engines and the performance results obtained are reported. The computer simulations used to establish the control system requirements and to define the engine transient characteristics are included.
Extended observability of linear time-invariant systems under recurrent loss of output data
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok; Halevi, Yoram
1989-01-01
Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.
Peters, Sinead E; Brennan, Patrick C
2002-09-01
Manufacturers offer exposure indices as a safeguard against overexposure in computed radiography, but the basis for recommended values is unclear. This study establishes an optimum exposure index to be used as a guideline for a specific CR system to minimise radiation exposures for mobile chest radiography, and compares this with manufacturer guidelines and current practice. An anthropomorphic phantom was employed to establish the minimum milliampere-seconds (mAs) consistent with acceptable image quality for mobile chest radiography images; this was found to be 2 mAs. Ten consecutive patients were then exposed with this optimised value and 10 patients with the 3.2 mAs routinely used in the department of the study. Image quality was objectively assessed using anatomical criteria. Retrospective analyses of 717 exposure indices recorded over 2 months from mobile chest examinations were performed. The optimised value provided a significant reduction of the average exposure index from 1840 to 1570 (p<0.0001). This new "optimum" exposure index is substantially lower than the manufacturer guideline of 2000 and significantly lower than the exposure indices from the retrospective study (1890). Retrospective data showed a significant increase in exposure indices if the examination was performed out of hours. The data provided by this study emphasise the need for clinicians and personnel to consider establishing their own optimum exposure indices for digital investigations rather than simply accepting manufacturers' guidelines. Such an approach, along with regular monitoring of indices, may result in a substantial reduction in patient exposure.
COMOC 2: Two-dimensional aerodynamics sequence, computer program user's guide
NASA Technical Reports Server (NTRS)
Manhardt, P. D.; Orzechowski, J. A.; Baker, A. J.
1977-01-01
The COMOC finite element fluid mechanics computer program system is applicable to diverse problem classes. The two-dimensional aerodynamics sequence was established for solution of the potential and/or viscous and turbulent flowfields associated with subsonic flight of elementary two-dimensional isolated airfoils. The sequence consists of three specific flowfield options in COMOC for two-dimensional flows: the potential flow option, the boundary layer option, and the parabolic Navier-Stokes option. By sequencing through these options, it is possible to computationally construct a weak-interaction model of the aerodynamic flowfield. This report is the user's guide to operation of COMOC for the aerodynamics sequence.
Computer-aided personal interviewing. A new technique for data collection in epidemiologic surveys.
Birkett, N J
1988-03-01
Most epidemiologic studies involve the collection of data directly from selected respondents. Traditionally, interviewers are provided with the interview in booklet form on paper and answers are recorded therein. On receipt at the study office, the interview results are coded, transcribed, and keypunched for analysis. The author's team has developed a method of personal interviewing which uses a structured interview stored on a lap-sized computer. Responses are entered into the computer and are subject to immediate error-checking and correction. All skip-patterns are automatic. Data entry to the final data-base involves no manual data transcription. A pilot evaluation with a preliminary version of the system using tape-recorded interviews in a test/re-test methodology revealed a slightly higher error rate, probably related to weaknesses in the pilot system and the training process. Computer interviews tended to be longer but other features of the interview process were not affected by computer. The author's team has now completed 2,505 interviews using this system in a community-based blood pressure survey. It has been well accepted by both interviewers and respondents. Failure to complete an interview on the computer was uncommon (5 per cent) and well-handled by paper back-up questionnaires. The results show that computer-aided personal interviewing in the home is feasible but that further evaluation is needed to establish the impact of this methodology on overall data quality.
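A toy Python sketch of the two interview features the abstract emphasizes, immediate error checking and automatic skip patterns; the questions and field names are invented for illustration.

    def ask(prompt, valid):
        """Re-prompt until the response passes the validity check."""
        while True:
            answer = input(prompt + " ").strip().lower()
            if valid(answer):
                return answer
            print("  invalid entry, please re-enter")    # immediate error check

    record = {}
    record["smoker"] = ask("Do you smoke? (y/n)", lambda a: a in ("y", "n"))
    if record["smoker"] == "y":                          # automatic skip pattern
        record["per_day"] = ask("Cigarettes per day?", str.isdigit)
    record["age"] = ask("Age in years?", str.isdigit)
    # 'record' goes straight to the study database: no transcription step.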
Computational logic: its origins and applications
2018-01-01
Computational logic is the use of computers to establish facts in a logical formalism. Originating in nineteenth century attempts to understand the nature of mathematical reasoning, the subject now comprises a wide variety of formalisms, techniques and technologies. One strand of work follows the ‘logic for computable functions (LCF) approach’ pioneered by Robin Milner, where proofs can be constructed interactively or with the help of users’ code (which does not compromise correctness). A refinement of LCF, called Isabelle, retains these advantages while providing flexibility in the choice of logical formalism and much stronger automation. The main application of these techniques has been to prove the correctness of hardware and software systems, but increasingly researchers have been applying them to mathematics itself. PMID:29507522
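The LCF property mentioned above (user code cannot compromise correctness) rests on a small trusted kernel that alone may construct theorems. The toy Python sketch below illustrates that architecture only; it is not Milner's ML implementation or Isabelle's kernel, and the propositional encoding and rule set are invented for illustration.

```python
# Toy sketch of the LCF kernel idea: theorems can only be created by the
# trusted inference rules below, so arbitrary user code that orchestrates
# proofs cannot forge an invalid theorem.

class Theorem:
    _key = object()  # private token: only kernel rules can construct theorems

    def __init__(self, prop, key):
        if key is not Theorem._key:
            raise ValueError("theorems may only be built by inference rules")
        self.prop = prop  # proposition, e.g. ("->", p, q)

    def __repr__(self):
        return f"|- {self.prop}"

def axiom_identity(p):
    """Axiom schema: |- p -> p."""
    return Theorem(("->", p, p), Theorem._key)

def modus_ponens(th_imp, th_p):
    """From |- p -> q and |- p, derive |- q."""
    op, p, q = th_imp.prop
    if op != "->" or th_p.prop != p:
        raise ValueError("modus ponens does not apply")
    return Theorem(q, Theorem._key)

th1 = axiom_identity("A")        # |- A -> A
print(th1)
# Theorem("false", None)         # would raise: forging theorems is blocked
```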
2016 ISCB Overton Prize awarded to Debora Marks
Fogg, Christiana N.; Kovats, Diane E.
2016-01-01
The International Society for Computational Biology (ISCB) recognizes the achievements of an early- to mid-career scientist with the Overton Prize each year. The Overton Prize was established to honor the untimely loss of Dr. G. Christian Overton, a respected computational biologist and founding ISCB Board member. Winners of the Overton Prize are independent investigators in the early to middle phases of their careers who are selected because of their significant contributions to computational biology through research, teaching, and service. 2016 will mark the fifteenth bestowment of the ISCB Overton Prize. ISCB is pleased to confer this award to Debora Marks, Assistant Professor of Systems Biology and director of the Raymond and Beverly Sackler Laboratory for Computational Biology at Harvard Medical School. PMID:27429747
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Koga, Dennis (Technical Monitor)
2000-01-01
In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation analogue' of algorithmic information complexity. It is proven in that second paper that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity), in that there is one and only one version of it that is applicable throughout our universe.
Apollo experience report: Real-time display system
NASA Technical Reports Server (NTRS)
Sullivan, C. J.; Burbank, L. W.
1976-01-01
The real-time display system used in the Apollo Program is described; the systematic organization of the system, which resulted from hardware/software trade-offs and the establishment of system criteria, is emphasized. Each basic requirement of the real-time display system was met by a separate subsystem. The computer input multiplexer subsystem, the plotting display subsystem, the digital display subsystem, and the digital television subsystem are described. Also described are the automated display design and the generation of precision photographic reference slides required for the three display subsystems.
MEDLARS and the Library Community
Adams, Scott
1964-01-01
The intention of the National Library of Medicine is to share with other libraries the products and the capabilities developed by the MEDLARS system. MEDLARS will provide bibliographic services of use to other libraries from the central system. The decentralization of the central system to permit libraries with access to computers to establish local machine retrieval systems is also indicated. The implications of such decentralization for the American medical library network and its effect on library evolution are suggested, as are the implications for international development of mechanized storage and retrieval systems. PMID:14119289
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data, the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
Parallel Calculations in LS-DYNA
NASA Astrophysics Data System (ADS)
Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey
2017-11-01
Structural mechanics exhibits a trend towards numerical solution of increasingly extensive and detailed problems, which requires that the capacity of computing systems be enhanced. Such enhancement can be achieved by different means. For example, if the computing system is a workstation, its components can be replaced or extended (CPU, memory, etc.). In essence, such modification eventually entails replacement of the entire workstation, since upgrading certain components necessitates exchanging others (faster CPUs and memory require buses with higher throughput, etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of processing data in parallel, and tools originally designed to render high-performance graphics can be applied to problems not immediately related to graphics (CUDA, OpenCL, shaders, etc.). However, not all software suites utilize video cards' capacities. Another way to increase the capacity of a computing system is to implement a cluster architecture: adding cluster nodes (workstations) and increasing the network communication speed between nodes. The advantage of this approach is extensive scalability: a quite powerful system can be obtained by combining nodes that are not individually powerful, and separate nodes may have different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with LS-DYNA software. A 2-node cluster proved sufficient to establish the relevant performance dependencies.
1976-03-01
special access; PS2 will be for the variable perimeter; and PS3, PS4, and PS5 will make up the normal access area. This added computer power will be...implementation of PS1 and PS4 will continue as new communications consoles are actively established for possible side-by-side operation of the
Support requirements for remote sensor systems on unmanned planetary missions, phase 3
NASA Technical Reports Server (NTRS)
1971-01-01
The results of a study to determine the support requirements for remote sensor systems on unmanned planetary flyby and orbiter missions are presented. Sensors and experiment groupings for selected missions are also established. Computer programs were developed to relate measurement requirements to support requirements. Support requirements were determined for sensors capable of performing required measurements at various points along the trajectories of specific selected missions.
The Lister Hill National Center for Biomedical Communications.
Smith, K A
1994-09-01
On August 3, 1968, a Joint Resolution of Congress established the program and construction of the Lister Hill National Center for Biomedical Communications. The facility, dedicated in 1980, contains the latest in computer and communications technologies. The history, program requirements, construction management, and general planning are discussed, including technical issues regarding cabling, systems functions, heating, ventilation, and air conditioning (HVAC) systems, fire suppression, and research and development laboratories, among others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driscoll, Frederick R.
The University of Washington (UW) - Northwest National Marine Renewable Energy Center (UW-NNMREC) and the National Renewable Energy Laboratory (NREL) will collaborate to advance research and development (R&D) of Marine Hydrokinetic (MHK) renewable energy technology, specifically renewable energy captured from ocean tidal currents. UW-NNMREC is endeavoring to establish infrastructure, capabilities and tools to support in-water testing of marine energy technology. NREL is leveraging its experience and capabilities in field testing of wind systems to develop protocols and instrumentation to advance field testing of MHK systems. Under this work, UW-NNMREC and NREL will work together to develop a common instrumentation system and testing methodologies, standards and protocols. UW-NNMREC is also establishing simulation capabilities for MHK turbines and turbine arrays. NREL has extensive experience in wind turbine array modeling and is developing several computer-based numerical simulation capabilities for MHK systems. Under this CRADA, UW-NNMREC and NREL will work together to augment single device and array modeling codes. As part of this effort, UW-NNMREC will also work with NREL to run simulations on NREL's high performance computer system.
INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS
NASA Technical Reports Server (NTRS)
Iverson, David L.
2005-01-01
Model-based reasoning is a powerful method for performing system monitoring and diagnosis. Building models for model-based reasoning is often a difficult and time consuming process. The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or which require computer models that are too complex to use for real time monitoring. IMS processes nominal data sets collected either directly from the system or from simulations to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters. This establishes constraints on those parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low cost technique to produce system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS. We also present the application of IMS to a data set from the Space Shuttle Columbia STS-107 flight. IMS was able to detect an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
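The training and monitoring steps described above can be sketched generically: cluster archived nominal data, then score new samples by their distance to the nearest nominal cluster. The Python sketch below uses k-means and a spread-normalised distance as a stand-in for IMS's statistically weighted deviation measure; the data, cluster count, and normalisation are invented for illustration and do not reproduce IMS itself.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Archived nominal data: two operating modes of a 3-parameter system.
nominal = np.vstack([
    rng.normal([0.0, 0.0, 0.0], 0.1, size=(500, 3)),
    rng.normal([5.0, 1.0, -2.0], 0.1, size=(500, 3)),
])

# Training: characterize typical behavior as clusters of nominal values.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(nominal)
labels = km.predict(nominal)
spread = np.array([nominal[labels == k].std() for k in range(2)])

def deviation(x):
    """Distance to the nearest nominal cluster, normalised by that
    cluster's training spread (a crude weighted deviation score)."""
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    k = d.argmin()
    return d[k] / spread[k]

print(deviation(np.array([0.05, 0.0, -0.05])))  # small: looks nominal
print(deviation(np.array([2.5, 0.5, -1.0])))    # large: suspected anomaly
```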
Catecholamines alter the intrinsic variability of cortical population activity and perception
Avramiea, Arthur-Ervin; Nolte, Guido; Engel, Andreas K.; Linkenkaer-Hansen, Klaus; Donner, Tobias H.
2018-01-01
The ascending modulatory systems of the brain stem are powerful regulators of global brain state. Disturbances of these systems are implicated in several major neuropsychiatric disorders. Yet, how these systems interact with specific neural computations in the cerebral cortex to shape perception, cognition, and behavior remains poorly understood. Here, we probed into the effect of two such systems, the catecholaminergic (dopaminergic and noradrenergic) and cholinergic systems, on an important aspect of cortical computation: its intrinsic variability. To this end, we combined placebo-controlled pharmacological intervention in humans, recordings of cortical population activity using magnetoencephalography (MEG), and psychophysical measurements of the perception of ambiguous visual input. A low-dose catecholaminergic, but not cholinergic, manipulation altered the rate of spontaneous perceptual fluctuations as well as the temporal structure of “scale-free” population activity of large swaths of the visual and parietal cortices. Computational analyses indicate that both effects were consistent with an increase in excitatory relative to inhibitory activity in the cortical areas underlying visual perceptual inference. We propose that catecholamines regulate the variability of perception and cognition through dynamically changing the cortical excitation–inhibition ratio. The combined readout of fluctuations in perception and cortical activity we established here may prove useful as an efficient and easily accessible marker of altered cortical computation in neuropsychiatric disorders. PMID:29420565
Interfacing HTCondor-CE with OpenStack
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; Hover, J.
2017-10-01
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
Welter, Petra; Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M
2011-01-01
It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not yet been established in clinical practice. A widely neglected integration gap is the lack of a unified data concept for CBIR-based CAD results and reporting. Picture archiving and communication systems (PACS) and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results into a PACS environment as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme are presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process.
Toward an integrated software platform for systems pharmacology
Ghosh, Samik; Matsuoka, Yukiko; Asai, Yoshiyuki; Hsin, Kun-Yi; Kitano, Hiroaki
2013-01-01
Understanding complex biological systems requires the extensive support of computational tools. This is particularly true for systems pharmacology, which aims to understand the action of drugs and their interactions in a systems context. Computational models play an important role as they can be viewed as explicit representations of biological hypotheses to be tested. A series of software and data resources are used for model development, verification, and exploration of the possible behaviors of biological systems using the model, which may not be possible or cost effective by experiments. Software platforms play a dominant role in creativity and productivity support and have transformed many industries; the same techniques can be applied to biology as well. Establishing an integrated software platform will be the next important step in the field. © 2013 The Authors. Biopharmaceutics & Drug Disposition published by John Wiley & Sons, Ltd. PMID:24150748
Computational Toxicology as Implemented by the US EPA ...
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T
Computational modeling of epidermal cell fate determination systems.
Ryu, Kook Hui; Zheng, Xiaohua; Huang, Ling; Schiefelbein, John
2013-02-01
Cell fate decisions are of primary importance for plant development. Their simple 'either-or' outcome and dynamic nature has attracted the attention of computational modelers. Recent efforts have focused on modeling the determination of several epidermal cell types in the root and shoot of Arabidopsis where many molecular components have been defined. Results of integrated modeling and molecular biology experimentation in these systems have highlighted the importance of competitive positive and negative factors and interconnected feedback loops in generating flexible yet robust mechanisms for establishing distinct gene expression programs in neighboring cells. These models have proven useful in judging hypotheses and guiding future research. Copyright © 2012 Elsevier Ltd. All rights reserved.
Kirkwood-Buff integrals of finite systems: shape effects
NASA Astrophysics Data System (ADS)
Dawass, Noura; Krüger, Peter; Simon, Jean-Marc; Vlugt, Thijs J. H.
2018-06-01
The Kirkwood-Buff (KB) theory provides an important connection between microscopic density fluctuations in liquids and macroscopic properties. Recently, Krüger et al. derived equations for KB integrals for finite subvolumes embedded in a reservoir. Using molecular simulation of finite systems, KB integrals can be computed either from density fluctuations inside such subvolumes, or from integrals of radial distribution functions (RDFs). Here, based on the second approach, we establish a framework to compute KB integrals for subvolumes with arbitrary convex shapes. This requires a geometric function w(x) which depends on the shape of the subvolume, and the relative position inside the subvolume. We present a numerical method to compute w(x) based on Umbrella Sampling Monte Carlo (MC). We compute KB integrals of a liquid with a model RDF for subvolumes with different shapes. KB integrals approach the thermodynamic limit in the same way: for sufficiently large volumes, KB integrals are a linear function of area over volume, which is independent of the shape of the subvolume.
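For spherical subvolumes the geometric function has a known closed form (from Krüger et al.), which makes the finite-volume KB integral easy to sketch; the paper's contribution is the Monte Carlo computation of w(x) for arbitrary convex shapes. Below is a minimal Python sketch with a model RDF that is illustrative only, not the paper's; it also shows the linear approach to the thermodynamic limit in area over volume (roughly 1/L for spheres).

```python
import numpy as np

def kb_integral_sphere(g, L, n=4000):
    """Finite-volume KB integral for a spherical subvolume of diameter L,
    using the closed-form geometric weight for spheres (Kruger et al.).
    The paper computes the analogous weight w(x) for arbitrary convex
    shapes by umbrella-sampling Monte Carlo instead."""
    r = np.linspace(0.0, L, n)
    x = r / L
    w = 4.0 * np.pi * r**2 * (1.0 - 1.5 * x + 0.5 * x**3)
    f = (g(r) - 1.0) * w
    return float(np.sum((f[1:] + f[:-1]) * np.diff(r)) / 2.0)  # trapezoid rule

def g_model(r):
    """Model RDF: hard core, then damped oscillations around 1 (illustrative)."""
    return np.where(r < 0.9, 0.0, 1.0 + np.exp(-(r - 0.9)) * np.cos(6.0 * r))

# For sufficiently large L, G(L) becomes linear in area/volume ~ 1/L.
for L in (5.0, 10.0, 20.0, 40.0):
    print(f"L = {L:5.1f}  G(L) = {kb_integral_sphere(g_model, L):+.4f}")
```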
Fusing Symbolic and Numerical Diagnostic Computations
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs (one implementing a numerical analysis method, the other a symbolic analysis method) into a unified event-based decision analysis software system for real-time detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAM), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference-engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.
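A toy sketch of the fusion architecture may help fix ideas: a numerical layer flags statistical deviations (BEAM-like) and a symbolic rule layer maps symptom sets to diagnoses (SHINE-like). The sensor names, thresholds, and rules below are hypothetical and are not taken from X-2000, BEAM, or SHINE.

```python
# Toy fusion of a numerical detector with a symbolic rule-based diagnoser.
# Sensors, thresholds and rules are hypothetical.

NOMINAL = {"bus_voltage": (28.0, 0.5), "wheel_rpm": (2000.0, 50.0)}  # (mean, sigma)

def numeric_events(sample):
    """Numerical layer: flag sensors deviating beyond 3 sigma from nominal."""
    return {name for name, x in sample.items()
            if abs(x - NOMINAL[name][0]) > 3 * NOMINAL[name][1]}

RULES = [  # symbolic layer: symptom sets mapped to diagnoses
    ({"bus_voltage", "wheel_rpm"}, "power fault degrading wheel drive"),
    ({"bus_voltage"}, "isolated power anomaly"),
    ({"wheel_rpm"}, "wheel mechanical anomaly"),
]

def diagnose(sample):
    """Event-based fusion: numeric events feed the symbolic rules."""
    events = numeric_events(sample)
    for symptoms, conclusion in RULES:
        if symptoms <= events:
            return conclusion
    return "nominal"

print(diagnose({"bus_voltage": 28.1, "wheel_rpm": 2010}))  # nominal
print(diagnose({"bus_voltage": 25.0, "wheel_rpm": 1500}))  # fused diagnosis
```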
Monitoring techniques and alarm procedures for CMS services and sites in WLCG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.
2012-01-01
The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range; the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error, and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.
Computer-Based Technologies in Dentistry: Types and Applications
Albuha Al-Mussawi, Raja’a M.; Farid, Farzaneh
2016-01-01
During dental education, dental students learn how to examine patients, make diagnoses, plan treatment and perform dental procedures perfectly and efficiently. However, progress in computer-based technologies including virtual reality (VR) simulators, augmented reality (AR) and computer aided design/computer aided manufacturing (CAD/CAM) systems has resulted in new modalities for instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on the surgical site to serve as a guide. The use of CAD/CAM systems for designing and manufacturing dental appliances and prostheses is well established. This article reviews computer-based technologies, their application in dentistry and their potentials and limitations in promoting dental education, training and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with new modalities of training and practice. PMID:28392819
NASA Technical Reports Server (NTRS)
Pandya, Abhilash; Maida, James; Hasson, Scott; Greenisen, Michael; Woolford, Barbara
1993-01-01
As manned exploration of space continues, analytical evaluation of human strength characteristics is critical. These extraterrestrial environments will spawn issues of human performance which will impact the designs of tools, work spaces, and space vehicles. Computer modeling is an effective method of correlating human biomechanical and anthropometric data with models of space structures and human work spaces. The aim of this study is to provide biomechanical data from isolated joints to be utilized in a computer modeling system for calculating torque resulting from any upper extremity motions: in this study, the ratchet wrench push-pull operation (a typical extravehicular activity task). Established here are mathematical relationships used to calculate maximum torque production of isolated upper extremity joints. These relationships are a function of joint angle and joint velocity.
Experimental realization of entanglement in multiple degrees of freedom between two quantum memories
Zhang, Wei; Ding, Dong-Sheng; Dong, Ming-Xin; Shi, Shuai; Wang, Kai; Liu, Shi-Long; Li, Yan; Zhou, Zhi-Yuan; Shi, Bao-Sen; Guo, Guang-Can
2016-01-01
Entanglement in multiple degrees of freedom has many benefits over entanglement in a single one. The former enables quantum communication with higher channel capacity and more efficient quantum information processing and is compatible with diverse quantum networks. Establishing multi-degree-of-freedom entangled memories is not only vital for high-capacity quantum communication and computing, but also promising for enhanced violations of nonlocality in quantum systems. However, there have been yet no reports of the experimental realization of multi-degree-of-freedom entangled memories. Here we experimentally established hyper- and hybrid entanglement in multiple degrees of freedom, including path (K-vector) and orbital angular momentum, between two separated atomic ensembles by using quantum storage. The results are promising for achieving quantum communication and computing with many degrees of freedom. PMID:27841274
[Application of electronic fence technology based on GIS in Oncomelania hupensis snail monitoring].
Zhi-Hua, Chen; Yi-Sheng, Zhu; Zhi-Qiang, Xue; Xue-Bing, Li; Yi-Min, Ding; Li-Jun, Bi; Kai-Min, Gao; You, Zhang
2017-07-27
To study the application of Geographic Information System (GIS) electronic fence technology in Oncomelania hupensis snail monitoring, electronic fences were set around historical and existing snail habitats on an electronic map (Baidu map), snail monitoring and control information was linked to each fence, and a snail monitoring information system, including a smart phone APP, was established on this basis. Monitoring information was input through computers and smart phones, uploaded in real time, and displayed on the map. Using GIS-based electronic fence technology, a unique "environment electronic archive" can be established for each monitored snail habitat on the electronic map, enabling real-time, dynamic monitoring and visual management.
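At its core, an electronic fence check reduces to a point-in-polygon test of a reported survey location against a stored habitat boundary. The minimal Python sketch below illustrates this with the classic ray-casting test; the coordinates are hypothetical, and a production GIS would use a spatial library (e.g. shapely) and proper map projections rather than raw longitude/latitude.

```python
# Minimal "electronic fence" check: is a reported survey point inside a
# stored snail-habitat polygon? Coordinates are hypothetical.

def point_in_polygon(pt, poly):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from pt crosses; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's latitude
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

fence = [(120.10, 31.50), (120.16, 31.50), (120.16, 31.55), (120.10, 31.55)]
print(point_in_polygon((120.12, 31.52), fence))  # True: inside the fence
print(point_in_polygon((120.20, 31.52), fence))  # False: outside
```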
Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring
NASA Technical Reports Server (NTRS)
Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.
2015-01-01
Project presentation for the Game Changing Program Smart Book Release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and includes an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. The objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept ground-based damage detection and inspection system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruwart, T M; Eldel, A
2000-01-01
The primary objectives of this project were to evaluate the performance of the SGI CXFS file system in a Storage Area Network (SAN) and compare/contrast it to the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the XFS local configuration. There were two basic hardware test configurations constructed from the following equipment: two Onyx 2 computer systems, each with two Qlogic-based Fibre Channel/XIO Host Bus Adapters (HBAs); one 8-port Brocade Silkworm 2400 Fibre Channel switch; and four Ciprico RF7000 RAID disk arrays populated with Seagate Barracuda 50GB disk drives. The operating system on each of the Onyx 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose of this configuration was to establish baseline performance data on the raw Qlogic controller/Ciprico disk subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration; no file system testing was performed. The second hardware configuration introduced the Brocade Fibre Channel switch. Two FC ports from each of the Onyx 2 computer systems were attached to four ports of the switch, and the four Ciprico arrays were attached to the remaining four. Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and switched configurations. After this testing was completed, the Ciprico arrays were formatted with an XFS file system and performance numbers were gathered to establish a file system performance baseline. Finally, the disks were formatted with CXFS and further tests were run to demonstrate the performance of the CXFS file system. A summary of the results of these tests is given.
Addressing failures in exascale computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snir, Marc; Wisniewski, Robert W.; Abraham, Jacob A.
2014-05-01
We present here a report produced by a workshop on “Addressing Failures in Exascale Computing” held in Park City, Utah, August 4–11, 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels in a computing system; discuss existing knowledge on resilience across the various hardware and software layers of an exascale system; and build on those results, examining potential solutions from both a hardware and software perspective and focusing on a combined approach. The workshop brought together participants with expertise in applications, system software, and hardware; they came from industry, government, and academia; and their interests ranged from theory to implementation. The combination allowed broad and comprehensive discussions and led to this document, which summarizes and builds on those discussions.
Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.
2010-01-01
The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339
Computational Control Workstation: Users' perspectives
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Straube, Timothy M.; Tave, Jeffrey S.
1993-01-01
A Workstation has been designed and constructed for rapidly simulating motions of rigid and elastic multibody systems. We examine the Workstation from the point of view of analysts who use the machine in an industrial setting. Two aspects of the device distinguish it from other simulation programs. First, one uses a series of windows and menus on a computer terminal, together with a keyboard and mouse, to provide a mathematical and geometrical description of the system under consideration. The second hallmark is a facility for animating simulation results. An assessment of the amount of effort required to numerically describe a system to the Workstation is made by comparing the process to that used with other multibody software. The apparatus for displaying results as a motion picture is critiqued as well. In an effort to establish confidence in the algorithms that derive, encode, and solve equations of motion, simulation results from the Workstation are compared to answers obtained with other multibody programs. Our study includes measurements of computational speed.
A biased filter for linear discrete dynamic systems.
NASA Technical Reports Server (NTRS)
Chang, J. W.; Hoerl, A. E.; Leathrum, J. F.
1972-01-01
A recursive estimator, the ridge filter, was developed for the linear discrete dynamic estimation problem. Theorems were established to show that the ridge filter can be, on average, closer to the expected value of the system state than the Kalman filter. On the other hand, the Kalman filter is, on average, closer to the instantaneous system state than the ridge filter. The ridge filter has been formulated in such a way that the computational features of the Kalman filter are preserved.
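The bias-variance trade-off described above can be illustrated numerically. The Python sketch below runs a standard scalar Kalman filter alongside a variant that shrinks each updated estimate toward zero, ridge-style; this is an illustration of the general idea only, not the paper's exact formulation, and all system parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar linear system: x[k+1] = a x[k] + w,  z[k] = x[k] + v.
a, q, r0 = 0.95, 0.01, 0.5
x, xs, zs = 1.0, [], []
for _ in range(200):
    x = a * x + rng.normal(0, np.sqrt(q))
    xs.append(x)
    zs.append(x + rng.normal(0, np.sqrt(r0)))

def run_filter(zs, shrink=0.0):
    """Standard scalar Kalman filter; shrink > 0 biases each updated
    estimate toward zero (an illustrative ridge-style bias)."""
    xh, p, out = 0.0, 1.0, []
    for z in zs:
        xh, p = a * xh, a * a * p + q            # predict
        k = p / (p + r0)                         # Kalman gain
        xh, p = xh + k * (z - xh), (1 - k) * p   # update
        xh = xh / (1.0 + shrink)                 # ridge-style shrinkage
        out.append(xh)
    return np.array(out)

err = lambda est: np.mean((est - np.array(xs)) ** 2)
print("Kalman MSE:      ", err(run_filter(zs)))
print("Ridge-biased MSE:", err(run_filter(zs, shrink=0.05)))
```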
Research summary, January 1989 - June 1990
NASA Technical Reports Server (NTRS)
1990-01-01
The Research Institute for Advanced Computer Science (RIACS) was established at NASA ARC in June of 1983. RIACS is privately operated by the Universities Space Research Association (USRA), a consortium of 62 universities with graduate programs in the aerospace sciences, under a Cooperative Agreement with NASA. RIACS serves as the representative of the USRA universities at ARC. This document reports our activities and accomplishments for the period 1 Jan. 1989 - 30 Jun. 1990. The following topics are covered: learning systems, networked systems, and parallel systems.
Manafian Heris, Jalil; Lakestani, Mehrdad
2014-01-01
We establish exact solutions, including periodic wave and solitary wave solutions, for the integrable sixth-order Drinfeld-Sokolov-Satsuma-Hirota system. We study this system using the generalized (G'/G)-expansion and generalized tanh-coth methods, which were developed for finding exact travelling wave solutions of nonlinear partial differential equations. It is shown that these methods, with the help of symbolic computation, provide a straightforward and powerful mathematical tool for solving nonlinear partial differential equations.
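For reference, the standard (G'/G)-expansion ansatz commonly takes the form sketched below; this is the textbook version, and the generalized variant used in papers of this kind typically also admits negative powers of G'/G.

```latex
% Travelling-wave reduction and the standard (G'/G)-expansion ansatz
% (common textbook form; the paper's generalized variant differs in detail).
\begin{align*}
  u(x,t) &= u(\xi), \qquad \xi = x - ct, \\
  u(\xi) &= \sum_{i=0}^{m} a_i \left(\frac{G'(\xi)}{G(\xi)}\right)^{i},
  \qquad a_m \neq 0,
  \qquad \text{with } G'' + \lambda G' + \mu G = 0,
\end{align*}
% m is fixed by balancing the highest-order derivative against the
% strongest nonlinear term; the a_i, c, lambda, mu are then determined
% by setting the coefficient of each power of G'/G to zero.
```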
2003-04-22
The Food and Drug Administration (FDA) is publishing an order granting a petition requesting exemption from the premarket notification requirements for data acquisition units for ceramic dental restoration systems. This rule exempts from premarket notification data acquisition units for ceramic dental restoration systems and establishes a guidance document as a special control for this device. FDA is publishing this order in accordance with the Food and Drug Administration Modernization Act of 1997 (FDAMA).
The Nett Warrior System: A Case Study for the Acquisition of Soldier Systems
2011-12-15
The evolution of wearable computers continued as an open system-bus wearable design was...established. The success of NW will depend on the program's ability to incorporate soldier-driven design requirements, commercial technology, and thorough system testing.
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
To establish a long-term research facility for experimental investigations of design diversity as a means of achieving fault-tolerant systems, a distributed testbed for multiple-version software was designed. It is part of a local network, which utilizes the Locus distributed operating system to operate a set of 20 VAX 11/750 computers. It is used in experiments to measure the efficacy of design diversity and to investigate reliability increases under large-scale, controlled experimental conditions.
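The multiple-version idea the testbed was built to study can be sketched with a simple majority voter over independently developed versions. The Python sketch below uses trivial stand-in versions with one seeded design fault; it illustrates N-version execution generically, not the testbed's actual harness or experiment protocol.

```python
# Toy N-version execution: run independently developed versions on the
# same input and accept the majority result. Versions are stand-ins.

from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x if x < 100 else x  # seeded design fault

def n_version_execute(versions, x):
    """Run all versions and vote; raise if no majority emerges."""
    results = [v(x) for v in versions]
    value, votes = Counter(results).most_common(1)[0]
    if votes < (len(versions) // 2) + 1:
        raise RuntimeError("no majority: versions disagree")
    return value, results

print(n_version_execute([version_a, version_b, version_c], 7))    # all agree
print(n_version_execute([version_a, version_b, version_c], 200))  # fault outvoted
```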
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precise-looking outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.
Jiang, Taoran; Zhu, Ming; Zan, Tao; Gu, Bin; Li, Qingfeng
2017-08-01
In perforator flap transplantation, dissection of the perforator is an important but difficult procedure because of the high variability in vascular anatomy. Preoperative imaging techniques could provide substantial information about vascular anatomy; however, it cannot provide direct guidance for surgeons during the operation. In this study, a navigation system (NS) was established to overlie a vascular map on surgical sites to further provide a direct guide for perforator flap transplantation. The NS was established based on computed tomographic angiography and augmented reality techniques. A virtual vascular map was reconstructed according to computed tomographic angiography data and projected onto real patient images using ARToolKit software. Additionally, a screw-fixation marker holder was created to facilitate registration. With the use of a tracking and display system, we conducted the NS on an animal model and measured the system error on a rapid prototyping model. The NS assistance allowed for correct identification, as well as a safe and precise dissection of the perforator. The mean value of the system error was determined to be 3.474 ± 1.546 mm. Augmented reality-based NS can provide precise navigation information by directly displaying a 3-dimensional individual anatomical virtual model onto the operative field in real time. It will allow rapid identification and safe dissection of a perforator in free flap transplantation surgery.
plasmaFoam: An OpenFOAM framework for computational plasma physics and chemistry
NASA Astrophysics Data System (ADS)
Venkattraman, Ayyaswamy; Verma, Abhishek Kumar
2016-09-01
As emphasized in the 2012 Roadmap for low temperature plasmas (LTP), scientific computing has emerged as an essential tool for the investigation and prediction of the fundamental physical and chemical processes associated with these systems. While several in-house and commercial codes exist, with each having its own advantages and disadvantages, a common framework that can be developed by researchers from all over the world will likely accelerate the impact of computational studies on advances in low-temperature plasma physics and chemistry. In this regard, we present a finite volume computational toolbox to perform high-fidelity simulations of LTP systems. This framework, primarily based on the OpenFOAM solver suite, allows us to enhance our understanding of multiscale plasma phenomenon by performing massively parallel, three-dimensional simulations on unstructured meshes using well-established high performance computing tools that are widely used in the computational fluid dynamics community. In this talk, we will present preliminary results obtained using the OpenFOAM-based solver suite with benchmark three-dimensional simulations of microplasma devices including both dielectric and plasma regions. We will also discuss the future outlook for the solver suite.
Roadmap for cardiovascular circulation model.
Safaei, Soroush; Bradley, Christopher P; Suresh, Vinod; Mithraratne, Kumar; Muller, Alexandre; Ho, Harvey; Ladd, David; Hellevik, Leif R; Omholt, Stig W; Chase, J Geoffrey; Müller, Lucas O; Watanabe, Sansuke M; Blanco, Pablo J; de Bono, Bernard; Hunter, Peter J
2016-12-01
Computational models of many aspects of the mammalian cardiovascular circulation have been developed. Indeed, along with orthopaedics, this area of physiology is one that has attracted much interest from engineers, presumably because the equations governing blood flow in the vascular system are well understood and can be solved with well-established numerical techniques. Unfortunately, there have been only a few attempts to create a comprehensive public domain resource for cardiovascular researchers. In this paper we propose a roadmap for developing an open source cardiovascular circulation model. The model should be registered to the musculo-skeletal system. The computational infrastructure for the cardiovascular model should provide for near real-time computation of blood flow and pressure in all parts of the body. The model should deal with vascular beds in all tissues, and the computational infrastructure for the model should provide links into CellML models of cell function and tissue function. In this work we review the literature associated with 1D blood flow modelling in the cardiovascular system, discuss model encoding standards, software and a model repository. We then describe the coordinate systems used to define the vascular geometry, derive the equations and discuss the implementation of these coupled equations in the open source computational software OpenCMISS. Finally, some preliminary results are presented and plans outlined for the next steps in the development of the model, the computational software and the graphical user interface for accessing the model. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
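For orientation, 1D blood-flow models of the kind reviewed here are generally built on the area-averaged mass and momentum equations closed by an elastic tube law. The sketch below gives a common form of these equations; notation and closure terms vary between papers, so this should be read as the generic formulation rather than the paper's exact derivation.

```latex
% Common form of the 1D (area-averaged) blood-flow equations;
% notation and closure terms vary by paper.
\begin{align*}
  \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= 0, \\
  \frac{\partial Q}{\partial t}
    + \frac{\partial}{\partial x}\!\left(\alpha \frac{Q^2}{A}\right)
    + \frac{A}{\rho}\frac{\partial p}{\partial x}
    &= -\,K_R \frac{Q}{A}, \\
  p &= p_{\mathrm{ext}} + \beta\left(\sqrt{A} - \sqrt{A_0}\right),
\end{align*}
% A(x,t): lumen cross-sectional area; Q(x,t): volumetric flow rate;
% alpha: momentum-flux correction factor; K_R: viscous friction
% coefficient; the last relation is a typical elastic tube law.
```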
Computer-Assisted Instruction in the N.W.T.
ERIC Educational Resources Information Center
Garraway, Tom
For the past seven years, the Division of Educational Research Services at the University of Alberta has been operating an IBM 1500 CAI system. This paper describes demonstration projects set up in anticipation of the establishment of remote CAI in the North West Territories. These include a moon landing simulation program; a diagnostic program in…
Data management for Computer-Aided Engineering (CAE)
NASA Technical Reports Server (NTRS)
Bryant, W. A.; Smith, M. R.
1984-01-01
Analysis of data flow through the design and manufacturing processes has established specific information management requirements and identified unique problems. The application of data management technology to the engineering/manufacturing environment addresses these problems. An overview of the IPAD prototype data base management system, representing a partial solution to these problems, is presented here.
Technology Resource Teachers: Is This a New Role for Instructional Technologists?
ERIC Educational Resources Information Center
Moallem, Mahnaz; And Others
Public schools have created the position of the Technology Resource Teacher (TRT) in an attempt to establish a technical and instructional support system at the school level to assure the proper usage of technology (particularly computers) by both teachers and students. This study explores the roles and responsibilities of the Technology Resource…
ERIC Educational Resources Information Center
Ardiel, Evan L.; Giles, Andrew C.; Yu, Alex J.; Lindsay, Theodore H.; Lockery, Shawn R.; Rankin, Catharine H.
2016-01-01
Habituation is a highly conserved phenomenon that remains poorly understood at the molecular level. Invertebrate model systems, like "Caenorhabditis elegans," can be a powerful tool for investigating this fundamental process. Here we established a high-throughput learning assay that used real-time computer vision software for behavioral…
36 CFR § 1202.30 - How does NARA safeguard its systems of records?
Code of Federal Regulations, 2013 CFR
2013-07-01
... records are protected in accordance with the Computer Security Act, OMB Circular A-11 requiring privacy... appropriate administrative, technical, and physical safeguards are established to ensure the security and confidentiality of records. In order to protect against any threats or hazards to their security or loss of...
A Process for Evaluating Student Records Management Software. ERIC/AE Digest.
ERIC Educational Resources Information Center
Vecchioli, Lisa
This digest provides practical advice on evaluating software for managing student records. An evaluation of record-keeping software should start with a process to identify all of the individual needs the software product must meet in order to be considered for purchase. The first step toward establishing an administrative computing system is…
Bang, Magnus; Timpka, Toomas
2007-06-01
Co-located teams often use material objects to communicate messages in collaboration. Modern desktop computing systems with abstract graphical user interfaces (GUIs) fail to support this material dimension of inter-personal communication. The aim of this study is to investigate how tangible user interfaces can be used in computer systems to better support collaborative routines among co-located clinical teams. The semiotics of physical objects used in team collaboration was analyzed from data collected during 1 month of observations at an emergency room. The resulting set of communication patterns was used as a framework when designing an experimental system. Following the principles of augmented reality, physical objects were mapped into a physical user interface with the goal of maintaining the symbolic value of those objects. NOSTOS is an experimental ubiquitous computing environment that takes advantage of interaction devices integrated into the traditional clinical environment, including digital pens, walk-up displays, and a digital desk. The design uses familiar workplace tools to function as user interfaces to the computer in order to exploit established cognitive and collaborative routines. Paper-based tangible user interfaces and digital desks are promising technologies for co-located clinical teams. A key issue that needs to be solved before employing such solutions in practice is associated with limited feedback from the passive paper interfaces.
Wing Leading Edge Concepts for Noise Reduction
NASA Technical Reports Server (NTRS)
Shmilovich, Arvin; Yadlin, Yoram; Pitera, David M.
2010-01-01
This study focuses on the development of wing leading edge concepts for noise reduction during high-lift operations, without compromising landing stall speeds, stall characteristics or cruise performance. High-lift geometries, which can be obtained by conventional mechanical systems or morphing structures, have been considered. A systematic aerodynamic analysis procedure was used to arrive at several promising configurations. The aerodynamic design of new wing leading edge shapes is obtained from a robust Computational Fluid Dynamics procedure. Acoustic benefits are qualitatively established through the evaluation of the computed flow fields.
Learning control system design based on 2-D theory - An application to parallel link manipulator
NASA Technical Reports Server (NTRS)
Geng, Z.; Carroll, R. L.; Lee, J. D.; Haynes, L. H.
1990-01-01
An approach to iterative learning control system design based on two-dimensional system theory is presented. A two-dimensional model for the iterative learning control system which reveals the connections between learning control systems and two-dimensional system theory is established. A learning control algorithm is proposed, and the convergence of learning using this algorithm is guaranteed by two-dimensional stability. The learning algorithm is applied successfully to the trajectory tracking control problem for a parallel link robot manipulator. The excellent performance of this learning algorithm is demonstrated by the computer simulation results.
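For readers unfamiliar with iterative learning control, the proposed algorithm has the general flavor of the classical trial-to-trial update law (a hedged sketch with assumed notation; the paper's 2-D formulation is richer):

\[
u_{k+1}(t) = u_k(t) + \Gamma\, e_k(t+1),
\]

where \(k\) indexes the repetition of the trajectory, \(e_k\) is the tracking error on trial \(k\), and \(\Gamma\) is a learning gain; treating trial index and time as the two independent dimensions is what lets 2-D stability theory certify that \(e_k \to 0\).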
On decentralized control of large-scale systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1978-01-01
A scheme is presented for decentralized control of large-scale linear systems which are composed of a number of interconnected subsystems. By ignoring the interconnections, local feedback controls are chosen to optimize each decoupled subsystem. Conditions are provided to establish compatibility of the individual local controllers and achieve stability of the overall system. Besides computational simplifications, the scheme is attractive because of its structural features and the fact that it produces a robust decentralized regulator for large dynamic systems, which can tolerate a wide range of nonlinearities and perturbations among the subsystems.
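Schematically (standard notation assumed for illustration), the plant is a collection of \(N\) interconnected subsystems

\[
\dot{x}_i = A_i x_i + B_i u_i + \sum_{j \neq i} A_{ij} x_j, \qquad i = 1,\dots,N,
\]

local feedback \(u_i = -K_i x_i\) is designed by optimizing each decoupled pair \((A_i, B_i)\) with the interconnection terms ignored, and the compatibility conditions then bound the \(A_{ij}\) so that the closed-loop interconnected system remains stable.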
Liénard Equation and Its Generalizations
NASA Astrophysics Data System (ADS)
Giné, Jaume
2017-06-01
In this paper, we first present a survey of the known results on limit cycles and center conditions for Liénard differential systems. Next we propose a generalization of such systems and we study their center conditions and the number of small-amplitude limit cycles that can bifurcate from the origin. Computing the focal values and using Gröbner bases we find the center conditions for such systems up to a certain degree. We also establish a conjecture about the center conditions for such systems when they have arbitrary degree.
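For reference, the classical Liénard system referred to above can be written (in one common form) as

\[
\dot{x} = y - F(x), \qquad \dot{y} = -g(x),
\]

which is equivalent to the second-order equation \(\ddot{x} + f(x)\dot{x} + g(x) = 0\) with \(F(x) = \int_0^x f(s)\,ds\); the center conditions discussed in the abstract are obtained by requiring all focal (Lyapunov) values at the origin to vanish.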
Automated social skills training with audiovisual information.
Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi
2016-08-01
People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skill training when using audio features and audiovisual features. Results showed that the visual features were effective to improve users' social skills.
Laboratory and software applications for clinical trials: the global laboratory environment.
Briscoe, Chad
2011-11-01
The Applied Pharmaceutical Software Meeting is held annually. It is sponsored by The Boston Society, a not-for-profit organization that coordinates a series of meetings within the global pharmaceutical industry. The meeting generally focuses on laboratory applications, but in recent years has expanded to include some software applications for clinical trials. The 2011 meeting emphasized the global laboratory environment. Global clinical trials generate massive amounts of data in many locations that must be centralized and processed for efficient analysis. Thus, the meeting had a strong focus on establishing networks and systems for dealing with the computer infrastructure to support such environments. In addition to the globally installed laboratory information management system, electronic laboratory notebook and other traditional laboratory applications, cloud computing is quickly becoming the answer to provide efficient, inexpensive options for managing the large volumes of data and computing power, and thus it served as a central theme for the meeting.
Counterfactuals cannot count: a rejoinder to David Chalmers.
Bishop, Mark
2002-12-01
The initial argument presented herein is not significantly original--it is a simple reflection upon a notion of computation originally developed by Putnam (Putnam 1988; see also Searle, 1990) and criticised by Chalmers et al. (Chalmers, 1994; 1996a, b; see also the special issue, What is Computation?, in Minds and Machines, 4:4, November 1994). In what follows, instead of seeking to justify Putnam's conclusion that every open system implements every Finite State Automaton (FSA) and hence that psychological states of the brain cannot be functional states of a computer, I will establish the weaker result that, over a finite time window, every open system implements the trace of FSA Q as it executes program (P) on input (I). If correct, the resulting bold philosophical claim is that phenomenal states--such as feelings and visual experiences--can never be understood or explained functionally. Copyright 2002 Elsevier Science (USA)
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
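The decision-free, large-grained setting described above rests on the generic dataflow firing rule: a primitive operation executes as soon as a token is present on each of its input edges. The toy Python sketch below illustrates only that rule; it is an assumption-laden illustration, not the ATAMM model itself.

```python
# Toy dataflow graph: a node fires when every input queue holds a token.
from collections import deque

class Node:
    def __init__(self, name, op, inputs):
        self.name, self.op, self.inputs = name, op, inputs
        self.queues = {src: deque() for src in inputs}  # one queue per input edge

    def ready(self):
        return all(self.queues[src] for src in self.inputs)

    def fire(self):
        args = [self.queues[src].popleft() for src in self.inputs]
        return self.op(*args)

# Decision-free graph: c = a + b, then d = c * c
nodes = {'add': Node('add', lambda a, b: a + b, ['a', 'b']),
         'sq':  Node('sq',  lambda c: c * c,    ['add'])}
edges = {'add': ['sq']}                          # producer -> consumers

nodes['add'].queues['a'].append(3)               # inject source tokens
nodes['add'].queues['b'].append(4)
fired = True
while fired:                                     # fire ready nodes until quiescent
    fired = False
    for name, node in nodes.items():
        if node.ready():
            result = node.fire()
            print(name, '->', result)            # add -> 7, sq -> 49
            for consumer in edges.get(name, []):
                nodes[consumer].queues[name].append(result)
            fired = True
```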
Probabilistic Structural Analysis Theory Development
NASA Technical Reports Server (NTRS)
Burnside, O. H.
1985-01-01
The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and space shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive relative to the finite element approach.
Structural Analysis Made 'NESSUSary'
NASA Technical Reports Server (NTRS)
2005-01-01
Everywhere you look, chances are something that was designed and tested by a computer will be in plain view. Computers are now utilized to design and test just about everything imaginable, from automobiles and airplanes to bridges and boats, and elevators and escalators to streets and skyscrapers. Computer-design engineering first emerged in the 1970s, in the automobile and aerospace industries. Since computers were in their infancy, however, architects and engineers at the time were limited to producing only designs similar to hand-drafted drawings. (At the end of the 1970s, a typical computer-aided design system was a 16-bit minicomputer with a price tag of $125,000.) Eventually, computers became more affordable and related software became more sophisticated, offering designers the "bells and whistles" to go beyond the limits of basic drafting and rendering, and venture into more skillful applications. One of the major advancements was the ability to test the objects being designed for the probability of failure. This advancement was especially important for the aerospace industry, where complicated and expensive structures are designed. The ability to perform reliability and risk assessment without using extensive hardware testing is critical to design and certification. In 1984, NASA initiated the Probabilistic Structural Analysis Methods (PSAM) project at Glenn Research Center to develop analysis methods and computer programs for the probabilistic structural analysis of select engine components for current Space Shuttle and future space propulsion systems. NASA envisioned that these methods and computational tools would play a critical role in establishing increased system performance and durability, and assist in structural system qualification and certification. Not only was the PSAM project beneficial to aerospace, it paved the way for a commercial risk-probability tool that is evaluating risks in diverse, down-to-Earth applications.
Love, Erika; Butzin, Diane; Robinson, Robert E.; Lee, Soo
1971-01-01
A project to recatalog and reclassify the book collection of the Bowman Gray School of Medicine Library utilizing the Magnetic Tape/Selectric Typewriter system for simultaneous catalog card production and computer stored data acquisition marks the beginning of eventual computerization of all library operations. A keyboard optical display system will be added by late 1970. Major input operations requiring the creation of “hard copy” will continue via the MTST system. Updating, editing and retrieval operations as well as input without hard copy production will be done through the “on-line” keyboard optical display system. Once the library's first data bank, the book catalog, has been established the computer may be consulted directly for library holdings from any optical display terminal throughout the medical center. Three basic information retrieval operations may be carried out through “on-line” optical display terminals. Output options include the reproduction of part or all of a given document, or the generation of statistical data, which are derived from two Acquisition Code lines. The creation of a central bibliographic record of Bowman Gray Faculty publications patterned after the cataloging program is presently under way. The cataloging and computer storage of serial holdings records will begin after completion of the reclassification project. All acquisitions added to the collection since October 1967 are computer-stored and fully retrievable. Reclassification of older titles will be completed in early 1971. PMID:5542915
Lu, Jiao Yang; Zhang, Xin Xing; Huang, Wei Tao; Zhu, Qiu Yan; Ding, Xue Zhi; Xia, Li Qiu; Luo, Hong Qun; Li, Nian Bing
2017-09-19
The most serious and yet unsolved problems of molecular logic computing consist in how to connect molecular events in complex systems into a usable device with specific functions and how to selectively control branchy logic processes from the cascading logic systems. This report demonstrates how a Boolean logic tree is utilized to organize and connect "plug and play" chemical events (DNA, nanomaterials, organic dye, biomolecule, and denaturant) for developing the dual-signal electrochemical evolution aptasensor system with good resettability for amplification detection of thrombin, controllable and selectable three-state logic computation, and keypad lock security operation. The aptasensor system combines the merits of DNA-functionalized nanoamplification architecture and simple dual-signal electroactive dye brilliant cresyl blue for sensitive and selective detection of thrombin with a wide linear response range of 0.02-100 nM and a detection limit of 1.92 pM. By using these aforementioned chemical events as inputs and the differential pulse voltammetry current changes at different voltages as dual outputs, a resettable three-input biomolecular keypad lock based on sequential logic is established. Moreover, the first example of controllable and selectable three-state molecular logic computation with active-high and active-low logic functions can be implemented and allows the output ports to assume a high-impedance (Z) state in addition to the 0 and 1 logic levels, effectively controlling subsequent branchy logic computation processes. Our approach is helpful in developing the advanced controllable and selectable logic computing and sensing system in large-scale integration circuits for application in biomedical engineering, intelligent sensing, and control.
NASA Technical Reports Server (NTRS)
Sanz, J.; Pischel, K.; Hubler, D.
1992-01-01
An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message passing language in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled entirely by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatility, which permits use of whichever available computational resources are least heavily loaded at a given time.
ISCB: past-present perspective for the International Society for Computational Biology.
Rost, Burkhard
2014-01-01
Since its establishment in 1997, International Society for Computational Biology (ISCB) has contributed importantly toward advancing the understanding of living systems through computation. The ISCB represents nearly 3000 members working in >70 countries. It has doubled the number of members since 2007. At the same time, the number of meetings organized by the ISCB has increased from two in 2007 to eight in 2013, and the society has cemented many lasting alliances with regional societies and specialist groups. ISCB is ready to grow into a challenging and promising future. The progress over the past 7 years has resulted from the vision, and possibly more importantly, the passion and hard working dedication of many individuals.
ISCB: past-present perspective for the International Society for Computational Biology.
Rost, Burkhard
2013-12-15
Since its establishment in 1997, International Society for Computational Biology (ISCB) has contributed importantly toward advancing the understanding of living systems through computation. The ISCB represents nearly 3000 members working in >70 countries. It has doubled the number of members since 2007. At the same time, the number of meetings organized by the ISCB has increased from two in 2007 to eight in 2013, and the society has cemented many lasting alliances with regional societies and specialist groups. ISCB is ready to grow into a challenging and promising future. The progress over the past 7 years has resulted from the vision, and possibly more importantly, the passion and hard working dedication of many individuals.
Computational Study of the Genomic and Epigenomic Phenomena
NASA Astrophysics Data System (ADS)
Yang, Wenjing
Biological systems are perhaps the ultimate complex systems, uniquely capable of processing and communicating information, reproducing in their lifetimes, and adapting in evolutionary time scales. My dissertation research focuses on using computational approaches to understand the biocomplexity manifested in the multitude of length scales and time scales. At the molecular and cellular level, central to the complex behavior of a biological system is the regulatory network. My research study focused on epigenetics, which is essential for multicellular organisms to establish cellular identity during development or in response to intracellular and environmental stimuli. My computational study of epigenomics is greatly facilitated by recent advances in high-throughput sequencing technology, which enables high-resolution snapshots of epigenomes and transcriptomes. Using human CD4+ T cells as a model system, the dynamical changes in epigenome and transcriptome pertinent to T cell activation were investigated at the genome scale. Going beyond the traditional focus on transcriptional regulation, I provided evidence that post-transcriptional regulation may serve as a major component of the regulatory network. In addition, I explored alternative polyadenylation, another novel aspect of gene regulation, and how it cross-talks with the local chromatin structure. As the renowned theoretical biologist Theodosius Dobzhansky said eloquently, "Nothing in biology makes sense except in the light of evolution." To better understand this ubiquitous driving force in the biological world, I went beyond molecular events in a single organism, and investigated the dynamical changes of population structure along the evolutionary time scale. To this end, we used HIV virus population dynamics in the host immune system as a model system. The evolution of the HIV viral population plays a key role in AIDS immunopathogenesis, given its exceptionally high mutation rate. However, the theoretical studies of the effect of recombination have been rather limited. Given the phylogenetic and experimental evidence for the high recombination rate and its important role in HIV evolution and epidemics, I established a mathematical model to study the effect of recombination, and explored the complex behavior of this dynamical system.
Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2008-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085
Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2009-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.
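The simulator-independent description language discussed in these two versions of the paper is the PyNN-style API. The sketch below shows the central idea, that one experiment script can be retargeted by swapping a backend import; the module names and parameter values are illustrative assumptions, not the paper's exact setup.

```python
# One experiment description, many backends: swapping the import below for a
# hardware backend module retargets the same script at the neuromorphic chip.
import pyNN.nest as sim

sim.setup(timestep=0.1)
stim = sim.Population(1, sim.SpikeSourceArray(spike_times=[10.0, 30.0, 50.0]))
cell = sim.Population(1, sim.IF_cond_exp())      # standard leaky I&F model
sim.Projection(stim, cell, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.05))
cell.record('v')
sim.run(100.0)
vm = cell.get_data().segments[0].analogsignals[0]  # membrane potential trace
sim.end()
```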
Control mechanism of double-rotator-structure ternary optical computer
NASA Astrophysics Data System (ADS)
Kai, SONG; Liping, YAN
2017-03-01
The double-rotator-structure ternary optical processor (DRSTOP) has two defining characteristics, namely giant data-bit parallel computing and a reconfigurable processor: it can handle thousands of data bits in parallel and can run much faster than electronic computers and the other optical computing systems demonstrated so far. In order to put DRSTOP into practical application, this paper establishes a series of methods, namely a task classification method, a data-bits allocation method, a control information generation method, a control information formatting and sending method, and a method for obtaining decoded results. These methods form the control mechanism of DRSTOP, and this control mechanism makes DRSTOP an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the tension between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper describes a set of experiments on the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible and efficient.
GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-04-01
Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
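As a point of reference for what the GPU code parallelizes, the serial FTLE computation is sketched below in NumPy: advect a grid of initial conditions to obtain the flow map, differentiate it to form the Cauchy-Green tensor, and take the log of the square root of its largest eigenvalue. The double-gyre velocity field and all parameters here are stand-in assumptions for demonstration, not the paper's multi-body gravitational model.

```python
import numpy as np

def velocity(t, x, y, A=0.1, eps=0.25, om=2*np.pi/10):
    """Classic time-dependent double-gyre test flow (illustrative only)."""
    a = eps*np.sin(om*t); b = 1 - 2*a
    f = a*x**2 + b*x
    u = -np.pi*A*np.sin(np.pi*f)*np.cos(np.pi*y)
    v =  np.pi*A*np.cos(np.pi*f)*np.sin(np.pi*y)*(2*a*x + b)
    return u, v

def flow_map(x, y, t0=0.0, T=15.0, n=300):
    """Advect initial conditions with a midpoint (RK2) integrator."""
    dt = T/n
    for k in range(n):
        u1, v1 = velocity(t0 + k*dt, x, y)
        u2, v2 = velocity(t0 + (k + 0.5)*dt, x + 0.5*dt*u1, y + 0.5*dt*v1)
        x, y = x + dt*u2, y + dt*v2
    return x, y

T = 15.0
X, Y = np.meshgrid(np.linspace(0, 2, 201), np.linspace(0, 1, 101))
fx, fy = flow_map(X.copy(), Y.copy(), T=T)

# Flow-map gradient and Cauchy-Green tensor C via finite differences
dxdx = np.gradient(fx, axis=1)/np.gradient(X, axis=1)
dxdy = np.gradient(fx, axis=0)/np.gradient(Y, axis=0)
dydx = np.gradient(fy, axis=1)/np.gradient(X, axis=1)
dydy = np.gradient(fy, axis=0)/np.gradient(Y, axis=0)
C11, C12 = dxdx**2 + dydx**2, dxdx*dxdy + dydx*dydy
C22 = dxdy**2 + dydy**2
lam = 0.5*(C11 + C22 + np.sqrt((C11 - C22)**2 + 4*C12**2))  # max eigenvalue
ftle = np.log(np.sqrt(np.maximum(lam, 1e-12)))/abs(T)       # FTLE field
```

Every grid point is independent of the others, which is exactly the property that makes the block-decomposed CUDA implementation reported above scale so well.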
On the Use of Electrooculogram for Efficient Human Computer Interfaces
Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.
2010-01-01
The aim of this study is to show that electrooculogram (EOG) signals can be used efficiently for a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life for patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. We performed several experiments to compare the P300-based BCI speller and the new EOG-based system. A five-letter word can be written in 25 seconds on average with the new system, versus 105 seconds with the EEG-based device. Giving a message such as “clean-up” could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687
A study on spatial decision support systems for HIV/AIDS prevention based on COM GIS technology
NASA Astrophysics Data System (ADS)
Yang, Kun; Luo, Huasong; Peng, Shungyun; Xu, Quanli
2007-06-01
Based on an in-depth analysis of the current status and existing problems of GIS technology applications in epidemiology, this paper proposes a method and process for establishing spatial decision support systems for AIDS epidemic prevention by integrating COM GIS, Spatial Database, GPS, Remote Sensing, and Communication technologies, as well as ASP and ActiveX software development technologies. One of the most important issues in constructing such systems is how to integrate AIDS spreading models with GIS. The paper first describes the capabilities of GIS applications in AIDS epidemic prevention, then discusses several mature epidemic spreading models from which the computation parameters are extracted. Furthermore, a technical schema is proposed for integrating the AIDS spreading models with GIS and relevant geospatial technologies, in which the GIS and model running platforms share a common spatial database and the computing results can be spatially visualized on Desktop or Web GIS clients. Finally, a complete solution for establishing the decision support systems of AIDS epidemic prevention is offered based on the model integration methods and ESRI COM GIS software packages. The general decision support systems are composed of data acquisition sub-systems, network communication sub-systems, model integrating sub-systems, AIDS epidemic information spatial database sub-systems, AIDS epidemic information querying and statistical analysis sub-systems, AIDS epidemic dynamic surveillance sub-systems, AIDS epidemic information spatial analysis and decision support sub-systems, as well as AIDS epidemic information publishing sub-systems based on Web GIS.
Lu, Li; Liu, Shusheng; Shi, Shenggen; Yang, Jianzhong
2011-10-01
China-made 5-axis simultaneous contouring CNC machine tool and domestically developed industrial computer-aided manufacture (CAM) technology were used for full crown fabrication and measurement of crown accuracy, with an attempt to establish an open CAM system for dental processing and to promote the introduction of domestic dental computer-aided design (CAD)/CAM system. Commercially available scanning equipment was used to make a basic digital tooth model after preparation of crown, and CAD software that comes with the scanning device was employed to design the crown by using domestic industrial CAM software to process the crown data in order to generate a solid model for machining purpose, and then China-made 5-axis simultaneous contouring CNC machine tool was used to complete machining of the whole crown and the internal accuracy of the crown internal was measured by using 3D-MicroCT. The results showed that China-made 5-axis simultaneous contouring CNC machine tool in combination with domestic industrial CAM technology can be used for crown making and the crown was well positioned in die. The internal accuracy was successfully measured by using 3D-MicroCT. It is concluded that an open CAM system for dentistry on the basis of China-made 5-axis simultaneous contouring CNC machine tool and domestic industrial CAM software has been established, and development of the system will promote the introduction of domestically-produced dental CAD/CAM system.
Computational power and generative capacity of genetic systems.
Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E
2016-01-01
Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of sign and the linear nature of the signifier. Besides these semiotic features that are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises from the possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in the individual development as the spatiotemporal sequences of realizations of different textual meanings, as well as the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as a possibility to incorporate meta-statements about the system, and thus establishes the internal capacity for its evolution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Systems design and analysis of the microwave radiometer spacecraft
NASA Technical Reports Server (NTRS)
Garrett, L. B.
1981-01-01
Systems design and analysis data were generated for a microwave radiometer spacecraft concept using the Large Advanced Space Systems (LASS) computer-aided design and analysis program. Parametric analyses were conducted for perturbations off the nominal orbital altitude and antenna reflector size and for control/propulsion system options. Optimized spacecraft mass, structural element design, and on-orbit loading data are presented. Propulsion and rigid-body control system sensitivities to current and advanced technology are established. Spacecraft-induced and environmental effects on antenna performance (surface accuracy, defocus, and boresight offset) are quantified, and structural frequencies and modal shapes are defined.
A data analysis expert system for large established distributed databases
NASA Technical Reports Server (NTRS)
Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick
1987-01-01
A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural, easy-to-use, error-free database query language; user ability to alter the query language vocabulary and data analysis heuristics; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.
NASA Astrophysics Data System (ADS)
Lou, Yang
Photoacoustic computed tomography (PACT), also known as optoacoustic tomography (OAT), is an emerging imaging technique that has developed rapidly in recent years. The combination of the high optical contrast and the high acoustic resolution of this hybrid imaging technique makes it a promising candidate for human breast imaging, where conventional imaging techniques including X-ray mammography, B-mode ultrasound, and MRI suffer from low contrast, low specificity for certain breast types, and additional risks related to ionizing radiation. Though significant work has been done to push the frontier of PACT breast imaging, it is still challenging to successfully build a PACT breast imaging system and apply it to wide clinical use for various practical reasons. First, computer simulation studies are often conducted to guide imaging system designs, but the numerical phantoms employed in most previous works consist of simple geometries and do not reflect the true anatomical structures within the breast; the effectiveness of such simulation-guided PACT systems in clinical experiments will therefore be compromised. Second, it is challenging to design a system to simultaneously illuminate the entire breast with limited laser power. Some heuristic designs have been proposed where the illumination is non-stationary during the imaging procedure, but the impact of employing such a design has not been carefully studied. Third, current PACT imaging systems are often optimized with respect to physical measures such as resolution or signal-to-noise ratio (SNR). It would be desirable to establish an assessment framework in which the detectability of breast tumors can be directly quantified, so that the images produced by such optimized imaging systems are not only visually appealing but also most informative in terms of the tumor detection task. Fourth, when imaging a large three-dimensional (3D) object such as the breast, iterative reconstruction algorithms are often utilized to alleviate the need to collect densely sampled measurement data and hence a long scanning time. However, the heavy computation burden associated with iterative algorithms largely hinders their application in PACT breast imaging. This dissertation is dedicated to addressing these problems in PACT breast imaging. A method that generates anatomically realistic numerical breast phantoms is first proposed to facilitate computer simulation studies in PACT. The non-stationary illumination designs for PACT breast imaging are then systematically investigated in terms of their impact on reconstructed images. We then apply signal detection theory to assess different system designs to demonstrate how an objective, task-based measure can be established for PACT breast imaging. To address the slow computation time of iterative algorithms for PACT imaging, we propose an acceleration method that employs an approximated but much faster adjoint operator during iterations, which can reduce the computation time by a factor of six without significantly compromising image quality. Finally, some clinical results are presented to demonstrate that PACT breast imaging can resolve most major and fine vascular structures within the breast, along with some pathological biomarkers that may indicate tumor development.
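The adjoint-approximation strategy mentioned in this abstract can be summarized in one line (notation assumed for illustration): with forward operator \(A\), measured data \(y\), and a cheap approximation \(\tilde{A}^{*} \approx A^{*}\) of the exact adjoint, the gradient-type iteration becomes

\[
x_{k+1} = x_k - \gamma\, \tilde{A}^{*}\!\left(A x_k - y\right),
\]

trading a controlled perturbation of the descent direction for the roughly six-fold reduction in computation time reported above.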
An imaging system for PLIF/Mie measurements for a combusting flow
NASA Technical Reports Server (NTRS)
Wey, C. C.; Ghorashi, B.; Marek, C. J.; Wey, C.
1990-01-01
The equipment required to establish an imaging system can be divided into four parts: (1) the light source and beam shaping optics; (2) camera and recording; (3) image acquisition and processing; and (4) computer and output systems. A pulsed, Nd:YAG-pumped, frequency-doubled dye laser which can freeze motion in the flowfield is used as the illumination source. A set of lenses is used to form the laser beam into a sheet. The induced fluorescence is collected by a UV-enhanced lens and passes through a UV-enhanced microchannel plate intensifier which is optically coupled to a gated solid state CCD camera. The output of the camera is simultaneously displayed on a monitor and recorded on either a laser videodisc set or a Super VHS VCR. The videodisc set is controlled by a minicomputer via a connection to the RS-232C interface terminals. The imaging system is connected to the host computer by a bus repeater and can be multiplexed between four video input sources. Sample images from a planar shear layer experiment are presented to show the processing capability of the imaging system with the host computer.
Lee, Cheens; Robinson, Kerin M; Wendt, Kate; Williamson, Dianne
The unimpeded functioning of hospital Health Information Services (HIS) is essential for patient care, clinical governance, organisational performance measurement, funding and research. In an investigation of hospital Health Information Services' preparedness for internal disasters, all hospitals in the state of Victoria with the following characteristics were surveyed: they have a Health Information Service/ Department; there is a Manager of the Health Information Service/Department; and their inpatient capacity is greater than 80 beds. Fifty percent of the respondents have experienced an internal disaster within the past decade, the majority affecting the Health Information Service. The most commonly occurring internal disasters were computer system failure and floods. Two-thirds of the hospitals have internal disaster plans; the most frequently occurring scenarios provided for are computer system failure, power failure and fire. More large hospitals have established back-up systems than medium- and small-size hospitals. Fifty-three percent of hospitals have a recovery plan for internal disasters. Hospitals typically self-rate as having a 'medium' level of internal disaster preparedness. Overall, large hospitals are better prepared for internal disasters than medium and small hospitals, and preparation for disruption of computer systems and medical record services is relatively high on their agendas.
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.
2007-01-01
The angle measurement system (AMS) developed at NASA Langley Research Center (LaRC) is a system with many uses. It was originally developed to check taper fits in the wind tunnel model support system. The system was further developed to measure simultaneous pitch and roll angles using three orthogonally mounted accelerometers (3-axis). This 3-axis arrangement is used as a transfer standard from the calibration standard to the wind tunnel facility. It is generally used to establish model pitch and roll zero and performs the in-situ calibration on model attitude devices. The AMS originally used a laptop computer running DOS-based software but has recently been upgraded to operate in a Windows environment. Other improvements have also been made to the software to enhance its accuracy and add features. This paper will discuss the accuracy and calibration methodologies used in this system and some of the features that have contributed to its popularity.
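The principle behind a 3-axis accelerometer attitude measurement is that a static sensor reads only the gravity vector, from which pitch and roll follow by trigonometry. The snippet below is a minimal hedged sketch of that relationship, not LaRC's actual AMS code.

```python
import math

def pitch_roll(ax, ay, az):
    """Pitch and roll (degrees) from static accelerometer readings in g."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(pitch_roll(0.0, 0.0, 1.0))   # level sensor -> (0.0, 0.0)
```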
NASA Technical Reports Server (NTRS)
Pao, S. Paul; Deere, Karen A.; Abdol-Hamid, Khales S.
2011-01-01
Approaches were established for modeling the roll control system and analyzing the jet interactions of the activated roll control system on Ares-type configurations using the USM3D Navier-Stokes solver. Components of the modeling approach for the roll control system include a choice of turbulence models, basis for computing a dynamic equivalence of the real gas rocket exhaust flow in terms of an ideal gas, and techniques to evaluate roll control system performance for wind tunnel and flight conditions. A simplified Ares I-X configuration was used during the development phase of the roll control system modeling approach. A limited set of Navier-Stokes solutions was obtained for the purposes of this investigation and highlights of the results are included in this paper. The USM3D solutions were compared to equivalent solutions at select flow conditions from a real gas Navier- Stokes solver (Loci-CHEM) and a structured overset grid Navier-Stokes solver (OVERFLOW).
Schwartz, Christopher; Sarlette, Ralf; Weinmann, Michael; Rump, Martin; Klein, Reinhard
2014-04-28
Understanding as well as realistic reproduction of the appearance of materials play an important role in computer graphics, computer vision and industry. They enable applications such as digital material design, virtual prototyping and faithful virtual surrogates for entertainment, marketing, education or cultural heritage documentation. A particularly fruitful way to obtain the digital appearance is the acquisition of reflectance from real-world material samples. Therefore, a great variety of devices to perform this task has been proposed. In this work, we investigate their practical usefulness. We first identify a set of necessary attributes and establish a general categorization of different designs that have been realized. Subsequently, we provide an in-depth discussion of three particular implementations by our work group, demonstrating advantages and disadvantages of different system designs with respect to the previously established attributes. Finally, we survey the existing literature to compare our implementation with related approaches.
Supersonic nonlinear potential analysis
NASA Technical Reports Server (NTRS)
Siclari, M. J.
1984-01-01
The NCOREL computer code was established to compute supersonic flow fields about wings and bodies. The method employs an implicit finite-difference transonic relaxation scheme to solve the full potential equation in a spherical coordinate system. Two basic topics were studied to broaden the applicability and usefulness of the method encompassed within NCOREL for the treatment of supersonic flow problems. The first topic is computing efficiency. Accelerated schemes are in use for transonic flow problems; one such scheme is the approximate factorization (AF) method, and an AF scheme is developed here for the supersonic flow problem. The second topic is the computation of wake flows. The proper modeling of wake flows is important for multicomponent configurations such as wing-body combinations and multiple lifting surfaces, where the wake of one lifting surface has a pronounced effect on a downstream body or other lifting surfaces.
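For context, the full potential equation that NCOREL relaxes is, in conservative form and with velocity and density nondimensionalized by their freestream values (a standard sketch):

\[
\nabla \cdot \left(\rho\, \nabla\Phi\right) = 0, \qquad
\rho = \left[1 + \frac{\gamma - 1}{2} M_\infty^2 \left(1 - |\nabla\Phi|^2\right)\right]^{1/(\gamma - 1)},
\]

solved here in spherical coordinates by implicit finite-difference relaxation.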
Generalized simulation technique for turbojet engine system analysis
NASA Technical Reports Server (NTRS)
Seldner, K.; Mihaloew, J. R.; Blaha, R. J.
1972-01-01
A nonlinear analog simulation of a turbojet engine was developed. The purpose of the study was to establish simulation techniques applicable to propulsion system dynamics and controls research. A schematic model was derived from a physical description of a J85-13 turbojet engine. Basic conservation equations were applied to each component along with their individual performance characteristics to derive a mathematical representation. The simulation was mechanized on an analog computer. The simulation was verified in both steady-state and dynamic modes by comparing analytical results with experimental data obtained from tests performed at the Lewis Research Center with a J85-13 engine. In addition, comparison was also made with performance data obtained from the engine manufacturer. The comparisons established the validity of the simulation technique.
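A representative example of the component-level conservation equations used in such engine simulations (an illustrative sketch, not the paper's exact J85-13 model) is the intercomponent-volume pressure dynamics

\[
\frac{dP}{dt} = \frac{\gamma R T}{V}\left(\dot{m}_{\mathrm{in}} - \dot{m}_{\mathrm{out}}\right),
\]

which couples the mass flows delivered by the performance maps of adjacent components and is straightforward to mechanize on an analog computer.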
BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing
NASA Astrophysics Data System (ADS)
Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie
Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth in users, services, education contents and resources, E-Learning systems are facing challenges in optimizing resource allocation, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing is presented, namely the BlueSky cloud framework. In particular, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the framework combines traditional middleware functions (such as load balancing and data caching) to serve E-Learning systems as a general architecture. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework addresses the challenges faced by E-Learning and improves the performance, availability and scalability of E-Learning systems.
M/A-COM linkabit eastern operations
NASA Astrophysics Data System (ADS)
Mills, D. L.; Avramovic, Z.
1983-03-01
This first Quarterly Project Report on LINKABIT's contribution to the Defense Advanced Research Projects Agency (DARPA) Internet Program covers the period from 22 December 1982 through 21 March 1983. LINKABIT's support of the Internet Program is concentrated in the areas of protocol design, implementation, testing, and evaluation. In addition, LINKABIT staff are providing integration and support services for certain computer systems to be installed at DARPA sites in Washington, D.C., and Stuttgart, West Germany. During the period covered by this report, LINKABIT organized the project activities and established staff responsibilities. Several computers and peripheral devices were made available from Government sources for use in protocol development and network testing. Considerable time was devoted to installing this equipment, integrating the software, and testing it with the Internet system.
Effectiveness evaluation of STOL transport operations
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bruckner, J. M. H.; Drago, V. J.; Brown, R. A.; Rea, F. G.; Porter, R. F.
1973-01-01
A short-takeoff and landing (STOL) systems simulation model has been developed and implemented in a computer code (known as STOL OPS) which permits evaluation of the operation of a STOL aircraft and its avionics in a commercial airline operating environment. STOL OPS concentrated on the avionics functions of navigation, guidance, control, communication, hazard avoidance, and systems management. External world factors influencing the operation of the STOL aircraft include each airport and its geometry, air traffic at each airport, air traffic control equipment and procedures, weather (including winds and visibility), and the flight path between each airport served by the route. The development of the STOL OPS program provides NASA with a set of computer programs which can be used for detailed analysis of a STOL aircraft and its avionics and permits establishment of system requirements as a function of airline mission performance goals.
A small Unix-based data acquisition system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engberg, D.; Glanzman, T.
1994-02-01
The proposed SLAC B Factory detector plans to use Unix-based machines for all aspects of computing, including real-time data acquisition and experimental control. An R and D program has been established to investigate the use of Unix in the various aspects of experimental computation. Earlier R and D work investigated the basic real-time aspects of the IBM RS/6000 workstation running AIX, which claims to be a real-time operating system. The next step in this R and D is the construction of a prototype data acquisition system which attempts to exercise many of the features needed in the final on-line system in a realistic situation. For this project, the authors have combined efforts with a team studying the use of novel cell designs and gas mixtures in a new prototype drift chamber.
The Tacitness of Tacitus. A Methodological Approach to European Thought. No. 46.
ERIC Educational Resources Information Center
Bierschenk, Bernhard
This study addressed the analysis of verbal flows by means of volume-elasticity measures, and the analysis of information-flow structures and their representation in the form of a metaphysical cube. A special purpose system of computer programs (PERTEX) was used to establish the language space in which the textual flow patterns occurred containing…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troy Hiltbrand; Daniel Jones
As we look at the cyber security ecosystem, are we planning to fight the battle as we did yesterday, with firewalls and intrusion detection systems (IDS), or are we sensing a change in how security is evolving and planning accordingly? With the technology enablement and possible financial benefits of cloud computing, the traditional tools for establishing and maintaining our cyber security ecosystems are being dramatically altered.
Use of a Colony of Cooperating Agents and MAPLE To Solve the Traveling Salesman Problem.
ERIC Educational Resources Information Center
Guerrieri, Bruno
This paper reviews an approach for finding optimal solutions to the traveling salesman problem, a well-known problem in combinatorial optimization, and describes implementing the approach using the MAPLE computer algebra system. The method employed in this approach to the problem is similar to the way ant colonies manage to establish shortest…
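The ant-colony idea is easy to state: each ant builds a tour by preferring short edges with high pheromone, pheromone evaporates everywhere, and completed tours deposit pheromone in proportion to their quality. The Python sketch below illustrates that scheme under assumed parameter values; the paper's own implementation is in MAPLE.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i+1) % len(tour)]] for i in range(len(tour)))

def ant_colony_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, Q=1.0):
    n = len(dist)
    tau = [[1.0]*n for _ in range(n)]              # pheromone on each edge
    best_tour, best_len = None, float('inf')
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:                       # choose edge: pheromone^a * (1/d)^b
                i = tour[-1]
                cities = list(unvisited)
                w = [tau[i][j]**alpha * (1.0/dist[i][j])**beta for j in cities]
                nxt = random.choices(cities, w)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        for row in tau:                            # evaporation
            for j in range(n):
                row[j] *= (1 - rho)
        for tour in tours:                         # deposit, weighted by tour quality
            L = tour_length(tour, dist)
            if L < best_len:
                best_tour, best_len = tour, L
            for k in range(n):
                i, j = tour[k], tour[(k+1) % n]
                tau[i][j] += Q / L
                tau[j][i] += Q / L
    return best_tour, best_len
```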
NASA Astrophysics Data System (ADS)
Bjørner, Dines
Before software can be designed we must know its requirements. Before requirements can be expressed we must understand the domain. So it follows, from our dogma, that we must first establish precise descriptions of domains; then, from such descriptions, “derive” at least domain and interface requirements; and from those and machine requirements design the software, or, more generally, the computing systems.
Future in biomolecular computation
NASA Astrophysics Data System (ADS)
Wimmer, E.
1988-01-01
Large-scale computations for biomolecules are dominated by three levels of theory: rigorous quantum mechanical calculations for molecules with up to about 30 atoms, semi-empirical quantum mechanical calculations for systems with up to several hundred atoms, and force-field molecular dynamics studies of biomacromolecules with 10,000 atoms and more, including surrounding solvent molecules. It can be anticipated that increased computational power will allow the treatment of larger systems of ever growing complexity. Due to the scaling of the computational requirements with increasing number of atoms, the force-field approaches will benefit the most from increased computational power. On the other hand, progress in methodologies such as density functional theory will enable us to treat larger systems on a fully quantum mechanical level, and a combination of molecular dynamics and quantum mechanics can be envisioned. One of the greatest challenges in biomolecular computation is the protein folding problem. It is unclear at this point whether an approach with current methodologies will lead to a satisfactory answer or if unconventional, new approaches will be necessary. In any event, due to the complexity of biomolecular systems, a hierarchy of approaches will have to be established and used in order to capture the wide ranges of length-scales and time-scales involved in biological processes. In terms of hardware development, the speed and power of computers will increase while the price/performance ratio becomes more and more favorable. Parallelism can be anticipated to become an integral architectural feature in a range of computers. It is unclear at this point how quickly massively parallel systems will become easy enough to use that new methodological developments can be pursued on such computers. Current trends show that distributed processing, such as the combination of convenient graphics workstations and powerful general-purpose supercomputers, will lead to a new style of computing in which the calculations are monitored and manipulated as they proceed. The combination of a numeric approach with artificial-intelligence approaches can be expected to open up entirely new possibilities. Ultimately, the most exciting aspect of the future in biomolecular computing will be the unexpected discoveries.
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in common LISP using Flavors.
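The object-oriented fault tree plus backward chaining described above can be miniaturized as follows. The Python sketch is an illustration of the general idea under assumed event names, not the NASA diagnosis system itself: starting from an observed top event, it chains backward through AND/OR gates to enumerate sets of basic failure events consistent with the failures known not to have occurred.

```python
class Event:
    def __init__(self, name, gate=None, children=()):
        self.name, self.gate, self.children = name, gate, list(children)

    def candidate_causes(self, ruled_out):
        """Sets of basic failure events that could explain this event."""
        if self.name in ruled_out:
            return []                         # observed NOT to have occurred
        if not self.children:
            return [{self.name}]              # basic failure event
        if self.gate == 'OR':                 # any one child suffices
            return [c for ch in self.children
                      for c in ch.candidate_causes(ruled_out)]
        combos = [set()]                      # AND: every child must be explained
        for ch in self.children:
            opts = ch.candidate_causes(ruled_out)
            if not opts:
                return []                     # an AND child is ruled out
            combos = [c | o for c in combos for o in opts]
        return combos

pump, valve, power = Event('pump_failure'), Event('valve_stuck'), Event('power_loss')
cooling = Event('cooling_loss', 'OR', [pump, valve])
top = Event('overheat', 'AND', [cooling, power])
print(top.candidate_causes(ruled_out={'valve_stuck'}))
# e.g. [{'pump_failure', 'power_loss'}]
```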
Systems Research and Development Service Report of R&D Activity,
1980-05-01
failure and with the ARTCC and ARTS computers via a patch panel. When the ATC system is ... retain the data on the displays. The most critical position is ... delays, if any, and provides the necessary commands to satisfy the established schedules in the most efficient manner. M&S also performs ... the most unpredictable segment, since current ATC practices differ widely and are dependent upon load (Large: 12,500 to 300,000 lbs.; Heavy: more ...).
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
Vehicle Integrated Prognostic Reasoner (VIPR) Final Report
NASA Technical Reports Server (NTRS)
Bharadwaj, Raj; Mylaraswamy, Dinkar; Cornhill, Dennis; Biswas, Gautam; Koutsoukos, Xenofon; Mack, Daniel
2013-01-01
A systems view is necessary to detect, diagnose, predict, and mitigate adverse events during the flight of an aircraft. While most aircraft subsystems look for simple threshold exceedances and report them to a central maintenance computer, the vehicle integrated prognostic reasoner (VIPR) proactively generates evidence and takes an active role in aircraft-level health assessment. Establishing the technical feasibility and a design trade-space for this next-generation vehicle-level reasoning system (VLRS) is the focus of our work.
NASA Astrophysics Data System (ADS)
Rakowsky, N.; Harig, S.; Androsov, A.; Fuchs, A.; Immerz, A.; Schröter, J.; Hiller, W.
2012-04-01
Starting in 2005, the GITEWS project (German-Indonesian Tsunami Early Warning System) established from scratch a fully operational tsunami warning system at BMKG in Jakarta. Numerical simulations of prototypic tsunami scenarios play a decisive role in a priori risk assessment for coastal regions and in the early warning process itself. Repositories currently holding 3470 regional tsunami scenarios for GITEWS and 1780 Indian-Ocean-wide scenarios in support of Indonesia as a Regional Tsunami Service Provider (RTSP) were computed with the non-linear shallow water model TsunAWI. The model is based on a finite element discretisation, employs unstructured grids with high resolution along the coast, and includes inundation. This contribution gives an overview of the model itself, the enhancement of the model physics, and the experience gained in establishing an operational code suited for thousands of model runs. Technical aspects like computation time, the disk space needed for each scenario in the repository, and post-processing techniques have a much larger impact than they had in the beginning, when TsunAWI started as a research code. Of course, careful testing on artificial benchmarks and real events remains essential; furthermore, quality control for the large number of scenarios becomes an important issue.
NASA Astrophysics Data System (ADS)
Li, J.; Xiong, L. Y.; Peng, N.; Dong, B.; Wang, P.; Liu, L. Q.
2014-01-01
An experimental platform for cryogenic helium gas bearing turbo-expanders has been established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. The platform is designed for performance testing and experimental research on helium turbo-expanders of different sizes, from liquid hydrogen temperature to the room temperature region. A measurement and control system based on the Siemens PLC S7-300 was developed for this platform. Appropriate sensors measure parameters such as temperature, pressure, rotation speed, and air flow rate. All collected data are transformed and transmitted to the S7-300 CPU. A Siemens S7-300 series PLC CPU315-2PN/DP serves as the master station, and two sets of ET200M DP remote expansion I/O serve as slave stations. Profibus-DP field communication is established between the master station and the slave stations. The upper-computer Human Machine Interface (HMI) was built with the Siemens configuration software WinCC V6.2, and the upper computer communicates with the PLC over industrial Ethernet. Centralized monitoring with distributed control is thereby achieved. Experimental results show that this measurement and control system fulfills the test requirements of the turbo-expander experimental platform.
Cloud-based hospital information system as a service for grassroots healthcare institutions.
Yao, Qin; Han, Xiong; Ma, Xi-Kun; Xue, Yi-Feng; Chen, Yi-Jun; Li, Jing-Song
2014-09-01
Grassroots healthcare institutions (GHIs) are the smallest administrative levels of medical institutions, where most patients access health services. The latest report from the National Bureau of Statistics of China showed that 96.04 % of the 950,297 medical institutions in China were at the grassroots level in 2012, including county-level hospitals, township central hospitals, community health service centers, and rural clinics. In developing countries, these institutions face challenges involving a shortage of funds and talent, inconsistent medical standards, inefficient information sharing, and difficulties in management during the adoption of health information technologies (HIT). Because of the necessity and gravity of these issues for GHIs, our aim is to provide hospital information services for GHIs using Cloud computing technologies and service modes. In this medical scenario, the computing resources are pooled by means of a Cloud-based Virtual Desktop Infrastructure (VDI) to serve multiple GHIs, with different hospital information systems dynamically assigned and reassigned according to demand. This paper is concerned with establishing a Cloud-based Hospital Information Service Center to provide hospital information software as a service (HI-SaaS) with the aim of providing GHIs with an attractive and high-performance medical information service. Compared with individually establishing all hospital information systems, this approach is more cost-effective and affordable for GHIs and does not compromise HIT performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J.; Xiong, L. Y.; Peng, N.
2014-01-29
An experimental platform for cryogenic helium gas bearing turbo-expanders has been established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. The platform is designed for performance testing and experimental research on helium turbo-expanders of different sizes, from liquid hydrogen temperature to the room temperature region. A measurement and control system based on the Siemens PLC S7-300 was developed for this platform. Appropriate sensors measure parameters such as temperature, pressure, rotation speed, and air flow rate. All collected data are transformed and transmitted to the S7-300 CPU. A Siemens S7-300 series PLC CPU315-2PN/DP serves as the master station, and two sets of ET200M DP remote expansion I/O serve as slave stations. Profibus-DP field communication is established between the master station and the slave stations. The upper-computer Human Machine Interface (HMI) was built with the Siemens configuration software WinCC V6.2, and the upper computer communicates with the PLC over industrial Ethernet. Centralized monitoring with distributed control is thereby achieved. Experimental results show that this measurement and control system fulfills the test requirements of the turbo-expander experimental platform.
Runtime optimization of an application executing on a parallel computer
None
2014-11-25
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
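This record and the two that follow share one claimed method; its decision logic reads naturally as the following Python sketch, where `tuning_session` and `execute` are hypothetical stand-ins rather than an API named in the patent:

    # Hypothetical rendering of the claim's control flow.
    from contextlib import contextmanager

    @contextmanager
    def tuning_session(op, call_site):
        # Stand-in for establishing and tearing down a tuning session.
        print(f"tuning session for {op} at call site {call_site:#x}")
        yield

    def execute(op):
        print(f"executing {op}")

    def run_collective(op, call_site, root_based, same_call_site_on_all_nodes):
        if not root_based:                       # non-root-based: always tune
            with tuning_session(op, call_site):
                return execute(op)
        if same_call_site_on_all_nodes:          # root-based: tune only if every
            with tuning_session(op, call_site):  # compute node saw the same site
                return execute(op)
        return execute(op)                       # call sites disagree: run untuned

    run_collective("allreduce", call_site=0x4A2C, root_based=True,
                   same_call_site_on_all_nodes=False)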
Runtime optimization of an application executing on a parallel computer
Faraj, Daniel A; Smith, Brian E
2014-11-18
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
Runtime optimization of an application executing on a parallel computer
Faraj, Daniel A.; Smith, Brian E.
2013-01-29
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
Classical command of quantum systems.
Reichardt, Ben W; Unger, Falk; Vazirani, Umesh
2013-04-25
Quantum computation and cryptography both involve scenarios in which a user interacts with an imperfectly modelled or 'untrusted' system. It is therefore of fundamental and practical interest to devise tests that reveal whether the system is behaving as instructed. In 1969, Clauser, Horne, Shimony and Holt proposed an experimental test that can be passed by a quantum-mechanical system but not by a system restricted to classical physics. Here we extend this test to enable the characterization of a large quantum system. We describe a scheme that can be used to determine the initial state and to classically command the system to evolve according to desired dynamics. The bipartite system is treated as two black boxes, with no assumptions about their inner workings except that they obey quantum physics. The scheme works even if the system is explicitly designed to undermine it; any misbehaviour is detected. Among its applications, our scheme makes it possible to test whether a claimed quantum computer is truly quantum. It also advances towards a goal of quantum cryptography: namely, the use of 'untrusted' devices to establish a shared random key, with security based on the validity of quantum physics.
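The 1969 experiment referenced above is the CHSH test; its standard statement (a well-known result, given here for context rather than quoted from the abstract) is

    \[
      S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad
      |S| \le 2 \;\text{(classical)}, \qquad
      |S| \le 2\sqrt{2} \;\text{(quantum, Tsirelson's bound)}.
    \]

An observed value above 2 certifies non-classical behaviour of the two black boxes, which is the lever the scheme extends to command a large quantum system.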
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
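The modularization idea above leans on the standard composition rules for independent modules; for reference (textbook formulas, not specific to REST):

    \[
      R_{\text{series}} = \prod_{i=1}^{n} R_i, \qquad
      R_{\text{parallel}} = 1 - \prod_{i=1}^{n} \bigl(1 - R_i\bigr).
    \]

One reason for RML's failure-mode effects simulation is that real components are rarely independent, so closed forms like these are only a first approximation.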
Multiphasic Health Testing in the Clinic Setting
LaDou, Joseph
1971-01-01
The economy of automated multiphasic health testing (AMHT) activities patterned after the high-volume Kaiser program can be realized in low-volume settings. AMHT units have been operated at daily volumes of 20 patients in three separate clinical environments. These programs have displayed economics entirely compatible with cost figures published by the established high-volume centers. This experience, plus the expanding capability of small, general-purpose digital computers (minicomputers), indicates that a group of six or more physicians generating 20 laboratory appraisals per day can economically justify a completely automated multiphasic health testing facility. Such a system would reside in the clinic or hospital where it is used and can be configured to perform analyses such as electrocardiography, generate laboratory reports, and communicate with large computer systems in university medical centers. Experience indicates that the most effective means of implementing these benefits of automation is to make them directly available to the medical community with the physician playing the central role. Economic justification of a dedicated computer through low-volume health testing then allows, as a side benefit, automation of administrative as well as other diagnostic activities—for example, patient billing, computer-aided diagnosis, and computer-aided therapeutics. PMID:4935771
NASA Astrophysics Data System (ADS)
Li, Yan; Li, Lin; Huang, Yi-Fan; Du, Bao-Lin
2009-07-01
This paper analyses the dynamic residual aberrations of a conformal optical system and introduces adaptive optics (AO) correction technology to the system. An image-sharpening AO system is chosen as the correction scheme. Communication between MATLAB and Code V is established via the ActiveX technique in the computer simulation. The SPGD algorithm is run at seven zoom positions to calculate the optimized surface shape of the deformable mirror. After comparing the performance of the corrected system with the baseline system, AO technology is shown to be an effective way of correcting the dynamic residual aberrations in conformal optical design.
Activities of the Research Institute for Advanced Computer Science
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1994-01-01
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.
Methodology for extracting local constants from petroleum cracking flows
Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.
2000-01-01
A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetics computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
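Step (4) is, in effect, a least-squares fit of the local kinetic constants; the sketch below assumes a cheap stand-in yield model in place of the coupled CFD/kinetics code, and all names and numbers are illustrative rather than taken from the patent:

    # Sketch of step (4): adjust local kinetic constants until calculated
    # product yields match measured ones. `predicted_yields` is a cheap
    # stand-in for the coupled CFD + kinetics computation.
    import numpy as np
    from scipy.optimize import least_squares

    def predicted_yields(k, residence_times):
        # Hypothetical first-order lumped model: yield = 1 - exp(-k * tau).
        return 1.0 - np.exp(-k[0] * residence_times)

    residence_times = np.array([1.0, 2.0, 4.0])   # illustrative test matrix
    measured = np.array([0.30, 0.51, 0.76])       # illustrative yields

    fit = least_squares(
        lambda k: predicted_yields(k, residence_times) - measured,
        x0=[0.1],                                 # initial guess for the constant
        bounds=(0.0, np.inf),
    )
    print("fitted local kinetic constant:", fit.x[0])

Step (5) then repeats the comparison on held-out operating conditions to validate the fitted constants.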
Outline of Toshiba Business Information Center
NASA Astrophysics Data System (ADS)
Nagata, Yoshihiro
Toshiba Business Information Center gathers and stores in-house and external business information used in common within the Toshiba Corp., and provides companywide circulation, reference, and other services. The Center established a centralized information management system employing decentralized computers, electronic filing apparatus (30 cm laser discs), and other office automation equipment. Online retrieval through the LAN is available to search the stored documents, and the increasing number of copying requests is processed by the electronic file. This paper describes the purpose of establishing the Center, its facilities and management scheme, the systematization of the files, and the present situation and plans of each information service.
NASA Astrophysics Data System (ADS)
Weidinger, Simon A.; Knap, Michael
2017-04-01
We study the regimes of heating in the periodically driven O(N)-model, which is a well-established model for interacting quantum many-body systems. By computing the absorbed energy with a non-equilibrium Keldysh Green’s function approach, we establish three dynamical regimes: at short times a single-particle dominated regime, at intermediate times a stable Floquet prethermal regime in which the system ceases to absorb, and at parametrically late times a thermalizing regime. Our simulations suggest that in the thermalizing regime the absorbed energy grows algebraically in time with an exponent that approaches the universal value of 1/2, and is thus significantly slower than linear Joule heating. Our results demonstrate the parametric stability of prethermal states in a many-body system driven at frequencies that are comparable to its microscopic scales. This paves the way for realizing exotic quantum phases, such as time crystals or interacting topological phases, in the prethermal regime of interacting Floquet systems.
Epstein-Barr virus: a paradigm for persistent infection - for real and in virtual reality.
Thorley-Lawson, David A; Duca, Karen A; Shapiro, Michael
2008-04-01
The really interesting thing about herpesviruses is that they can establish lifelong persistent infections in immunocompetent hosts. At first glance, they would seem to have very different ways of doing this. Here we will use as a model our current understanding of how the human herpesvirus Epstein-Barr virus establishes and maintains such an infection. We apply information from a wide range of sources including laboratory experimentation, clinical observation, animal models and a new computer simulation. We propose that the detailed mechanisms for establishing infection are dependent on the virus and tissues involved, but the strategy is the same: to persist in a long-lived cell type where the virus is invisible to the immune system and nonpathogenic.
Small Unix data acquisition system
NASA Astrophysics Data System (ADS)
Engberg, D.; Glanzman, T.
1994-02-01
An R&D program has been established to investigate the use of Unix in the various aspects of experimental computation. Earlier R&D work investigated the basic real-time aspects of the IBM RS/6000 workstation running AIX, which claims to be a real-time operating system. The next step in this R&D is the construction of a prototype data acquisition system which attempts to exercise many of the features needed in the final on-line system in a realistic situation. For this project, we have combined efforts with a team studying the use of novel cell designs and gas mixtures in a new prototype drift chamber.
Inflight redesign of the IUE attitude control system
NASA Technical Reports Server (NTRS)
Femiano, M. D.
1986-01-01
The one- and two-gyro system designs of the International Ultraviolet Explorer (IUE) attitude control system (ACS) are examined. The inertial reference assembly that provides the primary attitude reference for IUE consists of six rate sensors which are single-axis rate integrating gyros. The gyros operate in a pulse rebalanced mode that produces an output pulse for 0.01 arcsec of motion about the input axis. The functions of the fine error sensor, fine sun sensor (FSS), the IUE reaction wheels, the onboard computer, and the hold/slew algorithm are described. The use of the hold/slew algorithm to compute the control voltage for the ACS based on the Kalman filter is studied. A two-gyro system was incorporated into IUE following gyro failure. The procedures for establishing attitude control with the two-gyro design based on the FSS is analyzed. The performance of the two-gyro system is evaluated; it is observed that the pitch and yaw gyro control is 0.24 arcsec and the control is sufficient to permit extended periods of observation.
Sumner, T; Shephard, E; Bogle, I D L
2012-09-07
One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes and a number of key features of the system are identified.
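A minimal sketch of the described combination, reducing each simulated time course to a functional-PCA score and then applying variance-based global SA to the scores; SALib and scikit-learn stand in for the authors' tooling, and the toy model is not the insulin signalling pathway:

    # Reduce each simulated time course to a dominant functional-PCA score,
    # then compute variance-based (Sobol) sensitivity indices on the scores.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol
    from sklearn.decomposition import PCA

    problem = {
        "num_vars": 3,
        "names": ["k1", "k2", "k3"],
        "bounds": [[0.1, 1.0]] * 3,
    }
    t = np.linspace(0.0, 10.0, 50)

    def simulate(k1, k2, k3):
        # Stand-in for an expensive pathway simulation returning a time course.
        return k1 * np.exp(-k2 * t) + k3 * t / (1.0 + t)

    X = saltelli.sample(problem, 512)              # Saltelli design over parameters
    curves = np.array([simulate(*x) for x in X])   # one time course per sample
    score = PCA(n_components=1).fit_transform(curves).ravel()
    Si = sobol.analyze(problem, score)             # Sobol indices on the scores
    print(dict(zip(problem["names"], np.round(Si["S1"], 3))))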
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2013 CFR
2013-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2014 CFR
2014-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2012 CFR
2012-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
The finite element method in low speed aerodynamics
NASA Technical Reports Server (NTRS)
Baker, A. J.; Manhardt, P. D.
1975-01-01
The finite element procedure is shown to be of significant impact in the design of the 'computational wind tunnel' for low speed aerodynamics. The uniformity of the mathematical differential equation description, for viscous and/or inviscid, multi-dimensional subsonic flows about practical aerodynamic system configurations, is utilized to establish the general form of the finite element algorithm. Numerical results for inviscid flow analysis, as well as viscous boundary layer, parabolic, and full Navier-Stokes flow descriptions, verify the capabilities and overall versatility of the fundamental algorithm for aerodynamics. The proven mathematical basis, coupled with the distinct user-orientation features of the computer program embodiment, indicates near-term evolution of a highly useful analytical design tool to support computational configuration studies in low speed aerodynamics.
NASA Astrophysics Data System (ADS)
Govoni, Marco; Galli, Giulia
Green's function based many-body perturbation theory (MBPT) methods are well-established approaches to compute quasiparticle energies and electronic lifetimes. However, their application to large systems, for instance heterogeneous, nanostructured, disordered, and defective materials, has been hindered by high computational costs. We will discuss recent MBPT methodological developments leading to an efficient formulation of electron-electron and electron-phonon interactions that can be applied to systems with thousands of electrons. Results using a formulation that requires neither the explicit calculation of virtual states nor the storage and inversion of large dielectric matrices will be presented. We will discuss data collections obtained using the WEST code, the advantages of the algorithms used in WEST over standard techniques, and the parallel performance. Work done in collaboration with I. Hamada, R. McAvoy, P. Scherpelz, and H. Zheng. This work was supported by MICCoM, as part of the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division and by ANL.
Cognitive control predicts use of model-based reinforcement learning.
Otto, A Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D
2015-02-01
Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggests that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system may dominate over the other in the control of behavior are a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in utilization of goal-related contextual information--in the service of overcoming habitual, stimulus-driven responses--in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. The behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior.
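For readers unfamiliar with the two strategies contrasted above, a textbook sketch (not the paper's sequential choice task): a model-free agent caches values via temporal-difference updates, while a model-based agent learns a transition model and plans over it:

    # Contrast: model-free TD (Q-learning) caching vs. model-based planning
    # over a learned model. Toy sizes; illustrative only.
    import numpy as np

    n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.1
    Q_mf = np.zeros((n_states, n_actions))

    def model_free_update(s, a, r, s_next):
        # Q-learning: nudge the cached value toward the sampled one-step target.
        target = r + gamma * Q_mf[s_next].max()
        Q_mf[s, a] += alpha * (target - Q_mf[s, a])

    # Model-based: maintain transition and reward estimates, then plan.
    T = np.ones((n_states, n_actions, n_states)) / n_states  # learned T(s'|s,a)
    R = np.zeros((n_states, n_actions))                      # learned R(s,a)
    R[3, :] = 1.0                                            # say state 3 is rewarding

    def model_based_Q(sweeps=100):
        Q = np.zeros((n_states, n_actions))
        for _ in range(sweeps):                              # value iteration
            Q = R + gamma * T @ Q.max(axis=1)
        return Q

    model_free_update(s=0, a=1, r=1.0, s_next=2)
    print("model-free Q[0]: ", Q_mf[0])
    print("model-based Q[0]:", model_based_Q()[0])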
Computer-generated holographic near-eye display system based on LCoS phase only modulator
NASA Astrophysics Data System (ADS)
Sun, Peng; Chang, Shengqian; Zhang, Siman; Xie, Ting; Li, Huaye; Liu, Siqi; Wang, Chang; Tao, Xiao; Zheng, Zhenrong
2017-09-01
Augmented reality (AR) technology has been applied in various areas, such as large-scale manufacturing, national defense, healthcare, movies, and mass media. An important way to realize AR display is using a computer-generated hologram (CGH), an approach hampered by low image quality and a heavy computational load. The diffraction behavior of the Liquid Crystal on Silicon (LCoS) modulator also degrades image quality. In this paper, a modified algorithm based on the traditional Gerchberg-Saxton (GS) algorithm is proposed to improve image quality, and a new way of building the experimental system is used to broaden the field of view (FOV). In the experiment, undesired zero-order diffracted light was eliminated and a high-definition 2D image was acquired with the FOV broadened to 36.1 degrees. We have also done pilot research in 3D reconstruction with a tomography algorithm based on Fresnel diffraction. With the same experimental system, the results demonstrate the feasibility of 3D reconstruction. These modifications are effective and efficient, and may provide a better solution for realizing AR.
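The baseline that the modified algorithm starts from is the classic Gerchberg-Saxton loop; a minimal numpy sketch follows, with far-field propagation modelled as an FFT, illustrative parameters, and none of the paper's zero-order suppression:

    # Textbook Gerchberg-Saxton phase retrieval for a phase-only modulator.
    import numpy as np

    def gerchberg_saxton(target_amplitude, iterations=50,
                         rng=np.random.default_rng(0)):
        """Return an SLM phase pattern whose far field approximates the target."""
        phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
        for _ in range(iterations):
            slm_field = np.exp(1j * phase)                  # phase-only constraint
            image = np.fft.fft2(slm_field)                  # propagate to image plane
            image = target_amplitude * np.exp(1j * np.angle(image))  # fix amplitude
            phase = np.angle(np.fft.ifft2(image))           # propagate back, keep phase
        return phase

    target = np.zeros((64, 64))
    target[24:40, 24:40] = 1.0                              # bright square
    phase = gerchberg_saxton(target)
    recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
    print("correlation with target:",
          np.corrcoef(recon.ravel(), target.ravel())[0, 1])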
Computer systems and software engineering
NASA Technical Reports Server (NTRS)
Mckay, Charles W.
1988-01-01
The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open-source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing prior studies on the epidemiological measurement of diseases caused by either heavy metal exposure in the environment or clinical complications in hospital. The validity of the simulations was verified against commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for more than 230K data sets. Both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley
2015-04-01
The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable the research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of Data-intensive Science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections available, and to provide a new infrastructure for future interdisciplinary research.
L'Utilisation de l'ordinateur en lexicometrie (The Use of the Computer in Lexicometry). Series B-1.
ERIC Educational Resources Information Center
Savard, Jean-Guy
This report treats some of the technical difficulties encountered in lexicological studies that were undertaken in order to establish a basic vocabulary. Its purpose is to show that the computer can overcome some of these difficulties, and specifically that computer programming can serve to establish a vocabulary common to scientific and technical…
2013-01-01
Background Information and communication technologies (ICTs) are often proposed as ‘technological fixes’ for problems facing healthcare. They promise to deliver services more quickly and cheaply. Yet research on the implementation of ICTs reveals a litany of delays, compromises and failures. Case studies have established that these technologies are difficult to embed in everyday healthcare. Methods We undertook an ethnographic comparative analysis of a single computer decision support system in three different settings to understand the implementation and everyday use of this technology which is designed to deal with calls to emergency and urgent care services. We examined the deployment of this technology in an established 999 ambulance call-handling service, a new single point of access for urgent care and an established general practice out-of-hours service. We used Normalization Process Theory as a framework to enable systematic cross-case analysis. Results Our data comprise nearly 500 hours of observation, interviews with 64 call-handlers, and stakeholders and documents about the technology and settings. The technology has been implemented and is used distinctively in each setting reflecting important differences between work and contexts. Using Normalisation Process Theory we show how the work (collective action) of implementing the system and maintaining its routine use was enabled by a range of actors who established coherence for the technology, secured buy-in (cognitive participation) and engaged in on-going appraisal and adjustment (reflexive monitoring). Conclusions Huge effort was expended and continues to be required to implement and keep this technology in use. This innovation must be understood both as a computer technology and as a set of practices related to that technology, kept in place by a network of actors in particular contexts. While technologies can be ‘made to work’ in different settings, successful implementation has been achieved, and will only be maintained, through the efforts of those involved in the specific settings and if the wider context continues to support the coherence, cognitive participation, and reflective monitoring processes that surround this collective action. Implementation is more than simply putting technologies in place – it requires new resources and considerable effort, perhaps on an on-going basis. PMID:23522021
Using old technology to implement modern computer-aided decision support for primary diabetes care.
Hunt, D. L.; Haynes, R. B.; Morgan, D.
2001-01-01
BACKGROUND: Implementation rates of interventions known to be beneficial for people with diabetes mellitus are often suboptimal. Computer-aided decision support systems (CDSSs) can improve these rates. The complexity of establishing a fully integrated electronic medical record that provides decision support, however, often prevents their use. OBJECTIVE: To develop a CDSS for diabetes care that can be easily introduced into primary care settings and diabetes clinics. THE SYSTEM: The CDSS uses fax-machine-based optical character recognition software for acquiring patient information. Simple, 1-page paper forms, completed by patients or health practitioners, are faxed to a central location. The information is interpreted and recorded in a database. This initiates a routine that matches the information against a knowledge base so that patient-specific recommendations can be generated. These are formatted and faxed back within 4-5 minutes. IMPLEMENTATION: The system is being introduced into 2 diabetes clinics. We are collecting information on frequency of use of the system, as well as satisfaction with the information provided. CONCLUSION: Computer-aided decision support can be provided in any setting with a fax machine, without the need for integrated electronic medical records or computerized data-collection devices. PMID:11825194
Using old technology to implement modern computer-aided decision support for primary diabetes care.
Hunt, D L; Haynes, R B; Morgan, D
2001-01-01
Implementation rates of interventions known to be beneficial for people with diabetes mellitus are often suboptimal. Computer-aided decision support systems (CDSSs) can improve these rates. The complexity of establishing a fully integrated electronic medical record that provides decision support, however, often prevents their use. To develop a CDSS for diabetes care that can be easily introduced into primary care settings and diabetes clinics. THE SYSTEM: The CDSS uses fax-machine-based optical character recognition software for acquiring patient information. Simple, 1-page paper forms, completed by patients or health practitioners, are faxed to a central location. The information is interpreted and recorded in a database. This initiates a routine that matches the information against a knowledge base so that patient-specific recommendations can be generated. These are formatted and faxed back within 4-5 minutes. The system is being introduced into 2 diabetes clinics. We are collecting information on frequency of use of the system, as well as satisfaction with the information provided. Computer-aided decision support can be provided in any setting with a fax machine, without the need for integrated electronic medical records or computerized data-collection devices.
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, its scientific, engineering and computing resources are similarly distributed. As a part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating the development and integration of high performance computing into national security missions. Sandia continues to promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and to work to ensure that the full range of computing services and capabilities is available for all mission responsibilities, from national security to energy to homeland defense.
Computational Nanotechnology at NASA Ames Research Center, 1996
NASA Technical Reports Server (NTRS)
Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) simulate a hypothetical programmable molecular machine replicating itself and building other products; (2) develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components; (3) characterize nanotechnologically accessible materials of aerospace interest, since such materials may have excellent strength and thermal properties; and (4) collaborate with experimentalists. Current in-house activities include: (1) development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes, with early work focusing on gears; (2) a design for high-density atomically precise memory; (3) design of nanotechnology systems based on biology; (4) characterization of diamondoid mechanosynthetic pathways; (5) studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity; (6) studies of entropic effects during self-assembly; and (7) characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.
Viscous streaming for locomotion and transport
NASA Astrophysics Data System (ADS)
Gazzola, Mattia; Parthasarathy, Tejaswin
2017-11-01
Rectified and oscillatory flows associated with vibrating boundaries have been employed in a variety of tasks, especially in microfluidics. The associated fluid mechanics is well known in the case of simple geometries, cylinders in particular, yet little is known in the case of active, complex systems. Motivated by potential applications in swimming mini-bots, we established an accurate and robust computational framework to investigate the flow behavior associated with oscillations of multiple and deforming shapes with an emphasis on streaming assisted locomotion and transport systems.
Communications satellite system for Africa
NASA Astrophysics Data System (ADS)
Kriegl, W.; Laufenberg, W.
1980-09-01
Earlier established requirement estimations were improved upon by contacting African administrations and organizations. An enormous demand is shown to exist for telephony and teletype services in rural areas. It is shown that educational television broadcasting should be realized in the current African transport and communications decade (1978-1987). Radio broadcasting is proposed in order to overcome illiteracy and to improve educational levels. The technical and commercial feasibility of the system is provided by computer simulations which demonstrate how the required objectives can be fulfilled in conjunction with ground networks.
Data acquisition, processing and firing aid software for multichannel EMP simulation
NASA Astrophysics Data System (ADS)
Eumurian, Gregoire; Arbaud, Bruno
1986-08-01
Electromagnetic compatibility testing yields a large quantity of data for systematic analysis. An automated data acquisition system has been developed. It is based on standard EMP instrumentation which allows a pre-established program to be followed whilst orientating the measurements according to the results obtained. The system is controlled by a computer running interactive programs (multitask windows, scrollable menus, mouse, etc.) which handle the measurement channels, files, displays and process data in addition to providing an aid to firing.
Antiviral Innate Immunity through the lens of Systems Biology
Tripathi, Shashank; García-Sastre, Adolfo
2015-01-01
Cellular innate immunity poses the first hurdle against invading viruses in their attempt to establish infection. This antiviral response is manifested in the detection of viral components by the host cell, followed by transduction of antiviral signals and transcription and translation of antiviral effectors, leading to the establishment of an antiviral state. These events occur in a branched and interconnected sequence rather than a linear path. Traditionally, these processes were studied in the context of a single virus and a host component. However, with the advent of rapid and affordable OMICS technologies it has become feasible to address such questions on a global scale. In the discipline of 'Systems Biology', extensive omics datasets are assimilated using computational tools and mathematical models to acquire a deeper understanding of complex biological processes. In this review we catalogue and discuss the application of Systems Biology approaches to dissecting antiviral innate immune responses. PMID:26657882
Philip A. Araman
1977-01-01
The design of a rough mill for the production of interior furniture parts is used to illustrate a simulation technique for analyzing and evaluating established and proposed sequential production systems. Distributions representing the real-world random characteristics of lumber, equipment feed speeds and delay times are programmed into the simulation. An example is...
ERIC Educational Resources Information Center
University of Southwestern Louisiana, Lafayette.
A student who plans to enter the field of technology education must be especially motivated to incorporate computer technology into the theories of learning. Evaluation prior to the learning process establishes a frame of reference for students. After preparing students with the basic concepts of resistors and the mental tools, the expert system…
An Overview of the NASA Aerospace Flight Battery Systems Program
NASA Technical Reports Server (NTRS)
Manzo, Michelle
2003-01-01
Develop an understanding of the safety issues relating to space use and qualification of new Li-Ion technology for manned applications. Enable the use of new-technology batteries in GFE equipment such as laptop computers and camcorders. Establish a database for an optimized set of cells (and batteries) exhibiting acceptable performance and abuse characteristics for utilization as building blocks for numerous applications.
ERIC Educational Resources Information Center
Bidwell, Charles M.; Auricchio, Dominick
The project set out to establish an operational film scheduling network to improve service to New York State teachers using 16mm educational films. The Network is designed to serve local libraries located in Boards of Cooperative Educational Services (BOCES), regional libraries, and a statewide Syracuse University Film Rental Library (SUFRL). The…
ERIC Educational Resources Information Center
Sayre, Scott Alan
The ultimate goal of the science of artificial intelligence (AI) is to establish programs that will use algorithmic computer techniques to imitate the heuristic thought processes of humans. Most AI programs, especially expert systems, organize their knowledge into three specific areas: data storage, a rule set, and a control structure. Limitations…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grulke, Eric; Stencel, John
2011-09-13
The KY DOE EPSCoR Program supports two research clusters. The Materials Cluster uses unique equipment and computational methods that draw on research expertise at the University of Kentucky and the University of Louisville. This team determines the physical, chemical and mechanical properties of nanostructured materials and examines the dominant mechanisms involved in the formation of new self-assembled nanostructures. State-of-the-art parallel computational methods and algorithms are used to overcome current processing limitations that otherwise restrict simulations to small system sizes and short times. The team also focuses on developing and applying advanced microtechnology fabrication techniques and the application of microelectromechanical systems (MEMS) for creating new materials, novel microdevices, and integrated microsensors. The second research cluster concentrates on High Energy and Nuclear Physics. It connects research and educational activities at the University of Kentucky, Eastern Kentucky University and national DOE research laboratories. Its vision is to establish world-class research status dedicated to experimental and theoretical investigations in strong interaction physics. The research provides a forum, facilities, and support for scientists to interact and collaborate in subatomic physics research. The program enables increased student involvement in fundamental physics research through the establishment of graduate fellowships and collaborative work.
Evolution of the ATLAS distributed computing system during the LHC long shutdown
NASA Astrophysics Data System (ADS)
Campana, S.; Atlas Collaboration
2014-06-01
The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younkin, James R; Kuhn, Michael J; Gradle, Colleen
New Brunswick Laboratory (NBL) has a large inventory containing thousands of plutonium and uranium certified reference materials. The current manual inventory process is well established but is a lengthy process which requires significant oversight and double checking to ensure correctness. Oak Ridge National Laboratory has worked with NBL to develop and deploy a new inventory system, termed the Tagged Item Inventory System (TIIS), which utilizes handheld computers with barcode scanners and radio frequency identification (RFID) readers. Certified reference materials are identified by labels which incorporate RFID tags and barcodes. The label printing process and RFID tag association process are integrated into the main desktop software application. Software on the handheld computers syncs with software on designated desktop machines and the NBL inventory database to provide a seamless inventory process. This process includes: 1) identifying items to be inventoried, 2) downloading the current inventory information to the handheld computer, 3) using the handheld to read item and location labels, and 4) syncing the handheld computer with a designated desktop machine to analyze the results, print reports, etc. The security of this inventory software has been a major concern. Designated roles linked to authenticated logins are used to control access to the desktop software while password protection and badge verification are used to control access to the handheld computers. The overall system design and deployment at NBL will be presented. The performance of the system will also be discussed with respect to a small piece of the overall inventory. Future work includes performing a full inventory at NBL with the Tagged Item Inventory System and comparing performance, cost, and radiation exposures to the current manual inventory process.
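The handheld-to-desktop sync described in steps 1-4 is, at its core, a reconciliation of scanned tag IDs against an expected item list. The following Python sketch illustrates that reconciliation step under illustrative assumptions; the record layout and names are hypothetical, not the actual TIIS schema.

# Hypothetical sketch of the inventory reconciliation step described above:
# compare tag IDs read by the handheld against the expected inventory for a
# location and report matches, missing items, and unexpected items.
# Names and record layout are illustrative, not the actual TIIS schema.

def reconcile(expected: dict[str, str], scanned: set[str]) -> dict[str, list[str]]:
    """expected maps tag_id -> description; scanned is the set of tag IDs read."""
    found = sorted(tag for tag in scanned if tag in expected)
    missing = sorted(tag for tag in expected if tag not in scanned)
    unexpected = sorted(scanned - expected.keys())
    return {"found": found, "missing": missing, "unexpected": unexpected}

if __name__ == "__main__":
    expected = {"RFID-0001": "U3O8 standard", "RFID-0002": "Pu metal CRM"}
    scanned = {"RFID-0001", "RFID-9999"}
    report = reconcile(expected, scanned)
    for status, tags in report.items():
        print(status, tags)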
13C-based metabolic flux analysis: fundamentals and practice.
Yang, Tae Hoon
2013-01-01
Isotope-based metabolic flux analysis is one of the emerging technologies applied to system level metabolic phenotype characterization in metabolic engineering. Among the developed approaches, (13)C-based metabolic flux analysis has been established as a standard tool and has been widely applied to quantitative pathway characterization of diverse biological systems. To implement (13)C-based metabolic flux analysis in practice, comprehending the underlying mathematical and computational modeling fundamentals is of importance along with carefully conducted experiments and analytical measurements. Such knowledge is also crucial when designing (13)C-labeling experiments and properly acquiring key data sets essential for in vivo flux analysis implementation. In this regard, the modeling fundamentals of (13)C-labeling systems and analytical data processing are the main topics we will deal with in this chapter. Along with this, the relevant numerical optimization techniques are addressed to help implementation of the entire computational procedures aiming at (13)C-based metabolic flux analysis in vivo.
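At its numerical core, flux estimation of this kind is a constrained least-squares fit: choose fluxes so that simulated labeling patterns match measurements. The following Python sketch shows that idea on an invented two-pathway toy model; real (13)C-based metabolic flux analysis fits isotopomer or EMU balance models, not a linear mixture.

# A minimal sketch of the numerical core of 13C flux estimation: choose the
# flux split that minimizes the mismatch between simulated and measured
# isotope labeling. The toy two-pathway model and the data are illustrative;
# real 13C-MFA uses isotopomer/EMU balance models, not this linear mixture.
import numpy as np
from scipy.optimize import least_squares

# Suppose pathway A transfers label to the product with fraction 0.9 and
# pathway B with fraction 0.2; phi is the (unknown) flux fraction through A.
def simulate_labeling(phi: float) -> np.ndarray:
    return np.array([phi * 0.9 + (1 - phi) * 0.2])

measured = np.array([0.62])           # hypothetical measured enrichment
residual = lambda p: simulate_labeling(p[0]) - measured
fit = least_squares(residual, x0=[0.5], bounds=(0.0, 1.0))
print(f"estimated flux fraction through pathway A: {fit.x[0]:.3f}")
# Expected: phi = (0.62 - 0.2) / (0.9 - 0.2) = 0.6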
Emerging approaches in predictive toxicology.
Zhang, Luoping; McHale, Cliona M; Greene, Nigel; Snyder, Ronald D; Rich, Ivan N; Aardema, Marilyn J; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan
2014-12-01
Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, needs for further refinement and obstacles to expand computational models to include additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. © 2014 Wiley Periodicals, Inc.
Emerging Approaches in Predictive Toxicology
Zhang, Luoping; McHale, Cliona M.; Greene, Nigel; Snyder, Ronald D.; Rich, Ivan N.; Aardema, Marilyn J.; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan
2016-01-01
Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, needs for further refinement and obstacles to expand computational models to include additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. PMID:25044351
Decentralized Grid Scheduling with Evolutionary Fuzzy Systems
NASA Astrophysics Data System (ADS)
Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander
In this paper, we address the problem of finding workload exchange policies for decentralized Computational Grids using an Evolutionary Fuzzy System. To this end, we establish a non-invasive collaboration model on the Grid layer which requires minimal information about the participating High Performance and High Throughput Computing (HPC/HTC) centers and which leaves the local resource managers completely untouched. In this environment of fully autonomous sites, independent users are assumed to submit their jobs to the Grid middleware layer of their local site, which in turn decides on the delegation and execution either on the local system or on remote sites in a situation-dependent, adaptive way. We find for different scenarios that the exchange policies show good performance characteristics not only with respect to traditional metrics such as average weighted response time and utilization, but also in terms of robustness and stability in changing environments.
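The kind of situation-dependent delegation decision described above can be pictured as a small fuzzy rule base. The Python sketch below hand-writes two such rules for illustration; in the paper the rule bases are evolved rather than hand-crafted, and the membership functions and thresholds here are invented.

# A toy sketch of the kind of situation-dependent delegation rule an
# Evolutionary Fuzzy System might encode: fuzzify the local queue load and
# decide between local execution and delegation to a remote site.

def mu_low(load: float) -> float:      # membership of "load is low"
    return max(0.0, min(1.0, (0.6 - load) / 0.6))

def mu_high(load: float) -> float:     # membership of "load is high"
    return max(0.0, min(1.0, (load - 0.4) / 0.6))

def delegate(local_load: float, remote_load: float) -> bool:
    # Rule 1: if local load is high and remote load is low -> delegate.
    # Rule 2: if local load is low -> keep the job locally.
    fire_delegate = min(mu_high(local_load), mu_low(remote_load))
    fire_keep = mu_low(local_load)
    return fire_delegate > fire_keep

print(delegate(local_load=0.9, remote_load=0.2))  # True: offload the job
print(delegate(local_load=0.2, remote_load=0.1))  # False: run locally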
Solvable Family of Driven-Dissipative Many-Body Systems.
Foss-Feig, Michael; Young, Jeremy T; Albert, Victor V; Gorshkov, Alexey V; Maghrebi, Mohammad F
2017-11-10
Exactly solvable models have played an important role in establishing the sophisticated modern understanding of equilibrium many-body physics. Conversely, the relative scarcity of solutions for nonequilibrium models greatly limits our understanding of systems away from thermal equilibrium. We study a family of nonequilibrium models, some of which can be viewed as dissipative analogues of the transverse-field Ising model, in that an effectively classical Hamiltonian is frustrated by dissipative processes that drive the system toward states that do not commute with the Hamiltonian. Surprisingly, a broad and experimentally relevant subset of these models can be solved efficiently. We leverage these solutions to compute the effects of decoherence on a canonical trapped-ion-based quantum computation architecture, and to prove a no-go theorem on steady-state phase transitions in a many-body model that can be realized naturally with Rydberg atoms or trapped ions.
Solvable Family of Driven-Dissipative Many-Body Systems
NASA Astrophysics Data System (ADS)
Foss-Feig, Michael; Young, Jeremy T.; Albert, Victor V.; Gorshkov, Alexey V.; Maghrebi, Mohammad F.
2017-11-01
Exactly solvable models have played an important role in establishing the sophisticated modern understanding of equilibrium many-body physics. Conversely, the relative scarcity of solutions for nonequilibrium models greatly limits our understanding of systems away from thermal equilibrium. We study a family of nonequilibrium models, some of which can be viewed as dissipative analogues of the transverse-field Ising model, in that an effectively classical Hamiltonian is frustrated by dissipative processes that drive the system toward states that do not commute with the Hamiltonian. Surprisingly, a broad and experimentally relevant subset of these models can be solved efficiently. We leverage these solutions to compute the effects of decoherence on a canonical trapped-ion-based quantum computation architecture, and to prove a no-go theorem on steady-state phase transitions in a many-body model that can be realized naturally with Rydberg atoms or trapped ions.
Blurriness in Live Forensics: An Introduction
NASA Astrophysics Data System (ADS)
Savoldi, Antonio; Gubian, Paolo
The Live Forensics discipline aims at answering basic questions related to a digital crime, which usually involves a computer-based system. The investigation should be carried out with the goal of establishing which processes were running, when they were started and by whom, what specific activities those processes were doing, and the state of active network connections. Moreover, the set of tools launched on the running system inevitably alters the system's memory, as a consequence of Locard's exchange principle [2]. All the live forensics methodologies proposed until now share a basic, albeit important, weakness: the inability to quantify the perturbation, or blurriness, of the system's memory of the investigated computer. This is precisely the goal of this paper: to provide a set of guidelines which can be effectively used for measuring the uncertainty of the collected volatile memory on a live system being investigated.
SSME leak detection feasibility investigation by utilization of infrared sensor technology
NASA Technical Reports Server (NTRS)
Shohadaee, Ahmad A.; Crawford, Roger A.
1990-01-01
This investigation examined the potential of using state-of-the-art infrared (IR) thermal imaging systems combined with computers, digital image processing, and expert systems for Space Shuttle Main Engine (SSME) propellant path leak detection as an early warning system for imminent engine failure. A low-cost laboratory experiment was devised and an experimental approach was established. The system was installed, checked out, and data were successfully acquired, demonstrating the proof of concept. The conclusion from this investigation is that both numerical and experimental results indicate that leak detection using infrared sensor technology is feasible for a rocket engine health monitoring system.
A video-based system for hand-driven stop-motion animation.
Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue
2013-01-01
Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.
Space station automation and robotics study. Operator-systems interface
NASA Technical Reports Server (NTRS)
1984-01-01
This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.
Design of on-board Bluetooth wireless network system based on fault-tolerant technology
NASA Astrophysics Data System (ADS)
You, Zheng; Zhang, Xiangqi; Yu, Shijie; Tian, Hexiang
2007-11-01
In this paper, Bluetooth wireless data transmission technology is applied to an on-board computer system to realize wireless data transmission between peripherals of the micro-satellite's integrated electronic system. In view of the high reliability demands on a micro-satellite, a design of a Bluetooth wireless network based on fault-tolerant technology is introduced. The reliability of two fault-tolerant systems is first estimated using a Markov model; then the structural design of this fault-tolerant system is introduced. Several protocols are established to make the system operate correctly, and some related problems are listed and analyzed, with emphasis on the Fault Auto-diagnosis System, Active-standby switch design, and Data-Integrity process.
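A Markov reliability estimate of the kind mentioned above can be sketched for a generic active-standby pair. In the Python sketch below, the failure rate, the coverage factor for a successful active-standby switchover, and the mission time are all illustrative assumptions, not values from the paper.

# A minimal sketch of a Markov reliability model for an active-standby pair:
# state 0 = both units healthy, state 1 = running on one unit after a
# successful switchover, state 2 = system failure (absorbing). lam is the
# per-unit failure rate and c the probability that the switchover succeeds.
import numpy as np
from scipy.linalg import expm

lam, c, t = 1e-4, 0.95, 8760.0   # failures/hour, coverage, one year in hours

Q = np.array([
    [-2 * lam, 2 * lam * c, 2 * lam * (1 - c)],  # either unit fails first
    [0.0,      -lam,        lam],                # surviving unit fails
    [0.0,      0.0,         0.0],                # absorbing failure state
])

p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t)      # state probabilities at t
print(f"R({t:.0f} h) = {p[0] + p[1]:.4f}")       # reliability = P(not failed)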
Architectural Methodology Report
NASA Technical Reports Server (NTRS)
Dhas, Chris
2000-01-01
The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kind of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.
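As a toy illustration of the "standard data element" idea, the Python sketch below frames a typed element with a header giving its type tag and length, so a receiver can parse it without prior knowledge of the sender. The header layout is invented for illustration and is not any NASA or CNS protocol.

# Both ends agree on a framing convention (type tag + length + payload), so a
# receiver with no foreknowledge of the sender can still parse the element.
import json
import struct

TYPE_JSON = 1  # hypothetical type tag for a JSON-encoded element

def frame(obj) -> bytes:
    payload = json.dumps(obj).encode("utf-8")
    return struct.pack("!BI", TYPE_JSON, len(payload)) + payload

def unframe(data: bytes):
    type_tag, length = struct.unpack("!BI", data[:5])
    assert type_tag == TYPE_JSON, "unknown element type"
    return json.loads(data[5 : 5 + length])

msg = frame({"sensor": "thermal", "value": 21.7})
print(unframe(msg))  # {'sensor': 'thermal', 'value': 21.7}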
The application of quantum mechanics in structure-based drug design.
Mucs, Daniel; Bryce, Richard A
2013-03-01
Computational chemistry has become an established and valuable component in structure-based drug design. However the chemical complexity of many ligands and active sites challenges the accuracy of the empirical potentials commonly used to describe these systems. Consequently, there is a growing interest in utilizing electronic structure methods for addressing problems in protein-ligand recognition. In this review, the authors discuss recent progress in the development and application of quantum chemical approaches to modeling protein-ligand interactions. The authors specifically consider the development of quantum mechanics (QM) approaches for studying large molecular systems pertinent to biology, focusing on protein-ligand docking, protein-ligand binding affinities and ligand strain on binding. Although computation of binding energies remains a challenging and evolving area, current QM methods can underpin improved docking approaches and offer detailed insights into ligand strain and into the nature and relative strengths of complex active site interactions. The authors envisage that QM will become an increasingly routine and valued tool of the computational medicinal chemist.
Dish layouts analysis method for concentrative solar power plant.
Xu, Jinshan; Gan, Shaocong; Li, Song; Ruan, Zhongyuan; Chen, Shengyong; Wang, Yong; Gui, Changgui; Wan, Bin
2016-01-01
Designs that maximize the use of solar radiation for a given reflective area without increasing investment cost are important to solar power plant construction. We here provide a method that allows one to compute the shaded area at any given time as well as the total shading effect over a day. By establishing a local coordinate system with the origin at the apex of a parabolic dish and the z-axis pointing to the sun, only neighboring dishes with [Formula: see text] would shade onto the dish when in tracking mode. This procedure reduces the required computational resources, simplifies the calculation, and allows a quick search for the optimum layout by considering all aspects leading to an optimized arrangement: aspect ratio, shifting, and rotation. Computer simulations, using dish Stirling system specifications and DNI data released by NREL, show that regular spacing is not an optimal layout; shifting and rotating columns by a certain amount can bring more benefits.
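The shading test itself reduces to projective geometry: rotate the neighbor's displacement into a sun-aligned frame, and if the component perpendicular to the sun direction is smaller than the aperture diameter, compute the overlap of two discs. The Python sketch below shows this under the simplifying assumption of circular apertures of equal radius; the paper's exact criterion is behind the elided formula above.

# Project a neighboring dish's position into a sun-aligned frame (z-axis
# toward the sun), treat both apertures as discs of radius R, and compute the
# shaded overlap area from the perpendicular offset. Parameters are invented.
import numpy as np

def shade_area(d: np.ndarray, sun: np.ndarray, R: float) -> float:
    """d: displacement to the neighboring dish; sun: vector toward the sun."""
    sun = sun / np.linalg.norm(sun)
    t = float(d @ sun)                      # distance toward the sun
    if t <= 0.0:                            # neighbor is behind: cannot shade
        return 0.0
    r = float(np.linalg.norm(d - t * sun))  # offset perpendicular to the sun
    if r >= 2.0 * R:                        # projected discs do not overlap
        return 0.0
    # standard area of intersection of two discs of radius R at distance r
    return 2 * R**2 * np.arccos(r / (2 * R)) - 0.5 * r * np.sqrt(4 * R**2 - r**2)

sun = np.array([0.3, 0.1, 0.9])             # morning sun direction (arbitrary)
print(shade_area(np.array([8.0, 0.0, 0.0]), sun, R=5.0))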
Computational quantum-classical boundary of noisy commuting quantum circuits
Fujii, Keisuke; Tamate, Shuhei
2016-01-01
It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence. Equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separable criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We find that when each qubit is subject to a single-qubit completely-positive-trace-preserving noise, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region. PMID:27189039
Computational quantum-classical boundary of noisy commuting quantum circuits.
Fujii, Keisuke; Tamate, Shuhei
2016-05-18
It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence. Equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separable criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We find that when each qubit is subject to a single-qubit completely-positive-trace-preserving noise, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region.
Computational quantum-classical boundary of noisy commuting quantum circuits
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Tamate, Shuhei
2016-05-01
It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence. Equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separable criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We find that when each qubit is subject to a single-qubit completely-positive-trace-preserving noise, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region.
Project Integration Architecture
NASA Technical Reports Server (NTRS)
Jones, William Henry
2008-01-01
The Project Integration Architecture (PIA) is a distributed, object-oriented, conceptual, software framework for the generation, organization, publication, integration, and consumption of all information involved in any complex technological process in a manner that is intelligible to both computers and humans. In the development of PIA, it was recognized that in order to provide a single computational environment in which all information associated with any given complex technological process could be viewed, reviewed, manipulated, and shared, it is necessary to formulate all the elements of such a process on the most fundamental level. In this formulation, any such element is regarded as being composed of any or all of three parts: input information, some transformation of that input information, and some useful output information. Another fundamental principle of PIA is the assumption that no consumer of information, whether human or computer, can be assumed to have any useful foreknowledge of an element presented to it. Consequently, a PIA-compliant computing system is required to be ready to respond to any questions, posed by the consumer, concerning the nature of the proffered element. In colloquial terms, a PIA-compliant system must be prepared to provide all the information needed to place the element in context. To satisfy this requirement, PIA extends the previously established object-oriented-programming concept of self-revelation and applies it on a grand scale. To enable pervasive use of self-revelation, PIA exploits another previously established object-oriented-programming concept - that of semantic infusion through class derivation. By means of self-revelation and semantic infusion through class derivation, a consumer of information can inquire about the contents of all information entities (e.g., databases and software) and can interact appropriately with those entities. Other key features of PIA are listed.
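The self-revelation idea can be pictured in a few lines of Python: every element answers questions about its own nature, and semantic meaning is infused by class derivation. The class names and fields below are illustrative, not PIA's actual API.

# A toy sketch of "self-revelation": any element can be queried for its
# nature and contents, and semantics are carried by the derivation chain,
# so a consumer needs no foreknowledge of the element.

class SelfRevealing:
    """Base class: any element can describe itself on request."""
    semantics = "generic information element"

    def reveal(self) -> dict:
        return {
            "class_chain": [c.__name__ for c in type(self).__mro__[:-1]],
            "semantics": self.semantics,
            "fields": vars(self),
        }

class Pressure(SelfRevealing):
    semantics = "scalar pressure measurement"   # infused by class derivation

    def __init__(self, value_pa: float):
        self.value_pa = value_pa

# A consumer with no foreknowledge can still place the element in context:
element = Pressure(101325.0)
print(element.reveal())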
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry-and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
Computational Analysis of a Thermoelectric Generator for Waste-Heat Harvesting in Wearable Systems
NASA Astrophysics Data System (ADS)
Kossyvakis, D. N.; Vassiliadis, S. G.; Vossou, C. G.; Mangiorou, E. E.; Potirakis, S. M.; Hristoforou, E. V.
2016-06-01
Over recent decades, a constantly growing interest in the field of portable electronic devices has been observed. Recent developments in the scientific areas of integrated circuits and sensing technologies have enabled realization and design of lightweight low-power wearable sensing systems that can be of great use, especially for continuous health monitoring and performance recording applications. However, to facilitate wide penetration of such systems into the market, the issue of ensuring their seamless and reliable power supply still remains a major concern. In this work, the performance of a thermoelectric generator, able to exploit the temperature difference established between the human body and the environment, has been examined computationally using ANSYS 14.0 finite-element modeling (FEM) software, as a means for providing the necessary power to various portable electronic systems. The performance variation imposed due to different thermoelement geometries has been estimated to identify the most appropriate solution for the considered application. Furthermore, different ambient temperature and heat exchange conditions between the cold side of the generator and the environment have been investigated. The computational analysis indicated that power output in the order of 1.8 mW can be obtained by a 100-cm2 system, if specific design criteria can be fulfilled.
A distributed program composition system
NASA Technical Reports Server (NTRS)
Brown, Robert L.
1989-01-01
A graphical technique for creating distributed computer programs is investigated and a prototype implementation is described which serves as a testbed for the concepts. The type of programs under examination is restricted to those comprising relatively heavyweight parts that intercommunicate by passing messages of typed objects. Such programs are often presented visually as a directed graph with computer program parts as the nodes and communication channels as the edges. This class of programs, called parts-based programs, is not well supported by existing computer systems; much manual work is required to describe the program to the system, establish the communication paths, accommodate the heterogeneity of data types, and to locate the parts of the program on the various systems involved. The work described solves most of these problems by providing an interface for describing parts-based programs in this class in a way that closely models the way programmers think about them: using sketches of digraphs. Program parts, the computational nodes of the larger program system, are categorized in libraries and are accessed with browsers. The process of programming has the programmer draw the program graph interactively. Heterogeneity is automatically accommodated by the insertion of type translators where necessary between the parts. Many decisions are necessary in the creation of a comprehensive tool for interactive creation of programs in this class. Possibilities are explored and the issues behind such decisions are presented. An approach to program composition is described, not a carefully implemented programming environment. However, a prototype implementation is described that can demonstrate the ideas presented.
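The parts-based model lends itself to a small sketch: parts are nodes with typed input and output ports, and when an edge connects mismatched types a translator part is inserted automatically. The Python below is a minimal illustration of that idea; the type names and translator table are invented.

# Parts are graph nodes with typed ports; connecting mismatched types
# automatically inserts a translator part between them.

translators = {("int", "float"): float, ("float", "str"): str}

class Part:
    def __init__(self, name, in_type, out_type, fn):
        self.name, self.in_type, self.out_type, self.fn = name, in_type, out_type, fn

def connect(src: Part, dst: Part):
    """Return the pipeline src -> dst, inserting a translator if needed."""
    if src.out_type == dst.in_type:
        return [src, dst]
    convert = translators[(src.out_type, dst.in_type)]   # KeyError if untranslatable
    bridge = Part(f"{src.out_type}->{dst.in_type}", src.out_type, dst.in_type, convert)
    return [src, bridge, dst]

def run(pipeline, message):
    for part in pipeline:
        message = part.fn(message)
    return message

count = Part("count", "none", "int", lambda _: 42)
scale = Part("scale", "float", "float", lambda x: x * 1.5)
print(run(connect(count, scale), None))   # translator int->float inserted; 63.0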
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2018-03-01
The conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed by expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on the Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby the optimization problem is transformed into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.
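The surrogate idea behind the Chebyshev fuzzy and interval methods can be sketched briefly: fit a Chebyshev interpolant to the expensive response once over the support of the fuzzy parameter, then bound the response cheaply on each alpha-cut interval. In the Python sketch below, the response function and the triangular fuzzy number are illustrative stand-ins for the structural-acoustic system.

# Fit a cheap Chebyshev interpolant to an expensive response over the
# alpha = 0 support, then bound the response on each alpha-cut by evaluating
# the polynomial densely. All functions and numbers are illustrative.
import numpy as np
from numpy.polynomial import chebyshev as C

def expensive_response(x):          # stand-in for a structural-acoustic solve
    return np.sin(3 * x) + 0.1 * x**2

lo, hi = 0.0, 2.0                   # support of the fuzzy parameter (alpha = 0)
to_unit = lambda x: 2 * (x - lo) / (hi - lo) - 1          # map to [-1, 1]
coeffs = C.chebinterpolate(lambda u: expensive_response((u + 1) * (hi - lo) / 2 + lo), 12)

peak = 1.0                          # modal value of the triangular fuzzy number
for alpha in (0.0, 0.5, 1.0):
    a = lo + alpha * (peak - lo)    # alpha-cut of a triangular fuzzy number
    b = hi - alpha * (hi - peak)
    xs = np.linspace(a, b, 401)
    ys = C.chebval(to_unit(xs), coeffs)   # cheap surrogate evaluations
    print(f"alpha={alpha:.1f}: response in [{ys.min():.3f}, {ys.max():.3f}]")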
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
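The original scripts were written in UNIX shell and Perl; the Python sketch below merely illustrates the simple first-in-first-out queueing idea the script system fell back on for hosts without queueing software. Slot counts and job durations are invented.

# Jobs wait in arrival order and start whenever a host slot frees up.
from collections import deque

def run_fifo(jobs: list[str], slots: int) -> None:
    waiting, running = deque(jobs), []
    tick = 0
    while waiting or running:
        running = [(name, t - 1) for name, t in running if t - 1 > 0]
        while waiting and len(running) < slots:
            name = waiting.popleft()          # strict arrival order
            running.append((name, 3))         # pretend each job takes 3 ticks
            print(f"t={tick}: started {name}")
        tick += 1

run_fifo(["ins2d_case1", "pmarc_case2", "gasp_case3"], slots=2)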
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Wolpert, David
2004-01-01
Due to the increasing sophistication and miniaturization of computational components, complex, distributed systems of interacting agents are becoming ubiquitous. Such systems, where each agent aims to optimize its own performance, but where there is a well-defined set of system-level performance criteria, are called collectives. The fundamental problem in analyzing/designing such systems is in determining how the combined actions of self-interested agents lead to 'coordinated' behavior on a large scale. Examples of artificial systems which exhibit such behavior include packet routing across a data network, control of an array of communication satellites, coordination of multiple deployables, and dynamic job scheduling across a distributed computer grid. Examples of natural systems include ecosystems, economies, and the organelles within a living cell. No current scientific discipline provides a thorough understanding of the relation between the structure of collectives and how well they meet their overall performance criteria. Although still very young, research on collectives has resulted in successes both in understanding and designing such systems. It is expected that as it matures and draws upon other disciplines related to collectives, this field will greatly expand the range of computationally addressable tasks. Moreover, in addition to drawing on them, such a fully developed field of collective intelligence may provide insight into already established scientific fields, such as mechanism design, economics, game theory, and population biology. This chapter provides a survey of the emerging science of collectives.
USE OF COMPUTER-AIDED PROCESS ENGINEERING TOOL IN POLLUTION PREVENTION
Computer-Aided Process Engineering has become established in industry as a design tool, aided by the establishment of the CAPE-OPEN software specifications for process simulation environments. CAPE-OPEN provides a set of "middleware" standards that enable software developers to acces...
Computational chemistry research
NASA Technical Reports Server (NTRS)
Levin, Eugene
1987-01-01
Task 41 is composed of two parts: (1) analysis and design studies related to the Numerical Aerodynamic Simulation (NAS) Extended Operating Configuration (EOC) and (2) computational chemistry. During the first half of 1987, Dr. Levin served as a member of an advanced system planning team to establish the requirements, goals, and principal technical characteristics of the NAS EOC. A paper entitled 'Scaling of Data Communications for an Advanced Supercomputer Network' is included. The high temperature transport properties (such as viscosity, thermal conductivity, etc.) of the major constituents of air (oxygen and nitrogen) were correctly determined. The results of prior ab initio computer solutions of the Schroedinger equation were combined with the best available experimental data to obtain complete interaction potentials for both neutral and ion-atom collision partners. These potentials were then used in a computer program to evaluate the collision cross-sections from which the transport properties could be determined. A paper entitled 'High Temperature Transport Properties of Air' is included.
Fiber pushout test: A three-dimensional finite element computational simulation
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Chamis, Christos C.
1990-01-01
A fiber pushthrough process was computationally simulated using a three-dimensional finite element method. The interface material is replaced by an anisotropic material with a greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict the fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from the pushthrough load can easily be separated into its two components: one that comes from frictional stresses, and the other that comes from chemical adhesion between fiber and matrix and from mechanical interlocking that develops due to shrinkage of the composite caused by phase change during processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on interfacial bond strength, and to interpret interfacial bond quality.
NASA Astrophysics Data System (ADS)
Clay, Alexis; Delord, Elric; Couture, Nadine; Domenger, Gaël
We describe the joint research that we conduct in gesture-based emotion recognition and virtual augmentation of a stage, bridging together the fields of computer science and dance. After establishing a common ground for dialogue, we could conduct a research process that equally benefits both fields. As computer scientists, dance is a perfect application case. Dancer's artistic creativity orient our research choices. As dancers, computer science provides new tools for creativity, and more importantly a new point of view that forces us to reconsider dance from its fundamentals. In this paper we hence describe our scientific work and its implications on dance. We provide an overview of our system to augment a ballet stage, taking a dancer's emotion into account. To illustrate our work in both fields, we describe three events that mixed dance, emotion recognition and augmented reality.
Use of a secure Internet Web site for collaborative medical research.
Marshall, W W; Haley, R W
2000-10-11
Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
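Two of the listed measures, password authentication and audit trails, can be sketched concisely. The Python below uses modern primitives (PBKDF2) that postdate the 2000-era system described, so it illustrates the principle rather than the original implementation; file names and parameters are illustrative.

# Store only salted password hashes for user authentication, and append
# every access to an audit trail.
import hashlib, hmac, os, time

def hash_password(password: str, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt)[1], stored)

def audit(log_path: str, user: str, action: str) -> None:
    with open(log_path, "a") as log:                  # append-only audit trail
        log.write(f"{time.strftime('%Y-%m-%dT%H:%M:%S')}\t{user}\t{action}\n")

salt, stored = hash_password("correct horse battery staple")
if verify("correct horse battery staple", salt, stored):
    audit("access.log", "collaborator01", "login")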
[Cost analysis for navigation in knee endoprosthetics].
Cerha, O; Kirschner, S; Günther, K-P; Lützner, J
2009-12-01
Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis according to the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of the computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year an additional operating time of 14 mins and a 10 year depreciation of the investment costs, the incremental expenses amount to
NASA Astrophysics Data System (ADS)
Bareth, G.; Bolten, A.; Gnyp, M. L.; Reusch, S.; Jasper, J.
2016-06-01
The development of UAV-based sensing systems for agronomic applications serves the improvement of crop management. The latter is the focus of precision agriculture, which intends to optimize yield, fertilizer input, and crop protection. Moreover, in some cropping systems vehicle-based sensing devices are less suitable because fields cannot be entered from certain growth stages onwards. This is true for rice, maize, sorghum, and many more crops. Consequently, UAV-based sensing approaches fill a niche of very high resolution data acquisition on the field scale in space and time. While mounting RGB digital compact cameras to low-weight UAVs (< 5 kg) is well established, the miniaturization of sensors in recent years also enables hyperspectral data acquisition from those platforms. From both RGB and hyperspectral data, vegetation indices (VIs) are computed to estimate crop growth parameters. In this contribution, we compare two different sensing approaches from a low-weight UAV platform (< 5 kg) for monitoring a nitrogen field experiment on winter wheat and a corresponding farmers' field in Western Germany. (i) A standard digital compact camera was flown to acquire RGB images, from which the RGBVI is computed, and (ii) the NDVI is computed from a newly modified version of the Yara N-Sensor. The latter is a well-established tractor-based hyperspectral sensor for crop management and has been available on the market for a decade. It was modified for this study to fit the requirements of UAV-based data acquisition. Consequently, we focus on three objectives in this contribution: (1) to evaluate the potential of the uncalibrated RGBVI for monitoring nitrogen status in winter wheat, (2) to investigate the UAV-based performance of the modified Yara N-Sensor, and (3) to compare the results of the two different UAV-based sensing approaches for winter wheat.
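For reference, both indices are simple per-pixel band combinations. In the Python sketch below, NDVI is the standard (NIR - R)/(NIR + R); RGBVI is taken as (G^2 - B*R)/(G^2 + B*R), the form this group has published elsewhere, though the paper itself should be consulted before reuse. The input arrays stand in for real imagery.

# Per-pixel computation of NDVI and RGBVI from band arrays.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / np.clip(nir + red, 1e-9, None)

def rgbvi(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    g2 = g.astype(float) ** 2
    return (g2 - b * r) / np.clip(g2 + b * r, 1e-9, None)

rng = np.random.default_rng(0)                  # stand-in for real imagery
r, g, b, nir = (rng.uniform(0.05, 0.6, (4, 4)) for _ in range(4))
print(ndvi(nir, r).round(2))
print(rgbvi(r, g, b).round(2))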
Parallel Architectures for Planetary Exploration Requirements (PAPER)
NASA Astrophysics Data System (ADS)
Cezzar, Ruknet
1993-08-01
The project's main contributions have been in the area of student support. Throughout the project, at least one, in some cases two, undergraduate students have been supported. By working with the project, these students gained valuable knowledge involving the scientific research project, including the not-so-pleasant reporting requirements to the funding agencies. The other important contribution was towards the establishment of a graduate program in computer science at Hampton University. Primarily, the PAPER project has served as the main research basis in seeking funds from other agencies, such as the National Science Foundation, for establishing a research infrastructure in the department. In technical areas, especially in the first phase, we believe the trip to the Jet Propulsion Laboratory, and gathering together all the pertinent information involving experimental computer architectures aimed at planetary exploration, was very helpful. Indeed, if this effort is to be revived in the future due to congressional funding for planetary explorations, say an unmanned mission to Mars, our interim report will be an important starting point. In other technical areas, our simulator has pinpointed and highlighted several important performance issues related to the design of operating system kernels for MIMD machines. In particular, the critical issue of how the kernel itself will run in parallel on a multiple-processor system has been addressed through the various ready list organization and access policies. In the area of neural computing, our main contribution was an introductory tutorial package to familiarize the researchers at NASA with this new and promising field. Finally, we have introduced the notion of reversibility in programming systems which may find applications in various areas of space research.
Parallel Architectures for Planetary Exploration Requirements (PAPER)
NASA Technical Reports Server (NTRS)
Cezzar, Ruknet
1993-01-01
The project's main contributions have been in the area of student support. Throughout the project, at least one, in some cases two, undergraduate students have been supported. By working with the project, these students gained valuable knowledge involving the scientific research project, including the not-so-pleasant reporting requirements to the funding agencies. The other important contribution was towards the establishment of a graduate program in computer science at Hampton University. Primarily, the PAPER project has served as the main research basis in seeking funds from other agencies, such as the National Science Foundation, for establishing a research infrastructure in the department. In technical areas, especially in the first phase, we believe the trip to the Jet Propulsion Laboratory, and gathering together all the pertinent information involving experimental computer architectures aimed at planetary exploration, was very helpful. Indeed, if this effort is to be revived in the future due to congressional funding for planetary explorations, say an unmanned mission to Mars, our interim report will be an important starting point. In other technical areas, our simulator has pinpointed and highlighted several important performance issues related to the design of operating system kernels for MIMD machines. In particular, the critical issue of how the kernel itself will run in parallel on a multiple-processor system has been addressed through the various ready list organization and access policies. In the area of neural computing, our main contribution was an introductory tutorial package to familiarize the researchers at NASA with this new and promising field. Finally, we have introduced the notion of reversibility in programming systems which may find applications in various areas of space research.
Requirements specification for nickel cadmium battery expert system
NASA Technical Reports Server (NTRS)
1986-01-01
The requirements for performance, design, test, and qualification of a computer program identified as NICBES, the Nickel Cadmium Battery Expert System, are established. The specific spacecraft power system configuration selected was the Hubble Space Telescope (HST) Electrical Power System (EPS) Testbed. Power for the HST comes from a system of 13 Solar Panel Arrays (SPAs) linked to 6 Nickel Cadmium Batteries which are connected to 3 busses. An expert system, NICBES, will be developed at Martin Marietta Aerospace to recognize a testbed anomaly, identify the malfunctioning component, and recommend a course of action. Besides fault diagnosis, NICBES will be able to evaluate battery status, give advice on battery status, and provide decision support for the operator. These requirements are detailed.
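Fault diagnosis of the kind NICBES performs can be caricatured as a small rule base mapping telemetry symptoms to a suspected component and a recommended action. The rules and thresholds in the Python sketch below are invented for illustration and are not NICBES's actual knowledge base.

# A toy rule-based diagnosis: each rule pairs a telemetry condition with a
# suspected fault and a recommended action.

RULES = [
    (lambda t: t["battery_v"] < 24.0 and t["spa_current_a"] > 1.0,
     "battery cell degradation", "reduce depth of discharge; schedule reconditioning"),
    (lambda t: t["spa_current_a"] < 0.1 and t["sunlit"],
     "solar panel array or relay fault", "switch bus to a healthy SPA string"),
    (lambda t: t["battery_temp_c"] > 30.0,
     "thermal anomaly", "lower charge rate and monitor temperature"),
]

def diagnose(telemetry: dict):
    return [(fault, action) for cond, fault, action in RULES if cond(telemetry)]

sample = {"battery_v": 23.1, "spa_current_a": 2.4, "sunlit": True, "battery_temp_c": 27.0}
for fault, action in diagnose(sample) or [("nominal", "no action")]:
    print(f"{fault}: {action}")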
The materiel manager-chief financial officer alliance.
Henning, W K
1987-08-01
There is a gold mine of potential inventory reductions, expense reductions, and revenue increases in most hospitals that can be tapped by more intensive materiel management. The first step is incorporating the necessary ingredients for a strong materiel management effort--the right people and a state-of-the-art computer program. Reorganization may be necessary to establish a more unified, consolidated approach to materiel management. Second, conduct an audit of the entire hospital to identify opportunities for improvement and to establish baseline management data. Finally, push forward the process of system changes (which also establishes necessary controls) until results are accomplished--a process that usually requires one to three years. The alliance between the materiel manager and the CFO is definitely beneficial to the hospital and to the individuals involved.
Modified computation of the nozzle damping coefficient in solid rocket motors
NASA Astrophysics Data System (ADS)
Liu, Peijin; Wang, Muxin; Yang, Wenjing; Gupta, Vikrant; Guan, Yu; Li, Larry K. B.
2018-02-01
In solid rocket motors, the bulk advection of acoustic energy out of the nozzle constitutes a significant source of damping and can thus influence the thermoacoustic stability of the system. In this paper, we propose and test a modified version of a historically accepted method of calculating the nozzle damping coefficient. Building on previous work, we separate the nozzle from the combustor, but compute the acoustic admittance at the nozzle entry using the linearized Euler equations (LEEs) rather than with short nozzle theory. We compute the combustor's acoustic modes also with the LEEs, taking the nozzle admittance as the boundary condition at the combustor exit while accounting for the mean flow field in the combustor using an analytical solution to Taylor-Culick flow. We then compute the nozzle damping coefficient via a balance of the unsteady energy flux through the nozzle. Compared with established methods, the proposed method offers competitive accuracy at reduced computational costs, helping to improve predictions of thermoacoustic instability in solid rocket motors.
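For orientation, the energy-balance definition underlying such a calculation is commonly written as follows; this is a hedged sketch, and the sign convention, normalization, and the mean-flow terms retained in the flux may differ from the paper's exact formulation:

\[
  \alpha_N \;=\; -\,\frac{\langle \mathcal{F}_N \rangle}{2\,\langle E \rangle},
  \qquad
  \mathcal{F}_N \;=\; \int_{A_N} \overline{p'\,u'}\;\mathrm{d}A \quad \text{(no-mean-flow limit)},
\]

where \( \mathcal{F}_N \) is the time-averaged acoustic energy flux through the nozzle entry plane \( A_N \) and \( \langle E \rangle \) is the acoustic energy stored in the chamber; when the bulk advection through the nozzle matters, mean-flow terms are added to the flux integrand.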