Sample records for central control computer

  1. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition ... in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  2. Command, Control, Communications, Computers and Intelligence Electronic Warfare (C4IEW) Project Book, Fiscal Year 1994. (Non-FOUO Version)

    DTIC Science & Technology

    1994-04-01

    TSW-7A, AIR TRAFFIC CONTROL CENTRAL (ATCC) 32-8; AN/TTC-41(V), CENTRAL OFFICE, TELEPHONE, AUTOMATIC 32-9; MISSILE COUNTERMEASURE DEVICE (MCD) ... MK ... a Handheld Terminal Unit (HTU), Portable Computer Unit (PCU), Transportable Computer Unit (TCU), and compatible NOI peripheral devices. All but the ... CLASSIFICATION: ASARC-III, Jun 80, Standard. AN/TTC-39 IS A MOBILE, AUTOMATIC, MODULAR ELECTRONIC CIRCUIT SWITCH UNDER PROCESSOR CONTROL WITH INTEGRAL

  3. Computer Instructional Aids for Undergraduate Control Education. 1978 Edition.

    ERIC Educational Resources Information Center

    Volz, Richard A.; And Others

    This work represents the development of computer tools for undergraduate students. Emphasis is on automatic control theory using hybrid and digital computation. The routine calculations of control system analysis are presented as students would use them on the University of Michigan's central digital computer and the time-shared graphic terminals…

  4. Computer code for controller partitioning with IFPC application: A user's manual

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Yarkhan, Asim

    1994-01-01

    A user's manual for the computer code for partitioning a centralized controller into decentralized subcontrollers with applicability to Integrated Flight/Propulsion Control (IFPC) is presented. Partitioning of a centralized controller into two subcontrollers is described, and the algorithm on which the code is based is discussed. The algorithm uses parameter optimization of a cost function, which is described. The major data structures and functions are described, and specific instructions are given. The user is led through an example of an IFPC application.

  5. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A Central Control Element (CCE) module is discussed which controls the Automatically Reconfigurable Modular System (ARMS) and allows both redundant processing and multi-computing in the same computer, with real-time mode switching. The same hardware is used for reliability enhancement, speed enhancement, or a combination of both.

  6. 51. VIEW OF LORAL ADS 100A COMPUTERS LOCATED CENTRALLY ON ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    51. VIEW OF LORAL ADS 100A COMPUTERS LOCATED CENTRALLY ON NORTH WALL OF TELEMETRY ROOM (ROOM 106). SLC-3W CONTROL ROOM IS VISIBLE IN BACKGROUND THROUGH WINDOW IN NORTH WALL. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  7. A Computer Program for Training Eccentric Reading in Persons with Central Scotoma

    ERIC Educational Resources Information Center

    Kasten, Erich; Haschke, Peggy; Meinhold, Ulrike; Oertel-Verweyen, Petra

    2010-01-01

    This article explores the effectiveness of a computer program--Xcentric viewing--for training eccentric reading in persons with central scotoma. The authors conducted a small study to investigate whether this program increases the reading capacities of individuals with age-related macular degeneration (AMD). Instead of a control group, they…

  8. TOWARD A COMPUTER BASED INSTRUCTIONAL SYSTEM.

    ERIC Educational Resources Information Center

    GARIGLIO, LAWRENCE M.; RODGERS, WILLIAM A.

    THE INFORMATION FOR THIS REPORT WAS OBTAINED FROM VARIOUS COMPUTER ASSISTED INSTRUCTION INSTALLATIONS. COMPUTER BASED INSTRUCTION REFERS TO A SYSTEM AIMED AT INDIVIDUALIZED INSTRUCTION, WITH THE COMPUTER AS CENTRAL CONTROL. SUCH A SYSTEM HAS 3 MAJOR SUBSYSTEMS--INSTRUCTIONAL, RESEARCH, AND MANAGERIAL. THIS REPORT EMPHASIZES THE INSTRUCTIONAL…

  9. BIO-Plex Information System Concept

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)

    1999-01-01

    This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers which perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate, and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed, and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.

  10. A Computer Program Functional Design of the Simulation Subsystem of an Automated Central Flow Control System

    DOT National Transportation Integrated Search

    1976-08-01

    This report contains a functional design for the simulation of a future automation concept in support of the ATC Systems Command Center. The simulation subsystem performs airport airborne arrival delay predictions and computes flow control tables for...

  11. Multiple-User, Multitasking, Virtual-Memory Computer System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  12. Data processing for water monitoring system

    NASA Technical Reports Server (NTRS)

    Monford, L.; Linton, A. T.

    1978-01-01

    Water monitoring data acquisition system is structured about central computer that controls sampling and sensor operation, and analyzes and displays data in real time. Unit is essentially separated into two systems: computer system, and hard wire backup system which may function separately or with computer.

  13. Emergent Adaptive Noise Reduction from Communal Cooperation of Sensor Grid

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Jones, Michael G.; Nark, Douglas M.; Lodding, Kenneth N.

    2010-01-01

    In the last decade, the realization of small, inexpensive, and powerful devices with sensors, computers, and wireless communication has promised the development of massive sized sensor networks with dense deployments over large areas capable of high fidelity situational assessments. However, most management models have been based on centralized control and research has concentrated on methods for passing data from sensor devices to the central controller. Most implementations have been small but, as it is not scalable, this methodology is insufficient for massive deployments. Here, a specific application of a large sensor network for adaptive noise reduction demonstrates a new paradigm where communities of sensor/computer devices assess local conditions and make local decisions from which a global behaviour emerges. This approach obviates many of the problems of centralized control, as it is not prone to a single point of failure and is more scalable, efficient, robust, and fault tolerant.
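
    A toy illustration of the local-rules-only idea described above (node count, topology, and readings are invented; this is not the paper's algorithm): each node repeatedly averages its noise estimate with its immediate neighbors', and a network-wide consensus emerges with no central controller involved.

    ```python
    import random

    # Ring of sensor nodes, each holding a local noise estimate (dB, illustrative).
    N = 20
    readings = [random.uniform(60.0, 90.0) for _ in range(N)]

    for _ in range(50):
        new = readings[:]
        for i in range(N):
            left, right = readings[(i - 1) % N], readings[(i + 1) % N]
            new[i] = (left + readings[i] + right) / 3.0  # purely local rule
        readings = new

    # The spread shrinks toward zero: a global agreement emerges from local steps.
    print(max(readings) - min(readings))
    ```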

  14. Experiments in Computing: A Survey

    PubMed Central

    Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general. PMID:24688404

  15. Experiments in computing: a survey.

    PubMed

    Tedre, Matti; Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general.

  16. Determination of Stability and Control Derivatives using Computational Fluid Dynamics and Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Green, Lawrence L.; Montgomery, Raymond C.; Raney, David L.

    1999-01-01

    With the recent interest in novel control effectors there is a need to determine the stability and control derivatives of new aircraft configurations early in the design process. These derivatives are central to most control law design methods and would allow the determination of closed-loop control performance of the vehicle. Early determination of the static and dynamic behavior of an aircraft may permit significant improvement in configuration weight, cost, stealth, and performance through multidisciplinary design. The classical method of determining static stability and control derivatives - constructing and testing wind tunnel models - is expensive and requires a long lead time for the resultant data. Wind tunnel tests are also limited to the preselected control effectors of the model. To overcome these shortcomings, computational fluid dynamics (CFD) solvers are augmented via automatic differentiation to directly calculate the stability and control derivatives. The CFD forces and moments are differentiated with respect to angle of attack, angle of sideslip, and aircraft shape parameters to form these derivatives. A subset of the static stability and control derivatives of a tailless aircraft concept has been computed by two differentiated inviscid CFD codes and verified for accuracy with central finite-difference approximations and favorable comparisons to a simulation database.
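
    The central finite-difference check mentioned above is easy to reproduce in miniature; the sketch below uses a made-up pitching-moment function as a stand-in for the CFD solver (nothing here is from the paper, and the step size is an illustrative choice).

    ```python
    import math

    def central_difference(f, x, h=1e-4):
        """Approximate df/dx at x with a second-order central difference."""
        return (f(x + h) - f(x - h)) / (2.0 * h)

    def pitching_moment(alpha_rad):
        # Placeholder aerodynamic model, NOT the differentiated CFD code.
        return -0.8 * alpha_rad + 0.05 * math.sin(3.0 * alpha_rad)

    # "Verify" a stability derivative (moment vs. angle of attack) at 2 degrees.
    alpha = math.radians(2.0)
    print(central_difference(pitching_moment, alpha))
    ```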

  17. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    PubMed

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
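
    In outline, the classical side of such a hybrid scheme is a gradient ascent whose fitness and gradient evaluations are delegated to the quantum device. A minimal sketch, with the quantum simulator mocked by a toy classical function (the landscape, parameter count, and learning rate are illustrative assumptions, not the experiment's values):

    ```python
    import numpy as np

    # Hypothetical stand-in for the quantum simulator: in the experiment the
    # fitness and its gradient come from evolution and measurement on hardware.
    def measure_fitness_and_gradient(theta):
        fitness = -np.sum((theta - 0.5) ** 2)  # toy landscape, peak at 0.5
        grad = -2.0 * (theta - 0.5)
        return fitness, grad

    theta = np.zeros(4)  # control parameters for the pulse sequence
    lr = 0.2
    for step in range(100):
        f, g = measure_fitness_and_gradient(theta)
        theta += lr * g  # classical update from quantum-estimated gradient
        if np.linalg.norm(g) < 1e-6:
            break
    print(f, theta)
    ```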

  18. Automated Power Systems Management (APSM)

    NASA Technical Reports Server (NTRS)

    Bridgeforth, A. O.

    1981-01-01

    A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.

  19. Analysis of Selected Enhancements to the En Route Central Computing Complex

    DOT National Transportation Integrated Search

    1981-09-01

    This report analyzes selected hardware enhancements that could improve the performance of the 9020 computer systems, which are used to provide en route air traffic control services. These enhancements could be implemented quickly, would be relatively...

  20. A modified approach to controller partitioning

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Veillette, Robert J.

    1993-01-01

    The idea of computing a decentralized control law for the integrated flight/propulsion control of an aircraft by partitioning a given centralized controller is investigated. An existing controller partitioning methodology is described, and a modified approach is proposed with the objective of simplifying the associated controller approximation problem. Under the existing approach, the decentralized control structure is a variable in the partitioning process; by contrast, the modified approach assumes that the structure is fixed a priori. Hence, the centralized controller design may take the decentralized control structure into account. Specifically, the centralized controller may be designed to include all the same inputs and outputs as the decentralized controller; then, the two controllers may be compared directly, simplifying the partitioning process considerably. Following the modified approach, a centralized controller is designed for an example aircraft model. The design includes all the inputs and outputs to be used in a specified decentralized control structure. However, it is shown that the resulting centralized controller is not well suited for approximation by a decentralized controller of the given structure. The results indicate that it is not practical in general to cast the controller partitioning problem as a direct controller approximation problem.

  1. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs.

    PubMed

    Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan

    2017-12-13

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols.

  2. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs

    PubMed Central

    Hu, Xiaopeng; Wang, Fan

    2017-01-01

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols. PMID:29236031

  3. Reach a New Threshold of Freedom and Control with Dell's Flexible Computing Solution: On-Demand Desktop Streaming

    ERIC Educational Resources Information Center

    Technology & Learning, 2008

    2008-01-01

    When it comes to IT, there has always been an important link between data center control and client flexibility. As computing power increases, so do the potentially crippling threats to security, productivity and financial stability. This article talks about Dell's On-Demand Desktop Streaming solution which is designed to centralize complete…

  4. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and has already proven to be very efficient in optimizing the running systems and detecting misbehaving processes or nodes.

  5. Propulsion/flight control integration technology (PROFIT) software system definition

    NASA Technical Reports Server (NTRS)

    Carlin, C. M.; Hastings, W. J.

    1978-01-01

    The Propulsion Flight Control Integration Technology (PROFIT) program is designed to develop a flying testbed dedicated to controls research. The control software for PROFIT is defined. Maximum flexibility, needed for long term use of the flight facility, is achieved through a modular design. The Host program processes inputs from the telemetry uplink, aircraft central computer, cockpit computer control and plant sensors to form an input data base for use by the control algorithms. The control algorithms, programmed as application modules, process the input data to generate an output data base. The Host program formats the data for output to the telemetry downlink, the cockpit computer control, and the control effectors. Two application modules are defined: the bill of materials F-100 engine control and the bill of materials F-15 inlet control.

  6. Digital system for structural dynamics simulation

    NASA Technical Reports Server (NTRS)

    Krauter, A. I.; Lagace, L. J.; Wojnar, M. K.; Glor, C.

    1982-01-01

    State-of-the-art digital hardware and software for the simulation of complex structural dynamic interactions, such as those which occur in rotating structures (engine systems), were incorporated in a system designed to use an array of processors in which the computation for each physical subelement or functional subsystem would be assigned to a single specific processor in the simulator. These node processors are microprogrammed bit-slice microcomputers which function autonomously and can communicate with each other and a central control minicomputer over parallel digital lines. Inter-processor nearest-neighbor communications busses pass the constants which represent physical constraints and boundary conditions. Each node processor is connected to its six nearest-neighbor node processors to simulate the actual physical interface of real substructures. Computer-generated finite element mesh and force models can be developed with the aid of the central control minicomputer. The control computer also oversees the animation of a graphics display system and disk-based mass storage, along with the individual processing elements.

  7. Proceedings of the Ship Control Systems Symposium (9th) Held in Bethesda, Maryland on 10-14 September 1990. Theme: Automation in Surface Ship Control Systems, Today’s Applications and Future Trends. Volume 1

    DTIC Science & Technology

    1990-09-14

    transmission of detected variations through sound lines of communication to centrally located standard Navy computers. These computers would be programmed to ... have been programmed in C language. The program runs under the operating system OS9 on a VME-bus computer with a 68000 microprocessor. A number of full ... present practice of "add-on" supervisory controls during ship design and construction, and "fix-it" R&D programs implemented after the ship is operational

  8. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for a practical application critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute in a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the orders of request processing and in the procedures of routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off relationship between network throughput and computation complexity in the routing table update procedure by a simulation study.
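
    To make the heuristic concrete, here is a toy sketch of a hottest-request-first RWA pass: requests are ordered by demand intensity weighted by path length, then each is routed on a shortest path and given the first wavelength that is free on every hop. The topology, request format, and wavelength count are invented for illustration and are not the paper's testbed configuration.

    ```python
    from collections import deque

    GRAPH = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
    NUM_WAVELENGTHS = 4

    def shortest_path(src, dst):
        # Breadth-first search; all links have unit length in this toy topology.
        prev, queue = {src: None}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                path = [u]
                while prev[u] is not None:
                    u = prev[u]
                    path.append(u)
                return path[::-1]
            for v in GRAPH[u]:
                if v not in prev:
                    prev[v] = u
                    queue.append(v)
        return None

    used = set()  # (link, wavelength) pairs already occupied

    def assign(requests):
        # "Hottest" first: high demand intensity over long paths goes earliest.
        order = sorted(requests,
                       key=lambda r: -r['intensity'] * len(shortest_path(r['src'], r['dst'])))
        out = []
        for r in order:
            path = shortest_path(r['src'], r['dst'])
            links = list(zip(path, path[1:]))
            for w in range(NUM_WAVELENGTHS):  # wavelength continuity on all hops
                if all((link, w) not in used for link in links):
                    used.update((link, w) for link in links)
                    out.append((r['src'], r['dst'], w))
                    break
        return out

    print(assign([{'src': 'A', 'dst': 'D', 'intensity': 3},
                  {'src': 'B', 'dst': 'C', 'intensity': 1}]))
    ```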

  9. Inertial subsystem functional and design requirements for the orbiter (Phase B extension baseline)

    NASA Technical Reports Server (NTRS)

    Flanders, J. H.; Green, J. P., Jr.

    1972-01-01

    The design requirements use the Phase B extension baseline system definition. This means that a GNC computer is specified for all command control functions instead of a central computer communicating with the ISS through a databus. Forced air cooling is used instead of cold plate cooling.

  10. A spacecraft computer repairable via command.

    NASA Technical Reports Server (NTRS)

    Fimmel, R. O.; Baker, T. E.

    1971-01-01

    The MULTIPAC is a central data system developed for deep-space probes with the distinctive feature that it may be repaired during flight via command and telemetry links by reprogramming around the failed unit. The computer organization uses pools of identical modules which the program organizes into one or more computers called processors. The interaction of these modules is dynamically controlled by the program rather than hardware. In the event of a failure, new programs are entered which reorganize the central data system with a somewhat reduced total processing capability aboard the spacecraft. Emphasis is placed on the evolution of the system architecture and the final overall system design rather than the specific logic design.

  11. Design of a modular digital computer system, CDRL no. D001, final design plan

    NASA Technical Reports Server (NTRS)

    Easton, R. A.

    1975-01-01

    The engineering breadboard implementation for the CDRL no. D001 modular digital computer system developed during design of the logic system was documented. This effort followed the architecture study completed and documented previously, and was intended to verify the concepts of a fault tolerant, automatically reconfigurable, modular version of the computer system conceived during the architecture study. The system has a microprogrammed 32 bit word length, general register architecture and an instruction set consisting of a subset of the IBM System 360 instruction set plus additional fault tolerance firmware. The following areas were covered: breadboard packaging, central control element, central processing element, memory, input/output processor, and maintenance/status panel and electronics.

  12. A Computational Account of Children's Analogical Reasoning: Balancing Inhibitory Control in Working Memory and Relational Representation

    ERIC Educational Resources Information Center

    Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.

    2011-01-01

    Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…

  13. The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    NASA Technical Reports Server (NTRS)

    Kusmanoff, Antone; Martin, Nancy L.

    1989-01-01

    In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift to distributed computing from centralized computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises as to how a system designer assigns applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real-time data processing are analyzed, and systems are displayed which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or nonreal time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.

  14. CDL description of the CDC 6600 stunt box

    NASA Technical Reports Server (NTRS)

    Hertzog, J. B.

    1971-01-01

    The CDC 6600 central memory control (stunt box) is described utilizing CDL (Computer Design Language), block diagrams, and text. The stunt box is a clearing house for all central memory references from the 6600 central and peripheral processors. Since memory requests can be issued simultaneously, the stunt box must be capable of assigning priorities to requests, of labeling requests so that the data will be distributed correctly, and of remembering rejected addresses due to memory conflicts.

  15. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Felicia Angelica; Waymire, Russell L.

    2013-10-01

    Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations [U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency] related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  16. ONR Europe Reports. Computer Science/Computer Engineering in Central Europe: A Report on Czechoslovakia, Hungary, and Poland

    DTIC Science & Technology

    1992-08-01

    Rychlik J.: Simulation of distributed control systems. Research report of Institute of Technology in Pilsen no. 209-07-85, Jun. 1985 ... Kocur P.: Sensitivity analysis of reliability parameters. Proceedings of conf. FTSD, Brno, Jun. 1986, pp. 97-101 ... Smrha P., Kocur P., Racek S.: A

  17. GABAA-benzodiazepine-chloride receptor-targeted therapy for tinnitus control: preliminary report.

    PubMed

    Shulman, Abraham; Strashun, Arnold M; Goldstein, Barbara A

    2002-01-01

    Our goal was to attempt to establish neuropharmacological tinnitus control (i.e., relief) with medication directed to restoration of a deficiency in the gamma-aminobutyric acid-benzodiazepine-chloride receptor in tinnitus patients with a diagnosis of a predominantly central type tinnitus. Thirty tinnitus patients completed a medical audiological tinnitus patient protocol and brain magnetic resonance imaging and single-photon emission computed tomography of brain. Treatment with GABAergic and benzodiazepine medication continued for 4-6 weeks. A maintenance dose was continued when tinnitus control was positive. Intake and outcome questionnaires were completed. Of 30 patients, 21 completed the trial (70%). Tinnitus control lasting from 4-6 weeks to 3 years was reported by 19 of the 21 (90%). The trial was not completed by 9 of the 30 (30%). No patient experienced an increase in tinnitus intensity or annoyance. Sequential brain single-photon emission computed tomography in 10 patients revealed objective evidence of increased brain perfusion. Patients with a predominantly central type tinnitus experience significant tinnitus control with medication directed to the gamma-aminobutyric acid-benzodiazepine-chloride receptor.

  18. A Logically Centralized Approach for Control and Management of Large Computer Networks

    ERIC Educational Resources Information Center

    Iqbal, Hammad A.

    2012-01-01

    Management of large enterprise and Internet service provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these…

  19. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip; Garg, Sanjay; Holowecky, Brian

    1992-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  20. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Garg, Sanjay; Holowecky, Brian R.

    1993-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.
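
    As a schematic illustration of the parameter-optimization idea (not the authors' algorithm or cost function, which operate on closed-loop transfer functions): gradient descent over the free parameters of a structurally constrained controller can drive it toward a given centralized design, with the residual cost measuring what the imposed decentralized structure cannot capture. All matrices and gains below are invented.

    ```python
    import numpy as np

    # Centralized controller gain (toy static example; real designs are dynamic).
    K_central = np.array([[2.0, 0.3],
                          [0.4, 1.5]])

    theta = np.zeros(2)  # free parameters: the diagonal of the partitioned gain
    lr = 0.1
    for _ in range(200):
        K_part = np.diag(theta)       # enforced decentralized (diagonal) structure
        err = K_part - K_central
        cost = np.sum(err ** 2)       # quadratic mismatch cost
        grad = 2.0 * np.diag(err)     # gradient w.r.t. the diagonal parameters
        theta -= lr * grad
    # theta converges to diag(K_central); the leftover cost is the off-diagonal
    # coupling that no controller of this structure can reproduce.
    print(theta, cost)
    ```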

  1. Help at Hand

    ERIC Educational Resources Information Center

    Demski, Jennifer

    2009-01-01

    This article describes how centralized presentation control systems enable IT support staff to monitor equipment and assist end users more efficiently. At Temple University, 70 percent of the classrooms are equipped with an AMX touch panel, linked via a Netlink controller to an in-classroom computer, projector, DVD/VCR player, and speakers. The…

  2. X-wing fly-by-wire vehicle management system

    NASA Technical Reports Server (NTRS)

    Fischer, Jr., William C. (Inventor)

    1990-01-01

    A complete, computer based, vehicle management system (VMS) for X-Wing aircraft using digital fly-by-wire technology controlling many subsystems and providing functions beyond the classical aircraft flight control system. The vehicle management system receives input signals from a multiplicity of sensors and provides commands to a large number of actuators controlling many subsystems. The VMS includes--segregating flight critical and mission critical factors and providing a greater level of back-up or redundancy for the former; centralizing the computation of functions utilized by several subsystems (e.g. air data, rotor speed, etc.); integrating the control of the flight control functions, the compressor control, the rotor conversion control, vibration alleviation by higher harmonic control, engine power anticipation and self-test, all in the same flight control computer (FCC) hardware units. The VMS uses equivalent redundancy techniques to attain quadruple equivalency levels; includes alternate modes of operation and recovery means to back-up any functions which fail; and uses back-up control software for software redundancy.

  3. The revolution in data gathering systems

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Trover, W. F.

    1975-01-01

    Data acquisition systems used in NASA's wind tunnels from the 1950's through the present time are summarized as a baseline for assessing the impact of minicomputers and microcomputers on data acquisition and data processing. Emphasis is placed on the cyclic evolution in computer technology which transformed the central computer system and led finally to the distributed computer system. Other developments discussed include: medium-scale integration, large-scale integration, combining the functions of data acquisition and control, and micro- and minicomputers.

  4. Doppler compensation by shifting transmitted object frequency within limits

    NASA Technical Reports Server (NTRS)

    Laughlin, C. R., Jr.; Hollenbaugh, R. C.; Allen, W. K. (Inventor)

    1973-01-01

    A system and method are disclosed for position locating, deriving centralized air traffic control data, and communicating via voice and digital signals between a multiplicity of remote aircraft, including supersonic transports, and a central station. Such communication takes place through a synchronous satellite relay station. Side tone ranging patterns, as well as the digital and voice signals, are modulated on a carrier transmitted from the central station and received on all of the supersonic transports. Each aircraft communicates with the ground stations via a different frequency-multiplexed spectrum. Supersonic transport position is derived from a computer at the central station and supplied to a local air traffic controller. Position is determined in response to variable phase information imposed on the side tones at the aircraft. Common to all of the side tone techniques is Doppler compensation for the supersonic transport velocity.

  5. The Effects of Using Dynamic Geometry on Eighth Grade Students' Achievement and Attitude towards Triangles

    ERIC Educational Resources Information Center

    Turk, Halime Samur; Akyuz, Didem

    2016-01-01

    This study investigates the effects of dynamic geometry based computer instruction on eighth grade students' achievement in geometry and their attitudes toward geometry and technology compared to traditional instruction. Central to the study was a controlled experiment, which contained experimental and control groups both instructed by the same…

  6. The operation of large computer-controlled manufacturing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upton, D.M.

    1988-01-01

    This work examines methods for operation of large computer-controlled manufacturing systems, with more than 50 or so disparate CNC machines in congregation. The central theme is the development of a distributed control system, which requires minimal central supervision, and allows manufacturing system re-configuration without extensive control software re-writes. Provision is made for machines to learn from their experience and provide estimates of the time necessary to effect various tasks. Routing is opportunistic, with varying degrees of myopia depending on the prevailing situation. Necessary curtailments of opportunism are built in to the system, in order to provide a society of machines that operate in unison rather than in chaos. Negotiation and contention resolution are carried out using a UHF radio communications network, along with processing capability on both pallets and tools. Graceful and robust error recovery is facilitated by ensuring adequate pessimistic consideration of failure modes at each stage in the scheme. Theoretical models are developed and an examination is made of fundamental characteristics of auction-based scheduling methods.

  7. Centralized Accounting and Electronic Filing Provides Efficient Receivables Collection.

    ERIC Educational Resources Information Center

    School Business Affairs, 1983

    1983-01-01

    An electronic filing system makes financial control manageable at Bowling Green State University, Ohio. The system enables quick access to computer-stored consolidated account data and microfilm images of charges, statements, and other billing documents. (MLF)

  8. An Upgrade of the Aeroheating Software "MINIVER"

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce

    2013-01-01

    Detailed computational modeling: CFD often used to create and execute computational domains. Increasing complexity when moving from 2D to 3D geometries. Computational time increases as finer grids are used (accuracy). Strong tool, but takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's Drawbacks: Rigid command-line interface. Lackluster, unorganized documentation. No central control; multiple versions exist and have diverged.

  9. Resiliency in Future Cyber Combat

    DTIC Science & Technology

    2016-04-04

    including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.”6 One important point emerging from the...definition is that while the Internet is part of cyberspace, it is not all of cyberspace. Any computer processor capable of communicating with a...central processor on a modern car are all part of cyberspace, although only some of them are routinely connected to the Internet. Most modern

  10. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high-speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high-performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program. Kinetic Systems equipment allows tokamak data to be acquired four to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  11. The Information Revolution.

    ERIC Educational Resources Information Center

    Gilder, George

    1993-01-01

    A technological revolution is erupting all about us. A millionfold rise in computation and communications cost effectiveness will transform all industries and bureaucracies. The information revolution is a decentralizing, microcosmic electronic force opposing the centralizing, controlling Industrial-Age mentality persisting in schools. Television…

  12. [Gender differences of the influence of an "aggressive" computer game on the variability of the heart rhythm].

    PubMed

    Stepanian, L S; Grigorian, V G; Stepanian, A Iu

    2010-01-01

    The influence of a virtual aggressogenic environment on the psychoemotional sphere of male and female teenagers with different levels of personal aggression was studied using characteristics of the heart rate. Exposure to the aggressogenic factors produced a shift from centralized control to autonomic control in female and male teenagers with a high level of aggression and in females with a low level of aggression. Male teenagers with a low level of aggression displayed the prevalence of centralized control. Thus, the influence of exposing teenagers to the virtual aggressogenic environment was shown to be ambiguous, depending on the level of personal aggression and gender.

  13. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia*

    PubMed Central

    Kim, Sung-Phil; Simeral, John D; Hochberg, Leigh R; Donoghue, John P; Black, Michael J

    2010-01-01

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. PMID:19015583
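
    For readers unfamiliar with the decoding step, a minimal Kalman-filter velocity decoder looks roughly like the sketch below; the dynamics, noise covariances, and neuron count are illustrative assumptions, not the study's fitted parameters.

    ```python
    import numpy as np

    A = np.eye(2) * 0.95        # velocity dynamics: smooth, slowly decaying
    W = np.eye(2) * 0.01        # process noise covariance
    H = np.random.randn(10, 2)  # 10 "neurons" linearly tuned to 2-D velocity
    Q = np.eye(10) * 0.5        # observation (firing-rate) noise covariance

    v = np.zeros(2)             # velocity estimate
    P = np.eye(2)               # estimate covariance

    def decode_step(rates, v, P):
        # Predict the next velocity from the dynamics model.
        v_pred = A @ v
        P_pred = A @ P @ A.T + W
        # Update the prediction with the observed firing rates.
        S = H @ P_pred @ H.T + Q
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        v_new = v_pred + K @ (rates - H @ v_pred)
        P_new = (np.eye(2) - K @ H) @ P_pred
        return v_new, P_new

    true_v = np.array([0.2, -0.1])
    for _ in range(50):
        rates = H @ true_v + 0.5 * np.random.randn(10)  # simulated spiking rates
        v, P = decode_step(rates, v, P)
    print(v)  # tracks true_v; the cursor position integrates this velocity
    ```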

  14. Altered cardiorespiratory coupling in young male adults with excessive online gaming.

    PubMed

    Chang, Jae Seung; Kim, Eun Young; Jung, Dooyoung; Jeong, Seong Hoon; Kim, Yeni; Roh, Myoung-Sun; Ahn, Yong Min; Hahm, Bong-Jin

    2015-09-01

    This study aimed to investigate changes in heart rate variability and cardiorespiratory coupling in male college students with problematic Internet use (PIU) excessive gaming type during action video game play to assess the relationship between PIU tendency and central autonomic regulation. Electrocardiograms and respiration were simultaneously recorded from 22 male participants with excessive online gaming and 22 controls during action video game play. Sample entropy (SampEn) was computed to assess autonomic regularity, and cross-SampEn was calculated to quantify autonomic coordination. During video game play, reduced cardiorespiratory coupling (CRC) was observed in individuals with PIU excessive gaming type compared with controls, implicating central autonomic dysregulation. The PIU tendency was associated with the severity of autonomic dysregulation. These findings indicate impaired CRC in PIU excessive gaming type, which may reflect alterations of central inhibitory control over autonomic responses to pleasurable online stimuli. Copyright © 2015 Elsevier B.V. All rights reserved.
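
    Sample entropy itself is simple to compute; a compact sketch follows, using the conventional m = 2 and r = 0.2*std defaults and a common simplified variant of the template counting (the authors' exact parameter choices are not given in this snippet).

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """SampEn: lower values indicate a more regular signal."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)

        def count_matches(mm):
            # All length-mm templates; count pairs within Chebyshev distance r.
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            matches = 0
            for i in range(len(templates) - 1):
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                matches += np.sum(d < r)
            return matches

        B = count_matches(m)      # matches of length m
        A = count_matches(m + 1)  # matches of length m + 1
        return -np.log(A / B)

    rng = np.random.default_rng(0)
    print(sample_entropy(rng.normal(size=500)))        # irregular -> higher
    print(sample_entropy(np.sin(np.arange(500) / 5)))  # regular -> lower
    ```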

  15. Uncovering many-body correlations in nanoscale nuclear spin baths by central spin decoherence

    PubMed Central

    Ma, Wen-Long; Wolfowicz, Gary; Zhao, Nan; Li, Shu-Shen; Morton, John J.L.; Liu, Ren-Bao

    2014-01-01

    Central spin decoherence caused by nuclear spin baths is often a critical issue in various quantum computing schemes, and it has also been used for sensing single-nuclear spins. Recent theoretical studies suggest that central spin decoherence can act as a probe of many-body physics in spin baths; however, identification and detection of many-body correlations of nuclear spins in nanoscale systems are highly challenging. Here, taking a phosphorus donor electron spin in a 29Si nuclear spin bath as our model system, we discover both theoretically and experimentally that many-body correlations in nanoscale nuclear spin baths produce identifiable signatures in decoherence of the central spin under multiple-pulse dynamical decoupling control. We demonstrate that under control by an odd or even number of pulses, the central spin decoherence is principally caused by second- or fourth-order nuclear spin correlations, respectively. This study marks an important step toward studying many-body physics using spin qubits. PMID:25205440

  16. Computing and information services at the Jet Propulsion Laboratory - A management approach to a diversity of needs

    NASA Technical Reports Server (NTRS)

    Felberg, F. H.

    1984-01-01

    The Jet Propulsion Laboratory, a research and development organization with about 5,000 employees, presents a complicated set of requirements for an institutional system of computing and informational services. The approach taken by JPL in meeting this challenge is one of controlled flexibility. A central communications network is provided, together with selected computing facilities for common use. At the same time, staff members are given considerable discretion in choosing the mini- and microcomputers that they believe will best serve their needs. Consultation services, computer education, and other support functions are also provided.

  17. ALMA test interferometer control system: past experiences and future developments

    NASA Astrophysics Data System (ADS)

    Marson, Ralph G.; Pokorny, Martin; Kern, Jeff; Stauffer, Fritz; Perrigouard, Alain; Gustafsson, Birger; Ramey, Ken

    2004-09-01

    The Atacama Large Millimeter Array (ALMA) will, when it is completed in 2012, be the world's largest millimeter & sub-millimeter radio telescope. It will consist of 64 antennas, each one 12 meters in diameter, connected as an interferometer. The ALMA Test Interferometer Control System (TICS) was developed as a prototype for the ALMA control system. Its initial task was to provide sufficient functionality for the evaluation of the prototype antennas. The main antenna evaluation tasks include surface measurements via holography and pointing accuracy, measured at both optical and millimeter wavelengths. In this paper we will present the design of TICS, which is a distributed computing environment. In the test facility there are four computers: three real-time computers running VxWorks (one on each antenna and a central one) and a master computer running Linux. These computers communicate via Ethernet, and each of the real-time computers is connected to the hardware devices via an extension of the CAN bus. We will also discuss our experience with this system and outline changes we are making in light of our experiences.

  18. Space-Shuttle Emulator Software

    NASA Technical Reports Server (NTRS)

    Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram

    2007-01-01

    A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.

  19. Supporting railroad roadway work communications with a wireless handheld computer. Volume 1 : usability for the roadway worker

    DOT National Transportation Integrated Search

    2004-10-01

    Communications in current railroad operations rely heavily on voice communications. Radio congestion impairs roadway workers' ability to communicate effectively with dispatchers at the Central Traffic Control Center and has adverse consequences for...

  20. Supporting railroad roadway worker communications with a wireless handheld computer. Volume 1, Usability for the roadway worker.

    DOT National Transportation Integrated Search

    2004-10-31

    Communications in current railroad operations rely heavily on voice communications. Radio congestion impairs roadway workers' ability to communicate effectively with dispatchers at the Central Traffic Control Center and has adverse consequences for...

  1. Using Microcomputers to Manage Grants.

    ERIC Educational Resources Information Center

    Joseph, Jonathan L.; And Others

    1982-01-01

    Features of microcomputer systems and software that can be useful in administration of research grants are outlined, including immediacy of reporting, flexibility, accurate balance availability, useful coding, accurate payroll control, and forecasting capabilities. These are contrasted with the less flexible centralized computer operation. (MSE)

  2. LSU Slashes Energy Use

    ERIC Educational Resources Information Center

    Collier, Herbert I.

    1978-01-01

    Energy conservation programs at Louisiana State University reduced energy use 23 percent. The programs involved computer controlled power management systems, adjustment of building temperatures and lighting levels to prescribed standards, consolidation of night classes, centralization of chilled water systems, and manual monitoring of heating and…

  3. [Research on the Application of Fuzzy Logic to Systems Analysis and Control

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Research conducted with the support of NASA Grant NCC2-275 has focused mainly on the development of fuzzy logic and soft computing methodologies and their applications to systems analysis and control, with emphasis on problem areas that are of relevance to NASA's missions. One of the principal results of our research has been the development of a new methodology called Computing with Words (CW). Basically, in CW words drawn from a natural language are employed in place of numbers for computing and reasoning. There are two major imperatives for computing with words. First, computing with words is a necessity when the available information is too imprecise to justify the use of numbers, and second, when there is a tolerance for imprecision which can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. Exploitation of the tolerance for imprecision is an issue of central importance in CW.
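
    The grant report gives no implementation, but the core trade described above (replacing precise numbers with words when precision is unavailable or unnecessary) is conventionally realized with fuzzy membership functions. Below is a minimal sketch of that idea; the linguistic terms, breakpoints, and function names are illustrative assumptions, not anything from the NASA-supported work.

    ```python
    # Minimal illustration of "computing with words": numbers are mapped to
    # words via fuzzy membership functions, and reasoning proceeds on words.
    # All linguistic terms and breakpoints here are illustrative assumptions.

    def triangular(a: float, b: float, c: float):
        """Return a triangular membership function peaking at b."""
        def mu(x: float) -> float:
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
        return mu

    # Linguistic variable "temperature" with three words.
    TERMS = {
        "cold": triangular(-10.0, 0.0, 12.0),
        "warm": triangular(8.0, 18.0, 28.0),
        "hot":  triangular(24.0, 35.0, 50.0),
    }

    def fuzzify(x: float) -> dict:
        """Degree of membership of x in each word."""
        return {word: mu(x) for word, mu in TERMS.items()}

    def best_word(x: float) -> str:
        """Replace the number x by the word that fits it best."""
        degrees = fuzzify(x)
        return max(degrees, key=degrees.get)

    if __name__ == "__main__":
        print(fuzzify(15.0))    # {'cold': 0.0, 'warm': 0.7, 'hot': 0.0}
        print(best_word(15.0))  # 'warm'
    ```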

  4. Computer graphics for management: An abstract of capabilities and applications of the EIS system

    NASA Technical Reports Server (NTRS)

    Solem, B. J.

    1975-01-01

    The Executive Information Services (EIS) system, developed as a computer-based, time-sharing tool for making and implementing management decisions, and including computer graphics capabilities, is described. The following resources are available through the EIS languages: a centralized corporate/government data base, customized and working data bases, report writing, general computational capability, specialized routines, modeling/programming capability, and graphics. Nearly all EIS graphs can be created by a single, on-line instruction. A large number of options are available, such as selection of graphic form, line control, shading, placement on the page, multiple images on a page, control of scaling and labeling, plotting of cumulative data sets, optional grid lines, and stacked charts. Examples of areas in which the EIS system may be used include research, estimating services, planning, budgeting, performance measurement, and national computer hook-up negotiations.

  5. Distributed Control of Turbofan Engines

    DTIC Science & Technology

    2009-08-01

    performance of the engine. Thus the Full Authority Digital Engine Controller (FADEC) still remains the central arbiter of the engine's dynamic behavior... For instance, if the control laws are not distributed, the dependence on the FADEC remains high, and system reliability can only be ensured through many... if distributed computing is used at the local level and only coordinated by the FADEC. Such an architecture must be studied in the context of noisy...

  6. A Smart City Application: A Fully Controlled Street Lighting Isle Based on Raspberry-Pi Card, a ZigBee Sensor Network and WiMAX

    PubMed Central

    Leccese, Fabio; Cagnetti, Marco; Trinca, Daniele

    2014-01-01

    A smart city application has been realized and tested. It is a fully remote-controlled isle of lamp posts based on new technologies. It has been designed and organized in different hierarchical layers, which perform local activities to physically control the lamp posts and exchange information with one another for remote control. Locally, each lamp post uses an electronic card for management, and a ZigBee telecommunications network transmits data to a central control unit, which manages the whole isle. The central unit is realized with a Raspberry-Pi control card due to its good computing performance at a very low price. Finally, a WiMAX connection was tested and used to remotely control the smart grid, thus overcoming the distance limitations of commercial Wi-Fi networks. The isle has been realized and tested for some months in the field. PMID:25529206

  7. A smart city application: a fully controlled street lighting isle based on Raspberry-Pi card, a ZigBee sensor network and WiMAX.

    PubMed

    Leccese, Fabio; Cagnetti, Marco; Trinca, Daniele

    2014-12-18

    A smart city application has been realized and tested. It is a fully remote-controlled isle of lamp posts based on new technologies. It has been designed and organized in different hierarchical layers, which perform local activities to physically control the lamp posts and exchange information with one another for remote control. Locally, each lamp post uses an electronic card for management, and a ZigBee telecommunications network transmits data to a central control unit, which manages the whole isle. The central unit is realized with a Raspberry-Pi control card due to its good computing performance at a very low price. Finally, a WiMAX connection was tested and used to remotely control the smart grid, thus overcoming the distance limitations of commercial Wi-Fi networks. The isle has been realized and tested for some months in the field.
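
    Neither paper's code is reproduced in these records, but the hierarchy they describe (local nodes report status; a central Raspberry-Pi unit aggregates and diagnoses) can be sketched as follows. The LampStatus fields, the expected-current fault rule, and all numbers are hypothetical stand-ins for the real ZigBee/Raspberry-Pi stack.

    ```python
    # Hypothetical sketch of the hierarchy described above: local nodes report
    # status, a central unit (the Raspberry-Pi in the papers) aggregates them.
    # Field names, numbers, and the fault rule are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class LampStatus:
        lamp_id: int
        is_on: bool
        current_a: float   # measured lamp current, amperes

    class CentralUnit:
        def __init__(self, expected_current_a: float = 0.4, tolerance: float = 0.25):
            self.expected = expected_current_a
            self.tolerance = tolerance

        def check(self, status: LampStatus) -> str:
            # A lamp commanded on but drawing far too little (or too much)
            # current is probably faulty; this stands in for remote diagnostics.
            deviation = abs(status.current_a - self.expected)
            if status.is_on and deviation > self.tolerance * self.expected:
                return f"lamp {status.lamp_id}: FAULT (current {status.current_a:.2f} A)"
            return f"lamp {status.lamp_id}: OK"

    if __name__ == "__main__":
        unit = CentralUnit()
        reports = [LampStatus(1, True, 0.41), LampStatus(2, True, 0.05),
                   LampStatus(3, False, 0.0)]
        for r in reports:          # in the real isle, reports arrive over ZigBee
            print(unit.check(r))
    ```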

  8. Survey of methods for secure connection to the internet

    NASA Astrophysics Data System (ADS)

    Matsui, Shouichi

    1994-04-01

    This paper describes methods for protecting computers on an internal network against outside attackers and unwelcome visitors, and the controls needed when these computers are connected to the Internet. In the present Internet, a method to encipher all data cannot be used, so it is necessary to use PEM (Privacy Enhanced Mail), which is capable of enciphering and converting secret information. For preventing unauthorized access by password eavesdropping, one-time passwords are effective. The most cost-effective method is a firewall system, which lies between the external and internal networks. By limiting the computers that communicate directly with the Internet, control is centralized and the security of the internal network is protected. If the security of the firewall system is strictly managed and correctly configured, security within the network can be maintained even in open networks such as the Internet.
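
    The survey does not specify which one-time password scheme is meant; a classic realization from the same era is the Lamport/S/Key hash chain, sketched below under that assumption. The server stores only the last hash in the chain, so an eavesdropped password cannot be replayed.

    ```python
    # Minimal sketch of a hash-chain one-time password (Lamport/S/Key style),
    # one classic realization of the "one-time password" defense mentioned
    # above; the surveyed paper does not name this exact scheme.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def make_chain(seed: bytes, n: int) -> list[bytes]:
        """Return [h^1(seed), ..., h^n(seed)]; the server stores only h^n(seed)."""
        chain, x = [], seed
        for _ in range(n):
            x = h(x)
            chain.append(x)
        return chain

    class Server:
        def __init__(self, last_hash: bytes):
            self.stored = last_hash  # h^n(seed)

        def verify(self, otp: bytes) -> bool:
            # A valid one-time password hashes to the stored value; on success
            # the server stores the password itself, so replaying it fails.
            if h(otp) == self.stored:
                self.stored = otp
                return True
            return False

    if __name__ == "__main__":
        chain = make_chain(b"secret seed", 100)
        server = Server(chain[-1])
        print(server.verify(chain[-2]))  # True: h(h^99) equals stored h^100
        print(server.verify(chain[-2]))  # False: replay is rejected
        print(server.verify(chain[-3]))  # True: next password in reverse order
    ```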

  9. Distributed intelligent control and status networking

    NASA Technical Reports Server (NTRS)

    Fortin, Andre; Patel, Manoj

    1993-01-01

    Over the past two years, the Network Control Systems Branch (Code 532) has been investigating control and status networking technologies. These emerging technologies use distributed processing over a network to accomplish a particular custom task. These networks consist of small intelligent 'nodes' that perform simple tasks. Containing simple, inexpensive hardware and software, these nodes can be easily developed and maintained. Once networked, the nodes can perform a complex operation without a central host. This type of system provides an alternative to more complex control and status systems which require a central computer. This paper will provide some background and discuss some applications of this technology. It will also demonstrate the suitability of one particular technology for the Space Network (SN) and discuss the prototyping activities of Code 532 utilizing this technology.

  10. Measurement of fault latency in a digital avionic mini processor, part 2

    NASA Technical Reports Server (NTRS)

    Mcgough, J.; Swern, F.

    1983-01-01

    The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are described. Several earlier programs were reprogrammed, expanding the instruction set to capitalize on the full power of the BDX-930 computer. As a final demonstration of fault coverage, an extensive three-axis, high-performance flight control computation was added. The stages in the development of a CPU self-test program, emphasizing the relationship between fault coverage, speed, and quantity of instructions, were demonstrated.

  11. 21 CFR 1304.04 - Maintenance of records and inventories.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... manual, or computer readable, form. (2) A registered retail pharmacy that possesses additional... this part for those additional registered sites at the retail pharmacy or other approved central...) Each registered pharmacy shall maintain the inventories and records of controlled substances as follows...

  12. 21 CFR 1304.04 - Maintenance of records and inventories.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... manual, or computer readable, form. (2) A registered retail pharmacy that possesses additional... this part for those additional registered sites at the retail pharmacy or other approved central...) Each registered pharmacy shall maintain the inventories and records of controlled substances as follows...

  13. 21 CFR 1304.04 - Maintenance of records and inventories.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... manual, or computer readable, form. (2) A registered retail pharmacy that possesses additional... this part for those additional registered sites at the retail pharmacy or other approved central...) Each registered pharmacy shall maintain the inventories and records of controlled substances as follows...

  14. 21 CFR 1304.04 - Maintenance of records and inventories.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... manual, or computer readable, form. (2) A registered retail pharmacy that possesses additional... this part for those additional registered sites at the retail pharmacy or other approved central...) Each registered pharmacy shall maintain the inventories and records of controlled substances as follows...

  15. Quantum error correction in crossbar architectures

    NASA Astrophysics Data System (ADS)

    Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie

    2018-07-01

    A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.

  16. A new concept of a unified parameter management, experiment control, and data analysis in fMRI: application to real-time fMRI at 3T and 7T.

    PubMed

    Hollmann, M; Mönch, T; Mulla-Osman, S; Tempelmann, C; Stadler, J; Bernarding, J

    2008-10-30

    In functional MRI (fMRI), complex experiments and applications require increasingly complex parameter handling, as the experimental setup usually consists of separate software and hardware systems. Advanced real-time applications such as neurofeedback-based training or brain computer interfaces (BCIs) may even require adaptive changes of the paradigms and experimental setup during the measurement. This would be facilitated by automated management of the overall workflow and control of the communication between all experimental components. We realized a concept based on an XML software framework called Experiment Description Language (EDL). All parameters relevant for real-time data acquisition, real-time fMRI (rtfMRI) statistical data analysis, stimulus presentation, and activation processing are stored in one central EDL file and processed during the experiment. A usability study comparing the central EDL parameter management with traditional approaches showed an improvement in overall experimental handling. Based on this concept, a feasibility study realizing a dynamic rtfMRI-based brain computer interface showed that the developed system, in combination with EDL, was able to reliably detect and evaluate activation patterns in real time. The implementation of centrally controlled communication between the subsystems involved in the rtfMRI experiments reduced potential inconsistencies and will open new applications for adaptive BCIs.
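
    EDL's actual schema is not given in the abstract; the sketch below only illustrates the general pattern of one central XML parameter file feeding every subsystem, using Python's standard library. The tag and attribute names are invented for illustration.

    ```python
    # Illustration of a single central XML parameter file driving several
    # subsystems, the general pattern behind EDL as described above. The
    # tag and attribute names below are invented, not the real EDL schema.
    import xml.etree.ElementTree as ET

    EDL_EXAMPLE = """
    <experiment name="rtfmri-feedback">
      <acquisition tr_ms="2000" volumes="300"/>
      <analysis model="glm" sliding_window="10"/>
      <stimulus package="presentation" feedback="true"/>
    </experiment>
    """

    def load_parameters(xml_text: str) -> dict:
        """Parse the central file once; every subsystem reads from this dict."""
        root = ET.fromstring(xml_text)
        params = {"experiment": root.get("name")}
        for section in root:
            params[section.tag] = dict(section.attrib)
        return params

    if __name__ == "__main__":
        p = load_parameters(EDL_EXAMPLE)
        # Acquisition, analysis, and stimulus subsystems all consult the same
        # parameter store, which is what removes inconsistencies between them.
        print(p["acquisition"]["tr_ms"], p["analysis"]["model"], p["stimulus"]["feedback"])
    ```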

  17. Distributed control system for parallel-connected DC boost converters

    DOEpatents

    Goldsmith, Steven

    2017-08-15

    The disclosed invention is a distributed control system for operating a DC bus fed by disparate DC power sources that service a known or unknown load. The voltage sources vary in v-i characteristics and have time-varying, maximum supply capacities. Each source is connected to the bus via a boost converter, which may have different dynamic characteristics and power transfer capacities, but are controlled through PWM. The invention tracks the time-varying power sources and apportions their power contribution while maintaining the DC bus voltage within the specifications. A central digital controller solves the steady-state system for the optimal duty cycle settings that achieve a desired power supply apportionment scheme for a known or predictable DC load. A distributed networked control system is derived from the central system that utilizes communications among controllers to compute a shared estimate of the unknown time-varying load through shared bus current measurements and bus voltage measurements.
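
    The steady-state relation behind the central controller's computation can be made concrete for an ideal, lossless boost converter, where V_bus = V_in / (1 - D). The sketch below derives duty cycles from that textbook relation and apportions a known load in proportion to source capacity; the patent's controller solves a richer system that accounts for losses, v-i characteristics, and time-varying capacities.

    ```python
    # Sketch of the steady-state computation described above, for an *ideal*
    # (lossless) boost converter model: V_bus = V_in / (1 - D). The real
    # controller solves a richer system; this is only the textbook core.

    def boost_duty_cycle(v_in: float, v_bus: float) -> float:
        """Duty cycle D that boosts v_in to v_bus in the ideal model."""
        if not 0.0 < v_in < v_bus:
            raise ValueError("boost converter requires 0 < v_in < v_bus")
        return 1.0 - v_in / v_bus

    def apportion_load(load_w: float, capacities_w: list[float]) -> list[float]:
        """Split the load among sources in proportion to their capacity."""
        total = sum(capacities_w)
        return [load_w * c / total for c in capacities_w]

    if __name__ == "__main__":
        v_bus = 48.0
        sources = [(12.0, 200.0), (24.0, 400.0)]  # (v_in, max watts), illustrative
        shares = apportion_load(300.0, [cap for _, cap in sources])
        for (v_in, _), p in zip(sources, shares):
            d = boost_duty_cycle(v_in, v_bus)
            print(f"v_in={v_in:>5.1f} V  duty={d:.3f}  share={p:.0f} W")
    ```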

  18. Spacecraft flight control with the new phase space control law and optimal linear jet select

    NASA Technical Reports Server (NTRS)

    Bergmann, E. V.; Croopnick, S. R.; Turkovich, J. J.; Work, C. C.

    1977-01-01

    An autopilot designed for rotation and translation control of a rigid spacecraft is described. The autopilot uses reaction control jets as control effectors and incorporates a six-dimensional phase space control law as well as a linear programming algorithm for jet selection. The interaction of the control law and jet selection was investigated and a recommended configuration proposed. By means of a simulation procedure, the new autopilot was compared with an existing system and was found to be superior in terms of core memory, central processing unit time, firings, and propellant consumption. However, the cycle time required to perform the jet-selection computations might render the new autopilot unsuitable for existing flight computers without modification. The new autopilot is capable of maintaining attitude control in the presence of a large number of jet failures.
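
    The jet-selection step lends itself to a compact illustration: choose non-negative jet duty fractions that produce the commanded force and torque while minimizing propellant, posed as a linear program. The four-jet geometry and commanded values below are invented for illustration, not the spacecraft's actual configuration.

    ```python
    # Sketch of jet selection as a linear program, the approach named above:
    # choose jet duty fractions u >= 0 that produce the commanded force and
    # torque while minimizing propellant (total on-time). The 4-jet geometry
    # here is invented for illustration, not the spacecraft's actual layout.
    import numpy as np
    from scipy.optimize import linprog

    # Columns = jets; rows = [net force, net torque] per unit duty fraction.
    A = np.array([[ 1.0,  1.0, -1.0, -1.0],   # force contribution of each jet
                  [ 1.0, -1.0,  1.0, -1.0]])  # torque contribution of each jet
    command = np.array([0.5, 0.2])            # commanded [force, torque]

    cost = np.ones(A.shape[1])                # propellant ~ total on-time
    result = linprog(cost, A_eq=A, b_eq=command, bounds=(0.0, 1.0))

    print("jet duty fractions:", np.round(result.x, 3))
    print("achieved [force, torque]:", A @ result.x)
    ```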

  19. Programmable Direct-Memory-Access Controller

    NASA Technical Reports Server (NTRS)

    Hendry, David F.

    1990-01-01

    The proposed programmable direct-memory-access controller (DMAC) operates with computer systems of the 32000 series, which have 32-bit data buses and use addresses of 24 (or potentially 32) bits. The controller functions with or without the help of a central processing unit (CPU) and can start itself. It includes such advanced features as the ability to compare two blocks of memory for equality and to search a block of memory for a specific value. It is made as a single very-large-scale integrated-circuit chip.
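
    The two advanced features named above have simple semantics, shown here in software for concreteness; the proposed DMAC would perform these operations in hardware without involving the CPU.

    ```python
    # The two "advanced features" named above, expressed in software so the
    # semantics are concrete; the proposed DMAC performs them in hardware.

    def blocks_equal(mem: bytes, src_a: int, src_b: int, length: int) -> bool:
        """Compare two memory blocks for equality."""
        return mem[src_a:src_a + length] == mem[src_b:src_b + length]

    def search_block(mem: bytes, start: int, length: int, value: int) -> int:
        """Return the address of the first byte equal to `value`, or -1."""
        for addr in range(start, start + length):
            if mem[addr] == value:
                return addr
        return -1

    if __name__ == "__main__":
        memory = bytes([0, 1, 2, 3, 0, 1, 2, 3, 0xFF])
        print(blocks_equal(memory, 0, 4, 4))     # True: both blocks are 0,1,2,3
        print(search_block(memory, 0, 9, 0xFF))  # 8
    ```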

  20. Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1999-01-01

    A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
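
    The centralized LQG design that the paper adapts rests on a steady-state Riccati solution. A minimal sketch of that centralized piece is shown below, using a discrete double integrator as a stand-in for the linearized orbit dynamics; the model, weights, and time step are illustrative assumptions.

    ```python
    # Sketch of the centralized LQR piece of an LQG design: the steady-state
    # gain from the discrete algebraic Riccati equation. The double-integrator
    # model below is an illustrative stand-in for the linearized orbit dynamics.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    dt = 1.0
    A = np.array([[1.0, dt], [0.0, 1.0]])  # position/velocity dynamics
    B = np.array([[0.5 * dt**2], [dt]])    # acceleration input
    Q = np.diag([1.0, 0.1])                # state weighting
    R = np.array([[1.0]])                  # control weighting

    P = solve_discrete_are(A, B, Q, R)     # steady-state Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain: u = -K x

    x = np.array([1.0, 0.0])               # initial offset from the reference
    for _ in range(5):
        u = -K @ x                         # feedback drives the offset to zero
        x = A @ x + B @ u
        print(np.round(x, 4))
    ```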

  1. Vertebrobasilar system computed tomographic angiography in central vertigo

    PubMed Central

    Paşaoğlu, Lale

    2017-01-01

    The incidence of vertigo in the population is 20% to 30%, and one-fourth of cases are related to central causes. The aim of this study was to evaluate computed tomography angiography (CTA) findings of the vertebrobasilar system in central vertigo without stroke. CTA and magnetic resonance images of patients with vertigo were retrospectively evaluated. One hundred twenty-nine patients suspected of having central vertigo according to history, physical examination, and otological and neurological tests, without signs of infarction on diffusion-weighted magnetic resonance imaging, were included in the study. The control group included 120 patients with similar vascular disease risk factors but without vertigo. Vertebral and basilar artery diameters, hypoplasias, exit-site variations of the vertebral artery, vertebrobasilar tortuosity, and stenosis of ≥50% detected on CTA were recorded for all patients. The independent-samples t test was used for variables with normal distribution, and the Mann–Whitney U test for non-normal distributions. Differences in the distribution of categorical variables between groups were analyzed with the χ2 and/or Fisher exact test. Vertebral artery hypoplasia and ≥50% stenosis were seen more often in the vertigo group (P < 0.001). Overall, 78 (60.5%) vertigo patients had ≥50% stenosis: 54 (69.2%) at the V1 segment, 9 (11.5%) at V2, 2 (2.5%) at V3, and 13 (16.6%) at V4. The vertigo and control groups had similar basilar artery hypoplasia and ≥50% stenosis rates (P = 0.80). CTA may help clarify the association between abnormal CTA findings of the vertebral arteries and central vertigo. This article reveals the opportunity to diagnose posterior circulation abnormalities causing central vertigo with a feasible method such as CTA. PMID:28328808

  2. Vertebrobasilar system computed tomographic angiography in central vertigo.

    PubMed

    Paşaoğlu, Lale

    2017-03-01

    The incidence of vertigo in the population is 20% to 30%, and one-fourth of cases are related to central causes. The aim of this study was to evaluate computed tomography angiography (CTA) findings of the vertebrobasilar system in central vertigo without stroke. CTA and magnetic resonance images of patients with vertigo were retrospectively evaluated. One hundred twenty-nine patients suspected of having central vertigo according to history, physical examination, and otological and neurological tests, without signs of infarction on diffusion-weighted magnetic resonance imaging, were included in the study. The control group included 120 patients with similar vascular disease risk factors but without vertigo. Vertebral and basilar artery diameters, hypoplasias, exit-site variations of the vertebral artery, vertebrobasilar tortuosity, and stenosis of ≥50% detected on CTA were recorded for all patients. The independent-samples t test was used for variables with normal distribution, and the Mann-Whitney U test for non-normal distributions. Differences in the distribution of categorical variables between groups were analyzed with the χ2 and/or Fisher exact test. Vertebral artery hypoplasia and ≥50% stenosis were seen more often in the vertigo group (P < 0.001). Overall, 78 (60.5%) vertigo patients had ≥50% stenosis: 54 (69.2%) at the V1 segment, 9 (11.5%) at V2, 2 (2.5%) at V3, and 13 (16.6%) at V4. The vertigo and control groups had similar basilar artery hypoplasia and ≥50% stenosis rates (P = 0.80). CTA may help clarify the association between abnormal CTA findings of the vertebral arteries and central vertigo. This article reveals the opportunity to diagnose posterior circulation abnormalities causing central vertigo with a feasible method such as CTA.
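
    The categorical comparisons reported above can be reproduced in outline with scipy.stats. In the sketch below, the vertigo-group counts follow the abstract (78 of 129 with ≥50% stenosis), but the control-group split is invented for illustration and is not the study's data.

    ```python
    # Outline of the categorical comparison used above (chi-squared and
    # Fisher's exact test on a 2x2 table). The control-group counts are
    # invented for illustration; they are NOT the study's data.
    from scipy import stats

    #                stenosis >=50%   no stenosis
    table = [[78, 51],   # vertigo group (n = 129), per the abstract
             [30, 90]]   # control group (n = 120), hypothetical split

    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
    odds_ratio, p_fisher = stats.fisher_exact(table)

    print(f"chi-squared p = {p_chi2:.4g}, Fisher exact p = {p_fisher:.4g}")
    # Fisher's exact test is preferred when an expected cell count is small.
    ```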

  3. Computational models of neuromodulation.

    PubMed

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for understanding the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulation on its computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single-cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  4. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  5. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    PubMed

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience-approaching subjective behavior as the result of mental computations instantiated in the brain-to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.

  6. Unifying model of carpal mechanics based on computationally derived isometric constraints and rules-based motion - the stable central column theory.

    PubMed

    Sandow, M J; Fisher, T J; Howard, C Q; Papas, S

    2014-05-01

    This study was part of a larger project to develop a (kinetic) theory of carpal motion based on computationally derived isometric constraints. Three-dimensional models were created from computed tomography scans of the wrists of ten normal subjects and carpal spatial relationships at physiological motion extremes were assessed. Specific points on the surface of the various carpal bones and the radius that remained isometric through range of movement were identified. Analysis of the isometric constraints and intercarpal motion suggests that the carpus functions as a stable central column (lunate-capitate-hamate-trapezoid-trapezium) with a supporting lateral column (scaphoid), which behaves as a 'two gear four bar linkage'. The triquetrum functions as an ulnar translation restraint, as well as controlling lunate flexion. The 'trapezoid'-shaped trapezoid places the trapezium anterior to the transverse plane of the radius and ulna, and thus rotates the principal axis of the central column to correspond to that used in the 'dart thrower's motion'. This study presents a forward kinematic analysis of the carpus that provides the basis for the development of a unifying kinetic theory of wrist motion based on isometric constraints and rules-based motion.

  7. The C23A system, an example of quantitative control of plant growth associated with a data base

    NASA Technical Reports Server (NTRS)

    Andre, M.; Daguenet, A.; Massimino, D.; Gerbaud, A.

    1986-01-01

    The architecture of the C23A (Chambres de Culture Automatique en Atmosphère Artificielle) system for the controlled study of plant physiology is described: modular plant growth chambers and associated instruments (IR CO2 analyser, mass spectrometer, and chemical analyser); a network of front-end processors controlling this apparatus; a central computer for the periodic control and multiplexed operation of the processors; and a network of terminal computers able to query the data base for data processing and modeling. Examples of present results are given, including growth-curve analysis, studies of CO2 and O2 gas exchanges of shoots and roots, and the daily evolution of algal photosynthesis and of the pools of dissolved CO2 in sea water.

  8. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  9. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  10. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  11. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  12. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  13. Operating room integration and telehealth.

    PubMed

    Bucholz, Richard D; Laycock, Keith A; McDurmont, Leslie

    2011-01-01

    The increasing use of advanced automated and computer-controlled systems and devices in surgical procedures has resulted in problems arising from the crowding of the operating room with equipment and the incompatible control and communication standards associated with each system. This lack of compatibility between systems and centralized control means that the surgeon is frequently required to interact with multiple computer interfaces in order to obtain updates and exert control over the various devices at his disposal. To reduce this complexity and provide the surgeon with more complete and precise control of the operating room systems, a unified interface and communication network has been developed. In addition to improving efficiency, this network also allows the surgeon to grant remote access to consultants and observers at other institutions, enabling experts to participate in the procedure without having to travel to the site.

  14. Fault tolerant computer control for a Maglev transportation system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George

    1994-01-01

    Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.

  15. Computer program for post-flight evaluation of the control surface response for an attitude controlled missile

    NASA Technical Reports Server (NTRS)

    Knauber, R. N.

    1982-01-01

    A FORTRAN IV coded computer program is presented for post-flight analysis of a missile's control surface response. It includes preprocessing of digitized telemetry data for time lags, biases, non-linear calibration changes and filtering. Measurements include autopilot attitude rate and displacement gyro output and four control surface deflections. Simple first order lags are assumed for the pitch, yaw and roll axes of control. Each actuator is also assumed to be represented by a first order lag. Mixing of pitch, yaw and roll commands to four control surfaces is assumed. A pseudo-inverse technique is used to obtain the pitch, yaw and roll components from the four measured deflections. This program has been used for over 10 years on the NASA/SCOUT launch vehicle for post-flight analysis and was helpful in detecting incipient actuator stall due to excessive hinge moments. The program is currently set up for a CDC CYBER 175 computer system. It requires 34K words of memory and contains 675 cards. A sample problem presented herein including the optional plotting requires eleven (11) seconds of central processor time.
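
    The pseudo-inverse step is easy to make concrete: with a mixing matrix M mapping (pitch, yaw, roll) commands to four surface deflections, the least-squares recovery of the three components from four measurements is pinv(M) applied to the measured deflections. The mixing matrix and commands below are invented examples, not the SCOUT vehicle's.

    ```python
    # Concrete form of the pseudo-inverse step described above: recover the
    # pitch/yaw/roll components from four measured surface deflections.
    # The mixing matrix M is an invented example, not the SCOUT vehicle's.
    import numpy as np

    # deflection_i = M[i] @ [pitch, yaw, roll]
    M = np.array([[ 1.0,  0.0,  1.0],
                  [ 1.0,  0.0, -1.0],
                  [ 0.0,  1.0,  1.0],
                  [ 0.0,  1.0, -1.0]])

    true_cmd = np.array([2.0, -1.0, 0.5])                        # degrees, illustrative
    measured = M @ true_cmd + np.random.normal(0, 0.05, size=4)  # noisy telemetry

    recovered = np.linalg.pinv(M) @ measured   # least-squares estimate
    print(np.round(recovered, 3))              # ~ [2.0, -1.0, 0.5]
    ```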

  16. TFTR CAMAC systems and components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauch, W.A.; Bergin, W.; Sichta, P.

    1987-08-01

    Princeton's Tokamak Fusion Test Reactor (TFTR) utilizes Computer Automated Measurement and Control (CAMAC) to provide instrumentation for real and quasi-real-time control, monitoring, and data acquisition systems. This paper describes and discusses the complement of CAMAC hardware systems and components that comprise the interface for tokamak control and measurement instrumentation and communication with the central instrumentation control and data acquisition (CICADA) system. It also discusses CAMAC reliability and calibration, the types of modules used, a summary of data acquisition and control points, and the various diagnostic maintenance tools used to support and troubleshoot typical CAMAC systems on TFTR.

  17. An observatory control system for the University of Hawai'i 2.2m Telescope

    NASA Astrophysics Data System (ADS)

    McKay, Luke; Erickson, Christopher; Mukensnable, Donn; Stearman, Anthony; Straight, Brad

    2016-07-01

    The University of Hawai'i 2.2m telescope at Maunakea has operated since 1970 and has had several controls upgrades to date. The newest system will operate as a distributed hierarchy consisting of a GNU/Linux central server, networked single-board computers, microcontrollers, and a modular motion-control processor for the main axes. Rather than just a telescope control system, this new effort aims at a cohesive, modular, and robust whole-observatory control system, with design goals of fully robotic unattended operation, high reliability, and ease of maintenance and upgrade.

  18. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves fault tolerance: a task can be completed even when some agents fail. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that are solved in sequence or in parallel, so the computational complexity is greatly reduced. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the bandwidth limitations that make multicast impractical. Distributed algorithms have been applied to a variety of real-world problems; our research focuses on framework and local-optimizer design in practical engineering applications. First, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body, improving estimation accuracy and convergence speed. Second, we develop a cyber-physical system with distributed computation devices to optimize in-building evacuation paths when a hazard occurs; the proposed Bellman-Ford dual-subgradient path planning method relieves congestion in the corridor and exit areas. Third, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time; the optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation, and a hybrid control scheme is presented for minimizing travel time over a highway network. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid strategy greatly reduces total travel time on the test network.
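
    Of the three applications, the evacuation planner is the easiest to make concrete: its shortest-path core is ordinary Bellman-Ford, sketched below with an invented corridor graph. The dual-subgradient layer that re-prices congested corridors is omitted.

    ```python
    # The shortest-path core of the evacuation planner mentioned above:
    # plain Bellman-Ford. The dual-subgradient layer that re-prices congested
    # corridors is omitted; edge weights here are illustrative travel times.

    def bellman_ford(n: int, edges: list[tuple[int, int, float]], source: int):
        """Return shortest travel times from source (assumes no negative cycles)."""
        INF = float("inf")
        dist = [INF] * n
        dist[source] = 0.0
        for _ in range(n - 1):              # relax all edges n-1 times
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        return dist

    if __name__ == "__main__":
        # 0 = office, 3 = exit; weights = seconds to traverse each corridor
        edges = [(0, 1, 30.0), (0, 2, 10.0), (1, 3, 10.0),
                 (2, 1, 5.0), (2, 3, 40.0)]
        print(bellman_ford(4, edges, source=0))  # [0.0, 15.0, 10.0, 25.0]
    ```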

  19. Equipment for linking the AutoAnalyzer on-line to a computer

    PubMed Central

    Simpson, D.; Sims, G. E.; Harrison, M. I.; Whitby, L. G.

    1971-01-01

    An Elliott 903 computer with 8K central core store and magnetic tape backing store has been operated for approximately 20 months in a clinical chemistry laboratory. Details of the equipment designed for linking AutoAnalyzers on-line to the computer are described, and data presented concerning the time required by the computer for different processes. The reliability of the various components in daily operation is discussed. Limitations in the system's capabilities have been defined, and ways of overcoming these are delineated. At present, routine operations include the preparation of worksheets for a limited range of tests (five channels), monitoring of up to 11 AutoAnalyzer channels at a time on a seven-day week basis (with process control and automatic calculation of results), and the provision of quality control data. Cumulative reports can be printed out on those analyses for which computer-prepared worksheets are provided but the system will require extension before these can be issued sufficiently rapidly for routine use. PMID:5551384

  20. Improving Target Detection in Visual Search Through the Augmenting Multi-Sensory Cues

    DTIC Science & Technology

    2013-01-01

    Merlo, James; Mercado, Joseph E.; Van Erp, Jan B. F.; Hancock, Peter A. (University of Central Florida). Keywords: target detection, visual search. ...were controlled by a purpose-created, LabVIEW-based software program that synchronised the respective displays and recorded response times and…

  1. From Chaos to Control.

    ERIC Educational Resources Information Center

    Hermann, Jeffrey T.

    1992-01-01

    Every college should have a campuswide computer publishing policy. Policy options include (1) centralized publishing; (2) franchise, with a number of units doing their own publishing; or (3) limited, with the publications office producing the basics and assisting other units when feasible. To be successful, the policy must also be enforced. (MSE)

  2. CHEMICAL AND PHYSICAL CHARACTERISTICS OF OUTDOOR, INDOOR, AND PERSONAL PARTICULATE AIR SAMPLES COLLECTED IN AND AROUND A RETIREMENT FACILITY

    EPA Science Inventory

    Residential, personal, indoor, and outdoor sampling of particulate matter was conducted at a retirement center in the Towson area of northern Baltimore County in 1998. Concurrent sampling was conducted at a central community site. Computer-controlled scanning electron microsco...

  3. CHEMICAL AND PHYSICAL CHARACTERIZATION OF INDOOR, OUTDOOR, AND PERSONAL SAMPLES COLLECTED IN AND AROUND A RETIREMENT FACILITY

    EPA Science Inventory

    Residential, personal, indoor, and outdoor sampling of particulate matter was conducted at a retirement center in the Towson area of northern Baltimore County in 1998. Concurrent sampling was conducted at a central community site. Computer-controlled scanning electron microsco...

  4. Information management system breadboard data acquisition and control system.

    NASA Technical Reports Server (NTRS)

    Mallary, W. E.

    1972-01-01

    Description of a breadboard configuration of an advanced information management system based on requirements for high data rates and local and centralized computation for subsystems and experiments to be housed on a space station. The system is to contain a 10-megabit-per-second digital data bus, remote terminals with preprocessor capabilities, and a central multiprocessor. A concept definition is presented for the data acquisition and control system breadboard, and a detailed account is given of the operation of the bus control unit, the bus itself, and the remote acquisition and control unit. The data bus control unit is capable of operating under control of both its own test panel and the test processor. In either mode it is capable of both single- and multiple-message operation in that it can accept a block of data requests or update commands for transmission to the remote acquisition and control unit, which in turn is capable of three levels of data-handling complexity.

  5. Biomechanics as a window into the neural control of movement

    PubMed Central

    2016-01-01

    Biomechanics and motor control are discussed as parts of a more general science, physics of living systems. Major problems of biomechanics deal with the exact definition of variables and their experimental measurement. In motor control, major problems are associated with formulating currently unknown laws of nature specific to movements by biological objects. Mechanics-based hypotheses in motor control, such as those originating from notions of a generalized motor program and internal models, are non-physical. The famous problem of motor redundancy is wrongly formulated; it has to be replaced by the principle of abundance, which does not pose computational problems for the central nervous system. Biomechanical methods play a central role in motor control studies. This is illustrated with studies on the reconstruction of hypothetical control variables and studies exploring motor synergies within the framework of the uncontrolled manifold hypothesis. Biomechanics and motor control have to merge into physics of living systems, and the earlier this process starts the better. PMID:28149390

  6. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Phil; Simeral, John D.; Hochberg, Leigh R.; Donoghue, John P.; Black, Michael J.

    2008-12-01

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. Disclosure. JPD is the Chief Scientific Officer and a director of Cyberkinetics Neurotechnology Systems (CYKN); he holds stock and receives compensation. JDS has been a consultant for CYKN. LRH receives clinical trial support from CYKN.
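
    The decoding step described above can be sketched as a standard Kalman filter with two-dimensional cursor velocity as the hidden state and binned firing rates as observations. All matrices below are invented; in the actual system they are fit to training data collected during the cursor control task.

    ```python
    # Sketch of Kalman-filter velocity decoding as described above: hidden
    # state x = 2-D cursor velocity, observation z = binned firing rates.
    # All matrices are invented; in practice they are fit to training data.
    import numpy as np

    A = 0.95 * np.eye(2)        # velocity persists with slight decay
    W = 0.02 * np.eye(2)        # state (velocity) noise covariance
    H = np.array([[1.0, 0.2],   # tuning: how 3 neurons encode velocity
                  [-0.3, 1.0],
                  [0.5, -0.8]])
    Q = 0.1 * np.eye(3)         # observation (spiking) noise covariance

    def kalman_step(x, P, z):
        """One predict/update cycle; returns the new velocity estimate."""
        x_pred = A @ x
        P_pred = A @ P @ A.T + W
        S = H @ P_pred @ H.T + Q
        K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)     # correct with innovation
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new

    if __name__ == "__main__":
        x, P = np.zeros(2), np.eye(2)
        true_v = np.array([1.0, -0.5])
        rng = np.random.default_rng(0)
        for _ in range(20):
            z = H @ true_v + rng.normal(0, 0.3, size=3)  # simulated firing rates
            x, P = kalman_step(x, P, z)
        print(np.round(x, 2))   # ~ [1.0, -0.5]
    ```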

  7. Morphing Aircraft Structures: Research in AFRL/RB

    DTIC Science & Technology

    2008-09-01

    ...various iterative steps in the process, etc. The solver also internally controls the step size for integration, as this is independent of the step... [reference fragments: "Coupling of Substructures for Dynamic Analyses," AIAA Journal, Vol. 6, No. 7, 1968, pp. 1313-1319; "Using the State-Dependent Modal Force (MFORCE)," AFL...] ...an actuation system consisting of multiple internal actuators, centrally computer-controlled to implement any commanded morphing configuration; and...

  8. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, the development of methods for distributed computing receives much attention, and one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can face security threats originating from the computational processes themselves. The authors have developed a unified agent algorithm for controlling the operation of the nodes of a computing network, with ordinary networked PCs serving as computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve a large task by creating a distributed computation. Agents deployed on the network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the system according to the computing power of the computers on the network. The number of computers can be increased by connecting new machines to the system, which increases the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing environment where the number of computers on the network varies dynamically. The developed multi-agent system detects cases of falsification of the results of the distributed computation, which could otherwise lead to wrong decisions; in addition, the system checks and corrects wrong results.
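
    The paper's exact verification scheme is not given; one common way to realize the falsification check it describes is redundant assignment with majority voting, sketched below under that assumption.

    ```python
    # One common way to realize the falsification check described above:
    # give each task to several agents and accept the majority answer.
    # The paper's exact verification scheme is not specified; this is a sketch.
    from collections import Counter

    def accept_result(replies: list) -> tuple:
        """Return (majority value, agents that disagreed with the majority)."""
        counts = Counter(value for _, value in replies)
        majority, _ = counts.most_common(1)[0]
        dissenters = [agent for agent, value in replies if value != majority]
        return majority, dissenters

    if __name__ == "__main__":
        # (agent, reported result) for one redundantly assigned task
        replies = [("pc-03", 42), ("pc-07", 42), ("pc-11", 41)]
        value, suspects = accept_result(replies)
        print(f"accepted {value}; flagged agents: {suspects}")  # flags pc-11
    ```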

  9. Different approaches for centralized and decentralized water system management in multiple decision makers' problems

    NASA Astrophysics Data System (ADS)

    Anghileri, D.; Giuliani, M.; Castelletti, A.

    2012-04-01

    There is general agreement that one of the most challenging issues in water system management is the presence of many, often conflicting interests, as well as several independent decision makers. The traditional approach to multi-objective water system management is centralized management, in which an ideal central regulator coordinates the operation of the whole system, exploiting all the available information and balancing all the operating objectives. Although this approach yields Pareto-optimal solutions representing the maximum achievable benefit, it is based on assumptions which strongly limit its application in real-world contexts: (1) top-down management; (2) the existence of a central regulation institution; (3) complete information exchange within the system; (4) perfect economic efficiency. A bottom-up, decentralized approach therefore seems more suitable for real-world applications, since different reservoir operators may maintain their independence. In this work we tested the consequences of moving from a centralized management approach toward a decentralized one. In particular, we compared three cases: the centralized management approach; the independent management approach, in which each reservoir operator takes the daily release decision maximizing (or minimizing) his operating objective independently of the others; and an intermediate approach, leading to the Nash equilibrium of the associated game, in which each reservoir operator models the behaviour of the other operators. The three approaches are demonstrated on a test case study composed of two reservoirs regulated to minimize flooding at different locations. The operating policies are computed by solving a single multi-objective optimal control problem in the centralized approach; multiple single-objective optimization problems, one per operator, in the independent case; and, in the last approach, by using game-theoretic techniques to describe the interaction between the two operators. Computational results show that the Pareto-optimal control policies obtained with the centralized approach dominate the control policies of both decentralized cases, and that the so-called price of anarchy increases when moving toward the independent management approach. However, the Nash equilibrium solution seems the most promising alternative, because it represents a good compromise, maximizing management efficiency without constraining the behaviour of the reservoir operators.
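
    The Nash-equilibrium alternative can be illustrated with a two-operator best-response iteration on quadratic costs; each operator repeatedly optimizes against the other's latest release decision until the decisions stop changing. The cost functions and numbers below are invented, not the case study's flood objectives.

    ```python
    # Illustration of the Nash-equilibrium approach mentioned above: each
    # reservoir operator repeatedly plays the best response to the other's
    # current decision. The quadratic costs are invented, not the case study's.

    def best_response(target: float, other_release: float, coupling: float) -> float:
        """Minimize (u - target)**2 + coupling*u*other  =>  u = target - coupling*other/2."""
        return target - 0.5 * coupling * other_release

    def nash_iteration(t1: float, t2: float, coupling: float, iters: int = 50):
        u1 = u2 = 0.0
        for _ in range(iters):
            u1 = best_response(t1, u2, coupling)  # operator 1 reacts to operator 2
            u2 = best_response(t2, u1, coupling)  # operator 2 reacts to operator 1
        return u1, u2

    if __name__ == "__main__":
        u1, u2 = nash_iteration(t1=10.0, t2=8.0, coupling=0.4)
        print(f"equilibrium releases: {u1:.3f}, {u2:.3f}")  # 8.750, 6.250
    ```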

  10. The emergence of understanding in a computer model of concepts and analogy-making

    NASA Astrophysics Data System (ADS)

    Mitchell, Melanie; Hofstadter, Douglas R.

    1990-06-01

    This paper describes Copycat, a computer model of the mental mechanisms underlying the fluidity and adaptability of the human conceptual system in the context of analogy-making. Copycat creates analogies between idealized situations in a microworld that has been designed to capture and isolate many of the central issues of analogy-making. In Copycat, an understanding of the essence of a situation and the recognition of deep similarity between two superficially different situations emerge from the interaction of a large number of perceptual agents with an associative, overlapping, and context-sensitive network of concepts. Central features of the model are: a high degree of parallelism; competition and cooperation among a large number of small, locally acting agents that together create a global understanding of the situation at hand; and a computational temperature that measures the amount of perceptual organization as processing proceeds and that in turn controls the degree of randomness with which decisions are made in the system.
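
    The computational-temperature mechanism has a compact standard illustration: sample decisions from a softmax over option strengths, where high temperature flattens the distribution (more randomness) and low temperature sharpens it. The option names and strengths below are illustrative, not Copycat's internal structures.

    ```python
    # Illustration of the computational-temperature idea described above:
    # choices are sampled from a softmax over option strengths, and the
    # temperature controls how random the choice is. Not Copycat's actual code.
    import math
    import random

    def softmax_choice(strengths: dict, temperature: float) -> str:
        """Sample an option; T -> 0 is nearly greedy, large T is nearly uniform."""
        weights = {k: math.exp(v / temperature) for k, v in strengths.items()}
        total = sum(weights.values())
        r = random.uniform(0.0, total)
        for option, w in weights.items():
            r -= w
            if r <= 0.0:
                return option
        return option  # numerical fallback

    if __name__ == "__main__":
        strengths = {"mapping A": 3.0, "mapping B": 2.0, "mapping C": 1.0}
        for T in (5.0, 0.2):  # disorganized (hot) vs organized (cold) perception
            picks = [softmax_choice(strengths, T) for _ in range(1000)]
            share = picks.count("mapping A") / 10
            print(f"T={T}: mapping A chosen {share:.0f}% of the time")
    ```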

  11. Emotor control: computations underlying bodily resource allocation, emotions, and confidence

    PubMed Central

    Kepecs, Adam; Mensh, Brett D.

    2015-01-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience—approaching subjective behavior as the result of mental computations instantiated in the brain—to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This “emotor” control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on “confidence.” Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior. PMID:26869840

  12. A radiation-hardened, computer for satellite applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaona, J.I. Jr.

    1996-08-01

    This paper describes high-reliability radiation-hardened computers built by Sandia for application aboard DOE satellite programs requiring 32-bit processing. The computers highlight a radiation-hardened (10 kGy(Si)) R3000 executing up to 10 million reduced instruction set (RISC) instructions per second (MIPS), a dual-purpose module control bus used for real-time default and power management which allows for extended mission operation on as little as 1.2 watts, and a local area network capable of 480 Mbits/s. The central processing unit (CPU) is the NASA Goddard R3000, nicknamed the "Mongoose" or "Mongoose 1". The Sandia Satellite Computer (SSC) uses Rational's Ada compiler, debugger, operating system kernel, and enhanced floating point emulation library targeted at the Mongoose. The SSC gives Sandia the capability of processing complex types of spacecraft attitude determination and control algorithms and of modifying programmed control laws via ground command. In general, the SSC offers end users the ability to process data onboard the spacecraft that would normally have been sent to the ground, which allows reconsideration of traditional space-ground partitioning options.

  13. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…
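
    Centrality has a simple computational core. The sketch below computes normalized degree centrality over a toy who-replied-to-whom network of the kind analyzed in CMC research; the participants and ties are invented for illustration.

    ```python
    # The computational core of the centrality measure discussed above:
    # normalized degree centrality over a who-replied-to-whom network.
    # The participant names and ties are invented for illustration.
    from collections import defaultdict

    def degree_centrality(edges: list[tuple[str, str]]) -> dict:
        """Fraction of the other participants each person is tied to."""
        neighbors = defaultdict(set)
        for a, b in edges:
            neighbors[a].add(b)
            neighbors[b].add(a)
        n = len(neighbors)
        return {p: len(nb) / (n - 1) for p, nb in neighbors.items()}

    if __name__ == "__main__":
        replies = [("ana", "ben"), ("ana", "caro"), ("ana", "dee"), ("ben", "caro")]
        centrality = degree_centrality(replies)
        for person, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
            print(f"{person}: {c:.2f}")  # ana is the most central participant
    ```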

  14. Order parameter for bursting polyrhythms in multifunctional central pattern generators

    NASA Astrophysics Data System (ADS)

    Wojcik, Jeremy; Clewley, Robert; Shilnikov, Andrey

    2011-05-01

    We examine multistability of several coexisting bursting patterns in a central pattern generator network composed of three Hodgkin-Huxley type cells coupled reciprocally by inhibitory synapses. We establish that the control of switching between bursting polyrhythms and their bifurcations are determined by the temporal characteristics, such as the duty cycle, of networked interneurons and the coupling strength asymmetry. A computationally effective approach to the reduction of dynamics of the nine-dimensional network to two-dimensional Poincaré return mappings for phase lags between the interneurons is presented.
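
    As a rough illustration of the reduction described above, the sketch below computes phase lags between burst onsets of three cells, normalized by the reference cell's cycle; successive pairs of lags trace out a two-dimensional return map. The burst-onset times are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical burst-onset times (s) for three reciprocally coupled cells.
onsets = {
    1: np.array([0.00, 1.02, 2.01, 3.03, 4.02]),
    2: np.array([0.40, 1.43, 2.41, 3.44, 4.43]),
    3: np.array([0.71, 1.70, 2.73, 3.71, 4.72]),
}

def phase_lags(onsets, j):
    """Phase lag of cell j relative to cell 1, normalized by cell 1's cycle."""
    t1 = onsets[1]
    periods = np.diff(t1)
    return ((onsets[j][:-1] - t1[:-1]) / periods) % 1.0

lag21 = phase_lags(onsets, 2)
lag31 = phase_lags(onsets, 3)
# Successive (lag21, lag31) pairs trace the 2D Poincare return map;
# fixed points of the map correspond to stable bursting polyrhythms.
for k in range(len(lag21) - 1):
    print((lag21[k], lag31[k]), "->", (lag21[k + 1], lag31[k + 1]))
```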

  15. Deployment and early experience with remote-presence patient care in a community hospital.

    PubMed

    Petelin, J B; Nelson, M E; Goodman, J

    2007-01-01

    The introduction of the RP6 (InTouch Health, Santa Barbara, CA, USA) remote-presence "robot" appears to offer a useful telemedicine device. The authors describe the deployment and early experience with the RP6 in a community hospital and provided a live demonstration of the system on April 16, 2005 during the Emerging Technologies Session of the 2005 SAGES Meeting in Fort Lauderdale, Florida. The RP6 is a 5-ft 4-in. tall, 215-pound robot that can be remotely controlled from an appropriately configured computer located anywhere on the Internet (i.e., on this planet). The system is composed of a control station (a computer at the central station), a mechanical robot, a wireless network (at the remote facility: the hospital), and a high-speed Internet connection at both the remote (hospital) and central locations. The robot itself houses a rechargeable power supply. Its hardware and software allows communication over the Internet with the central station, interpretation of commands from the central station, and conversion of the commands into mechanical and nonmechanical actions at the remote location, which are communicated back to the central station over the Internet. The RP6 system allows the central party (e.g., physician) to control the movements of the robot itself, see and hear at the remote location (hospital), and be seen and heard at the remote location (hospital) while not physically there. Deployment of the RP6 system at the hospital was accomplished in less than a day. The wireless network at the institution was already in place. The control station setup time ranged from 1 to 4 h and was dependent primarily on the quality of the Internet connection (bandwidth) at the remote locations. Patients who visited with the RP6 on their discharge day could be discharged more than 4 h earlier than with conventional visits, thereby freeing up hospital beds on a busy med-surg floor. Patient visits during "off hours" (nights and weekends) were three times more efficient than conventional visits during these times (20 min per visit vs 40-min round trip travel + 20-min visit). Patients and nursing personnel both expressed tremendous satisfaction with the remote-presence interaction. The authors' early experience suggests a significant benefit to patients, hospitals, and physicians with the use of RP6. The implications for future development are enormous.

  16. Central mechanisms for force and motion--towards computational synthesis of human movement.

    PubMed

    Hemami, Hooshang; Dariush, Behzad

    2012-12-01

    Anatomical, physiological and experimental research on the human body can be supplemented by computational synthesis of the human body for all movement: routine daily activities, sports, dancing, and artistic and exploratory involvements. The synthesis requires thorough knowledge about all subsystems of the human body and their interactions, and allows for integration of known knowledge in working modules. It also affords confirmation and/or verification of scientific hypotheses about workings of the central nervous system (CNS). A simple step in this direction is explored here for controlling the forces of constraint. It requires co-activation of agonist-antagonist musculature. The desired trajectories of motion and the force of contact have to be provided by the CNS. The spinal control involves projection onto a muscular subset that induces the force of contact. The projection of force in the sensory motor cortex is implemented via a well-defined neural population unit, and is executed in the spinal cord by a standard integral controller requiring input from tendon organs. The sensory motor cortex structure is extended to the case for directing motion via two neural population units with vision input and spindle efferents. Digital computer simulations show the feasibility of the system. The formulation is modular and can be extended to multi-link limbs, robot and humanoid systems with many pairs of actuators or muscles. It can be expanded to include reticular activating structures and learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
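
    A minimal sketch of the standard integral force controller named above, with tendon-organ feedback represented simply as the measured contact force; all gains and time constants are illustrative, not taken from the paper:

```python
# Integral control of contact force: the drive to the muscle pair grows
# with the integrated force error, and the muscle/contact dynamics are
# approximated as first order. Constants are illustrative.
dt, T = 0.001, 2.0
ki = 40.0          # integral gain
tau = 0.05         # muscle activation time constant (s)
f_desired = 10.0   # commanded contact force (N)

f = 0.0            # actual contact force
u = 0.0            # integrated drive to the musculature
for _ in range(int(T / dt)):
    error = f_desired - f          # tendon organs report the actual force
    u += ki * error * dt           # integral control law
    f += (u - f) * dt / tau        # first-order muscle/contact dynamics

print("steady-state force: %.2f N" % f)   # converges to the commanded 10 N
```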

  17. A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems

    DOE PAGES

    Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...

    2017-07-25

    Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
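
    One representative member of the algorithm family surveyed is dual decomposition, in which devices solve local problems and coordinate only through a shared price signal. A toy economic-dispatch sketch (illustrative costs and demand, not from the survey):

```python
import numpy as np

# Toy dual decomposition for economic dispatch: each generator solves a
# local problem given a price; only the price is exchanged between steps.
a = np.array([1.0, 2.0, 0.5])   # quadratic cost coefficients: cost_i = a_i * p_i^2
demand = 12.0
price, step = 0.0, 0.05

for _ in range(500):
    # Local step: generator i minimizes a_i*p_i^2 - price*p_i.
    p = price / (2 * a)
    # Dual (price) update from the supply-demand mismatch.
    price += step * (demand - p.sum())

print("dispatch:", np.round(p, 3), "total:", round(p.sum(), 3))
```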

  18. A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik

    Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.

  19. Introduction to the LaRC central scientific computing complex

    NASA Technical Reports Server (NTRS)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation) are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  20. HEP - A semaphore-synchronized multiprocessor with central control. [Heterogeneous Element Processor

    NASA Technical Reports Server (NTRS)

    Gilliland, M. C.; Smith, B. J.; Calvert, W.

    1976-01-01

    The paper describes the design concept of the Heterogeneous Element Processor (HEP), a system tailored to the special needs of scientific simulation. In order to achieve high-speed computation required by simulation, HEP features a hierarchy of processes executing in parallel on a number of processors, with synchronization being largely accomplished by hardware. A full-empty-reserve scheme of synchronization is realized by zero-one-valued hardware semaphores. A typical system has, besides the control computer and the scheduler, an algebraic module, a memory module, a first-in first-out (FIFO) module, an integrator module, and an I/O module. The architecture of the scheduler and the algebraic module is examined in detail.
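
    The full-empty synchronization discipline can be emulated in software to show the idea (HEP realized it with zero-one hardware semaphores). A minimal sketch:

```python
import threading

class FullEmptyCell:
    """Software sketch of a full/empty synchronized memory cell. HEP
    realized this discipline with zero-one hardware semaphores; here it
    is emulated with a condition variable."""
    def __init__(self):
        self._cv = threading.Condition()
        self._full = False
        self._value = None

    def write(self, value):          # blocks until the cell is empty
        with self._cv:
            while self._full:
                self._cv.wait()
            self._value, self._full = value, True
            self._cv.notify_all()

    def read(self):                  # blocks until full, then empties the cell
        with self._cv:
            while not self._full:
                self._cv.wait()
            self._full = False
            self._cv.notify_all()
            return self._value

cell = FullEmptyCell()
threading.Thread(target=lambda: cell.write(42)).start()
print(cell.read())  # prints 42 once the producer has filled the cell
```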

  1. On TESOL '83. The Question of Control. Selected Papers from the Annual Convention of Teachers of English to Speakers of Other Languages (17th, Toronto, Canada, March 15-20, 1983).

    ERIC Educational Resources Information Center

    Handscombe, Jean, Ed.; And Others

    The conference papers presented in this volume explore various aspects of a central question: how will computers be used in language teaching or, more broadly, who will be in control? The volume is divided into three sections: Critical Interactions, Promising Approaches, and Political Influences. Papers included within each of these categories are…

  2. The Effects of a Computerized Study Program on the Acquisition of Science Vocabulary

    ERIC Educational Resources Information Center

    Rollins, Karen F.

    2012-01-01

    The following study examined differences in science vocabulary acquisition between computer-assisted learning and a traditional study review sheet. Fourth- and fifth-grade students from a suburban school in central Texas were randomly selected and randomly assigned to either the experimental group or the control group. Both groups were given a…

  3. Vectors in Use in a 3D Juggling Game Simulation

    ERIC Educational Resources Information Center

    Kynigos, Chronis; Latsi, Maria

    2006-01-01

    The new representations enabled by the educational computer game the "Juggler" can place vectors in a central role both for controlling and measuring the behaviours of objects in a virtual environment simulating motion in three-dimensional spaces. The mathematical meanings constructed by 13-year-old students in relation to vectors as…

  4. Going behind IT's Back

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2013-01-01

    The pendulum of technology control in higher education has swung away from central IT toward the users. It has become easier for individuals and departments to find their own computing solutions via mobile apps, the cloud, BYOD, web services, and other means. As a result, IT can often find itself out of the loop in certain technology decisions.…

  5. Anomaly Detection Techniques for Ad Hoc Networks

    ERIC Educational Resources Information Center

    Cai, Chaoli

    2009-01-01

    Anomaly detection is an important and indispensable aspect of any computer security mechanism. Ad hoc and mobile networks consist of a number of peer mobile nodes that are capable of communicating with each other absent a fixed infrastructure. Arbitrary node movements and lack of centralized control make them vulnerable to a wide variety of…

  6. The efficacy of computer-enabled discharge communication interventions: a systematic review.

    PubMed

    Motamedi, Soror Mona; Posadas-Calleja, Juan; Straus, Sharon; Bates, David W; Lorenzetti, Diane L; Baylis, Barry; Gilmour, Janet; Kimpton, Shandra; Ghali, William A

    2011-05-01

    Traditional manual/dictated discharge summaries are inaccurate, inconsistent and untimely. Computer-enabled discharge communications may improve information transfer by providing a standardised document that immediately links acute and community healthcare providers. To conduct a systematic review evaluating the efficacy of computer-enabled discharge communication compared with traditional communication for patients discharged from acute care hospitals. MEDLINE, EMBASE, Cochrane CENTRAL Register of Controlled Trials and MEDLINE In-Process. Keywords from three themes were combined: discharge communication, electronic/online/web-based and controlled interventional studies. Study types included: clinical trials, quasi-experimental studies with concurrent controls and controlled before-after studies. Interventions included: (1) automatic population of a discharge document by computer database(s); (2) transmission of discharge information via computer technology; or (3) computer technology providing a 'platform' for dynamic discharge communication. Controls included: no intervention or traditional manual/dictated discharge summaries. Primary outcomes included: mortality, readmission and adverse events/near misses. Secondary outcomes included: timeliness, accuracy, quality/completeness and physician/patient satisfaction. Descriptions of interventions and study outcomes were extracted by two independent reviewers. 12 unique studies were identified: eight randomised controlled trials and four quasi-experimental studies. Pooling/meta-analysis was not possible, given the heterogeneity of measures and outcomes reported. The primary outcomes of mortality and readmission were inconsistently reported. There was no significant difference in mortality, and one study reported reduced long-term readmission. Intervention groups experienced reductions in perceived medical errors/adverse events, and improvements in timeliness and physician/patient satisfaction. Computer-enabled discharge communications appear beneficial with respect to a number of important secondary outcomes. Primary outcomes of mortality and readmission are less commonly reported in this literature and require further study.

  7. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.
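
    A minimal sketch of the triplex-to-duplex-to-simplex degradation idea (majority voting with channel exclusion); the ARCS transient-fault recovery logic is omitted and the structure here is illustrative:

```python
# Vote three channels; drop a persistently disagreeing one (triplex ->
# duplex); on a further miscompare with no arbiter left, go simplex.
def vote(outputs, active):
    vals = [outputs[i] for i in active]
    if len(active) == 3:
        # Majority vote: a dissenting channel is voted out.
        for i in active:
            others = [j for j in active if j != i]
            if outputs[others[0]] == outputs[others[1]] != outputs[i]:
                active.remove(i)            # reconfigure to duplex
                return outputs[others[0]], active
        return vals[0], active
    if len(active) == 2 and vals[0] != vals[1]:
        active = active[:1]                 # no arbiter left: go simplex
    return outputs[active[0]], active

active = [0, 1, 2]
out, active = vote({0: 7, 1: 7, 2: 9}, active)   # channel 2 is voted out
print(out, active)                                # -> 7 [0, 1]
```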

  8. Central Fetal Monitoring With and Without Computer Analysis: A Randomized Controlled Trial.

    PubMed

    Nunes, Inês; Ayres-de-Campos, Diogo; Ugwumadu, Austin; Amin, Pina; Banfield, Philip; Nicoll, Antony; Cunningham, Simon; Sousa, Paulo; Costa-Santos, Cristina; Bernardes, João

    2017-01-01

    To evaluate whether intrapartum fetal monitoring with computer analysis and real-time alerts decreases the rate of newborn metabolic acidosis or obstetric intervention when compared with visual analysis. A randomized clinical trial carried out in five hospitals in the United Kingdom evaluated women with singleton, vertex fetuses of 36 weeks of gestation or greater during labor. Continuous central fetal monitoring by computer analysis and online alerts (experimental arm) was compared with visual analysis (control arm). Fetal blood sampling and electrocardiographic ST waveform analysis were available in both arms. The primary outcome was incidence of newborn metabolic acidosis (pH less than 7.05 and base deficit greater than 12 mmol/L). Prespecified secondary outcomes included operative delivery, use of fetal blood sampling, low 5-minute Apgar score, neonatal intensive care unit admission, hypoxic-ischemic encephalopathy, and perinatal death. A sample size of 3,660 per group (N=7,320) was planned to be able to detect a reduction in the rate of metabolic acidosis from 2.8% to 1.8% (two-tailed α of 0.05 with 80% power). From August 2011 through July 2014, 32,306 women were assessed for eligibility and 7,730 were randomized: 3,961 to computer analysis and online alerts, and 3,769 to visual analysis. Baseline characteristics were similar in both groups. Metabolic acidosis occurred in 16 participants (0.40%) in the experimental arm and 22 participants (0.58%) in the control arm (relative risk 0.69 [0.36-1.31]). No statistically significant differences were found in the incidence of secondary outcomes. Compared with visual analysis, computer analysis of fetal monitoring signals with real-time alerts did not significantly reduce the rate of metabolic acidosis or obstetric intervention. A lower-than-expected rate of newborn metabolic acidosis was observed in both arms of the trial. ISRCTN Registry, http://www.isrctn.com, ISRCTN42314164.
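
    The reported effect estimate can be reproduced from the stated counts using the standard log relative-risk confidence interval (small rounding differences from the published figures are expected):

```python
import math

# 16/3961 events (computer analysis) vs 22/3769 events (visual analysis).
e1, n1, e0, n0 = 16, 3961, 22, 3769
rr = (e1 / n1) / (e0 / n0)
# Standard 95% CI on the log relative-risk scale.
se = math.sqrt(1/e1 - 1/n1 + 1/e0 - 1/n0)
lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
# Prints roughly RR = 0.69 (0.36-1.32); the paper reports 0.69 [0.36-1.31].
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```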

  9. Biology Inspired Approach for Communal Behavior in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng

    2006-01-01

    Research in wireless sensor network technology has exploded in the last decade. Promises of complex and ubiquitous control of the physical environment by these networks open avenues for new kinds of science and business. Due to the small size and low cost of sensor devices, visionaries promise systems enabled by deployment of massive numbers of sensors working in concert. Although the reduction in size has been phenomenal, it results in severe limitations on the computing, communicating, and power capabilities of these devices. Under these constraints, research efforts have concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control does not scale to massive-scale networks, and execution of simple tasks in sparsely populated networks will not lead to the sophisticated applications predicted. These must be enabled by new techniques dependent on local and autonomous cooperation between sensors to effect global functions. As a step in that direction, in this work we detail a technique whereby a large population of sensors can attain a global goal using only local information and by making only local decisions without any form of centralized control.
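
    A standard example of this local-only style of computation, not necessarily the paper's specific technique, is gossip averaging: every node converges to the global mean by repeatedly averaging with a random neighbor. A sketch with illustrative readings and a ring topology:

```python
import random

random.seed(1)
n = 20
readings = [random.uniform(15.0, 25.0) for _ in range(n)]   # local sensor values
true_mean = sum(readings) / n

for _ in range(20000):
    i = random.randrange(n)
    j = random.choice([(i - 1) % n, (i + 1) % n])   # a ring of neighbors
    # Pairwise averaging preserves the sum, so all nodes drift to the mean.
    readings[i] = readings[j] = (readings[i] + readings[j]) / 2

print("max deviation from global mean:",
      max(abs(r - true_mean) for r in readings))
```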

  10. Design of Control Plane Architecture Based on Cloud Platform and Experimental Network Demonstration for Multi-domain SDON

    NASA Astrophysics Data System (ADS)

    Li, Ming; Yin, Hongxi; Xing, Fangyuan; Wang, Jingchao; Wang, Honghuan

    2016-02-01

    With its features of network virtualization and resource programmability, the Software Defined Optical Network (SDON) is considered the future development trend of optical networks, providing more flexible, efficient, and open network functions and supporting intraconnection and interconnection of data centers. Meanwhile, cloud platforms provide powerful computing, storage, and management capabilities. In this paper, through the coordination of SDON and a cloud platform, a multi-domain SDON architecture based on a cloud control plane is proposed, composed of data centers with a database (DB), path computation element (PCE), SDON controller, and orchestrator. In addition, the structures of the multi-domain SDON orchestrator and an OpenFlow-enabled optical node are proposed to realize a combined centralized and distributed management and control platform. Finally, functional verification and demonstration are performed on our optical experimental network.

  11. [An experimental study of the computer-controlled equipment for delivering volatile anesthetic agent].

    PubMed

    Sun, B; Li, W Z; Yue, Y; Jiang, C W; Xiao, L Y

    2001-11-01

    Our newly designed computer-controlled equipment for delivering volatile anesthetic agent uses a subminiature single-chip processor as the central controlling unit. Variables such as anesthesia method, anesthetic agent, the volume of the respiratory loop, patient age, sex, height, weight, environment temperature, and ASA grade are input from the keyboard. The anesthetic dosage, calculated by the single-chip processor, is converted into signals controlling the pump to accurately deliver anesthetic agent into the respiratory loop. We designed an electrocircuit for the equipment that detects the status of the pump's operation, ensuring the safety and stability of the equipment. The equipment has good anti-jamming capability; its output precision is 1-2% for high-flow anesthesia and 1-5% for closed-circuit anesthesia, and its self-test function is reliable.

  12. Centralized vs. decentralized nursing stations: effects on nurses' functional use of space and work environment.

    PubMed

    Zborowsky, Terri; Bunker-Hellmich, Lou; Morelli, Agneta; O'Neill, Mike

    2010-01-01

    Evidence-based findings of the effects of nursing station design on nurses' work environment and work behavior are essential to improve conditions and increase retention among these fundamental members of the healthcare delivery team. The purpose of this exploratory study was to investigate how nursing station design (i.e., centralized and decentralized nursing station layouts) affected nurses' use of space, patient visibility, noise levels, and perceptions of the work environment. Advances in information technology have enabled nurses to move away from traditional centralized paper-charting stations to smaller decentralized work stations and charting substations located closer to, or inside of, patient rooms. Improved understanding of the trade-offs presented by centralized and decentralized nursing station design has the potential to provide useful information for future nursing station layouts. This information will be critical for understanding the nurse environment "fit." The study used an exploratory design with both qualitative and quantitative methods. Qualitative data regarding the effects of nursing station design on nurses' health and work environment were gathered by means of focus group interviews. Quantitative data-gathering techniques included place- and person-centered space use observations, patient visibility assessments, sound level measurements, and an online questionnaire regarding perceptions of the work environment. Nurses on all units were observed most frequently performing telephone, computer, and administrative duties. Time spent using telephones, computers, and performing other administrative duties was significantly higher in the centralized nursing stations. Consultations with medical staff and social interactions were significantly less frequent in decentralized nursing stations. There were no indications that either centralized or decentralized nursing station designs resulted in superior visibility. Sound levels measured in all nursing stations exceeded recommended levels during all shifts. No significant differences were identified in nurses' perceptions of work control-demand-support in centralized and decentralized nursing station designs. The "hybrid" nursing design model in which decentralized nursing stations are coupled with centralized meeting rooms for consultation between staff members may strike a balance between the increase in computer duties and the ongoing need for communication and consultation that addresses the conflicting demands of technology and direct patient care.

  13. The Fermilab Accelerator control system

    NASA Astrophysics Data System (ADS)

    Bogert, Dixon

    1986-06-01

    With the advent of the Tevatron, considerable upgrades have been made to the controls of all the Fermilab Accelerators. The current system is based on making as large an amount of data as possible available to many operators or end-users. Specifically there are about 100 000 separate readings, settings, and status and control registers in the various machines, all of which can be accessed by seventeen consoles, some in the Main Control Room and others distributed throughout the complex. A "Host" computer network of approximately eighteen PDP-11/34's, seven PDP-11/44's, and three VAX-11/785's supports a distributed data acquisition system including Lockheed MAC-16's left from the original Main Ring and Booster instrumentation and upwards of 1000 Z80, Z8002, and M68000 microprocessors in dozens of configurations. Interaction of the various parts of the system is via a central data base stored on the disk of one of the VAXes. The primary computer-hardware communication is via CAMAC for the new Tevatron and Antiproton Source; certain subsystems, among them vacuum, refrigeration, and quench protection, reside in the distributed microprocessors and communicate via GAS, an in-house protocol. An important hardware feature is an accurate clock system making a large number of encoded "events" in the accelerator supercycle available for both hardware modules and computers. System software features include the ability to save the current state of the machine or any subsystem and later restore it or compare it with the state at another time, a general logging facility to keep track of specific variables over long periods of time, detection of "exception conditions" and the posting of alarms, and a central filesharing capability in which files on VAX disks are available for access by any of the "Host" processors.

  14. TFTR diagnostic control and data acquisition system

    NASA Astrophysics Data System (ADS)

    Sauthoff, N. R.; Daniels, R. E.

    1985-05-01

    General computerized control and data-handling support for TFTR diagnostics is presented within the context of the Central Instrumentation, Control and Data Acquisition (CICADA) System. Procedures, hardware, the interactive man-machine interface, event-driven task scheduling, system-wide arming and data acquisition, and a hierarchical data base of raw data and results are described. Similarities in data structures involved in control, monitoring, and data acquisition afford a simplification of the system functions, based on "groups" of devices. Emphases and optimizations appropriate for fusion diagnostic system designs are provided. An off-line data reduction computer system is under development.

  15. TFTR diagnostic control and data acquisition system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauthoff, N.R.; Daniels, R.E.; PPL Computer Division

    1985-05-01

    General computerized control and data-handling support for TFTR diagnostics is presented within the context of the Central Instrumentation, Control and Data Acquisition (CICADA) System. Procedures, hardware, the interactive man-machine interface, event-driven task scheduling, system-wide arming and data acquisition, and a hierarchical data base of raw data and results are described. Similarities in data structures involved in control, monitoring, and data acquisition afford a simplification of the system functions, based on "groups" of devices. Emphases and optimizations appropriate for fusion diagnostic system designs are provided. An off-line data reduction computer system is under development.

  16. Bus-Programmable Slave Card

    NASA Technical Reports Server (NTRS)

    Hall, William A.

    1990-01-01

    Slave microprocessors in a multimicroprocessor computing system contain modified circuit cards programmed via the bus connecting the master processor with the slave microprocessors. Enables interactive, microprocessor-based, single-loop control. Confers ability to load and run a program from the master/slave bus, without need for a microprocessor development station. Tristate buffers latch all data and status information. The slave central processing unit is never connected directly to the bus.

  17. Interactive Computer-Supported Learning in Mathematics: A Comparison of Three Learning Programs on Trigonometry

    ERIC Educational Resources Information Center

    Sander, Elisabeth; Heiß, Andrea

    2014-01-01

    Three different versions of a learning program on trigonometry were compared: a program-controlled, non-interactive version (CG); an interactive, conflict-inducing version (EG 1); and an interactive one that was supposed to reduce the occurrence of a cognitive conflict regarding the central problem solution (EG 2). Pupils (N = 101) of a…

  18. Prediction and control of slender-wing rock

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.; Salman, Ahmed A.

    1992-01-01

    The unsteady Euler equations and the Euler equations of rigid-body dynamics, both written in the moving frame of reference, are sequentially solved to simulate the limit-cycle rock motion of slender delta wings. The governing equations of the fluid flow and the dynamics of the present multidisciplinary problem are solved using an implicit, approximately-factored, central-difference-like, finite-volume scheme and a four-stage Runge-Kutta scheme, respectively. For the control of wing-rock motion, leading-edge flaps are forced to oscillate anti-symmetrically at prescribed frequency and amplitude, which are tuned in order to suppress the rock motion. Since the computational grid deforms due to the leading-edge flaps motion, the grid is dynamically deformed using the Navier-displacement equations. Computational applications cover locally-conical and three-dimensional solutions for the wing-rock simulation and its control.

  19. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  20. A computer system for analysis and transmission of spirometry waveforms using volume sampling.

    PubMed

    Ostler, D V; Gardner, R M; Crapo, R O

    1984-06-01

    A microprocessor-controlled data gathering system for telemetry and analysis of spirometry waveforms was implemented using a completely digital design. Spirometry waveforms were obtained from an optical shaft encoder attached to a rolling seal spirometer. Time intervals between 10-ml volume changes (volume sampling) were stored. The digital design eliminated problems of analog signal sampling. The system measured flows up to 12 liters/sec with 5% accuracy and volumes up to 10 liters with 1% accuracy. Transmission of 10 waveforms took about 3 min. Error detection assured that no data were lost or distorted during transmission. A pulmonary physician at the central hospital reviewed the volume-time and flow-volume waveforms and interpretations generated by the central computer before forwarding the results and consulting with the rural physician. This system is suitable for use in a major hospital, rural hospital, or small clinic because of the system's simplicity and small size.
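
    A sketch of the volume-sampling idea: because each sample marks a fixed 10-ml volume increment, flow follows directly as the volume step divided by each recorded interval. The interval values below are illustrative:

```python
import numpy as np

# Volume sampling: the shaft encoder reports the time taken for each
# 10-ml volume increment, so flow is the fixed volume step divided by
# each interval. Intervals here are illustrative.
dv = 0.010                                   # litres per sample
intervals = np.array([4.1, 3.2, 2.5, 2.2, 2.4, 3.0, 4.4]) * 1e-3  # seconds
flow = dv / intervals                        # L/s at each volume step
volume = dv * np.arange(1, len(intervals) + 1)   # cumulative volume (L)
time = np.cumsum(intervals)                      # elapsed time (s)
# The flow-volume and volume-time curves follow directly from these arrays.
print(np.round(flow, 2))
```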

  1. Saccadic eye movements analysis as a measure of drug effect on central nervous system function.

    PubMed

    Tedeschi, G; Quattrone, A; Bonavita, V

    1986-04-01

    Peak saccadic velocity (PSV) and saccade duration (SD) of horizontal saccadic eye movements are demonstrably under the control of specific brain stem structures. Experimental and clinical evidence suggest the existence of an immediate premotor system for saccade generation located in the paramedian pontine reticular formation (PPRF). Effects on saccadic eye movements have been studied in normal volunteers with barbiturates, benzodiazepines, amphetamine and ethanol. On two occasions computer analysis of PSV, SD, saccade reaction time (SRT) and saccade accuracy (SA) was carried out in comparison with more traditional methods of assessment of human psychomotor performance like choice reaction time (CRT) and critical flicker fusion threshold (CFFT). The computer system proved to be a highly sensitive and objective method for measuring drug effect on central nervous system (CNS) function. It allows almost continuous sampling of data and appears to be particularly suitable for studying rapidly changing drug effects on the CNS.

  2. Virtualized Networks and Virtualized Optical Line Terminal (vOLT)

    NASA Astrophysics Data System (ADS)

    Ma, Jonathan; Israel, Stephen

    2017-03-01

    The success of the Internet and the proliferation of Internet of Things (IoT) devices is forcing telecommunications carriers to re-architect the central office as a datacenter (CORD) so as to bring datacenter economics and cloud agility to the central office (CO). The Open Network Operating System (ONOS) is the first open-source software-defined network (SDN) operating system capable of managing and controlling network, computing, and storage resources to support CORD infrastructure and network virtualization. The virtualized Optical Line Termination (vOLT) is one of the key components in such virtualized networks.

  3. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Secretary, has waived certain requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U... process known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  4. Analysis of a display and control system man-machine interface concept. Volume 1: Final technical report

    NASA Technical Reports Server (NTRS)

    Karl, D. R.

    1972-01-01

    An evaluation was made of the feasibility of utilizing a simplified man-machine interface concept to manage and control a complex space system involving multiple redundant computers that control multiple redundant subsystems. The concept involves the use of a CRT for display and a simple keyboard for control, with a tree-type control logic for accessing and controlling mission, systems, and subsystem elements. The concept was evaluated in terms of the Phase B space shuttle orbiter, to utilize the wide scope of data management and subsystem control inherent in the central data management subsystem provided by the Phase B design philosophy. Results of these investigations are reported in four volumes.

  5. Data Recording Room in the 10-by 10-Foot Supersonic Wind Tunnel

    NASA Image and Video Library

    1973-04-21

    The test data recording equipment was located in the office building of the 10-by 10-Foot Supersonic Wind Tunnel at the NASA Lewis Research Center. The data system was the state of the art when the facility began operating in 1955 and was upgraded over time. NASA engineers used solenoid valves to measure pressures from different locations within the test section. Up to 48 measurements could be fed into a single transducer. The 10-by 10 data recorders could handle up to 200 data channels at once. The Central Automatic Digital Data Encoder (CADDE) converted this direct current raw data from the test section into digital format on magnetic tape. The digital information was sent to the Lewis Central Computer Facility for additional processing. It could also be displayed in the control room via strip charts or oscillographs. The 16-by 56-foot long ERA 1103 UNIVAC mainframe computer processed most of the digital data. The paper tape with the raw data was fed into the ERA 1103, which performed the needed calculations. The information was then sent back to the control room. There was a lag of several minutes before the computed information was available, but it was exponentially faster than the hand calculations performed by the female computers. The 10- by 10-foot tunnel, which had its official opening in May 1956, was built under the Congressional Unitary Plan Act, which coordinated wind tunnel construction at the NACA, Air Force, industry, and universities. The 10- by 10 was the largest of the three NACA tunnels built under the act.

  6. Power system distributed oscilation detection based on Synchrophasor data

    NASA Astrophysics Data System (ADS)

    Ning, Jiawei

    Along with increasing demand for electricity, integration of renewable energy, and deregulation of the power market, the power industry is facing unprecedented challenges. Within the last couple of decades, several serious blackouts have taken place in the United States. As an effective approach to preventing them, power system small signal stability monitoring has been drawing increasing interest from researchers. With the widespread implementation of synchrophasors around the world in the last decade, real-time online monitoring of power systems has become much more feasible. Compared with planning-study analysis, real-time online monitoring benefits control room operators immediately and directly. Among online monitoring methods, Oscillation Modal Analysis (OMA), a modal identification method based on routine measurement data where the input is unmeasured ambient excitation, is a powerful tool for evaluating and monitoring power system small signal stability. Indeed, high-sampling-rate synchrophasor data from around the power system are a natural fit as inputs to OMA. Existing OMA methods for power systems are all based on centralized algorithms running at control centers only; however, with the rapidly growing number of online synchrophasors, the computational burden at control centers is expanding exponentially. The increasing computation time at the control center compromises the real-time character of online monitoring, and the communication load between substations and the control center will also become prohibitive. Meanwhile, it is difficult or even impossible for centralized algorithms to detect some poorly damped local modes. To avoid these shortcomings of centralized OMA methods and embrace the changes under way in power systems, this dissertation presents two new distributed oscillation detection methods with two new decentralized structures. Because the new schemes bring substations into the oscillation detection picture, the proposed methods achieve faster and more reliable results. This claim is tested and supported by results on the IEEE two-area simulation test system and by case studies using historian synchrophasor data from a real power system.

  7. The development and testing of a fieldworthy system of improved fluid pumping device and liquid sensor for oil wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckman, W.G.

    1991-12-31

    A major expenditure to maintain oil and gas leases is the support of pumpers, those individuals who maintain the pumping systems on wells to achieve optimum production. Many leases are marginal and in remote areas, which requires considerable driving time for the pumper. The Air Pulse Oil Pump System is designed to be an economical system for shallow stripper wells. To improve the economics of this system, we have designed a Remote Oil Field Monitor and Controller that enables us to acquire data from the lease at our central office at any time and to control the pumping activities from the central office using a personal computer. The advent and economics of low-power microcontrollers have made it feasible to use this type of system for numerous remote control applications. We can also adapt this economical system to monitor and control the production of gas wells and/or pump jacks.

  8. Evaluation of the lambda model for human postural control during ankle strategy.

    PubMed

    Micheau, Philippe; Kron, Aymeric; Bourassa, Paul

    2003-09-01

    An accurate modeling of human stance might be helpful in assessing postural deficit. The objective of this article is to validate a mathematical postural control model for quiet standing posture. The postural dynamics is modeled in the sagittal plane as an inverted pendulum with torque applied at the ankle joint. The torque control system is represented by the physiological lambda model. Two neurophysiological command variables of the central nervous system, designated lambda and mu, establish the dynamic muscle threshold at which motoneuron recruitment begins. Kinematic data and electromyographic signals were collected on four young males in order to measure small voluntary sway and quiet standing posture. Validation of the mathematical model was achieved through comparison of the experimental and simulated results. The mathematical model allows computation of the unmeasurable neurophysiological commands lambda and mu that control the equilibrium position and stability. Furthermore, with the model it is possible to conclude that low-amplitude body sway during quiet stance is commanded by the central nervous system.
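
    A minimal sketch (far simpler than the paper's model) of the lambda idea: an ankle inverted pendulum stabilized by an agonist-antagonist pair recruited only beyond dynamic thresholds set by the commands lambda and mu. All parameters are illustrative:

```python
import numpy as np

# Ankle inverted pendulum under lambda-style threshold control: muscle
# torque is recruited only when theta + mu*omega exceeds the threshold
# command lam. All constants are illustrative.
m, L, g = 70.0, 0.9, 9.81        # mass (kg), CoM height (m)
I = m * L**2                     # point-mass inertia about the ankle
k, b = 2500.0, 150.0             # muscle gain, reflex/passive damping
lam, mu = 0.0, 0.06              # central threshold commands

theta, omega, dt = 0.03, 0.0, 0.001   # start with a small forward lean
for _ in range(5000):
    drive = theta + mu * omega        # dynamic threshold variable
    torque = L * (-k * max(drive - lam, 0.0) + k * max(-drive - lam, 0.0))
    alpha = (m * g * L * np.sin(theta) + torque - b * omega) / I
    theta += omega * dt
    omega += alpha * dt
print("final sway angle (rad): %.4f" % theta)   # settles near upright
```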

  9. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  10. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  11. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  12. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  13. Optimal Synthesis of the Joint Unitary Evolutions

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Alsaedi, Ahmed; Hobiny, Aatef; Deng, Fu-Guo; Hu, Hui; Zhang, Dun

    2018-07-01

    Joint unitary operations play a central role in quantum communication and computation. We give a quantum circuit for implementing a class of useful joint unitary evolutions, not previously constructed, in terms of controlled-NOT (CNOT) gates and single-qubit rotations. Our synthesis is optimal and experimentally feasible. Two CNOT gates and seven R_x, R_y, or R_z rotations are required, and the arbitrary parameter contained in the evolutions can be controlled by local Hamiltonians or external fields.
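
    A synthesis of this shape can be checked numerically by composing the gate matrices and verifying unitarity; the particular placement of the seven rotations below is illustrative, not the paper's decomposition:

```python
import numpy as np

# Compose a two-qubit circuit of the stated form (seven single-qubit
# rotations interleaved with two CNOTs) and check that it is unitary.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
CNOT = np.eye(4, dtype=complex)
CNOT[2:, 2:] = X                     # control on qubit 0, target on qubit 1

def rot(axis, angle):
    """exp(-i*angle/2 * axis) for a Pauli axis."""
    return np.cos(angle / 2) * I2 - 1j * np.sin(angle / 2) * axis

def on(q, u):                        # lift a one-qubit gate to two qubits
    return np.kron(u, I2) if q == 0 else np.kron(I2, u)

theta = 0.7                          # the circuit's tunable parameter
U = (on(0, rot(Z, 0.3)) @ on(1, rot(Z, 0.5)) @ CNOT
     @ on(0, rot(Z, -0.4)) @ on(1, rot(X, theta)) @ CNOT
     @ on(0, rot(Z, 1.1)) @ on(1, rot(Y, 0.9)) @ on(1, rot(Z, 0.2)))
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: U is unitary
```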

  14. Preliminary design of a solar central receiver for a site-specific repowering application (Saguaro Power Plant). Volume IV. Appendixes. Final report, October 1982-September 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, E.R.

    1983-09-01

    The appendixes for the Saguaro Power Plant include the following: receiver configuration selection report; operating modes and transitions; failure modes analysis; control system analysis; computer codes and simulation models; procurement package scope descriptions; responsibility matrix; solar system flow diagram component purpose list; thermal storage component and system test plans; solar steam generator tube-to-tubesheet weld analysis; pipeline listing; management control schedule; and system list and definitions.

  15. Optimal Synthesis of the Joint Unitary Evolutions

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Alsaedi, Ahmed; Hobiny, Aatef; Deng, Fu-Guo; Hu, Hui; Zhang, Dun

    2018-03-01

    Joint unitary operations play a central role in quantum communication and computation. We give a quantum circuit for implementing a class of useful joint unitary evolutions, not previously constructed, in terms of controlled-NOT (CNOT) gates and single-qubit rotations. Our synthesis is optimal and experimentally feasible. Two CNOT gates and seven R_x, R_y, or R_z rotations are required, and the arbitrary parameter contained in the evolutions can be controlled by local Hamiltonians or external fields.

  16. Radar Detection Models in Computer Supported Naval War Games

    DTIC Science & Technology

    1979-06-08

    revealed a requirement for the effective centralized management of computer supported war game development and employment in the U.S. Navy. ... considerations and supports the requirement for centralized management of computerized war game development. Therefore it is recommended that a central managerial and fiscal authority be established for computerized tactical war game development. This central authority should ensure that new games

  17. En Garde: Fencing at Kansas City's Central Computers Unlimited/Classical Greek Magnet High School, 1991-1995

    ERIC Educational Resources Information Center

    Poos, Bradley W.

    2015-01-01

    Central High School in Kansas City, Missouri is one of the oldest schools west of the Mississippi and the first public high school built in Kansas City. Kansas City's magnet plan resulted in Central High School being rebuilt as the Central Computers Unlimited/Classical Greek Magnet High School, a school that was designed to offer students an…

  18. Arm coordination in octopus crawling involves unique motor control strategies.

    PubMed

    Levy, Guy; Flash, Tamar; Hochner, Binyamin

    2015-05-04

    To cope with the exceptional computational complexity that is involved in the control of its hyper-redundant arms [1], the octopus has adopted unique motor control strategies in which the central brain activates rather autonomous motor programs in the elaborated peripheral nervous system of the arms [2, 3]. How octopuses coordinate their eight long and flexible arms in locomotion is still unknown. Here, we present the first detailed kinematic analysis of octopus arm coordination in crawling. The results are surprising in several respects: (1) despite its bilaterally symmetrical body, the octopus can crawl in any direction relative to its body orientation; (2) body and crawling orientation are monotonically and independently controlled; and (3) in contrast to known animal locomotion, octopus crawling lacks any apparent rhythmical patterns in limb coordination, suggesting a unique non-rhythmical output of the octopus central controller. We show that this uncommon maneuverability is derived from the radial symmetry of the arms around the body and the simple pushing-by-elongation mechanism by which the arms create the crawling thrust. These two together enable a mechanism whereby the central controller chooses in a moment-to-moment fashion which arms to recruit for pushing the body in an instantaneous direction. Our findings suggest that the soft molluscan body has affected in an embodied way [4, 5] the emergence of the adaptive motor behavior of the octopus. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Stretching the Traditional Notion of Experiment in Computing: Explorative Experiments.

    PubMed

    Schiaffonati, Viola

    2016-06-01

    Experimentation is today a 'hot' topic in computing. While experiments made with the support of computers, such as computer simulations, have received increasing attention from philosophers of science and technology, questions such as "what does it mean to do experiments in computer science and engineering, and what are their benefits?" have emerged only recently as central in the debate over the disciplinary status of the field. In this work we aim to show, also by means of paradigmatic examples, how the traditional notion of the controlled experiment should be revised to account for part of the experimental practice in computing, along the lines of experimentation as exploration. Taking inspiration from the discussion of exploratory experimentation in the philosophy of science (experimentation that is not theory-driven), we advance the idea of explorative experiments which, although not new, can contribute to enlarging the debate about the nature and role of experimental methods in computing. To refine this concept further, we recast explorative experiments as socio-technical experiments that test new technologies in their socio-technical contexts. We suggest that, when experiments are explorative, control should be understood in an a posteriori form, in opposition to the a priori form it usually takes in traditional experimental contexts.

  20. Federal Research Opportunities: DOE, DOD, and HHS Need Better Guidance for Participant Activities

    DTIC Science & Technology

    2016-01-01

    process controls of advanced power systems, gas sensors at high temperatures, improving extraction of rare earth elements, quantum computing, biofilms ... chronic diseases (e.g., heart disease, obesity, cancer), environmental health, toxic substances, health statistics, and public health preparedness. Food and ... Health: localization of proteins using molecular markers, gene regulatory effects in cancer, medical informatics, and central nervous system

  1. A Directory of Sources of Information and Data Bases on Education and Training.

    DTIC Science & Technology

    1980-09-01

    ACAD007 National Opinion Research Center (NORC) ... ACAD008 U of California Union Catalog Supp. (1963-1967) ... Records (RSR) ... ARMY030 Union Central Registry System (UCRSYS) ... ARMY032 Training Control Card Report ... research. Your query directs a computer search of the Comprehensive Dissertation Database. The search produces a list of all titles matching your

  2. Central tendency effects in time interval reproduction in autism

    PubMed Central

    Karaminis, Themelis; Cicchini, Guido Marco; Neil, Louise; Cappagli, Giulia; Aagten-Murphy, David; Burr, David; Pellicano, Elizabeth

    2016-01-01

    Central tendency, the tendency of judgements of quantities (lengths, durations etc.) to gravitate towards their mean, is one of the most robust perceptual effects. A Bayesian account has recently suggested that central tendency reflects the integration of noisy sensory estimates with prior knowledge representations of a mean stimulus, serving to improve performance. The process is flexible, so prior knowledge is weighted more heavily when sensory estimates are imprecise, requiring more integration to reduce noise. In this study we measure central tendency in autism to evaluate a recent theoretical hypothesis suggesting that autistic perception relies less on prior knowledge representations than typical perception. If true, autistic children should show less central tendency than theoretically predicted from their temporal resolution. We tested autistic and age- and ability-matched typical children in two child-friendly tasks: (1) a time interval reproduction task, measuring central tendency in the temporal domain; and (2) a time discrimination task, assessing temporal resolution. Central tendency decreased with age in typical development, while temporal resolution improved. Autistic children performed far worse in temporal discrimination than the matched controls. Computational simulations suggested that central tendency was much weaker in autistic children than predicted by theoretical modelling, given their poor temporal resolution. PMID:27349722
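
    The Bayesian account described above amounts to a precision-weighted average of the noisy sensory estimate and the prior mean. A sketch (illustrative numbers, not fitted to the study's data) showing that noisier senses produce a flatter reproduction slope, i.e. stronger central tendency:

```python
import numpy as np

# Precision-weighted integration of a noisy sensory measurement with a
# prior over the stimulus mean: the Bayesian model of central tendency.
rng = np.random.default_rng(0)
durations = rng.uniform(0.4, 1.2, 500)       # presented intervals (s)
prior_mean = durations.mean()

def reproduce(d, sigma_s, sigma_p=0.25):
    sensed = d + rng.normal(0, sigma_s)      # noisy sensory measurement
    w = sigma_p**2 / (sigma_p**2 + sigma_s**2)   # weight on the senses
    return w * sensed + (1 - w) * prior_mean

for sigma_s in (0.05, 0.30):                 # good vs poor temporal resolution
    slope = np.polyfit(durations,
                       [reproduce(d, sigma_s) for d in durations], 1)[0]
    print(f"sensory noise {sigma_s}: reproduction slope {slope:.2f}")
# Noisier sensory estimates give a flatter slope, i.e. stronger central
# tendency -- the prediction tested in the autistic group.
```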

  3. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic pain in the upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as conditioning pain stimulus and PPT at TA as test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups neither in PPTs nor pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  4. The PLATO IV Architecture.

    ERIC Educational Resources Information Center

    Stifle, Jack

    The PLATO IV computer-based instructional system consists of a large scale centrally located CDC 6400 computer and a large number of remote student terminals. This is a brief and general description of the proposed input/output hardware necessary to interface the student terminals with the computer's central processing unit (CPU) using available…

  5. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  6. Quantification of peripheral and central blood pressure variability using a time-frequency method.

    PubMed

    Kouchaki, Z; Butlin, M; Qasem, A; Avolio, A P

    2016-08-01

    Systolic blood pressure variability (BPV) is associated with cardiovascular events. As the beat-to-beat variation of blood pressure is due to the interaction of several cardiovascular control systems operating with different response times, assessment of BPV by spectral analysis of the continuous measurement of arterial pressure in the finger is used to differentiate the contribution of these systems in regulating blood pressure. However, as the baroreceptors are centrally located, this study considered applying a continuous aortic pressure signal estimated noninvasively from finger pressure for assessment of systolic BPV by a time-frequency method using the Short Time Fourier Transform (STFT). The average ratio of the low frequency and high frequency power bands (LF_PB/HF_PB) was computed by time-frequency decomposition of peripheral systolic pressure (pSBP) and derived central aortic systolic blood pressure (cSBP) in 30 healthy subjects (25-62 years) as a marker of the balance between the cardiovascular control systems contributing to low and high frequency blood pressure variability. The results showed that the BPV assessed from finger pressure (pBPV) overestimated the BPV values compared to that assessed from central aortic pressure (cBPV) for identical cardiac cycles (P<0.001), with the overestimation being greater at higher power.
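
    The LF/HF power-band ratio described above can be computed with an off-the-shelf STFT. A minimal Python sketch follows, assuming the beat-to-beat systolic series has been resampled to a uniform rate (here 4 Hz) and using the conventional 0.04-0.15 Hz (LF) and 0.15-0.4 Hz (HF) bands; the paper's exact band edges and windowing are not specified here.

        import numpy as np
        from scipy.signal import stft

        def lf_hf_ratio(sbp, fs=4.0, lf=(0.04, 0.15), hf=(0.15, 0.40)):
            # Time-frequency decomposition of the detrended systolic series.
            f, _, Z = stft(sbp - np.mean(sbp), fs=fs, nperseg=256)
            power = np.abs(Z) ** 2
            lf_power = power[(f >= lf[0]) & (f < lf[1]), :].sum(axis=0)
            hf_power = power[(f >= hf[0]) & (f < hf[1]), :].sum(axis=0)
            return float(np.mean(lf_power / hf_power))   # average LF_PB/HF_PB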

  7. ALLY: An operator's associate for satellite ground control systems

    NASA Technical Reports Server (NTRS)

    Bushman, J. B.; Mitchell, Christine M.; Jones, P. M.; Rubin, K. S.

    1991-01-01

    The key characteristics of an intelligent advisory system are explored. A central feature is that human-machine cooperation should be based on a metaphor of human-to-human cooperation. ALLY, a computer-based operator's associate which is based on a preliminary theory of human-to-human cooperation, is discussed. ALLY assists the operator in carrying out the supervisory control functions for a simulated NASA ground control system. Experimental evaluation of ALLY indicates that operators using ALLY performed at least as well as they did when using a human associate, and in some cases even better.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Junghyun; Jo, Gangwon; Jung, Jaehoon

    Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with an illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.

  9. Computing at DESY — current setup, trends and strategic directions

    NASA Astrophysics Data System (ADS)

    Ernst, Michael

    1998-05-01

    Since the HERA experiments H1 and ZEUS started data taking in '92, the computing environment at DESY has changed dramatically. Having run mainframe-centred computing for more than 20 years, DESY switched to a heterogeneous, fully distributed computing environment within only about two years in almost every corner where computing has its applications. The computing strategy was highly influenced by the needs of the user community. The collaborations are usually limited by current technology, and their ever increasing demands are the driving force for central computing to always move close to the technology edge. While DESY's central computing has multi-decade experience in running Central Data Recording/Central Data Processing for HEP experiments, the most challenging task today is to provide clear and homogeneous concepts in the desktop area. Given that lowest-level commodity hardware draws more and more attention, combined with the financial constraints we are already facing today, we quickly need concepts for integrated support of a versatile device which has the potential to move into basically any computing area in HEP. Though commercial solutions, especially those addressing PC management/support issues, are expected to come to market in the next 2-3 years, we need to provide suitable solutions now. Buying PCs at DESY, currently at a rate of about 30/month, will otherwise absorb all available manpower in central computing and still leave hundreds of people unhappy. Though certainly not the only area, the desktop issue is one of the most important ones where we need HEP-wide collaboration to a large extent, and right now. Taking into account that there is traditionally no room for R&D at DESY, collaboration, meaning sharing experience and development resources within the HEP community, is a predominant factor for us.

  10. Sensor Control And Film Annotation For Long Range, Standoff Reconnaissance

    NASA Astrophysics Data System (ADS)

    Schmidt, Thomas G.; Peters, Owen L.; Post, Lawrence H.

    1984-12-01

    This paper describes a Reconnaissance Data Annotation System that incorporates off-the-shelf technology and system designs providing a high degree of adaptability and interoperability to satisfy future reconnaissance data requirements. The history of data annotation for reconnaissance is reviewed in order to provide the base from which future developments can be assessed and technical risks minimized. The system described will accommodate new developments in recording head assemblies and the incorporation of advanced cameras of both the film and electro-optical type. Use of microprocessor control and a digital bus interface forms the central design philosophy. For long range, high altitude, standoff missions, the Data Annotation System computes the projected latitude and longitude of the central target position from aircraft position and attitude. This complements the use of longer ranges and higher altitudes for reconnaissance missions.
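
    Projecting a target's latitude and longitude from aircraft position and attitude reduces, in its simplest form, to intersecting the camera line of sight with the ground. The sketch below shows a flat-earth approximation in Python; the actual system's geodetic algorithm is not described in the abstract, and all parameter names are illustrative.

        import math

        def target_lat_lon(lat, lon, alt_m, heading_deg, depression_deg):
            # Flat-earth estimate of the ground point seen at a given
            # camera depression angle (illustrative, not the system's method).
            ground_range = alt_m / math.tan(math.radians(depression_deg))
            dn = ground_range * math.cos(math.radians(heading_deg))  # north offset, m
            de = ground_range * math.sin(math.radians(heading_deg))  # east offset, m
            dlat = dn / 111_320.0                           # metres per degree latitude
            dlon = de / (111_320.0 * math.cos(math.radians(lat)))
            return lat + dlat, lon + dlon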

  11. Instrumentation and test methods of an automated radiated susceptibility system

    NASA Astrophysics Data System (ADS)

    Howard, M. W.; Deere, J.

    1983-09-01

    The instrumentation and test methods of an automated electromagnetic compatibility (EMC) system for performing radiated susceptibility tests from 14 kHz to 1000 MHz are described. Particular emphasis is given to the effectiveness of the system in the evaluation of electronic circuits for susceptibility to RF radiation. The system consists of a centralized data acquisition/control unit which interfaces with the equipment under test (EUT), the RF-isolated field probes, and the RF amplifier ALC output; four broadband linear RF amplifiers; and a frequency synthesizer with drive level increments in steps of 0.1 dB. Centralized control of the susceptibility test system is provided by a desktop computer. It is found that the system can reduce the execution time of RF susceptibility tests by as much as 70 percent. A block diagram of the system is provided.
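
    The core of such an automated test is a frequency sweep in which the controller steps the synthesizer, reads the field probe, and records any EUT faults. A schematic Python loop follows; the instrument driver methods (set_frequency, read_field, check_fault) are hypothetical stand-ins for whatever bus interface the real system uses.

        import numpy as np

        def radiated_susceptibility_sweep(synth, probe, eut,
                                          f_start=14e3, f_stop=1000e6, points=300):
            # Step the synthesizer logarithmically across the test band and
            # log field level and EUT status at each frequency (sketch only).
            log = []
            for f in np.geomspace(f_start, f_stop, points):
                synth.set_frequency(f)        # hypothetical driver call
                field = probe.read_field()    # hypothetical field-probe readback
                fault = eut.check_fault()     # hypothetical EUT monitor
                log.append((f, field, fault))
            return log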

  12. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Hoerger, J.

    1984-01-01

    Users of ADABAS, a relational-like data base management system with its data base programming language (NATURAL), are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy-to-use and flexible means for transferring logical data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.

  13. Hand-held computer operating system program for collection of resident experience data.

    PubMed

    Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J

    2000-11-01

    To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data among other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database are accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.

  14. Dependable control systems with Internet of Things.

    PubMed

    Tran, Tri; Ha, Q P

    2015-11-01

    This paper presents an Internet of Things (IoT)-enabled dependable control system (DepCS) for continuous processes. In a DepCS, an actuator and a transmitter form a regulatory control loop. Each processor inside such actuator and transmitter is designed as a computational platform implementing the feedback control algorithm. The connections between actuators and transmitters via IoT create a reliable backbone for a DepCS. The centralized input-output marshaling system is not required in DepCSs. A state feedback control synthesis method for DepCS applying the self-recovery constraint is presented in the second part of the paper. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
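
    In such an architecture the feedback law itself is small enough to run on the processor embedded in each actuator/transmitter pair. A minimal discrete-time state-feedback loop is sketched below; the plant matrices and gain are illustrative, not taken from the paper, which presents its own synthesis method with a self-recovery constraint.

        import numpy as np

        A = np.array([[1.0, 0.1],
                      [0.0, 0.95]])        # assumed discrete-time plant model
        B = np.array([[0.0],
                      [0.1]])
        K = np.array([[2.0, 1.5]])         # assumed stabilizing feedback gain

        x = np.array([[1.0], [0.0]])       # state reported by the transmitter
        for step in range(50):
            u = -K @ x                     # control law computed in the actuator
            x = A @ x + B @ u              # plant evolves; transmitter re-measures
        print(np.linalg.norm(x))           # near zero, since K stabilizes the loop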

  15. Implementation of Autonomous Control Technology for Plant Growth Chambers

    NASA Technical Reports Server (NTRS)

    Costello, Thomas A.; Sager, John C.; Krumins, Valdis; Wheeler, Raymond M.

    2002-01-01

    The Kennedy Space Center has significant infrastructure for research using controlled environment plant growth chambers. Such research supports development of bioregenerative life support technology for long-term space missions. Most of the existing chambers in Hangar L and Little L will be moved to the new Space Experiment Research and Processing Laboratory (SERPL) in the summer of 2003. The impending move has created an opportunity to update the control system technologies to allow for greater flexibility, less labor for set-up and maintenance, better diagnostics, better reliability and easier data retrieval. Part of these improvements can be realized using hardware which communicates through an ethernet connection to a central computer for supervisory control but can be operated independently of the computer during routine run-time. Both the hardware and software functionality of an envisioned system were tested on a prototype plant growth chamber (CEC-4) in Hangar L. Based upon these tests, recommendations for hardware and software selection and system design for implementation in SERPL are included.

  16. Space shuttle main engine controller

    NASA Technical Reports Server (NTRS)

    Mattox, R. M.; White, J. B.

    1981-01-01

    A technical description of the space shuttle main engine controller, which provides engine checkout prior to launch, engine control and monitoring during launch, and engine safety and monitoring in orbit, is presented. Each of the major controller subassemblies, the central processing unit, the computer interface electronics, the input electronics, the output electronics, and the power supplies are described and discussed in detail along with engine and orbiter interfaces and operational requirements. The controller represents a unique application of digital concepts, techniques, and technology in monitoring, managing, and controlling a high performance rocket engine propulsion system. The operational requirements placed on the controller, the extremely harsh operating environment to which it is exposed, and the reliability demanded, result in the most complex and rugged digital system ever designed, fabricated, and flown.

  17. Control of a solar-energy-supplied electrical-power system without intermediate circuitry

    NASA Astrophysics Data System (ADS)

    Leistner, K.

    A computer control system is developed for electric-power systems comprising solar cells and small numbers of users with individual centrally controlled converters (and storage facilities when needed). Typical system structures are reviewed; the advantages of systems without an intermediate network are outlined; the demands on a control system in such a network (optimizing generator working point and power distribution) are defined; and a flexible modular prototype system is described in detail. A charging station for lead batteries used in electric automobiles is analyzed as an example. The power requirements of the control system (30 W for generator control and 50 W for communications and distribution control) are found to limit its use to larger networks.

  18. Impaired associative learning in schizophrenia: behavioral and computational studies

    PubMed Central

    Diwadkar, Vaibhav A.; Flaugher, Brad; Jones, Trevor; Zalányi, László; Ujfalussy, Balázs; Keshavan, Matcheri S.

    2008-01-01

    Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia. PMID:19003486
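
    The model's two key parameters, a learning rate (surrogate for synaptic plasticity) and a learning capacity (surrogate for memory capacity), can be illustrated with a standard negatively accelerated learning curve. The exponential form and the parameter values below are illustrative choices, not the authors' exact formulation.

        import numpy as np

        def learning_curve(n_trials, rate, capacity):
            # Negatively accelerated learning: performance climbs steeply at
            # first, then asymptotes at 'capacity'.
            n = np.arange(1, n_trials + 1)
            return capacity * (1.0 - np.exp(-rate * n))

        controls = learning_curve(24, rate=0.35, capacity=1.00)  # healthy controls
        patients = learning_curve(24, rate=0.20, capacity=0.80)  # reduced rate/capacity
        print(controls[-1], patients[-1])   # patients plateau lower and later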

  19. A modular architecture for transparent computation in recurrent neural networks.

    PubMed

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.
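
    The Gödelization step, mapping a symbol sequence to a point in a vector space, can be illustrated with the classic encoding of a one-sided sequence into the unit interval. The sketch below is the textbook construction, offered only as an illustration of the idea, not the paper's full versatile-shift machinery.

        def goedelize(sequence, alphabet):
            # Encode a finite symbol sequence as a base-g fraction in [0, 1),
            # where g is the alphabet size.
            g = len(alphabet)
            index = {s: i for i, s in enumerate(alphabet)}
            return sum(index[s] * g ** -(k + 1) for k, s in enumerate(sequence))

        # 'abba' over alphabet 'ab' -> 0/2 + 1/4 + 1/8 + 0/16 = 0.375
        print(goedelize("abba", "ab"))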

  20. A scalable quantum computer with ions in an array of microtraps

    PubMed

    Cirac; Zoller

    2000-04-06

    Quantum computers require the storage of quantum information in a set of two-level systems (called qubits), the processing of this information using quantum gates and a means of final readout. So far, only a few systems have been identified as potentially viable quantum computer models--accurate quantum control of the coherent evolution is required in order to realize gate operations, while at the same time decoherence must be avoided. Examples include quantum optical systems (such as those utilizing trapped ions or neutral atoms, cavity quantum electrodynamics and nuclear magnetic resonance) and solid state systems (using nuclear spins, quantum dots and Josephson junctions). The most advanced candidates are the quantum optical and nuclear magnetic resonance systems, and we expect that they will allow quantum computing with about ten qubits within the next few years. This is still far from the numbers required for useful applications: for example, the factorization of a 200-digit number requires about 3,500 qubits, rising to 100,000 if error correction is implemented. Scalability of proposed quantum computer architectures to many qubits is thus of central importance. Here we propose a model for an ion trap quantum computer that combines scalability (a feature usually associated with solid state proposals) with the advantages of quantum optical systems (in particular, quantum control and long decoherence times).

  1. Energy consumption and load profiling at major airports. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, J.

    1998-12-01

    This report describes the results of energy audits at three major US airports. These studies developed load profiles and quantified energy usage at these airports while identifying procedures and electrotechnologies that could reduce their power consumption. The major power consumers at the airports studied included central plants, runway and taxiway lighting, fuel farms, terminals, people mover systems, and hangar facilities. Several major findings emerged during the study. The amount of energy efficient equipment installed at an airport is directly related to the age of the facility. Newer facilities had more energy efficient equipment while older facilities had much of the original electric and natural gas equipment still in operation. As redesign, remodeling, and/or replacement projects proceed, responsible design engineers are selecting more energy efficient equipment to replace original devices. The use of computer-controlled energy management systems varies. At airports, the primary purpose of these systems is to monitor and control the lighting and environmental air conditioning and heating of the facility. Of the facilities studied, one used computer management extensively, one used it only marginally, and one had no computer controlled management devices. At all of the facilities studied, natural gas is used to provide heat and hot water. Natural gas consumption is at its highest in the months of November, December, January, and February. The Central Plant contains most of the inductive load at an airport and is also a major contributor to power consumption inefficiency. Power factor correction equipment was used at one facility but was not installed at the other two facilities due to high power factor and/or lack of need.

  2. Nursing operations automation and health care technology innovations: 2025 and beyond.

    PubMed

    Suby, ChrysMarie

    2013-01-01

    This article reviews why nursing operations automation is important, reviews the impact of computer technology on nursing from a historical perspective, and considers the future of nursing operations automation and health care technology innovations in 2025 and beyond. The increasing automation in health care organizations will benefit patient care, staffing and scheduling systems and central staffing offices, census control, and measurement of patient acuity.

  3. Production planning, production systems for flexible automation

    NASA Astrophysics Data System (ADS)

    Spur, G.; Mertins, K.

    1982-09-01

    Trends in flexible manufacturing system (FMS) applications are reviewed. Machining systems contain machines which complement each other and can replace each other. Computer controlled storage systems are widespread, with central storage capacity ranging from 20 pallet spaces to 200 magazine spaces. The handling function is fulfilled by pallet changers in over 75% of FMSs. The degree of automation of data systems varies considerably. No trends are noted for transport systems.

  4. Flexible structure control laboratory development and technology demonstration

    NASA Technical Reports Server (NTRS)

    Vivian, H. C.; Blaire, P. E.; Eldred, D. B.; Fleischer, G. E.; Ih, C.-H. C.; Nerheim, N. M.; Scheid, R. E.; Wen, J. T.

    1987-01-01

    An experimental structure is described which was constructed to demonstrate and validate recent emerging technologies in the active control and identification of large flexible space structures. The configuration consists of a large, 20-foot-diameter antenna-like flexible structure in the horizontal plane with a gimballed central hub, a flexible feed-boom assembly hanging from the hub, and 12 flexible ribs radiating outward. Fourteen electrodynamic force actuators mounted to the hub and to the individual ribs provide the means to excite the structure and exert control forces. Thirty permanently mounted sensors, including optical encoders and analog induction devices, provide measurements of structural response at widely distributed points. An experimental remote optical sensor provides sixteen additional sensing channels. A computer samples the sensors, computes the control updates and sends commands to the actuators in real time, while simultaneously displaying selected outputs on a graphics terminal and saving them in memory. Several control experiments have been conducted thus far and are documented. These include implementation of distributed parameter system control, model reference adaptive control, and static shape control. These experiments have demonstrated the successful implementation of state-of-the-art control approaches using actual hardware.

  5. Weighted link graphs: a distributed IDS for secondary intrusion detection and defense

    NASA Astrophysics Data System (ADS)

    Zhou, Mian; Lang, Sheau-Dong

    2005-03-01

    While a firewall installed at the perimeter of a local network provides the first line of defense against hackers, many intrusion incidents are the result of successful penetration of the firewalls. One computer's compromise often puts the entire network at risk. In this paper, we propose an IDS that provides finer control over the internal network. The system focuses on the variations of connection-based behavior of each single computer, and uses a weighted link graph to visualize the overall traffic abnormalities. Our system functions as a distributed personal IDS that also provides centralized traffic analysis through graphical visualization. We use a novel weight assignment schema for the local detection within each end agent. The local abnormalities are quantified by the node weight and link weight and further sent to the central analyzer to build the weighted link graph. Thus, we distribute the burden of traffic processing and visualization to each agent and make the overall intrusion detection more efficient. As LANs are more vulnerable to inside attacks, our system is designed as a reinforcement to prevent corruption from the inside.
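
    The central analyzer's data structure is easy to picture: each agent reports a node weight for its own host and link weights for its connections, and the analyzer assembles them into one graph. A toy sketch with networkx follows; the weight values and flagging threshold are illustrative, since the paper's weight-assignment schema is not reproduced here.

        import networkx as nx

        G = nx.Graph()
        agent_reports = {                       # per-agent anomaly reports (toy data)
            "hostA": {"node_weight": 0.9, "links": {"hostB": 0.7, "hostC": 0.1}},
            "hostB": {"node_weight": 0.2, "links": {"hostC": 0.05}},
            "hostC": {"node_weight": 0.1, "links": {}},
        }
        for host, report in agent_reports.items():
            G.add_node(host, weight=report["node_weight"])
            for peer, w in report["links"].items():
                G.add_edge(host, peer, weight=w)

        # The central analyzer flags hosts whose combined score is too high.
        for host in G:
            score = G.nodes[host]["weight"] + sum(G[host][p]["weight"] for p in G[host])
            if score > 1.0:                     # illustrative threshold
                print(f"suspicious: {host} (score {score:.2f})")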

  6. Effect of rotation rate on the forces of a rotating cylinder: Simulation and control

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Ou, Yuh-Roung

    1993-01-01

    In this paper we present numerical solutions to several optimal control problems for an unsteady viscous flow. The main thrust of this work is devoted to simulation and control of an unsteady flow generated by a circular cylinder undergoing rotary motion. By treating the rotation rate as a control variable, we can formulate two optimal control problems and use a central difference/pseudospectral transform method to numerically compute the optimal control rates. Several types of rotations are considered as potential controls, and we show that a proper synchronization of forcing frequency with the natural vortex shedding frequency can greatly influence the flow. The results here indicate that using moving boundary controls for such systems may provide a feasible mechanism for flow control.

  7. REVIEW: Widespread access to predictive models in the motor system: a short review

    NASA Astrophysics Data System (ADS)

    Davidson, Paul R.; Wolpert, Daniel M.

    2005-09-01

    Recent behavioural and computational studies suggest that access to internal predictive models of arm and object dynamics is widespread in the sensorimotor system. Several systems, including those responsible for oculomotor and skeletomotor control, perceptual processing, postural control and mental imagery, are able to access predictions of the motion of the arm. A capacity to make and use predictions of object dynamics is similarly widespread. Here, we review recent studies looking at the predictive capacity of the central nervous system which reveal pervasive access to forward models of the environment.

  8. Controllability of Surface Water Networks

    NASA Astrophysics Data System (ADS)

    Riasi, M. Sadegh; Yeghiazarian, Lilit

    2017-12-01

    To sustainably manage water resources, we must understand how to control complex networked systems. In this paper, we study surface water networks from the perspective of structural controllability, a concept that integrates classical control theory with graph-theoretic formalism. We present structural controllability theory and compute four metrics: full and target controllability, control centrality and control profile (FTCP) that collectively determine the structural boundaries of the system's control space. We use these metrics to answer the following questions: How does the structure of a surface water network affect its controllability? How to efficiently control a preselected subset of the network? Which nodes have the highest control power? What types of topological structures dominate controllability? Finally, we demonstrate the structural controllability theory in the analysis of a wide range of surface water networks, such as tributary, deltaic, and braided river systems.
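
    Full controllability of a directed network can be checked structurally via maximum matching: the unmatched nodes are the driver nodes that must receive independent inputs. The sketch below applies this standard recipe to a toy tributary-like network with networkx; it covers only the first of the four FTCP metrics, and the example network is invented.

        import networkx as nx

        def driver_nodes(g):
            # Minimum driver-node set of a directed network via maximum
            # matching on the bipartite out/in representation.
            b = nx.Graph()
            out_nodes = {f"out_{u}" for u in g}
            b.add_nodes_from(out_nodes)
            b.add_nodes_from(f"in_{v}" for v in g)
            b.add_edges_from((f"out_{u}", f"in_{v}") for u, v in g.edges)
            matching = nx.bipartite.maximum_matching(b, top_nodes=out_nodes)
            matched = {m for m in matching if m.startswith("in_")}
            return {v for v in g if f"in_{v}" not in matched}

        river = nx.DiGraph([(1, 2), (2, 3), (2, 4), (4, 5)])  # toy tributary network
        print(driver_nodes(river))   # e.g. {1, 4} or {1, 3}: two inputs suffice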

  9. Encryption for Remote Control via Internet or Intranet

    NASA Technical Reports Server (NTRS)

    Lineberger, Lewis

    2005-01-01

    A data-communication protocol has been devised to enable secure, reliable remote control of processes and equipment via a collision-based network, while using minimal bandwidth and computation. The network could be the Internet or an intranet. Control is made secure by use of both a password and a dynamic key, which is sent transparently to a remote user by the controlled computer (that is, the computer, located at the site of the equipment or process to be controlled, that exerts direct control over the process). The protocol functions in the presence of network latency, overcomes errors caused by missed dynamic keys, and defeats attempts by unauthorized remote users to gain control. The protocol is not suitable for real-time control, but is well suited for applications in which control latencies up to about 0.5 second are acceptable. The encryption scheme involves the use of both a dynamic and a private key, without any additional overhead that would degrade performance. The dynamic key is embedded in the equipment- or process-monitor data packets sent out by the controlled computer: in other words, the dynamic key is a subset of the data in each such data packet. The controlled computer maintains a history of the last 3 to 5 data packets for use in decrypting incoming control commands. In addition, the controlled computer records a private key (password) that is given to the remote computer. The encrypted incoming command is permuted by both the dynamic and private key. A person who records the command data in a given packet for hostile purposes cannot use that packet after the dynamic key expires (typically within 3 seconds). Even a person in possession of an unauthorized copy of the command/remote-display software cannot use that software in the absence of the password. The use of a dynamic key embedded in the outgoing data makes the central-processing-unit overhead very small. The use of a National Instruments DataSocket™ (or equivalent) protocol or the User Datagram Protocol makes it possible to obtain reasonably short response times: typical response times in event-driven control, using packets sized ≤300 bytes, are <0.2 second for commands issued from locations anywhere on Earth. The protocol requires that control commands represent absolute values of controlled parameters (e.g., a specified temperature), as distinguished from changes in values of controlled parameters (e.g., a specified increment of temperature). Each command is issued three or more times to ensure delivery in crowded networks. The use of absolute-value commands prevents additional (redundant) commands from causing trouble. Because a remote controlling computer receives "talkback" in the form of data packets from the controlled computer, typically within a time interval ≤1 s, the controlling computer can re-issue a command if network failure has occurred. The controlled computer, the process or equipment that it controls, and any human operator(s) at the site of the controlled equipment or process should be equipped with safety measures to prevent damage to equipment or injury to humans. These features could be a combination of software, external hardware, and intervention by the human operator(s). The protocol is not fail-safe, but by adopting these safety measures as part of the protocol, one makes the protocol a robust means of controlling remote processes and equipment by use of typical office computers via intranets and/or the Internet.
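
    A minimal sketch of the two-key idea follows: the command is masked with material derived from both the dynamic key (harvested from recent monitor packets) and the private password, and the controlled computer tries its last few dynamic keys when decrypting, which tolerates missed packets. The masking, hashing, and framing choices here are illustrative; the article does not specify the actual permutation.

        import hashlib
        from itertools import cycle

        def xor_mask(data: bytes, key: bytes) -> bytes:
            return bytes(b ^ k for b, k in zip(data, cycle(key)))

        def mask_command(command: bytes, dynamic_key: bytes, password: bytes) -> bytes:
            # Derive a keystream from both keys; XOR is its own inverse.
            keystream = hashlib.sha256(dynamic_key + password).digest()
            return xor_mask(command, keystream)

        def recover_command(blob: bytes, recent_dynamic_keys, password: bytes):
            # Try the dynamic keys from the last few outgoing packets, so a
            # missed packet does not break decryption.
            for dk in recent_dynamic_keys:
                candidate = mask_command(blob, dk, password)
                if candidate.startswith(b"CMD:"):     # assumed framing check
                    return candidate
            return None                               # reject unauthorized traffic

        blob = mask_command(b"CMD:SET_TEMP 21.5", b"dyn123", b"secret")
        print(recover_command(blob, [b"dyn999", b"dyn123"], b"secret"))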

  10. FlpStop, a tool for conditional gene control in Drosophila

    PubMed Central

    Fisher, Yvette E; Yang, Helen H; Isaacman-Beck, Jesse; Xie, Marjorie; Gohl, Daryl M; Clandinin, Thomas R

    2017-01-01

    Manipulating gene function cell type-specifically is a common experimental goal in Drosophila research and has been central to studies of neural development, circuit computation, and behavior. However, current cell type-specific gene disruption techniques in flies often reduce gene activity incompletely or rely on cell division. Here we describe FlpStop, a generalizable tool for conditional gene disruption and rescue in post-mitotic cells. In proof-of-principle experiments, we manipulated apterous, a regulator of wing development. Next, we produced conditional null alleles of Glutamic acid decarboxylase 1 (Gad1) and Resistant to dieldrin (Rdl), genes vital for GABAergic neurotransmission, as well as cacophony (cac) and paralytic (para), voltage-gated ion channels central to neuronal excitability. To demonstrate the utility of this approach, we manipulated cac in a specific visual interneuron type and discovered differential regulation of calcium signals across subcellular compartments. Thus, FlpStop will facilitate investigations into the interactions between genes, circuits, and computation. DOI: http://dx.doi.org/10.7554/eLife.22279.001 PMID:28211790

  11. Competence with Fractions Predicts Gains in Mathematics Achievement

    PubMed Central

    Bailey, Drew H.; Hoard, Mary K.; Nugent, Lara; Geary, David C.

    2012-01-01

    Competence with fractions predicts later mathematics achievement, but the co-developmental pattern between fractions knowledge and mathematics achievement is not well understood. We assessed this co-development through examination of the cross-lagged relation between a measure of conceptual knowledge of fractions and mathematics achievement in sixth and seventh grade (n = 212). The cross-lagged effects indicated that performance on the sixth grade fractions concepts measure predicted one year gains in mathematics achievement (β = .14, p<.01), controlling for the central executive component of working memory and intelligence, but sixth grade mathematics achievement did not predict gains on the fractions concepts measure (β = .03, p>.50). In a follow-up assessment, we demonstrated that measures of fluency with computational fractions significantly predicted seventh grade mathematics achievement above and beyond the influence of fluency in computational whole number arithmetic, performance on number fluency and number line tasks, and central executive span and intelligence. Results provide empirical support for the hypothesis that competence with fractions underlies, in part, subsequent gains in mathematics achievement. PMID:22832199

  12. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1999-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  13. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1998-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  14. The development and testing of a fieldworthy system of improved fluid pumping device and liquid sensor for oil wells. Fourth quarter technical progress report, 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckman, W.G.

    1991-12-31

    A major expenditure to maintain oil and gas leases is the support of pumpers, those individuals who maintain the pumping systems on wells to achieve optimum production. Many leases are marginal and are in remote areas, and this requires considerable driving time for the pumper. The Air Pulse Oil Pump System is designed to be an economical system for shallow stripper wells. To improve on the economics of this system, we have designed a Remote Oil Field Monitor and Controller to enable us to acquire data from the lease at our central office at any time and to control the pumping activities from the central office by using a personal computer. The advent and economics of low-power microcontrollers have made it feasible to use this type of system for numerous remote control systems. We can also adapt this economical system to monitor and control the production of gas wells and/or pump jacks.

  15. Perfusion characteristics of Moyamoya disease: an anatomically and clinically oriented analysis and comparison.

    PubMed

    Schubert, Gerrit Alexander; Czabanka, Marcus; Seiz, Marcel; Horn, Peter; Vajkoczy, Peter; Thomé, Claudius

    2014-01-01

    Moyamoya disease (MMD) is characterized by unique angiographic features of collateralization. However, a detailed quantification as well as comparative analysis with cerebrovascular atherosclerotic disease (CAD) and healthy controls have not been performed to date. We reviewed 67 patients with MMD undergoing Xenon-enhanced computed tomography, as well as 108 patients with CAD and 5 controls. In addition to cortical, central, and infratentorial regions of interest, particular emphasis was put on regions that are typically involved in MMD (pericallosal territory, basal ganglia). Cerebral blood flow (CBF), cerebrovascular reserve capacity (CVRC), and hemodynamic stress distribution were calculated. MMD is characterized by a significant, ubiquitous decrease in CVRC and a cortical but not pericallosal decrease in CBF when compared with controls. Baseline perfusion is maintained within the basal ganglia, and hemodynamic stress distribution confirmed a relative preservation of central regions of interest in MMD, indicative for its characteristic proximal collateralization pattern. In MMD and CAD, cortical and central CBF decreased significantly with age, whereas CVRC and hemodynamic stress distribution are relatively unaffected by age. No difference in CVRC of comparable regions of interest was seen between MMD and CAD, but stress distribution was significantly higher in MMD, illustrating the functionality of the characteristic rete mirabilis. Our data provide quantitative support for a territory-specific perfusion pattern that is unique for MMD, including central preservation of CBF compared with controls and patients with CAD. This correlates well with its characteristic feature of proximal collateralization. CVRC and hemodynamic stress distribution seem to be more robust parameters than CBF alone for assessment of disease severity.

  16. Percolation Centrality: Quantifying Graph-Theoretic Impact of Nodes during Percolation in Networks

    PubMed Central

    Piraveenan, Mahendra; Prokopenko, Mikhail; Hossain, Liaquat

    2013-01-01

    A number of centrality measures are available to determine the relative importance of a node in a complex network, and betweenness is prominent among them. However, the existing centrality measures are not adequate in network percolation scenarios (such as during infection transmission in a social network of individuals, spreading of computer viruses on computer networks, or transmission of disease over a network of towns) because they do not account for the changing percolation states of individual nodes. We propose a new measure, percolation centrality, that quantifies relative impact of nodes based on their topological connectivity, as well as their percolation states. The measure can be extended to include random walk based definitions, and its computational complexity is shown to be of the same order as that of betweenness centrality. We demonstrate the usage of percolation centrality by applying it to a canonical network as well as simulated and real world scale-free and random networks. PMID:23349699
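
    NetworkX ships a percolation_centrality implementation based on this measure, which weights shortest-path contributions by the nodes' percolation states. A small usage sketch follows; the graph and the state values are illustrative.

        import networkx as nx

        G = nx.erdos_renyi_graph(50, 0.08, seed=1)        # stand-in contact network
        # Percolation state in [0, 1] per node: a few 'infected' sources, rest low.
        states = {v: (1.0 if v in (0, 1, 2) else 0.1) for v in G}
        nx.set_node_attributes(G, states, "percolation")

        pc = nx.percolation_centrality(G)   # same complexity order as betweenness
        top = sorted(pc, key=pc.get, reverse=True)[:5]
        print("highest-impact nodes during percolation:", top)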

  17. Internet SCADA Utilizing API's as Data Source

    NASA Astrophysics Data System (ADS)

    Robles, Rosslin John; Kim, Haeng-Kon; Kim, Tai-Hoon

    An application programming interface, or API, is an interface implemented by a software program that enables it to interact with other software. Many companies provide free API services which can be utilized in control systems. SCADA is an example of a control system: it collects data from various sensors at a factory, plant or other remote locations and then sends this data to a central computer which manages and controls the data. In this paper, we designed a scheme for weather conditions in an Internet SCADA environment utilizing data from external API services. The scheme was designed to double-check the weather information in SCADA.
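
    The essence of the scheme is a cross-check of a field reading against an independent API source. A minimal sketch follows; the endpoint, response fields, and tolerance are hypothetical, not taken from the paper.

        import requests

        API_URL = "https://api.example.com/weather"   # hypothetical API endpoint

        def crosscheck_weather(local_sensor_temp_c, station_id, tolerance_c=3.0):
            # Double-check a SCADA field reading against an external weather
            # API, in the spirit of the scheme described above.
            resp = requests.get(API_URL, params={"station": station_id}, timeout=5)
            api_temp = resp.json()["temperature_c"]   # hypothetical response field
            if abs(local_sensor_temp_c - api_temp) > tolerance_c:
                return "MISMATCH - flag for operator review"
            return "OK"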

  18. Miniature Heat Pipes

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Small Business Innovation Research contracts from Goddard Space Flight Center to Thermacore Inc. have fostered the company's work on devices tagged "heat pipes" for space application. To control the extreme temperature ranges in space, heat pipes are important to spacecraft. The problem was to maintain an 8-watt central processing unit (CPU) at less than 90 C in a notebook computer using no power, with very little space available and without using forced convection. Thermacore's answer was in the design of a powder metal wick that transfers CPU heat from a tightly confined spot to an area near available air flow. The heat pipe technology permits a notebook computer to be operated in any position without loss of performance. Miniature heat pipe technology has successfully been applied, such as in Pentium processor notebook computers. The company expects its heat pipes to accommodate desktop computers as well. Cellular phones, camcorders, and other hand-held electronics are possible applications for heat pipes.

  19. Centralized Fabric Management Using Puppet, Git, and GLPI

    NASA Astrophysics Data System (ADS)

    Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William

    2012-12-01

    Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).

  20. Mesoscale Severe Weather Development under Orographic Influences

    DTIC Science & Technology

    1992-06-30

    control procedures will have to operate centrally before data transmission to the field or will have to be enacted in the field by expert meteorologists...can be depicted uniquely and recognizably on the computer screen as "icons". (E.g. in the presence of several thunderstorms, each one should be...appropriate icon at the proper forecast time and coordinate location. From the numerical forecast output (and, if necessary, from climatological or

  1. Automatic Mexican sign language and digits recognition using normalized central moments

    NASA Astrophysics Data System (ADS)

    Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina

    2016-09-01

    This work presents a framework for automatic Mexican sign language and digits recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured by a digital IP camera, four LED reflectors and a green background in order to reduce computational costs and avoid the use of special gloves. 42 normalized central moments are computed per frame and used in a Multi-Layer Perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
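
    Normalized central moments are translation- and scale-invariant shape descriptors and are directly available from OpenCV. The sketch below extracts the seven standard nu_pq moments from a segmented frame; how the paper assembles its 42 moments per frame is not detailed in the abstract, so the feature layout here is illustrative.

        import cv2
        import numpy as np

        def moment_features(frame_bgr):
            # Seven normalized central moments (nu_pq) of the thresholded hand
            # region, suitable as inputs to a multi-layer perceptron.
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            _, mask = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            m = cv2.moments(mask, binaryImage=True)
            keys = ("nu20", "nu11", "nu02", "nu30", "nu21", "nu12", "nu03")
            return np.array([m[k] for k in keys])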

  2. ACToR - Aggregated Computational Toxicology Resource

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food & Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high throughput environmental chemical screening and prioritization program called ToxCast(TM).

  3. ACToR - Aggregated Computational Toxicology Resource

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judson, Richard; Richard, Ann; Dix, David

    2008-11-15

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast(TM).

  4. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
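
    For reference, the sequential Thomas algorithm below is the baseline recurrence for tridiagonal systems; odd-even cyclic reduction reorganizes exactly this dependency chain into log-depth sweeps that a vector machine like the Cyber 205 can execute in parallel. The Python sketch is illustrative, not the report's Cyber 205 code.

        import numpy as np

        def thomas(a, b, c, d):
            # Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal,
            # d: right-hand side; a[0] and c[-1] are unused).
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                     # forward elimination
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):            # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x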

  5. An overview of selected information storage and retrieval issues in computerized document processing

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Ihebuzor, Valentine U.

    1984-01-01

    The rapid development of computerized information storage and retrieval techniques has introduced the possibility of extending the word processing concept to document processing. A major advantage of computerized document processing is relief from the tedious tasks of manual editing and composition usually encountered by traditional publishers, made possible by the immense speed and storage capacity of computers. Furthermore, computerized document processing provides an author with centralized control, the lack of which is a handicap of the traditional publishing operation. A survey of some computerized document processing techniques is presented with emphasis on related information storage and retrieval issues. String matching algorithms are considered central to document information storage and retrieval and are also discussed.
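
    As one concrete example of the string matching algorithms mentioned above, the Knuth-Morris-Pratt search below finds all occurrences of a pattern in linear time by precomputing a failure function; it is a standard textbook algorithm, offered purely as illustration.

        def kmp_search(text, pattern):
            # Build the failure function (longest proper prefix-suffix lengths).
            fail = [0] * len(pattern)
            k = 0
            for i in range(1, len(pattern)):
                while k and pattern[i] != pattern[k]:
                    k = fail[k - 1]
                if pattern[i] == pattern[k]:
                    k += 1
                fail[i] = k
            # Scan the text, reusing the failure function on mismatches.
            hits, k = [], 0
            for i, ch in enumerate(text):
                while k and ch != pattern[k]:
                    k = fail[k - 1]
                if ch == pattern[k]:
                    k += 1
                if k == len(pattern):
                    hits.append(i - k + 1)
                    k = fail[k - 1]
            return hits

        print(kmp_search("document processing and retrieval", "and"))  # [20]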

  6. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 6: Specification for EOS Central Data Processing Facility (CDPF)

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The specifications and functions of the Central Data Processing (CDPF) Facility which supports the Earth Observatory Satellite (EOS) are discussed. The CDPF will receive the EOS sensor data and spacecraft data through the Spaceflight Tracking and Data Network (STDN) and the Operations Control Center (OCC). The CDPF will process the data and produce high density digital tapes, computer compatible tapes, film and paper print images, and other data products. The specific aspects of data inputs and data processing are identified. A block diagram of the CDPF to show the data flow and interfaces of the subsystems is provided.

  7. Technical Challenges and Opportunities of Centralizing Space Science Mission Operations (SSMO) at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Ido, Haisam; Burns, Rich

    2015-01-01

    The NASA Goddard Space Science Mission Operations project (SSMO) is performing a technical cost-benefit analysis for centralizing and consolidating operations of a diverse set of missions into a unified and integrated technical infrastructure. The presentation will focus on the notion of normalizing spacecraft operations processes, workflows, and tools. It will also show the processes of creating a standardized open architecture, creating common security models and implementations, interfaces, services, automations, notifications, alerts, logging, publish, subscribe and middleware capabilities. The presentation will also discuss how to leverage traditional capabilities, along with virtualization, cloud computing services, control groups and containers, and possibly Big Data concepts.

  8. Threshold-Based Random Charging Scheme for Decentralized PEV Charging Operation in a Smart Grid.

    PubMed

    Kwon, Ojin; Kim, Pilkee; Yoon, Yong-Jin

    2016-12-26

    Smart grids have been introduced to replace conventional power distribution systems without real-time monitoring, for accommodating the future market penetration of plug-in electric vehicles (PEVs). When a large number of PEVs require simultaneous battery charging, charging coordination techniques become one of the most critical factors in optimizing the PEV charging performance and the conventional distribution system. In this case, considerable computational complexity at a central controller and exchange of real-time information among PEVs may occur. To alleviate these problems, a novel threshold-based random charging (TBRC) operation for a decentralized charging system is proposed. Using PEV charging thresholds and random access rates, the PEVs themselves can initiate the charging requests. As PEVs with a high battery state do not transmit charging requests to the central controller, the complexity of the central controller decreases due to the reduction of the charging requests. In addition, both the charging threshold and the random access rate are statistically calculated based on the average supply power of the PEV charging system and do not require a real-time update. By using the proposed TBRC, a 51% reduction of the PEV charging requests is achieved with a tolerable PEV charging degradation.

  9. Threshold-Based Random Charging Scheme for Decentralized PEV Charging Operation in a Smart Grid

    PubMed Central

    Kwon, Ojin; Kim, Pilkee; Yoon, Yong-Jin

    2016-01-01

    Smart grids have been introduced to replace conventional power distribution systems without real-time monitoring, for accommodating the future market penetration of plug-in electric vehicles (PEVs). When a large number of PEVs require simultaneous battery charging, charging coordination techniques become one of the most critical factors in optimizing the PEV charging performance and the conventional distribution system. In this case, considerable computational complexity at a central controller and exchange of real-time information among PEVs may occur. To alleviate these problems, a novel threshold-based random charging (TBRC) operation for a decentralized charging system is proposed. Using PEV charging thresholds and random access rates, the PEVs themselves can initiate the charging requests. As PEVs with a high battery state do not transmit charging requests to the central controller, the complexity of the central controller decreases due to the reduction of the charging requests. In addition, both the charging threshold and the random access rate are statistically calculated based on the average supply power of the PEV charging system and do not require a real-time update. By using the proposed TBRC, a 51% reduction of the PEV charging requests is achieved with a tolerable PEV charging degradation. PMID:28035963
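
    The decision rule each PEV runs locally in the TBRC scheme described above is tiny, which is the point: the central controller only ever hears from vehicles that are below the threshold and win the random draw. A toy simulation follows; the threshold and access-rate values are illustrative, not the statistically derived values from the paper.

        import random

        def should_request(battery_state, threshold=0.6, access_rate=0.8):
            # Transmit a charging request only when the battery state is below
            # the precomputed threshold, and then only with the given random
            # access probability.
            return battery_state < threshold and random.random() < access_rate

        random.seed(0)
        fleet = [random.uniform(0.0, 1.0) for _ in range(1000)]  # battery states
        requests = sum(should_request(s) for s in fleet)
        print(f"{requests} of {len(fleet)} PEVs contact the central controller")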

  10. An Introduction to the Industrial Applications of Microcontrollers

    NASA Astrophysics Data System (ADS)

    Carelse, Xavier F.

    A microcontroller is sometimes described as a “computer on a chip” because it contains all the features of a full computer, including a central processor, in-built clock circuitry, ROM, RAM, and input and output ports with special features such as serial communication, analogue-to-digital conversion and, more recently, signal processing. The smallest microcontroller has only eight pins, but devices with 68 pins are also being marketed. In the last five years, the prices of microcontrollers have dropped by 80%, making them among the most cost-effective components in industry. Being software-driven, microcontrollers greatly simplify the design of sophisticated instrumentation and control circuitry. They can perform the precise calculations sometimes needed for feedback in control systems and now form the basis of all intelligent embedded systems such as those required in television and VCR remote controls, microwave ovens, washing machines, etc. More than ten times as many microcontrollers as microprocessors are manufactured and sold in the world, in spite of the high profile that the latter enjoy because of the personal computer market. In Zimbabwe, extensive research is being carried out on using microcontrollers to aid the cost recovery of domestic and commercial solar installations as part of the rural electrification programme.

  11. Walking robot: A design project for undergraduate students

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The design and construction of the University of Maryland walking machine was completed during the 1989 to 1990 academic year. It was required that the machine be capable of completing a number of tasks, including walking a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear box and crank arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user-operated remote tether or the onboard computer for the execution of control commands. Absolute encoders are attached to all motors to provide the control computer with information regarding the status of the motors. Long and short range infrared sensors provide the computer with feedback information regarding the machine's position relative to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.

  12. University of Maryland walking robot: A design project for undergraduate students

    NASA Technical Reports Server (NTRS)

    Olsen, Bob; Bielec, Jim; Hartsig, Dave; Oliva, Mani; Grotheer, Phil; Hekmat, Morad; Russell, David; Tavakoli, Hossein; Young, Gary; Nave, Tom

    1990-01-01

    The design and construction required that the walking robot machine be capable of completing a number of tasks, including walking in a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear-box and crank-arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user operated remote tether or the on-board computer for the execution of control commands. Absolute encoders are attached to all motors (leg, main drive, and Bigfoot) to provide the control computer with information regarding the status of the motors (up-down motion, forward or reverse rotation). Long and short range infrared sensors provide the computer with feedback information regarding the machine's position relative to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.

  13. Spinal circuits can accommodate interaction torques during multijoint limb movements.

    PubMed

    Buhrmann, Thomas; Di Paolo, Ezequiel A

    2014-01-01

    The dynamic interaction of limb segments during movements that involve multiple joints creates torques in one joint due to motion about another. Evidence shows that such interaction torques are taken into account during the planning or control of movement in humans. Two alternative hypotheses could explain the compensation of these dynamic torques. One involves the use of internal models to centrally compute predicted interaction torques and their explicit compensation through anticipatory adjustment of descending motor commands. The alternative, based on the equilibrium-point hypothesis, claims that descending signals can be simple and related to the desired movement kinematics only, while spinal feedback mechanisms are responsible for the appropriate creation and coordination of dynamic muscle forces. Partial supporting evidence exists in each case. However, until now no model has explicitly shown, in the case of the second hypothesis, whether peripheral feedback is really sufficient on its own for coordinating the motion of several joints while at the same time accommodating intersegmental interaction torques. Here we propose a minimal computational model to examine this question. Using a biomechanics simulation of a two-joint arm controlled by spinal neural circuitry, we show for the first time that it is indeed possible for the neuromusculoskeletal system to transform simple descending control signals into muscle activation patterns that accommodate interaction forces depending on their direction and magnitude. This is achieved without the aid of any central predictive signal. Even though the model makes various simplifications and abstractions compared to the complexities involved in the control of human arm movements, the finding lends plausibility to the hypothesis that some multijoint movements can in principle be controlled even in the absence of internal models of intersegmental dynamics or learned compensatory motor signals.
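
    To make the notion of interaction torque concrete, the sketch below computes, for a standard planar two-link arm, the torque induced at the shoulder by motion about the elbow: the off-diagonal inertial coupling plus the velocity-product (Coriolis and centrifugal) terms of the textbook two-link dynamics. The mass and geometry parameters are assumed values for illustration; this is not the paper's virtual arm (VA) model.

        import numpy as np

        m2, l1, lc2, I2 = 1.0, 0.3, 0.15, 0.01   # assumed link-2 mass, lengths, inertia

        def shoulder_interaction_torque(q2, dq1, dq2, ddq2):
            """Torque felt at joint 1 due to motion about joint 2 (planar 2-link arm)."""
            coupling = (m2 * lc2**2 + I2 + m2 * l1 * lc2 * np.cos(q2)) * ddq2
            velocity = -m2 * l1 * lc2 * np.sin(q2) * (2 * dq1 * dq2 + dq2**2)
            return coupling + velocity

        # A fast elbow extension produces a shoulder torque that the controller,
        # or (per the paper's hypothesis) spinal feedback circuitry, must accommodate.
        print(shoulder_interaction_torque(q2=0.8, dq1=0.0, dq2=4.0, ddq2=20.0))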

  14. Discovery and Development of ATP-Competitive mTOR Inhibitors Using Computational Approaches.

    PubMed

    Luo, Yao; Wang, Ling

    2017-11-16

    The mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. This protein is an attractive target for new anticancer drug development. Significant progress has been made in hit discovery, lead optimization, drug candidate development and determination of the three-dimensional (3D) structure of mTOR. Computational methods have been applied to accelerate the discovery and development of mTOR inhibitors, helping to model the structure of mTOR, screen compound databases, uncover structure-activity relationships (SAR), optimize the hits, mine privileged fragments and design focused libraries. Computational approaches have also been applied to study protein-ligand interaction mechanisms and to support natural product-driven drug discovery. Herein, we survey the most recent progress in the application of computational approaches to advance the discovery and development of compounds targeting mTOR. Future directions in the discovery of new mTOR inhibitors using computational methods are also discussed.

  15. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.
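
    The three-level structure described in the abstract can be sketched schematically as follows; all class names and numeric values are hypothetical, intended only to show how commands flow down the hierarchy while control and feedback data are gathered into a single, virtually centralized store.

        class JointController:                  # first level: embedded in each joint
            def __init__(self, name):
                self.name, self.command, self.feedback = name, 0.0, 0.0
            def step(self):
                self.feedback = 0.9 * self.command   # stand-in for real actuation

        class SystemCoordinator:                # second level: coordinates the joints
            def __init__(self, joints):
                self.joints = joints
                self.datastore = {}             # virtually centralized control/feedback data
            def execute(self, setpoints):
                for joint, cmd in zip(self.joints, setpoints):
                    joint.command = cmd
                    joint.step()
                    self.datastore[joint.name] = (joint.command, joint.feedback)

        class TaskCommander:                    # third level: commands the autonomous task
            def __init__(self, coordinator):
                self.coordinator = coordinator
            def perform_task(self, setpoints):
                self.coordinator.execute(setpoints)

        arm = SystemCoordinator([JointController("joint%d" % i) for i in range(3)])
        TaskCommander(arm).perform_task([0.5, -0.2, 1.0])
        print(arm.datastore)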

  16. Automated technical validation--a real time expert system for decision support.

    PubMed

    de Graeve, J S; Cambus, J P; Gruson, A; Valdiguié, P M

    1996-04-15

    Dealing daily with various machines and various control specimens generates more data than can be processed manually. To support decision-making, we wrote software that copes with traditional QC, with patient data (mean of normals, delta check) and with criteria related to the analytical equipment (flags and alarms). Four machines (3 Ektachem 700 and 1 Hitachi 911) analysing 25 common chemical tests are controlled. Every day, three different control specimens, plus a fourth once a week (regional survey), are run on the various pieces of equipment. The data are collected on a 486 microcomputer connected to the central computer. For every parameter the standard deviation is compared with the published acceptable limits and Westgard's rules are evaluated. The mean of normals is continuously monitored. The final decision triggers either an audible alarm and a print-out of the cause of rejection or, if no alarm occurs, the daily print-out of recorded data, with or without the Levey-Jennings graphs.
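
    Two of the Westgard rules such software typically evaluates can be sketched as follows. This is a minimal illustration assuming the laboratory's established control mean and SD, not the authors' actual implementation: the 1-3s rule rejects a run if any single control result falls outside mean ± 3 SD, and the 2-2s rule rejects if two consecutive results exceed the same mean ± 2 SD limit.

        def westgard_1_3s(values, mean, sd):
            """Reject if any single control observation lies outside mean +/- 3 SD."""
            return any(abs(v - mean) > 3 * sd for v in values)

        def westgard_2_2s(values, mean, sd):
            """Reject if two consecutive observations exceed the same mean +/- 2 SD limit."""
            for prev, curr in zip(values, values[1:]):
                if prev - mean > 2 * sd and curr - mean > 2 * sd:
                    return True
                if mean - prev > 2 * sd and mean - curr > 2 * sd:
                    return True
            return False

        daily_controls = [101.0, 104.8, 105.2, 99.5]   # hypothetical control results
        mean, sd = 100.0, 2.0                          # established control limits
        if westgard_1_3s(daily_controls, mean, sd) or westgard_2_2s(daily_controls, mean, sd):
            print("QC rejected: sound alarm and print cause of rejection")
        else:
            print("QC accepted: print daily record")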

  17. The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive.

    PubMed

    Otto, A Ross; Gershman, Samuel J; Markman, Arthur B; Daw, Nathaniel D

    2013-05-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. In these accounts, a flexible but computationally expensive model-based reinforcement-learning system has been contrasted with a less flexible but more efficient model-free reinforcement-learning system. The factors governing which system controls behavior-and under what circumstances-are still unclear. Following the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrated that having human decision makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement-learning strategy. Further, we showed that, across trials, people negotiate the trade-off between the two systems dynamically as a function of concurrent executive-function demands, and people's choice latencies reflect the computational expenses of the strategy they employ. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources.

  18. The Curse of Planning: Dissecting multiple reinforcement learning systems by taxing the central executive

    PubMed Central

    Otto, A. Ross; Gershman, Samuel J.; Markman, Arthur B.; Daw, Nathaniel D.

    2013-01-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. Along these lines, a flexible but computationally expensive model-based reinforcement learning system has been contrasted with a less flexible but more efficient model-free reinforcement learning system. The factors governing which system controls behavior—and under what circumstances—are still unclear. Based on the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrate that having human decision-makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement learning strategy. Further, we show that across trials, people negotiate this tradeoff dynamically as a function of concurrent executive function demands and their choice latencies reflect the computational expenses of the strategy employed. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources. PMID:23558545

  19. The Organization and Evaluation of a Computer-Assisted, Centralized Immunization Registry.

    ERIC Educational Resources Information Center

    Loeser, Helen; And Others

    1983-01-01

    Evaluation of a computer-assisted, centralized immunization registry after one year shows that 93 percent of eligible health practitioners initially agreed to provide data and that 73 percent continue to do so. Immunization rates in audited groups have improved significantly. (GC)

  20. Brain-computer interface technology: a review of the Second International Meeting.

    PubMed

    Vaughan, Theresa M; Heetderks, William J; Trejo, Leonard J; Rymer, William Z; Weinrich, Michael; Moore, Melody M; Kübler, Andrea; Dobkin, Bruce H; Birbaumer, Niels; Donchin, Emanuel; Wolpaw, Elizabeth Winter; Wolpaw, Jonathan R

    2003-06-01

    This paper summarizes the Brain-Computer Interfaces for Communication and Control, The Second International Meeting, held in Rensselaerville, NY, in June 2002. Sponsored by the National Institutes of Health and organized by the Wadsworth Center of the New York State Department of Health, the meeting addressed current work and future plans in brain-computer interface (BCI) research. Ninety-two researchers representing 38 different research groups from the United States, Canada, Europe, and China participated. The BCIs discussed at the meeting use electroencephalographic activity recorded from the scalp or single-neuron activity recorded within cortex to control cursor movement, select letters or icons, or operate neuroprostheses. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI that recognizes the commands contained in the input and expresses them in device control. Current BCIs have maximum information transfer rates of up to 25 bits/min. Achievement of greater speed and accuracy requires improvements in signal acquisition and processing, in translation algorithms, and in user training. These improvements depend on interdisciplinary cooperation among neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective criteria for evaluating alternative methods. The practical use of BCI technology will be determined by the development of appropriate applications and identification of appropriate user groups, and will require careful attention to the needs and desires of individual users.
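
    The "translation algorithm" at the core of a BCI can be as simple as a linear map from an EEG amplitude feature to a cursor velocity. The sketch below shows such a map; the 8-12 Hz band choice, gain, and offset are assumed placeholders rather than any specific published system's parameters.

        import numpy as np

        GAIN, OFFSET = 0.8, 5.0            # assumed regression-derived constants

        def mu_band_amplitude(eeg_window, fs=256):
            """Crude 8-12 Hz amplitude estimate from one channel's sample window."""
            spectrum = np.abs(np.fft.rfft(eeg_window))
            freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
            band = (freqs >= 8) & (freqs <= 12)
            return spectrum[band].mean()

        def cursor_velocity(eeg_window):
            """Translate the EEG feature into a (vertical) cursor velocity."""
            return GAIN * (mu_band_amplitude(eeg_window) - OFFSET)

        window = np.random.randn(256)      # one second of simulated EEG
        print(cursor_velocity(window))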

  1. Central charge from adiabatic transport of cusp singularities in the quantum Hall effect

    NASA Astrophysics Data System (ADS)

    Can, Tankut

    2017-04-01

    We study quantum Hall (QH) states on a punctured Riemann sphere. We compute the Berry curvature under adiabatic motion in the moduli space in the large N limit. The Berry curvature is shown to be finite in the large N limit and controlled by the conformal dimension of the cusp singularity, a local property of the mean density. Utilizing exact sum rules obtained from a Ward identity, we show that for the Laughlin wave function, the dimension of a cusp singularity is given by the central charge, a robust geometric response coefficient in the QHE. Thus, adiabatic transport of curvature singularities can be used to determine the central charge of QH states. We also consider the effects of threaded fluxes and spin-deformed wave functions. Finally, we give a closed expression for all moments of the mean density in the integer QH state on a punctured disk.

  2. Factors affecting left ventricular synchronicity in hypertensive patients: are arterial stiffness and central blood pressures influential?

    PubMed

    Kırış, Abdulkadir; Kırış, Gülhanım; Karaman, Kayıhan; Sahin, Mürsel; Gedikli, Omer; Kaplan, Sahin; Orem, Asım; Kutlu, Merih; Kazaz, Zeynep

    2012-10-01

    Left ventricular (LV) dyssynchrony is a common finding in patients with hypertension and is associated with LV hypertrophy. Arterial stiffness (AS) and central (aortic) blood pressures play a significant role in end-organ damage, such as LV hypertrophy, caused by hypertension. The objective of this study was to investigate the relationship between AS, central blood pressures (BP) and LV dyssynchrony. Thirty-five newly diagnosed hypertensive patients and 40 controls were enrolled in the study. The entire study population underwent a comprehensive echocardiographic study including tissue synchrony imaging. The 12-segment model was used to measure the time to regional peak systolic tissue velocity (Ts) in the LV, and two dyssynchrony indices were computed. Parameters of AS, including pulse wave velocity (PWV) and augmentation index (AIx@75), and central systolic and diastolic BP were evaluated by applanation tonometry. The baseline clinical and echocardiographic parameters of both groups were similar except for their BPs. Dyssynchrony indices were prolonged in patients with hypertension as compared to the controls. The standard deviation of Ts of 12 LV segments in patients with hypertension and the controls was 48.7±18.8 vs. 25.8±13.1, respectively (p<0.001), and the maximal difference in Ts between any 2 of 12 LV segments was 143.9±52.2 for hypertension patients vs. 83.8±39.4 for controls (p<0.001). PWV (11.9±2.5 vs. 9.5±1.4, p<0.001), AIx@75 (27.4±8.3 vs. 18.3±9, p=0.009), and central systolic (147.6±20.8 vs. 105.4±11, p<0.001) and diastolic (99.8±14.4 vs. 72.8±9.5, p<0.001) pressures were higher in patients with hypertension than in the controls. In multivariable analysis, central systolic BP (β=0.496, p=0.03), LV mass index (β=0.232, p=0.027), and body mass index (β=0.308, p=0.002) were found to be independently related to dyssynchrony. Central systolic BP is an independent predictor of LV dyssynchrony, but AIx@75 did not have an independent effect on LV synchronicity in patients with newly diagnosed hypertension.

  3. Computer input and output files associated with ground-water-flow simulations of the Albuquerque Basin, central New Mexico, 1901-94, with projections to 2020; (supplement one to U.S. Geological Survey Water-resources investigations report 94-4251)

    USGS Publications Warehouse

    Kernodle, J.M.

    1996-01-01

    This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.). Output files resulting from the computer simulations are included for reference.

  4. Executive control systems in the engineering design environment

    NASA Technical Reports Server (NTRS)

    Hurst, P. W.; Pratt, T. W.

    1985-01-01

    Executive Control Systems (ECSs) are software structures for the unification of various engineering design application programs into comprehensive systems with a central user interface (uniform access) method and a data management facility. Attention is presently given to the most significant findings of a research program that examined 24 ECSs used in government and industry engineering design environments to integrate CAD/CAE application programs. Characterizations are given for the systems' major architectural components and the alternative design approaches considered in their development. Attention is given to ECS development prospects in the areas of interdisciplinary usage, standardization, knowledge utilization, and computer science technology transfer.

  5. Efficient evaluation of wireless real-time control networks.

    PubMed

    Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon

    2015-02-11

    In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multi-hop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated using the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.

  6. The jamming avoidance response in the weakly electric fish Eigenmannia

    NASA Astrophysics Data System (ADS)

    Heiligenberg, Walter

    1980-10-01

    This study analyzes the algorithm by which the animal's nervous system evaluates spatially distributed temporal patterns of electroreceptive information. The outcome of this evaluation controls the jamming avoidance response, which is a shift in the animal's electric organ discharge frequency away from similar foreign frequencies. The encoding of “behaviorally relevant” stimulus variables by electroreceptors and the central computation of their messages are investigated by combined behavioral and neurophysiological strategies.

  7. Low-cost computer mouse for the elderly or disabled in Taiwan.

    PubMed

    Chen, C-C; Chen, W-L; Chen, B-N; Shih, Y-Y; Lai, J-S; Chen, Y-L

    2014-01-01

    A mouse is an important communication interface between a human and a computer, but it is still difficult for the elderly or disabled to use. The aim was to develop a low-cost computer mouse auxiliary tool. The principal structure of the tool is an IR (infrared) array module and a Wii icon sensor module, combined with reflective tape and an SQL Server database. The design offers several benefits, including cheap hardware, fluent control, prompt response, adaptive adjustment and portability. It also carries a game module for training and evaluation, which helps trainees improve sensory awareness and concentration. Between the intervention and maintenance phases, the improvements in clicking accuracy and time on task reached statistical significance (p<0.05). The low-cost adaptive computer mouse auxiliary tool was completed during the study and verified to be inexpensive, easy to operate and adaptable. The tool is suitable for patients with physical disabilities who retain independent control of some part of their limbs: the user only needs to attach the reflective tape to an independently controlled body part to operate the tool.

  8. U.S. EPA computational toxicology programs: Central role of chemical-annotation efforts and molecular databases

    EPA Science Inventory

    EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...

  9. Magnetic Resonance Imaging of Optic Nerve Traction During Adduction in Primary Open-Angle Glaucoma With Normal Intraocular Pressure

    PubMed Central

    Demer, Joseph L.; Clark, Robert A.; Suh, Soh Youn; Giaconi, JoAnn A.; Nouri-Mahdavi, Kouros; Law, Simon K.; Bonelli, Laura; Coleman, Anne L.; Caprioli, Joseph

    2017-01-01

    Purpose We used magnetic resonance imaging (MRI) to ascertain effects of optic nerve (ON) traction in adduction, a phenomenon proposed as neuropathic in primary open-angle glaucoma (POAG). Methods Seventeen patients with POAG and maximal IOP ≤ 20 mm Hg, and 31 controls underwent MRI in central gaze and 20° to 30° abduction and adduction. Optic nerve and sheath area centroids permitted computation of midorbital lengths versus minimum paths. Results Average mean deviation (±SEM) was −8.2 ± 1.2 dB in the 15 patients with POAG having interpretable perimetry. In central gaze, ON path length in POAG was significantly more redundant (104.5 ± 0.4% of geometric minimum) than in controls (102.9 ± 0.4%, P = 2.96 × 10−4). In both groups the ON became significantly straighter in adduction (28.6 ± 0.8° in POAG, 26.8 ± 1.1° in controls) than central gaze and abduction. In adduction, the ON in POAG straightened to 102.0% ± 0.2% of minimum path length versus 104.5% ± 0.4% in central gaze (P = 5.7 × 10−7), compared with controls who straightened to 101.6% ± 0.1% from 102.9% ± 0.3% in central gaze (P = 8.7 × 10−6); and globes retracted 0.73 ± 0.09 mm in POAG, but only 0.07 ± 0.08 mm in controls (P = 8.8 × 10−7). Both effects were confirmed in age-matched controls, and remained significant after correction for significant effects of age and axial globe length (P = 0.005). Conclusions Although tethering and elongation of ON and sheath are normal in adduction, adduction is associated with abnormally great globe retraction in POAG without elevated IOP. Traction in adduction may cause mechanical overloading of the ON head and peripapillary sclera, thus contributing to or resulting from the optic neuropathy of glaucoma independent of IOP. PMID:28829843

  10. Apollo lunar descent guidance

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1974-01-01

    Apollo lunar-descent guidance transfers the Lunar Module from a near-circular orbit to touchdown, traversing a 17 deg central angle and a 15 km altitude in 11 min. A group of interactive programs in an onboard computer guides the descent, controlling attitude and the descent propulsion system throttle. A ground-based program pre-computes guidance targets. The concepts involved in this guidance are described. Explicit and implicit guidance are discussed, guidance equations are derived, and the earlier Apollo explicit equation is shown to be an inferior special case of the later implicit equation. Interactive guidance, by which the two-man crew selects a landing site in favorable terrain and directs the trajectory there, is discussed. Interactive terminal-descent guidance enables the crew to control the essentially vertical descent rate in order to land in minimum time with safe contact speed. The attitude maneuver routine uses concepts that make gimbal lock inherently impossible.
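
    To give the flavor of an explicit guidance derivation, suppose the commanded acceleration is linear in time, a(t) = c_0 + c_1 t, and must carry the current state (r, v) to the target state (r_T, v_T) in time t_go. This is a generic textbook sketch, not the Apollo implicit quartic law discussed in the paper, and gravity compensation is omitted. Matching velocity and position at t_go gives

        \begin{align*}
          v_T - v &= c_0\, t_{go} + \tfrac{1}{2} c_1 t_{go}^{2},\\
          r_T - r - v\, t_{go} &= \tfrac{1}{2} c_0 t_{go}^{2} + \tfrac{1}{6} c_1 t_{go}^{3},
        \end{align*}

    and solving the pair for the acceleration to command right now,

        \[
          a_{cmd} = c_0 = \frac{6\,(r_T - r)}{t_{go}^{2}} - \frac{4\,v + 2\,v_T}{t_{go}}.
        \]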

  11. Fusimotor control of spindle sensitivity regulates central and peripheral coding of joint angles.

    PubMed

    Lan, Ning; He, Xin

    2012-01-01

    Proprioceptive afferents from muscle spindles encode information about peripheral joint movements for the central nervous system (CNS). The sensitivity of muscle spindle is nonlinearly dependent on the activation of gamma (γ) motoneurons in the spinal cord that receives inputs from the motor cortex. How fusimotor control of spindle sensitivity affects proprioceptive coding of joint position is not clear. Furthermore, what information is carried in the fusimotor signal from the motor cortex to the muscle spindle is largely unknown. In this study, we addressed the issue of communication between the central and peripheral sensorimotor systems using a computational approach based on the virtual arm (VA) model. In simulation experiments within the operational range of joint movements, the gamma static commands (γ(s)) to the spindles of both mono-articular and bi-articular muscles were hypothesized (1) to remain constant, (2) to be modulated with joint angles linearly, and (3) to be modulated with joint angles nonlinearly. Simulation results revealed a nonlinear landscape of Ia afferent with respect to both γ(s) activation and joint angle. Among the three hypotheses, the constant and linear strategies did not yield Ia responses that matched the experimental data, and therefore, were rejected as plausible strategies of spindle sensitivity control. However, if γ(s) commands were quadratically modulated with joint angles, a robust linear relation between Ia afferents and joint angles could be obtained in both mono-articular and bi-articular muscles. With the quadratic strategy of spindle sensitivity control, γ(s) commands may serve as the CNS outputs that inform the periphery of central coding of joint angles. The results suggest that the information of joint angles may be communicated between the CNS and muscles via the descending γ(s) efferent and Ia afferent signals.
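
    The quadratic-modulation idea can be illustrated numerically: assume a toy spindle model in which the Ia rate is the gamma-static drive scaled by a sigmoidal dependence on joint angle, solve for the gamma_s schedule that would make Ia exactly linear in angle, then fit a quadratic to that schedule and report the fit error. Everything here (the sigmoid, the constants) is an assumed toy, not the paper's virtual arm (VA) model.

        import numpy as np

        theta = np.linspace(-1.0, 1.0, 200)            # joint angle (rad)
        sigmoid = 1.0 / (1.0 + np.exp(-3.0 * theta))   # assumed spindle nonlinearity

        target_Ia = 0.5 * theta + 1.0                  # desired linear Ia coding
        gamma_required = target_Ia / sigmoid           # gamma_s schedule achieving it

        coeffs = np.polyfit(theta, gamma_required, 2)  # quadratic fit of gamma_s(theta)
        max_err = np.max(np.abs(np.polyval(coeffs, theta) - gamma_required))
        print("quadratic gamma_s coefficients:", coeffs, "max fit error:", max_err)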

  12. Advanced Map For Real-Time Process Control

    NASA Astrophysics Data System (ADS)

    Shiobara, Yasuhisa; Matsudaira, Takayuki; Sashida, Yoshio; Chikuma, Makoto

    1987-10-01

    MAP, a communications protocol for factory automation proposed by General Motors [1], has been accepted by users throughout the world and is rapidly becoming a user standard. In fact, it is now a LAN standard for factory automation. MAP is intended to interconnect different devices, such as computers and programmable devices, made by different manufacturers, enabling them to exchange information. It is based on the OSI intercomputer communications protocol standard under development by the ISO. With progress and standardization, MAP is being investigated for application to process control fields other than factory automation [2]. The transmission response time of the network system and centralized management of data exchanged with various devices for distributed control are important in the case of real-time process control with programmable controllers, computers, and instruments connected to a LAN system. MAP/EPA and MINI MAP aim at reduced overhead in protocol processing and enhanced transmission response. If applied to real-time process control, a protocol based on point-to-point and request-response transactions limits throughput and transmission response. This paper describes an advanced MAP LAN system applied to real-time process control by adding a new data transmission control that performs multicasting communication voluntarily and periodically in the priority order of the data to be exchanged.

  13. Distributed Computing with Centralized Support Works at Brigham Young.

    ERIC Educational Resources Information Center

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  14. GBOOST: a GPU-based tool for detecting gene-gene interactions in genome-wide case control studies.

    PubMed

    Yung, Ling Sing; Yang, Can; Wan, Xiang; Yu, Weichuan

    2011-05-01

    Collecting millions of genetic variations is feasible with the advanced genotyping technology. With a huge amount of genetic variation data in hand, developing efficient algorithms to carry out gene-gene interaction analysis in a timely manner has become one of the key problems in genome-wide association studies (GWAS). Boolean operation-based screening and testing (BOOST), a recent work in GWAS, completes gene-gene interaction analysis in 2.5 days on a desktop computer. Compared with central processing units (CPUs), graphics processing units (GPUs) are highly parallel hardware and provide massive computing resources. We are, therefore, motivated to use GPUs to further speed up the analysis of gene-gene interactions. We implement the BOOST method based on a GPU framework and name it GBOOST. GBOOST achieves a 40-fold speedup compared with BOOST. It completes the analysis of the Wellcome Trust Case Control Consortium Type 2 Diabetes (WTCCC T2D) genome data within 1.34 h on a desktop computer equipped with an Nvidia GeForce GTX 285 display card. GBOOST code is available at http://bioinformatics.ust.hk/BOOST.html#GBOOST.
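
    A conceptual sketch of the Boolean encoding behind BOOST/GBOOST: genotypes are packed into bitstrings per (SNP, genotype class), so the joint counts needed for a pairwise interaction test reduce to AND plus popcount. Shown in plain Python for clarity; BOOST and GBOOST perform the equivalent with CPU and GPU bitwise operations, and the data here are random stand-ins.

        import random

        n_subjects = 64
        snp_a = [random.randint(0, 2) for _ in range(n_subjects)]  # genotypes 0/1/2
        snp_b = [random.randint(0, 2) for _ in range(n_subjects)]

        def pack(genotypes, g):
            """Bitstring with bit i set iff subject i carries genotype class g."""
            bits = 0
            for i, x in enumerate(genotypes):
                if x == g:
                    bits |= 1 << i
            return bits

        # 3x3 contingency table of joint genotype counts via AND + popcount.
        table = [[bin(pack(snp_a, ga) & pack(snp_b, gb)).count("1")
                  for gb in range(3)] for ga in range(3)]
        print(table)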

  15. Design of the central region in the Gustaf Werner cyclotron at the Uppsala university

    NASA Astrophysics Data System (ADS)

    Toprek, Dragan; Reistad, Dag; Lundstrom, Bengt; Wessman, Dan

    2002-07-01

    This paper describes the design of the central region in the Gustaf Werner cyclotron for h=1, 2 and 3 modes of acceleration. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The geometry of the central region has been tested with the computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively.

  16. Cloud computing in pharmaceutical R&D: business risks and mitigations.

    PubMed

    Geiger, Karl

    2010-05-01

    Cloud computing provides information processing power and business services, delivering these services over the Internet from centrally hosted locations. Major technology corporations aim to supply these services to every sector of the economy. Deploying business processes 'in the cloud' requires special attention to the regulatory and business risks assumed when running on both hardware and software that are outside the direct control of a company. The identification of risks at the correct service level allows a good mitigation strategy to be selected. The pharmaceutical industry can take advantage of existing risk management strategies that have already been tested in the finance and electronic commerce sectors. In this review, the business risks associated with the use of cloud computing are discussed, and mitigations achieved through knowledge from securing services for electronic commerce and from good IT practice are highlighted.

  17. On-line data analysis and monitoring for H1 drift chambers

    NASA Astrophysics Data System (ADS)

    Düllmann, Dirk

    1992-05-01

    The on-line monitoring, slow control and calibration of the H1 central jet chamber uses a VME multiprocessor system to perform the analysis and a connected Macintosh computer as a graphical interface for the operator on shift. Tasks of this system are: - analysis of event data including on-line track search, - on-line calibration from normal events and testpulse events, - control of the high voltage and monitoring of settings and currents, - monitoring of temperature, pressure and mixture of the chamber gas. A program package is described which controls the dataflow between data acquisition, different VME CPUs and the Macintosh. It allows off-line-style programs to be run for the different tasks.

  18. Mission Control Center (MCC) System Specification for the Shuttle Orbital Flight Test (OFT) Timeframe

    NASA Technical Reports Server (NTRS)

    1976-01-01

    System specifications to be used by the mission control center (MCC) for the shuttle orbital flight test (OFT) time frame are described. The three support systems discussed are the communication interface system (CIS), the data computation complex (DCC), and the display and control system (DCS), all of which may interface with, and share processing facilities with, other applications processing supporting current MCC programs. The MCC shall provide centralized control of the space shuttle OFT from launch through orbital flight, entry, and landing until the Orbiter comes to a stop on the runway. This control shall include the functions of vehicle management in the areas of hardware configuration (verification), flight planning, communication and instrumentation configuration management, trajectory, software and consumables, payloads management, flight safety, and verification of test conditions/environment.

  19. Low-profile heliostat design for solar central receiver systems

    NASA Technical Reports Server (NTRS)

    Fourakis, E.; Severson, A. M.

    1977-01-01

    Heliostat designs intended to reduce costs and the effect of adverse wind loads on the devices were developed. Included was the low-profile heliostat consisting of a stiff frame with sectional focusing reflectors coupled together to turn as a unit. The entire frame is arranged to turn angularly about a center point. The ability of the heliostat to rotate about both the vertical and horizontal axes permits a central computer control system to continuously aim the sun's reflection onto a selected target. An engineering model of the basic device was built and is being tested. Control and mirror parameters, such as roughness and need for fine aiming, are being studied. The fabrication of these prototypes is in process. The model was also designed to test mirror focusing techniques, heliostat geometry, mechanical functioning, and tracking control. The model can be easily relocated to test mirror imaging on a tower from various directions. In addition to steering and aiming studies, the tests include the effects of temperature changes, wind gusting and weathering. The results of economic studies on this heliostat are also presented.
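
    The aiming computation behind such control is compact: to reflect the sun onto a fixed receiver, the mirror normal must bisect the unit vectors toward the sun and toward the target. A minimal sketch, with arbitrary assumed direction vectors, followed by a check against the reflection law:

        import numpy as np

        def unit(v):
            return v / np.linalg.norm(v)

        sun_dir = unit(np.array([0.3, -0.5, 0.8]))      # mirror -> sun (assumed)
        target_dir = unit(np.array([-0.6, 0.2, 0.75]))  # mirror -> receiver (assumed)

        normal = unit(sun_dir + target_dir)             # bisector = required mirror normal

        # Reflection law check: a ray arriving from the sun leaves toward the target.
        incident = -sun_dir
        reflected = incident - 2 * np.dot(incident, normal) * normal
        print("reflected:", reflected)
        print("target:   ", target_dir)                 # the two should match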

  20. Adaptive control of robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The author presents a novel approach to adaptive control of manipulators to achieve trajectory tracking by the joint angles. The central concept in this approach is the utilization of the manipulator inverse as a feedforward controller. The desired trajectory is applied as an input to the feedforward controller which behaves as the inverse of the manipulator at any operating point; the controller output is used as the driving torque for the manipulator. The controller gains are then updated by an adaptation algorithm derived from MRAC (model reference adaptive control) theory to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal are also used to enhance closed-loop stability and to achieve faster adaptation. The proposed control scheme is computationally fast and does not require a priori knowledge of the complex dynamic model or the parameter values of the manipulator or the payload.
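
    A single-joint caricature of the scheme can be sketched as follows: an adaptive feedforward controller estimates the plant's inverse parameters (inertia and damping), updates them with a gradient (MIT-rule-like) law driven by tracking error, and runs alongside a fixed PD feedback term. All gains and plant values are assumed for illustration; this is not the paper's full MRAC derivation.

        import numpy as np

        m_true, c_true = 2.0, 0.5        # unknown 1-DOF plant: m*qdd + c*qd = tau
        m_hat, c_hat = 1.0, 0.0          # feedforward (inverse-model) estimates
        kp, kd, gamma = 50.0, 10.0, 1.0  # feedback and adaptation gains
        dt = 0.001

        q = qd = 0.0
        for step in range(60000):        # 60 s of simulated motion
            t = step * dt
            q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)

            e, ed = q_des - q, qd_des - qd
            tau = m_hat * qdd_des + c_hat * qd_des + kp * e + kd * ed

            qdd = (tau - c_true * qd) / m_true   # plant response (explicit Euler)
            qd += qdd * dt
            q += qd * dt

            s = ed + 5.0 * e                     # composite tracking error
            m_hat += gamma * s * qdd_des * dt    # gradient (MIT-rule-like) updates
            c_hat += gamma * s * qd_des * dt

        # Under the persistent excitation of the sinusoid, the estimates should
        # drift toward the true plant values.
        print("estimates m_hat=%.2f, c_hat=%.2f (true 2.0, 0.5)" % (m_hat, c_hat))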

  1. Scale Space for Camera Invariant Features.

    PubMed

    Puig, Luis; Guerrero, José J; Daniilidis, Kostas

    2014-09-01

    In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
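
    For reference, the coordinate expression of the Laplace-Beltrami operator on a manifold with metric g, and the scale-space (heat) equation it drives, are standard; these are generic definitions, not the paper's specific camera-induced metric:

        \[
          \Delta_g f = \frac{1}{\sqrt{|g|}}\,\partial_i\!\left(\sqrt{|g|}\,g^{ij}\,\partial_j f\right),
          \qquad
          \frac{\partial u}{\partial t} = \Delta_g u, \quad u(\cdot, 0) = I,
        \]

    where |g| is the determinant of the metric, g^{ij} its inverse, and I the input image.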

  2. An Infrastructure to Enable Lightweight Context-Awareness for Mobile Users

    PubMed Central

    Curiel, Pablo; Lago, Ana B.

    2013-01-01

    Mobile phones enable us to carry out a wider range of tasks every day, and as a result they have become more ubiquitous than ever. However, they are still more limited in terms of processing power and interaction capabilities than traditional computers, and the often distracting and time-constricted scenarios in which we use them do not help in alleviating these limitations. Context-awareness is a valuable technique to address these issues, as it enables applications to adapt their behaviour to each situation. In this paper we present a context management infrastructure for mobile environments, aimed at controlling the context information life-cycle in these kinds of scenarios, with the main goal of enabling applications and services to adapt their behaviour to better meet end-user needs. This infrastructure relies on semantic technologies and open standards to improve interoperability, and is based on a central element, the context manager. This element acts as a central context repository and takes on most of the computational burden derived from dealing with this kind of information, thus relieving the more resource-scarce devices in the system of these tasks. PMID:23899932

  3. Computer simulation of preflight blood volume reduction as a countermeasure to fluid shifts in space flight

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R.; Charles, J. B.

    1992-01-01

    Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses resulting in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six degree head-down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head-down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head-down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.

  4. Competence with fractions predicts gains in mathematics achievement.

    PubMed

    Bailey, Drew H; Hoard, Mary K; Nugent, Lara; Geary, David C

    2012-11-01

    Competence with fractions predicts later mathematics achievement, but the codevelopmental pattern between fractions knowledge and mathematics achievement is not well understood. We assessed this codevelopment through examination of the cross-lagged relation between a measure of conceptual knowledge of fractions and mathematics achievement in sixth and seventh grades (N=212). The cross-lagged effects indicated that performance on the sixth grade fractions concepts measure predicted 1-year gains in mathematics achievement (β=.14, p<.01), controlling for the central executive component of working memory and intelligence, but sixth grade mathematics achievement did not predict gains on the fractions concepts measure (β=.03, p>.50). In a follow-up assessment, we demonstrated that measures of fluency with computational fractions significantly predicted seventh grade mathematics achievement above and beyond the influence of fluency in computational whole number arithmetic, performance on number fluency and number line tasks, central executive span, and intelligence. Results provide empirical support for the hypothesis that competence with fractions underlies, in part, subsequent gains in mathematics achievement.

  5. Nonsomatotopic organization of the higher motor centers in octopus.

    PubMed

    Zullo, Letizia; Sumbre, German; Agnisola, Claudio; Flash, Tamar; Hochner, Binyamin

    2009-10-13

    Hyperredundant limbs with a virtually unlimited number of degrees of freedom (DOFs) pose a challenge for both biological and computational systems of motor control. In the flexible arms of the octopus, simplification strategies have evolved to reduce the number of controlled DOFs. Motor control in the octopus nervous system is hierarchically organized. A relatively small central brain integrates a huge amount of visual and tactile information from the large optic lobes and the peripheral nervous system of the arms and issues commands to lower motor centers controlling the elaborated neuromuscular system of the arms. This unique organization raises new questions on the organization of the octopus brain and whether and how it represents the rich movement repertoire. We developed a method of brain microstimulation in freely behaving animals and stimulated the higher motor centers, the basal lobes, thus inducing discrete and complex sets of movements. As stimulation strength increased, complex movements were recruited from basic components shared by different types of movement. We found no stimulation site where movements of a single arm or body part could be elicited. Discrete and complex components have no central topographical organization but are distributed over wide regions.

  6. Inter-hemispheric functional connectivity disruption in children with prenatal alcohol exposure

    PubMed Central

    Wozniak, Jeffrey R.; Mueller, Bryon A.; Muetzel, Ryan L.; Bell, Christopher J.; Hoecker, Heather L.; Nelson, Miranda L.; Chang, Pi-Nian; Lim, Kelvin O.

    2010-01-01

    Background MRI studies, including recent diffusion tensor imaging (DTI) studies, have shown corpus callosum abnormalities in children prenatally exposed to alcohol, especially in the posterior regions. These abnormalities appear across the range of Fetal Alcohol Spectrum Disorders (FASD). Several studies have demonstrated cognitive correlates of callosal abnormalities in FASD, including deficits in visual-motor skill, verbal learning, and executive functioning. The goal of this study was to determine if inter-hemispheric structural connectivity abnormalities in FASD are associated with disrupted inter-hemispheric functional connectivity and disrupted cognition. Methods Twenty-one children with FASD and 23 matched controls underwent a six-minute resting-state functional MRI scan as well as anatomical imaging and DTI. Using a semiautomated method, we parsed the corpus callosum and delineated seven inter-hemispheric white matter tracts with DTI tractography. Cortical regions of interest (ROIs) at the distal ends of these tracts were identified. Right-left correlations in resting fMRI signal were computed for these sets of ROIs and group comparisons were done. Correlations with facial dysmorphology, cognition, and DTI measures were computed. Results A significant group difference in inter-hemispheric functional connectivity was seen in a posterior set of ROIs, the para-central region. Children with FASD had functional connectivity that was 12% lower than controls in this region. Sub-group analyses were not possible due to small sample size, but the data suggest that there were effects across the FASD spectrum. No significant association with facial dysmorphology was found. Para-central functional connectivity was significantly correlated with DTI mean diffusivity, a measure of microstructural integrity, in posterior callosal tracts in controls but not in FASD. Significant correlations were seen between these structural and functional measures and Wechsler perceptual reasoning ability. Conclusions Inter-hemispheric functional connectivity disturbances were observed in children with FASD relative to controls. The disruption was measured in medial parietal regions (para-central) that are connected by posterior callosal fiber projections. We have previously shown microstructural abnormalities in these same posterior callosal regions and the current study suggests a possible relationship between the two. These measures have clinical relevance as they are associated with cognitive functioning. PMID:21303384

  7. Maximal venous outflow velocity: an index for iliac vein obstruction.

    PubMed

    Jones, T Matthew; Cassada, David C; Heidel, R Eric; Grandas, Oscar G; Stevens, Scott L; Freeman, Michael B; Edmondson, James D; Goldman, Mitchell H

    2012-11-01

    Leg swelling is a common cause for vascular surgical evaluation, and iliocaval obstruction due to May-Thurner syndrome (MTS) can be difficult to diagnose. Physical examination and planar radiographic imaging give anatomic information but may miss the fundamental pathophysiology of MTS. Similarly, duplex ultrasonographic examination of the legs gives little information about central impedance of venous return above the inguinal ligament. We have modified the technique of duplex ultrasonography to evaluate the flow characteristics of the leg after tourniquet-induced venous engorgement, with the objective of revealing iliocaval obstruction characteristic of MTS. Twelve patients with signs and symptoms of MTS were compared with healthy control subjects for duplex-derived maximal venous outflow velocity (MVOV) after tourniquet-induced venous engorgement of the leg. The data for healthy control subjects were obtained from a previous study of asymptomatic volunteers using the same MVOV maneuvers. The tourniquet-induced venous engorgement mimics that caused during vigorous exercise. A right-to-left ratio of MVOV was generated for patient comparisons. Patients with clinical evidence of MTS had a mean right-to-left MVOV ratio of 2.0, asymptomatic control subjects had a mean ratio of 1.3, and MTS patients who had undergone endovascular treatment had a poststent mean ratio of 1.2 (P = 0.011). Interestingly, computed tomography and magnetic resonance imaging results, when available, were interpreted as positive in only 53% of the patients with MTS according to both our MVOV criteria and confirmatory venography. After intervention, the right-to-left MVOV ratio in the MTS patients was reduced to a level similar to that of asymptomatic control subjects, indicating relief of central venous obstruction by stenting the compressive MTS anatomy. Duplex-derived MVOV measurements are helpful for detection of iliocaval venous obstruction, such as MTS. Right-to-left MVOV ratios and postengorgement spectral analysis are helpful adjuncts to duplex imaging for leg swelling. The MVOV maneuvers are well tolerated by patients and yield physiological data regarding central venous obstruction that computed tomography and magnetic resonance imaging fail to detect.

  8. Economics of Computing: The Case of Centralized Network File Servers.

    ERIC Educational Resources Information Center

    Solomon, Martin B.

    1994-01-01

    Discusses computer networking and the cost effectiveness of decentralization, including local area networks. A planned experiment with a centralized approach to the operation and management of file servers at the University of South Carolina is described that hopes to realize cost savings and the avoidance of staffing problems. (Contains four…

  9. The human factors of workstation telepresence

    NASA Technical Reports Server (NTRS)

    Smith, Thomas J.; Smith, Karl U.

    1990-01-01

    The term workstation telepresence has been introduced to describe human-telerobot compliance, which enables the human operator to effectively project his/her body image and behavioral skills to control of the telerobot itself. Major human-factors considerations for establishing high fidelity workstation telepresence during human-telerobot operation are discussed. Telerobot workstation telepresence is defined by the proficiency and skill with which the operator is able to control sensory feedback from direct interaction with the workstation itself, and from workstation-mediated interaction with the telerobot. Numerous conditions influencing such control have been identified. This raises the question as to what specific factors most critically influence the realization of high fidelity workstation telepresence. The thesis advanced here is that perturbations in sensory feedback represent a major source of variability in human performance during interactive telerobot operation. Perturbed sensory feedback research over the past three decades has established that spatial transformations or temporal delays in sensory feedback engender substantial decrements in interactive task performance, which training does not completely overcome. A recently developed social cybernetic model of human-computer interaction, based on computer-mediated tracking and control of sensory feedback, can be used to guide this approach. How the social cybernetic model can be employed to evaluate the various modes, patterns, and integrations of interpersonal, team, and human-computer interactions that play a central role in workstation telepresence is discussed.

  10. Cognitive Control Reflects Context Monitoring, Not Motoric Stopping, in Response Inhibition

    PubMed Central

    Chatham, Christopher H.; Claus, Eric D.; Kim, Albert; Curran, Tim; Banich, Marie T.; Munakata, Yuko

    2012-01-01

    The inhibition of unwanted behaviors is considered an effortful and controlled ability. However, inhibition also requires the detection of contexts indicating that old behaviors may be inappropriate – in other words, inhibition requires the ability to monitor context in the service of goals, which we refer to as context-monitoring. Using behavioral, neuroimaging, electrophysiological and computational approaches, we tested whether motoric stopping per se is the cognitively-controlled process supporting response inhibition, or whether context-monitoring may fill this role. Our results demonstrate that inhibition does not require control mechanisms beyond those involved in context-monitoring, and that such control mechanisms are the same regardless of stopping demands. These results challenge dominant accounts of inhibitory control, which posit that motoric stopping is the cognitively-controlled process of response inhibition, and clarify emerging debates on the frontal substrates of response inhibition by replacing the centrality of controlled mechanisms for motoric stopping with context-monitoring. PMID:22384038

  11. Development of monitoring and control system for a mine main fan based on frequency converter

    NASA Astrophysics Data System (ADS)

    Zhang, Y. C.; Zhang, R. W.; Kong, X. Z.; Y Gong, J.; Chen, Q. G.

    2013-12-01

    In the process of mine exploitation, the required air flow rate often changes. The traditional fan control procedure is complex, and it is hard to meet the worksite air requirement. The system described here is based on a Principal Computer (PC) monitoring system and a high-performance PLC control system. A frequency converter adjusts the fan speed so that the worksite air supply can be regulated steplessly. The functions of the monitoring and control system include on-line monitoring and centralized control: the system monitors fan parameters in real time and controls the operation of the frequency converter, the fan, and its accessory equipment. The automation level of the system is high, and field equipment can be monitored and controlled automatically. The system is thus an important safeguard for mine production.
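
    The stepless regulation described above amounts to a feedback loop: measure airflow, compare it with the worksite demand, and nudge the frequency converter's output frequency. Below is a minimal sketch of such a loop, a velocity-form PI controller with assumed gains and a toy fan model, not the deployed PLC program.

        KP, KI = 0.05, 0.1
        F_MIN, F_MAX = 15.0, 50.0      # converter output frequency limits (Hz), assumed
        DT = 1.0                       # control period (s)

        def fan_airflow(freq_hz):
            """Stand-in plant: airflow roughly proportional to fan speed."""
            return 2.4 * freq_hz       # assumed m^3/s per Hz

        setpoint = 90.0                # demanded worksite airflow (m^3/s)
        freq, prev_error = 30.0, None
        for _ in range(40):
            error = setpoint - fan_airflow(freq)
            if prev_error is None:
                prev_error = error
            # Incremental (velocity-form) PI update of the converter frequency.
            freq += KP * (error - prev_error) + KI * error * DT
            freq = max(F_MIN, min(F_MAX, freq))
            prev_error = error
        print("frequency %.1f Hz -> airflow %.1f m^3/s" % (freq, fan_airflow(freq)))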

  12. Design of the central region in the Warsaw K-160 cyclotron

    NASA Astrophysics Data System (ADS)

    Toprek, Dragan; Sura, Josef; Choinski, Jaroslav; Czosnyka, Tomas

    2001-08-01

    This paper describes the design of the central region for h=2 and 3 modes of acceleration in the Warsaw K-160 cyclotron. The central region is unique and compatible with the two above-mentioned harmonic modes of operation. Only one spiral type inflector will be used. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The geometry of the central region has been tested with the computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively.

  13. Protocols for Handling Messages Between Simulation Computers

    NASA Technical Reports Server (NTRS)

    Balcerowski, John P.; Dunnam, Milton

    2006-01-01

    Practical Simulator Network (PSimNet) is a set of data-communication protocols designed especially for use in handling messages between computers that are engaging cooperatively in real-time or nearly-real-time training simulations. In a typical application, computers that provide individualized training at widely dispersed locations would communicate, by use of PSimNet, with a central host computer that would provide a common computational-simulation environment and common data. Originally intended for use in supporting interfaces between training computers and computers that simulate the responses of spacecraft scientific payloads, PSimNet could be especially well suited for a variety of other applications -- for example, group automobile-driver training in a classroom. Another potential application might lie in networking of automobile-diagnostic computers at repair facilities to a central computer that would compile the expertise of numerous technicians and engineers and act as an expert consulting technician.
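
    The summary does not publish PSimNet's wire format, so the sketch below only illustrates the general pattern it describes: training computers exchanging framed messages with a central simulation host over TCP. The 8-byte header (message type plus payload length) and the message-type constant are assumptions for the example, not the actual protocol.

      import socket
      import struct

      HEADER = struct.Struct("!II")   # hypothetical framing: (msg_type, payload_len)
      MSG_STATE_UPDATE = 1            # illustrative message type

      def send_message(sock, msg_type, payload: bytes):
          """Frame and send one message: fixed header followed by payload."""
          sock.sendall(HEADER.pack(msg_type, len(payload)) + payload)

      def recv_exact(sock, n):
          """Read exactly n bytes or raise on disconnect."""
          buf = b""
          while len(buf) < n:
              chunk = sock.recv(n - len(buf))
              if not chunk:
                  raise ConnectionError("peer closed connection")
              buf += chunk
          return buf

      def recv_message(sock):
          msg_type, length = HEADER.unpack(recv_exact(sock, HEADER.size))
          return msg_type, recv_exact(sock, length)

      # A training computer would connect to the central simulation host:
      # sock = socket.create_connection(("sim-host.example", 9000))
      # send_message(sock, MSG_STATE_UPDATE, b"vehicle=42;speed=12.5")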

  14. A Modelica-based Model Library for Building Energy and Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael

    2009-04-07

    This paper describes an open-source library with component models for building energy and control systems that is based on Modelica, an equation-based, object-oriented language that is well positioned to become the standard for modeling of dynamic systems in various industrial sectors. The library is currently developed to support computational science and engineering for innovative building energy and control systems. Early applications will include controls design and analysis, rapid prototyping to support innovation of new building systems and the use of models during operation for controls, fault detection and diagnostics. This paper discusses the motivation for selecting an equation-based object-oriented language. It presents the architecture of the library and explains how base models can be used to rapidly implement new models. To demonstrate the capability of analyzing novel energy and control systems, the paper closes with an example where we compare the dynamic performance of a conventional hydronic heating system with thermostatic radiator valves to an innovative heating system. In the new system, instead of a centralized circulation pump, each of the 18 radiators has a pump whose speed is controlled using a room temperature feedback loop, and the temperature of the boiler is controlled based on the speed of the radiator pump. All flows are computed by solving for the pressure distribution in the piping network, and the controls include continuous and discrete time controls.

  15. Computational Models and Emergent Properties of Respiratory Neural Networks

    PubMed Central

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  16. The impact of goal-oriented task design on neurofeedback learning for brain-computer interface control.

    PubMed

    McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T

    2018-02-01

    Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.

  17. A computer-assisted data collection system for use in a multicenter study of American Indians and Alaska Natives: SCAPES.

    PubMed

    Edwards, Roger L; Edwards, Sandra L; Bryner, James; Cunningham, Kelly; Rogers, Amy; Slattery, Martha L

    2008-04-01

    We describe a computer-assisted data collection system developed for a multicenter cohort study of American Indian and Alaska Native people. The study computer-assisted participant evaluation system, or SCAPES, is built around a central database server that controls a small private network with touch-screen workstations. SCAPES encompasses the self-administered questionnaires, the keyboard-based stations for interviewer-administered questionnaires, a system for inputting medical measurements, and administrative tasks such as data exporting, backup and management. Elements of SCAPES hardware/network design, data storage, programming language, software choices, questionnaire programming including the programming of questionnaires administered using audio computer-assisted self-interviewing (ACASI), and the participant identification/data security system are presented. Unique features of SCAPES are that data are promptly made available to participants in the form of health feedback; data can be quickly summarized for tribes for health monitoring and planning at the community level; and data are available to study investigators for analyses and scientific evaluation.

  18. Holonic Rationale and Bio-inspiration on Design of Complex Emergent and Evolvable Systems

    NASA Astrophysics Data System (ADS)

    Leitao, Paulo

    Traditional centralized and rigid control structures are becoming inflexible to face the requirements of reconfigurability, responsiveness and robustness, imposed by customer demands in the current global economy. The Holonic Manufacturing Systems (HMS) paradigm, which was pointed out as a suitable solution to face these requirements, translates the concepts inherited from social organizations and biology to the manufacturing world. It offers an alternative way of designing adaptive systems where the traditional centralized control is replaced by decentralization over distributed and autonomous entities organized in hierarchical structures formed by intermediate stable forms. In spite of its enormous potential, methods regarding the self-adaptation and self-organization of complex systems are still missing. This paper discusses how the insights from biology in connection with new fields of computer science can be useful to enhance the holonic design aiming to achieve more self-adaptive and evolvable systems. Special attention is devoted to the discussion of emergent behavior and self-organization concepts, and the way they can be combined with the holonic rationale.

  19. Brain-Computer Interfaces for 1-D and 2-D Cursor Control: Designs Using Volitional Control of the EEG Spectrum or Steady-State Visual Evoked Potentials

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Matthews, Bryan; Rosipal, Roman

    2005-01-01

    We have developed and tested two EEG-based brain-computer interfaces (BCI) for users to control a cursor on a computer display. Our system uses an adaptive algorithm, based on kernel partial least squares classification (KPLS), to associate patterns in multichannel EEG frequency spectra with cursor controls. Our first BCI, Target Practice, is a system for one-dimensional device control, in which participants use biofeedback to learn voluntary control of their EEG spectra. Target Practice uses a KPLS classifier to map power spectra of 30-electrode EEG signals to rightward or leftward position of a moving cursor on a computer display. Three subjects learned to control motion of a cursor on a video display in multiple blocks of 60 trials over periods of up to six weeks. The best subject's average skill in correct selection of the cursor direction grew from 58% to 88% after 13 training sessions. Target Practice also implements online control of two artifact sources: a) removal of ocular artifact by linear subtraction of wavelet-smoothed vertical and horizontal EOG signals, b) control of muscle artifact by inhibition of BCI training during periods of relatively high power in the 40-64 Hz band. The second BCI, Think Pointer, is a system for two-dimensional cursor control. Steady-state visual evoked potentials (SSVEP) are triggered by four flickering checkerboard stimuli located in narrow strips at each edge of the display. The user attends to one of the four beacons to initiate motion in the desired direction. The SSVEP signals are recorded from eight electrodes located over the occipital region. A KPLS classifier is individually calibrated to map multichannel frequency bands of the SSVEP signals to right-left or up-down motion of a cursor on a computer display. The display stops moving when the user attends to a central fixation point. As for Target Practice, Think Pointer also implements wavelet-based online removal of ocular artifact; however, in Think Pointer muscle artifact is controlled via adaptive normalization of the SSVEP. Training of the classifier requires about three minutes. We have tested our system in real-time operation in three human subjects. Across subjects and sessions, control accuracy ranged from 80% to 100% correct with lags of 1-5 seconds for movement initiation and turning.
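
    A minimal sketch of the KPLS classification step, assuming the common kernel-PLS formulation in which ordinary PLS regression is applied to a kernel matrix computed over the trials (the data, kernel width, and component count below are placeholders, and this is not the authors' code):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(120, 30))   # 120 trials x 30-channel spectral features
      y_train = np.sign(X_train[:, 0] + 0.5 * rng.normal(size=120))  # toy labels: +/-1

      # Kernel PLS approximated as PLS regression on a trial-by-trial kernel matrix
      K_train = rbf_kernel(X_train, X_train, gamma=0.05)
      kpls = PLSRegression(n_components=5).fit(K_train, y_train)

      X_new = rng.normal(size=(10, 30))
      K_new = rbf_kernel(X_new, X_train, gamma=0.05)  # kernel against training trials
      cursor_direction = np.sign(kpls.predict(K_new).ravel())  # +1 = right, -1 = left
      print(cursor_direction)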

  20. Distributed neural control of a hexapod walking vehicle

    NASA Technical Reports Server (NTRS)

    Beer, R. D.; Sterling, L. S.; Quinn, R. D.; Chiel, H. J.; Ritzmann, R.

    1989-01-01

    There has been a long-standing interest in the design of controllers for multilegged vehicles. The approach taken here is to apply distributed control to this problem, rather than using parallel computing of a centralized algorithm. The researchers describe a distributed neural network controller for hexapod locomotion which is based on the neural control of locomotion in insects. The model considers simplified kinematics with two degrees of freedom per leg, but it includes the static stability constraint. Through simulation, it is demonstrated that this controller can generate a continuous range of statically stable gaits at different speeds by varying a single control parameter. In addition, the controller is extremely robust and can continue to function even after several of its elements have been disabled. The researchers are building a small hexapod robot whose locomotion will be controlled by this network, and they intend to extend the model to the dynamic control of legs with more than two degrees of freedom by using data on the control of multisegmented insect legs. Another immediate application of this neural control approach is also drawn from biology: the escape reflex. Advanced robots are being equipped with tactile sensing and machine vision, so the sensory inputs to the robot controller are vast and complex. Neural networks are ideal for a lower-level safety reflex controller because of their extremely fast response time. The combination of robotics, computer modeling, and neurobiology has been remarkably fruitful, and is likely to lead to deeper insights into the problems of real-time sensorimotor control.

  1. Plancton: an opportunistic distributed computing project based on Docker containers

    NASA Astrophysics Data System (ADS)

    Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara

    2017-10-01

    The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight, fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources by constantly monitoring its CPU utilisation. It is designed to release the opportunistically allocated resources whenever another demanding task is run by the host user, according to configurable policies; this is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that they can be started almost instantly upon request. We will show how the fast start-up and disposal of containers enables us to implement an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we will show how Plancton was configured to run up to 10 000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, giving a considerable advantage in terms of management compared to virtual machines.
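
    The core of the pattern described above can be sketched with the real docker and psutil Python APIs (the image name, thresholds, and polling intervals below are illustrative; Plancton's actual policies are configurable):

      import time
      import docker
      import psutil

      SPAWN_BELOW, KILL_ABOVE = 40.0, 80.0   # CPU utilisation thresholds (%)
      client = docker.from_env()
      pool = []

      while True:                            # the daemon's fire-and-forget loop
          cpu = psutil.cpu_percent(interval=5)          # sampled over 5 s
          if cpu < SPAWN_BELOW:
              # host is idle: spawn another pilot container
              pool.append(client.containers.run("alice-pilot:latest", detach=True))
          elif cpu > KILL_ABOVE and pool:
              # host is busy: release resources by killing the newest container
              victim = pool.pop()
              victim.kill()
              victim.remove()
          time.sleep(10)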

  2. In silico Interrogation of Insect Central Complex Suggests Computational Roles for the Ellipsoid Body in Spatial Navigation.

    PubMed

    Fiore, Vincenzo G; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank

    2017-01-01

    The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation.

  3. In silico Interrogation of Insect Central Complex Suggests Computational Roles for the Ellipsoid Body in Spatial Navigation

    PubMed Central

    Fiore, Vincenzo G.; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank

    2017-01-01

    The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation. PMID:28824390

  4. Dynamic clustering scheme based on the coordination of management and control in multi-layer and multi-region intelligent optical network

    NASA Astrophysics Data System (ADS)

    Niu, Xiaoliang; Yuan, Fen; Huang, Shanguo; Guo, Bingli; Gu, Wanyi

    2011-12-01

    A dynamic clustering scheme based on the coordination of management and control is proposed to reduce the network congestion rate and improve the blocking performance of hierarchical routing in multi-layer, multi-region intelligent optical networks. Its implementation relies on mobile agent (MA) technology, which offers efficiency, flexibility, functionality, and scalability. The paper's major contribution is to adjust domains dynamically when the performance of the working network is not ideal. Moreover, the combination of a centralized NMS with distributed MA control migrates the computing process to control-plane nodes, which relieves the burden on the NMS and improves processing efficiency. Experiments are conducted on the Multi-layer and Multi-region Simulation Platform for Optical Network (MSPON) to assess the performance of the scheme.

  5. 20 CFR 404.1588 - Your responsibility to tell us of events that may change your disability status.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... may change your disability status. 404.1588 Section 404.1588 Employees' Benefits SOCIAL SECURITY... issue a receipt to you or your representative at least until a centralized computer file that records... centralized computer file is in place, we will continue to issue receipts to you or your representative if you...

  6. 20 CFR 404.1588 - Your responsibility to tell us of events that may change your disability status.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... may change your disability status. 404.1588 Section 404.1588 Employees' Benefits SOCIAL SECURITY... issue a receipt to you or your representative at least until a centralized computer file that records... centralized computer file is in place, we will continue to issue receipts to you or your representative if you...

  7. 20 CFR 404.1588 - Your responsibility to tell us of events that may change your disability status.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... may change your disability status. 404.1588 Section 404.1588 Employees' Benefits SOCIAL SECURITY... issue a receipt to you or your representative at least until a centralized computer file that records... centralized computer file is in place, we will continue to issue receipts to you or your representative if you...

  8. 20 CFR 404.1588 - Your responsibility to tell us of events that may change your disability status.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... may change your disability status. 404.1588 Section 404.1588 Employees' Benefits SOCIAL SECURITY... issue a receipt to you or your representative at least until a centralized computer file that records... centralized computer file is in place, we will continue to issue receipts to you or your representative if you...

  9. The governance of innovation diffusion - a socio-technical analysis of energy policy

    NASA Astrophysics Data System (ADS)

    Nolden, C.

    2012-10-01

    This paper describes a dynamic price mechanism to coordinate electric power generation from micro combined heat and power (micro-CHP) systems in a network of households. It is assumed that the households are prosumers, i.e. both producers and consumers of electricity. Control is exercised at the household level in a completely distributed manner. Avoiding a centralized controller both reduces computational complexity and preserves the communication structure of the network. Local information is used to decide when to turn the micro-CHP unit on or off, but through price signals exchanged between the prosumers the network as a whole operates in a cooperative way.
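
    A toy version of this distributed price mechanism (assumed details: a linear price-adjustment signal and threshold switching, not the paper's exact model) can be written in a few lines; each household applies a purely local rule, and a broadcast price adjusts until micro-CHP supply roughly matches demand.

      import random

      class Household:
          """Prosumer with a local on/off rule for its micro-CHP unit."""
          def __init__(self, marginal_cost):
              self.marginal_cost = marginal_cost    # EUR/kWh to run the unit
              self.on = False
          def decide(self, price):
              self.on = price > self.marginal_cost  # purely local decision

      def clear(households, demand_kw, unit_kw=1.0, rounds=30, gain=0.002):
          price, supply = 0.0, 0.0
          for _ in range(rounds):                   # iterative price adjustment
              price += gain * (demand_kw - supply)  # broadcast price signal
              for h in households:
                  h.decide(price)
              supply = unit_kw * sum(h.on for h in households)
          return price, supply

      random.seed(1)
      homes = [Household(random.uniform(0.05, 0.25)) for _ in range(50)]
      price, supply = clear(homes, demand_kw=30.0)
      print(f"price {price:.3f} EUR/kWh -> {supply:.0f} kW from micro-CHP units")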

  10. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  11. Development of alternative data analysis techniques for improving the accuracy and specificity of natural resource inventories made with digital remote sensing data

    NASA Technical Reports Server (NTRS)

    Lillesand, T. M.; Meisner, D. E. (Principal Investigator)

    1980-01-01

    An investigation was conducted into ways to improve the involvement of state and local user personnel in the digital image analysis process by isolating those elements of the analysis process which require extensive involvement by field personnel and providing means for performing those activities apart from a computer facility. In this way, the analysis procedure can be converted from a centralized activity focused on a computer facility to a distributed activity in which users can interact with the data at the field office level or in the field itself. General-purpose image processing software was developed on the University of Minnesota computer system (Control Data Cyber models 172 and 74). The use of color hardcopy image data as a primary medium in supervised training procedures was investigated, and digital display equipment and a coordinate digitizer were procured.

  12. Massive Data, the Digitization of Science, and Reproducibility of Results

    ScienceCinema

    Stodden, Victoria

    2018-04-27

    As the scientific enterprise becomes increasingly computational and data-driven, the nature of the information communicated must change. Without inclusion of the code and data with published computational results, we are engendering a credibility crisis in science. Controversies such as ClimateGate, the microarray-based drug sensitivity clinical trials under investigation at Duke University, and retractions from prominent journals due to unverified code suggest the need for greater transparency in our computational science. In this talk I argue that the scientific method be restored to (1) a focus on error control as central to scientific communication and (2) complete communication of the underlying methodology producing the results, i.e., reproducibility. I outline barriers to these goals based on recent survey work (Stodden 2010), and suggest solutions such as the “Reproducible Research Standard” (Stodden 2009), giving open licensing options designed to create an intellectual property framework for scientists consonant with longstanding scientific norms.

  13. Army/NASA small turboshaft engine digital controls research program

    NASA Technical Reports Server (NTRS)

    Sellers, J. F.; Baez, A. N.

    1981-01-01

    The emphasis of a program to conduct digital controls research for small turboshaft engines is on engine test evaluation of advanced control logic using a flexible microprocessor based digital control system designed specifically for research on advanced control logic. Control software is stored in programmable memory. New control algorithms may be stored in a floppy disk and loaded directly into memory. This feature facilitates comparative evaluation of different advanced control modes. The central processor in the digital control is an Intel 8086 16 bit microprocessor. Control software is programmed in assembly language. Software checkout is accomplished prior to engine test by connecting the digital control to a real time hybrid computer simulation of the engine. The engine currently installed in the facility has a hydromechanical control modified to allow electrohydraulic fuel metering and VG actuation by the digital control. Simulation results are presented which show that the modern control reduces the transient rotor speed droop caused by unanticipated load changes such as cyclic pitch or wind gust transients.

  14. An Implemented Strategy for Campus Connectivity and Cooperative Computing.

    ERIC Educational Resources Information Center

    Halaris, Antony S.; Sloan, Lynda W.

    1989-01-01

    ConnectPac, a software package developed at Iona College to allow a computer user to access all services from a single personal computer, is described. ConnectPac uses mainframe computing to support a campus computing network, integrating personal and centralized computing into a menu-driven user environment. (Author/MLW)

  15. Development status of the PDC-1 Parabolic Dish Concentrator

    NASA Technical Reports Server (NTRS)

    Thostesen, T.; Soczak, I. F.; Pons, R. L.

    1982-01-01

    The status of development of the 12 m diameter parabolic dish concentrator planned for use with the Small Community Solar Thermal Power System is reported. The PDC-1 unit features the use of plastic reflector film bonded to structural plastic gores supported by front-bracing steel ribs. An elevation-over-azimuth mount arrangement is employed, with a conventional wheel-and-track arrangement; outboard trunnions permit the dish to be stored in the face-down position, with the added advantage of easy access to the power conversion assembly. The control system is comprised of a central computer (LSI-11/23), a manual control panel, a concentrator control unit, two motor controllers, a Sun sensor, and two angular position resolvers. The system is designed for the simultaneous control of several concentrators. The optical testing of reflective panels is described.

  16. [Research on Barrier-free Home Environment System Based on Speech Recognition].

    PubMed

    Zhu, Husheng; Yu, Hongliu; Shi, Ping; Fang, Youfang; Jian, Zhuo

    2015-10-01

    The number of people with physical disabilities is increasing year by year, and the trend of population aging is more and more serious. In order to improve their quality of life, a control system for a barrier-free home environment was developed to let patients with severe disabilities control home electrical devices with their voice. The control system includes a central control platform, a speech recognition module, a terminal operation module, etc. The system combines speech recognition control technology and wireless information transmission technology with embedded mobile computing technology, and interconnects the lamps, electronic locks, alarms, TV and other electrical devices in the home environment into a whole system through wireless network nodes. The experimental results showed that the speech recognition success rate was more than 84% in the home environment.
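
    At its simplest, the terminal operation module's job can be pictured as a table-driven command dispatcher. The sketch below is purely illustrative (the device names, phrases, and the stand-in for the wireless node command are assumptions, not the authors' implementation):

      def switch(device, state):
          # stand-in for sending a command to a wireless network node
          print(f"{device} -> {'ON' if state else 'OFF'}")

      COMMANDS = {
          "turn on the light":  lambda: switch("lamp", True),
          "turn off the light": lambda: switch("lamp", False),
          "lock the door":      lambda: switch("electronic lock", True),
          "raise the alarm":    lambda: switch("alarm", True),
      }

      def handle(utterance: str):
          """Dispatch one recognized phrase to its device action."""
          action = COMMANDS.get(utterance.strip().lower())
          if action:
              action()
          else:
              print("command not recognized")  # recognition failures (~16% in the study)

      handle("Turn on the light")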

  17. The use of combined single photon emission computed tomography and X-ray computed tomography to assess the fate of inhaled aerosol.

    PubMed

    Fleming, John; Conway, Joy; Majoral, Caroline; Tossici-Bolt, Livia; Katz, Ira; Caillibotte, Georges; Perchet, Diane; Pichelin, Marine; Muellinger, Bernhard; Martonen, Ted; Kroneberg, Philipp; Apiou-Sbirlea, Gabriela

    2011-02-01

    Gamma camera imaging is widely used to assess pulmonary aerosol deposition. Conventional planar imaging provides limited information on its regional distribution. In this study, single photon emission computed tomography (SPECT) was used to describe deposition in three dimensions (3D) and combined with X-ray computed tomography (CT) to relate this to lung anatomy. Its performance was compared to planar imaging. Ten SPECT/CT studies were performed on five healthy subjects following carefully controlled inhalation of radioaerosol from a nebulizer, using a variety of inhalation regimes. The 3D spatial distribution was assessed using a central-to-peripheral ratio (C/P) normalized to lung volume and for the right lung was compared to planar C/P analysis. The deposition by airway generation was calculated for each lung and the conducting airways deposition fraction compared to 24-h clearance. The 3D normalized C/P ratio correlated more closely with 24-h clearance than the 2D ratio for the right lung [coefficient of variation (COV) 9% compared to 15%, p < 0.05]. Analysis of regional distribution was possible for both lungs in 3D but not in 2D due to overlap of the stomach on the left lung. The mean conducting airways deposition fraction from SPECT for both lungs was not significantly different from 24-h clearance (COV 18%). Both spatial and generational measures of central deposition were significantly higher for the left than for the right lung. Combined SPECT/CT enabled improved analysis of aerosol deposition from gamma camera imaging compared to planar imaging. 3D radionuclide imaging combined with anatomical information from CT and computer analysis is a useful approach for applications requiring regional information on deposition.

  18. JACOB: an enterprise framework for computational chemistry.

    PubMed

    Waller, Mark P; Dresselhaus, Thomas; Yang, Jack

    2013-06-15

    Here, we present just a collection of beans (JACOB): an integrated batch-based framework designed for the rapid development of computational chemistry applications. The framework expedites developer productivity by handling the generic infrastructure tier, and can be easily extended by user-specific scientific code. Paradigms from enterprise software engineering were rigorously applied to create a scalable, testable, secure, and robust framework. A centralized web application is used to configure and control the operation of the framework. The application-programming interface provides a set of generic tools for processing large-scale noninteractive jobs (e.g., systematic studies), or for coordinating systems integration (e.g., complex workflows). The code for the JACOB framework is open sourced and is available at: www.wallerlab.org/jacob. Copyright © 2013 Wiley Periodicals, Inc.

  19. A neurocomputational system for relational reasoning.

    PubMed

    Knowlton, Barbara J; Morrison, Robert G; Hummel, John E; Holyoak, Keith J

    2012-07-01

    The representation and manipulation of structured relations is central to human reasoning. Recent work in computational modeling and neuroscience has set the stage for developing more detailed neurocomputational models of these abilities. Several key neural findings appear to dovetail with computational constraints derived from a model of analogical processing, 'Learning and Inference with Schemas and Analogies' (LISA). These include evidence that (i) coherent oscillatory activity in the gamma and theta bands enables long-distance communication between the prefrontal cortex and posterior brain regions where information is stored; (ii) neurons in prefrontal cortex can rapidly learn to represent abstract concepts; (iii) a rostral-caudal abstraction gradient exists in the PFC; and (iv) the inferior frontal gyrus exerts inhibitory control over task-irrelevant information. Copyright © 2012. Published by Elsevier Ltd.

  20. Functional relevance of neurotransmitter receptor heteromers in the central nervous system.

    PubMed

    Ferré, Sergi; Ciruela, Francisco; Woods, Amina S; Lluis, Carme; Franco, Rafael

    2007-09-01

    The existence of neurotransmitter receptor heteromers is becoming broadly accepted and their functional significance is being revealed. Heteromerization of neurotransmitter receptors produces functional entities that possess different biochemical characteristics with respect to the individual components of the heteromer. Neurotransmitter receptor heteromers can function as processors of computations that modulate cell signaling. Thus, the quantitative or qualitative aspects of the signaling generated by stimulation of any of the individual receptor units in the heteromer are different from those obtained during coactivation. Furthermore, recent studies demonstrate that some neurotransmitter receptor heteromers can exert an effect as processors of computations that directly modulate both pre- and postsynaptic neurotransmission. This is illustrated by the analysis of striatal receptor heteromers that control striatal glutamatergic neurotransmission.

  1. TangibleCubes — Implementation of Tangible User Interfaces through the Usage of Microcontroller and Sensor Technology

    NASA Astrophysics Data System (ADS)

    Setscheny, Stephan

    The interaction between human beings and technology forms a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and the keyboard. As a consequence of continuous miniaturization and the increasing performance of microcontrollers and of sensors for detecting human interactions, developers gain new possibilities for realising innovative interfaces. With this movement, the dominance of the conventional computer and its graphical user interface is decreasing. A strong impact of this technical evolution can be seen especially in the areas of ubiquitous computing and interaction through tangible user interfaces. Moreover, tangible and directly experienceable interaction offers users an interactive and intuitive way of controlling technical objects. The use of microcontrollers for control functions and of sensors makes the realisation of such experienceable interfaces possible. Besides the theory of tangible user interfaces, an examination of sensors and of the Arduino platform forms a main aspect of this work.
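
    As a small example of the prototyping workflow the Arduino platform enables, the host-side sketch below (assumed setup: an Arduino printing one analogRead value per line over USB serial; the port name, baud rate, and threshold are illustrative) turns raw sensor readings into discrete interaction events with pyserial.

      import serial  # pyserial

      with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
          for _ in range(100):                  # sample 100 readings
              line = port.readline().decode("ascii", errors="ignore").strip()
              if not line.isdigit():
                  continue                      # timeout or partial line
              value = int(line)                 # e.g. 0-1023 from analogRead
              if value > 512:                   # threshold: "the cube was tilted"
                  print("interaction detected:", value)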

  2. Centralized Authorization Using a Direct Service, Part II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, A

    Authorization is the process of deciding if entity X is allowed to have access to resource Y. Determining the identity of X is the job of the authentication process. One task of authorization in computer networks is to define and determine which user has access to which computers in the network. On Linux, the tendency exists to create a local account for each single user who should be allowed to log on to a computer. This is typically the case because a user not only needs login privileges to a computer but also additional resources like a home directory to actually do some work. Creating a local account on every computer takes care of all this. The problem with this approach is that these local accounts can be inconsistent with each other. The same user name could have a different user ID and/or group ID on different computers. Even more problematic is when two different accounts share the same user ID and group ID on different computers: user joe on computer1 could have user ID 1234 and group ID 56, and user jane on computer2 could have the same user ID 1234 and group ID 56. This is a big security risk in case shared resources like NFS are used. These two different accounts are the same for an NFS server, so these users can wipe out each other's files. The solution to this inconsistency problem is to have only one central, authoritative data source for this kind of information and a means of providing all your computers with access to this central source. This is what a "Directory Service" is. The two directory services most widely used for centralizing authorization data are the Network Information Service (NIS, formerly known as Yellow Pages or YP) and the Lightweight Directory Access Protocol (LDAP).
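
    Querying such a central directory is straightforward; a minimal sketch with the ldap3 Python library (the server name and base DN are placeholders) shows how every host can resolve the same authoritative uid/gid mapping, so joe cannot end up with uidNumber 1234 on one machine and a different one elsewhere.

      from ldap3 import Server, Connection, ALL

      server = Server("ldap.example.org", get_info=ALL)
      conn = Connection(server, auto_bind=True)   # anonymous bind for lookups

      # Every host asks the same authoritative source for the account data.
      conn.search("ou=people,dc=example,dc=org",
                  "(uid=joe)",
                  attributes=["uidNumber", "gidNumber", "homeDirectory"])
      for entry in conn.entries:
          print(entry.uidNumber, entry.gidNumber, entry.homeDirectory)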

  3. The DFVLR main department for central data processing, 1976 - 1983

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Data processing, equipment and systems operation, operative and user systems, user services, computer networks and communications, text processing, computer graphics, and high power computers are discussed.

  4. Mold Heating and Cooling Pump Package Operator Interface Controls Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josh A. Salmond

    2009-08-07

    The modernization of the Mold Heating and Cooling Pump Package Operator Interface (MHC PP OI) consisted of upgrading the antiquated single board computer with a proprietary operating system to off-the-shelf hardware and off-the-shelf software with customizable software options. The pump package is the machine interface between a central heating and cooling system that pumps heat transfer fluid through an injection or compression mold base on a local plastic molding machine. The operator interface provides the intelligent means of controlling this pumping process. Strict temperature control of a mold allows the production of high quality parts with tight tolerances and low residual stresses. The products fabricated are used on multiple programs.

  5. X-ray investigation of cross-breed silk in cocoon, yarn and fabric forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhalakshmi, Y. C.; Kariappa,; Siddaraju, G. N.

    2012-06-05

    Recently, the Central Sericultural Research and Training Institute, Mysore, developed many improved cross breeds and bivoltine hybrids. The newly developed cross breeds recorded fibre characteristics which are significantly superior to those of the existing control hybrids. This aspect has been investigated using the X-ray diffraction technique. We have employed line profile analysis to compute microstructural parameters. These parameters are compared with the physical parameters of the newly developed cross-breed silk fibers for a better understanding of the structure-property relation in these samples.

  6. 13. VIEW OF A BBOX, WHICH WAS USED IN THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. VIEW OF A B-BOX, WHICH WAS USED IN THE FAST RECOVERY PROCESS. URANIUM OXIDE WAS TRANSFERRED FOR DISSOLUTION IN A ROOM WHICH HOUSED 3 ROWS OF B-BOXES. B-BOXES ARE CONTROLLED HOODS, SIMILAR TO LAB HOODS THAT OPERATED WITH HIGH AIR VELOCITIES AT THEIR OPENINGS TO ENSURE THAT THE VAPORS WERE CONTAINED WITHIN THE HOOD. (2/14/79) - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  7. Beam orbit simulation in the central region of the RIKEN AVF cyclotron

    NASA Astrophysics Data System (ADS)

    Toprek, Dragan; Goto, Akira; Yano, Yasushige

    1999-04-01

    This paper describes the modification of the central-region design for the h=2 mode of acceleration in the RIKEN AVF cyclotron. We made a small modification to the electrode shape in the central region to optimize the beam transmission. The central region is equipped with an axial injection system; a spiral-type inflector is used for axial injection. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The magnetic field was measured. The geometry of the central region has been tested with computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively. We have also made an effort to minimize the inflector fringe-field effects using the RELAX3D program.

  8. Integration of a Decentralized Linear-Quadratic-Gaussian Control into GSFC's Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Carpenter, J. Russell

    1999-01-01

    A decentralized control is investigated for applicability to the autonomous formation flying control algorithm developed by GSFC for the New Millennium Program Earth Observing-1 (EO-1) mission. This decentralized framework has the following characteristics: The approach is non-hierarchical, and coordination by a central supervisor is not required; Detected failures degrade the system performance gracefully; Each node in the decentralized network processes only its own measurement data, in parallel with the other nodes; Although the total computational burden over the entire network is greater than it would be for a single, centralized controller, fewer computations are required locally at each node; Requirements for data transmission between nodes are limited to only the dimension of the control vector, at the cost of maintaining a local additional data vector. The data vector compresses all past measurement history from all the nodes into a single vector of the dimension of the state; and The approach is optimal with respect to standard cost functions. The current approach is valid for linear time-invariant systems only. Similar to the GSFC formation flying algorithm, the extension to linear LQG time-varying systems requires that each node propagate its filter covariance forward (navigation) and controller Riccati matrix backward (guidance) at each time step. Extension of the GSFC algorithm to non-linear systems can also be accomplished via linearization about a reference trajectory in the standard fashion, or linearization about the current state estimate as with the extended Kalman filter. To investigate the feasibility of the decentralized integration with the GSFC algorithm, an existing centralized LQG design for a single spacecraft orbit control problem is adapted to the decentralized framework while using the GSFC algorithm's state transition matrices and framework. The existing GSFC design uses reference trajectories for each spacecraft in the formation and, by an appropriate choice of coordinates and simplified measurement modeling, is formulated as a linear time-invariant system. Results for improvements to the GSFC algorithm and a multiple satellite formation will be addressed. The goal of this investigation is to progressively relax the assumptions that result in linear time-invariance, ultimately to the point of linearization of the non-linear dynamics about the current state estimate as in the extended Kalman filter. An assessment will then be made about the feasibility of the decentralized approach to the realistic formation flying application of the EO-1/Landsat 7 formation flying experiment.
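
    The backward Riccati propagation mentioned above is generic enough to sketch. The following finite-horizon discrete-time LQR recursion (plain numpy; the double-integrator dynamics and weights are illustrative, and this is not the GSFC code) is the guidance computation each node would run at every time step:

      import numpy as np

      def lqr_gains(A, B, Q, R, horizon):
          """Propagate the controller Riccati matrix backward in time and
          return the sequence of state-feedback gains K_0 .. K_{T-1}."""
          P = Q.copy()
          gains = []
          for _ in range(horizon):
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # gain at this step
              P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
              gains.append(K)
          return gains[::-1]  # time-ordered

      dt = 10.0                                  # seconds per step
      A = np.array([[1.0, dt], [0.0, 1.0]])      # position/velocity double integrator
      B = np.array([[0.5 * dt**2], [dt]])
      Q = np.diag([1.0, 0.1])                    # state weights
      R = np.array([[100.0]])                    # control weight
      K0 = lqr_gains(A, B, Q, R, horizon=50)[0]
      print("near-steady-state gain:", K0)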

  9. BNL ATLAS Grid Computing

    ScienceCinema

    Michael Ernst

    2017-12-09

    As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

  10. Sleep spindle alterations in patients with Parkinson's disease

    PubMed Central

    Christensen, Julie A. E.; Nikolic, Miki; Warby, Simon C.; Koch, Henriette; Zoetmulder, Marielle; Frandsen, Rune; Moghadam, Keivan K.; Sorensen, Helge B. D.; Mignot, Emmanuel; Jennum, Poul J.

    2015-01-01

    The aim of this study was to identify changes of sleep spindles (SS) in the EEG of patients with Parkinson's disease (PD). Five sleep experts manually identified SS at a central scalp location (C3-A2) in 15 PD and 15 age- and sex-matched control subjects. Each SS was given a confidence score, and by using a group consensus rule, 901 SS were identified and characterized by their (1) duration, (2) oscillation frequency, (3) maximum peak-to-peak amplitude, (4) percent-to-peak amplitude, and (5) density. Between-group comparisons were made for all SS characteristics computed, and significant changes for PD patients vs. control subjects were found for duration, oscillation frequency, maximum peak-to-peak amplitude and density. Specifically, SS density was lower, duration was longer, oscillation frequency slower and maximum peak-to-peak amplitude higher in patients vs. controls. We also computed inter-expert reliability in SS scoring and found a significantly lower reliability in scoring definite SS in patients when compared to controls. How neurodegeneration in PD could influence SS characteristics is discussed. We also note that the SS morphological changes observed here may affect automatic detection of SS in patients with PD or other neurodegenerative disorders (NDDs). PMID:25983685

  11. Hardware description ADSP-21020 40-bit floating point DSP as designed in a remotely controlled digital CW Doppler radar

    NASA Astrophysics Data System (ADS)

    Morrison, R. E.; Robinson, S. H.

    A continuous wave Doppler radar system has been designed which is portable, easily deployed, and remotely controlled. The heart of this system is a DSP/control board based on the Analog Devices ADSP-21020 40-bit floating-point digital signal processor (DSP). Two 18-bit audio A/D converters provide digital input to the DSP/controller board for near real-time target detection. Program memory for the DSP is dual-ported with an Intel 87C51 microcontroller, allowing DSP code to be uploaded or downloaded from a central controlling computer. The 87C51 provides overall system control for the remote radar and includes a time-of-day/day-of-year real time clock, system identification (ID) switches, and input/output (I/O) expansion by an Intel 82C55 I/O expander.

  12. Modeling, Analysis, and Control of a Hypersonic Vehicle with Significant Aero-Thermo-Elastic-Propulsion Interactions: Elastic, Thermal and Mass Uncertainty

    NASA Astrophysics Data System (ADS)

    Khatri, Jaidev

    This thesis examines the modeling, analysis, and control system design issues for scramjet-powered hypersonic vehicles. A nonlinear three-degrees-of-freedom longitudinal model which includes aero-propulsion-elasticity effects was used for all analyses. This model is based upon classical compressible flow and Euler-Bernoulli structural concepts. Higher-fidelity computational fluid dynamics and finite element methods are needed for more precise intermediate and final evaluations. The methods presented within this thesis were shown to be useful for guiding initial control-relevant design. The model was used to examine the vehicle's static and dynamic characteristics over the vehicle's trimmable region. The vehicle has significant longitudinal coupling between the fuel equivalency ratio (FER) and the flight path angle (FPA). For control system design, a two-input two-output plant (FER-elevator to speed-FPA) with 11 states (including 3 flexible modes) was used. Velocity, FPA, and pitch were assumed to be available for feedback. Aerodynamic heating modeling and design for the assumed TPS were incorporated into Bolender's original model to study the change in static and dynamic properties. Decentralized control stability, feasibility, and limitation issues were examined with respect to changes in TPS elasticity, mass, and physical dimensions. The impact of elasticity due to TPS mass and physical dimension, as well as prolonged heating, was also analyzed to understand the performance limitations of the decentralized control designed for the nominal model.

  13. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-05-29

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
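
    For scale, the same kernel can be run on a workstation with networkx (a real API, though orders of magnitude smaller and slower than the multithreaded implementations benchmarked here); the k-source sampling variant corresponds to the approximate betweenness computation mentioned for the IMDb network.

      import networkx as nx

      # synthetic small-world test graph (tiny compared with the paper's runs)
      G = nx.watts_strogatz_graph(n=2_000, k=6, p=0.1, seed=1)

      exact = nx.betweenness_centrality(G)                  # Brandes, all sources
      approx = nx.betweenness_centrality(G, k=256, seed=1)  # sample 256 sources

      top = max(approx, key=approx.get)
      print(f"most central vertex: {top}, approx {approx[top]:.4f}, "
            f"exact {exact[top]:.4f}")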

  14. Eigenvector centrality for geometric and topological characterization of porous media

    NASA Astrophysics Data System (ADS)

    Jimenez-Martinez, Joaquin; Negre, Christian F. A.

    2017-07-01

    Solving flow and transport through complex geometries such as porous media is computationally difficult. Such calculations usually involve the solution of a system of discretized differential equations, which could lead to extreme computational cost depending on the size of the domain and the accuracy of the model. Geometric simplifications like pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models, despite their ability to preserve the connectivity of the medium, have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Nonetheless, network theory approaches, where a complex network is a graph, can help to simplify and better understand fluid dynamics and transport in porous media. Here we present an alternative method to address these issues based on eigenvector centrality, which has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction to address the flow and transport anisotropy in porous media. We compare the model predictions with millifluidic transport experiments, which shows that, albeit simple, this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. We propose to use the eigenvector centrality probability distribution to compute the entropy as an indicator of the "mixing capacity" of the system.
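
    A minimal sketch of the underlying computation, assuming a simple multiplicative node bias as a stand-in for the paper's directional modification (plain numpy power iteration on a toy pore network):

      import numpy as np

      def biased_eigenvector_centrality(A, bias, iters=200, tol=1e-10):
          """A: symmetric adjacency matrix of the pore network; bias: per-node
          weights favouring pores along the assumed flow direction."""
          W = A * bias[np.newaxis, :]          # re-weight edges by target-node bias
          x = np.ones(A.shape[0])
          for _ in range(iters):
              x_new = W @ x
              x_new /= np.linalg.norm(x_new)   # normalize each iteration
              if np.linalg.norm(x_new - x) < tol:
                  break
              x = x_new
          return x / x.sum()                   # probability distribution over pores

      # Toy 1-D chain of 6 pores with a linear bias along the flow direction:
      A = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
      bias = np.linspace(1.0, 2.0, 6)
      print(biased_eigenvector_centrality(A, bias).round(3))

    The normalized scores form the probability distribution whose entropy can serve as the "mixing capacity" indicator described above.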

  15. Evaluating Computer Technology Integration in a Centralized School System

    ERIC Educational Resources Information Center

    Eteokleous, N.

    2008-01-01

    The study evaluated the current situation in Cyprus elementary classrooms regarding computer technology integration in an attempt to identify ways of expanding teachers' and students' experiences with computer technology. It examined how Cypriot elementary teachers use computers, and the factors that influence computer integration in their…

  16. Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing

    NASA Astrophysics Data System (ADS)

    Kumar, Suhas; Strachan, John Paul; Williams, R. Stanley

    2017-08-01

    At present, machine learning systems use simplified neuron models that lack the rich nonlinear phenomena observed in biological systems, which display spatio-temporal cooperative dynamics. There is evidence that neurons operate in a regime called the edge of chaos that may be central to complexity, learning efficiency, adaptability and analogue (non-Boolean) computation in brains. Neural networks have exhibited enhanced computational complexity when operated at the edge of chaos, and networks of chaotic elements have been proposed for solving combinatorial or global optimization problems. Thus, a source of controllable chaotic behaviour that can be incorporated into a neural-inspired circuit may be an essential component of future computational systems. Such chaotic elements have been simulated using elaborate transistor circuits that simulate known equations of chaos, but an experimental realization of chaotic dynamics from a single scalable electronic device has been lacking. Here we describe niobium dioxide (NbO2) Mott memristors each less than 100 nanometres across that exhibit both a nonlinear-transport-driven current-controlled negative differential resistance and a Mott-transition-driven temperature-controlled negative differential resistance. Mott materials have a temperature-dependent metal-insulator transition that acts as an electronic switch, which introduces a history-dependent resistance into the device. We incorporate these memristors into a relaxation oscillator and observe a tunable range of periodic and chaotic self-oscillations. We show that the nonlinear current transport coupled with thermal fluctuations at the nanoscale generates chaotic oscillations. Such memristors could be useful in certain types of neural-inspired computation by introducing a pseudo-random signal that prevents global synchronization and could also assist in finding a global minimum during a constrained search. We specifically demonstrate that incorporating such memristors into the hardware of a Hopfield computing network can greatly improve the efficiency and accuracy of converging to a solution for computationally difficult problems.

  17. Aerodynamic Interactions of Propulsive Deceleration and Reaction Control System Jets on Mars-Entry Aeroshells

    NASA Astrophysics Data System (ADS)

    Alkandry, Hicham

    Future missions to Mars, including sample-return and human-exploration missions, may require alternative entry, descent, and landing technologies in order to perform pinpoint landing of heavy vehicles. Two such alternatives are propulsive deceleration (PD) and reaction control systems (RCS). PD can slow the vehicle during Mars atmospheric descent by directing thrusters into the incoming freestream. RCS can provide vehicle control and steering by inducing moments using thrusters on the back of the entry capsule. The use of these PD and RCS jets, however, involves complex flow interactions that are still not well understood. The fluid interactions induced by PD and RCS jets for Mars-entry vehicles in hypersonic freestream conditions are investigated using computational fluid dynamics (CFD). The effects of central and peripheral PD configurations using both sonic and supersonic jets at various thrust conditions are examined in this dissertation. The RCS jet is directed either parallel or transverse to the freestream flow at different thrust conditions in order to examine the effects of the thruster orientation with respect to the center of gravity of the aeroshell. The physical accuracy of the computational method is also assessed by comparing the numerical results with available experimental data. The central PD configuration decreases the drag force acting on the entry capsule due to a shielding effect that prevents mass and momentum in the hypersonic freestream from reaching the aeroshell. The peripheral PD configuration also decreases the drag force by obstructing the flow around the aeroshell and creating low surface pressure regions downstream of the PD nozzles. The Mach number of the PD jets, however, does not have a significant effect on the induced fluid interactions. The reaction control system also alters the flowfield, surface, and aerodynamic properties of the aeroshell, while the jet orientation can have a significant effect on the control effectiveness of the RCS.

  18. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. As an intermediate middleware system between clients and external information sources (such as the central BDII, GOCDB and MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. Having been in production throughout LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS, and it continuously evolves to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services continuously evolve and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of technologies that have recently become widely used in ATLAS Computing, such as the flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified declaration of storage protocols required for PanDA Pilot site movers. Improvements of the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  19. Parallel neural pathways in higher visual centers of the Drosophila brain that mediate wavelength-specific behavior

    PubMed Central

    Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei

    2014-01-01

    Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974

  20. Scientific Visualization, Seeing the Unseeable

    ScienceCinema

    LBNL

    2017-12-09

    June 24, 2008 Berkeley Lab lecture: Scientific visualization transforms abstract data into readily comprehensible images, provides a vehicle for "seeing the unseeable," and plays a central role in both experimental and computational sciences. Wes Bethel, who heads the Scientific Visualization Group in the Computational Research Division, presents an overview of visualization and computer graphics, current research challenges, and future directions for the field.

  1. The Lilongwe Central Hospital Patient Management Information System: A Success in Computer-Based Order Entry Where One Might Least Expect It

    PubMed Central

    Douglas, GP; Deula, RA; Connor, SE

    2003-01-01

    Computer-based order entry is a powerful tool for enhancing patient care. A pilot project in the pediatric department of the Lilongwe Central Hospital (LCH) in Malawi, Africa has demonstrated that computer-based order entry (COE): 1) can be successfully deployed and adopted in resource-poor settings, 2) can be built, deployed and sustained at relatively low cost and with local resources, and 3) has a greater potential to improve patient care in developing than in developed countries. PMID:14728338

  2. Unified algorithm of cone optics to compute solar flux on central receiver

    NASA Astrophysics Data System (ADS)

    Grigoriev, Victor; Corsi, Clotilde

    2017-06-01

    Analytical algorithms to compute the flux distribution on a central receiver are considered a faster alternative to ray tracing. Many variants exist, with HFLCAL and UNIZAR being the most recognized and verified. In this work, a generalized algorithm is presented which is valid for an arbitrary radially symmetric sun shape. Heliostat mirrors can have a nonrectangular profile, and the effects of shading and blocking, strong defocusing and astigmatism can be taken into account. The algorithm is suitable for parallel computing and can benefit from hardware acceleration of polygon texturing.
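
    For flavor, here is a minimal sketch of the analytical flux-map idea in its simplest, HFLCAL-style circular-Gaussian form. The generalized algorithm in the paper handles arbitrary radially symmetric sun shapes and nonrectangular heliostat profiles, which this sketch does not; all numbers are illustrative.

    ```python
    # HFLCAL-style analytical flux estimate: each heliostat's image on the
    # receiver is a circular Gaussian with given total power and spread.
    # Values below are invented for illustration.
    import numpy as np

    # Receiver aperture grid (metres).
    x = np.linspace(-2.5, 2.5, 101)
    y = np.linspace(-2.5, 2.5, 101)
    X, Y = np.meshgrid(x, y)

    # (power delivered [W], aim-point x, aim-point y, beam spread sigma [m])
    heliostats = [(25e3, 0.0, 0.3, 0.45),
                  (22e3, -0.4, -0.2, 0.55),
                  (27e3, 0.5, 0.0, 0.50)]

    flux = np.zeros_like(X)
    for P, x0, y0, sigma in heliostats:
        flux += P / (2 * np.pi * sigma**2) * np.exp(
            -((X - x0)**2 + (Y - y0)**2) / (2 * sigma**2))

    print(f"peak flux on receiver: {flux.max() / 1e3:.1f} kW/m^2")
    ```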

  3. TDRSS data handling and management system study. Ground station systems for data handling and relay satellite control

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Results of a two-phase study of the Data Handling and Management System (DHMS) are presented. An original baseline DHMS is described and its estimated costs are presented in detail. The DHMS automates the Tracking and Data Relay Satellite System (TDRSS) ground station's functions and handles both the forward and return link user and relay satellite data passing through the station. Direction of the DHMS is effected via a remotely located TDRSS Operations Control Central (OCC). A composite ground station system, a modified DHMS (MDHMS), was conceptually developed. The MDHMS performs both the DHMS and OCC functions. Configurations and costs are presented for systems using minicomputers and midicomputers. It is concluded that a MDHMS should be configured with a combination of the two computer types: the midicomputers provide the system's organizational direction and computational power, and the minicomputers (or interface processors) perform repetitive data handling functions that relieve the midicomputers of these burdensome tasks.

  4. Viability of Bioprinted Cellular Constructs Using a Three Dispenser Cartesian Printer.

    PubMed

    Dennis, Sarah Grace; Trusk, Thomas; Richards, Dylan; Jia, Jia; Tan, Yu; Mei, Ying; Fann, Stephen; Markwald, Roger; Yost, Michael

    2015-09-22

    Tissue engineering has centralized its focus on the construction of replacements for non-functional or damaged tissue. The utilization of three-dimensional bioprinting in tissue engineering has generated new methods for the printing of cells and matrix to fabricate biomimetic tissue constructs. The solid freeform fabrication (SFF) method developed for three-dimensional bioprinting uses an additive manufacturing approach by depositing droplets of cells and hydrogels in a layer-by-layer fashion. Bioprinting fabrication is dependent on the specific placement of biological materials into three-dimensional architectures, and the printed constructs should closely mimic the complex organization of cells and extracellular matrices in native tissue. This paper highlights the use of the Palmetto Printer, a Cartesian bioprinter, as well as the process of producing spatially organized, viable constructs while simultaneously allowing control of environmental factors. This methodology utilizes computer-aided design and computer-aided manufacturing to produce these specific and complex geometries. Finally, this approach allows for the reproducible production of fabricated constructs optimized by controllable printing parameters.

  5. BARTER: Behavior Profile Exchange for Behavior-Based Admission and Access Control in MANETs

    NASA Astrophysics Data System (ADS)

    Frias-Martinez, Vanessa; Stolfo, Salvatore J.; Keromytis, Angelos D.

    Mobile Ad-hoc Networks (MANETs) are very dynamic networks with devices continuously entering and leaving the group. The highly dynamic nature of MANETs renders the manual creation and update of policies associated with the initial incorporation of devices to the MANET (admission control), as well as with anomaly detection during communications among members (access control), a very difficult task. In this paper, we present BARTER, a mechanism that automatically creates and updates admission and access control policies for MANETs based on behavior profiles. BARTER is an adaptation for fully distributed environments of our previously introduced BB-NAC mechanism for NAC technologies. Rather than relying on a centralized NAC enforcer, MANET members initially exchange their behavior profiles and compute individual local definitions of normal network behavior. During admission or access control, each member issues an individual decision based on its definition of normalcy. Individual decisions are then aggregated via a threshold cryptographic infrastructure that requires an agreement among a fixed number of MANET members to change the status of the network. We present experimental results using content and volumetric behavior profiles computed from the ENRON dataset. In particular, we show that the mechanism achieves true rejection rates of 95% with false rejection rates of 9%.
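
    A minimal sketch of the admission-control idea follows, assuming behavior profiles can be reduced to feature vectors: each member votes from its own local notion of "normal", and admission requires a threshold of agreeing votes. The real protocol aggregates decisions through threshold cryptography, which plain vote counting only emulates.

    ```python
    # Threshold admission control over local behavior models (a simplified
    # emulation of BARTER's agreement rule; profiles and the distance-based
    # "normalcy" test are invented for illustration).
    import numpy as np

    rng = np.random.default_rng(0)
    members = rng.normal(0.0, 1.0, size=(10, 16))   # local behavior profiles
    newcomer_ok = rng.normal(0.0, 1.0, size=16)     # similar traffic profile
    newcomer_bad = rng.normal(4.0, 1.0, size=16)    # anomalous profile

    def votes(newcomer, members, radius=8.0):
        """Each member admits if the newcomer is close to its own model."""
        dists = np.linalg.norm(members - newcomer, axis=1)
        return int((dists < radius).sum())

    t = 7                                           # agreement threshold
    for name, prof in [("benign", newcomer_ok), ("anomalous", newcomer_bad)]:
        v = votes(prof, members)
        print(f"{name}: {v}/10 votes -> {'admit' if v >= t else 'reject'}")
    ```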

  6. Design, Specification, and Synthesis of Aircraft Electric Power Systems Control Logic

    NASA Astrophysics Data System (ADS)

    Xu, Huan

    Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area. This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller. The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real-time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.

  7. Evolution in a centralized transfusion service.

    PubMed

    AuBuchon, James P; Linauts, Sandra; Vaughan, Mimi; Wagner, Jeffrey; Delaney, Meghan; Nester, Theresa

    2011-12-01

    The metropolitan Seattle area has utilized a centralized transfusion service model throughout the modern era of blood banking. This approach has used four laboratories to serve over 20 hospitals and clinics, providing greater capabilities for all at a lower consumption of resources than if each depended on its own laboratory and staff for these functions. In addition, this centralized model has facilitated wider use of the medical capabilities of the blood center's physicians, and a county-wide network of transfusion safety officers is now being developed to increase the impact of the blood center's transfusion expertise at the patient's bedside. Medical expectations and traffic have led the blood center to evolve the centralized model to include on-site laboratories at facilities with complex transfusion requirements (e.g., a children's hospital) and to implement in all the others a system of remote allocation. This new capability places a refrigerator stocked with uncrossmatched units in the hospital but retains control over the dispensing of these through the blood center's computer system; the correct unit can be electronically cross-matched and released on demand, obviating the need for transportation to the hospital and thus speeding transfusion. This centralized transfusion model has withstood the test of time and continues to evolve to meet new situations and ensure optimal patient care. © 2011 American Association of Blood Banks.

  8. Removing the center from computing: biology's new mode of digital knowledge production.

    PubMed

    November, Joseph

    2011-06-01

    This article shows how the USA's National Institutes of Health (NIH) helped to bring about a major shift in the way computers are used to produce knowledge and in the design of computers themselves as a consequence of its early 1960s efforts to introduce information technology to biologists. Starting in 1960 the NIH sought to reform the life sciences by encouraging researchers to make use of digital electronic computers, but despite generous federal support biologists generally did not embrace the new technology. Initially the blame fell on biologists' lack of appropriate (i.e. digital) data for computers to process. However, when the NIH consulted MIT computer architect Wesley Clark about this problem, he argued that the computer's quality as a device that was centralized posed an even greater challenge to potential biologist users than did the computer's need for digital data. Clark convinced the NIH that if the agency hoped to effectively computerize biology, it would need to satisfy biologists' experimental and institutional needs by providing them the means to use a computer without going to a computing center. With NIH support, Clark developed the 1963 Laboratory Instrument Computer (LINC), a small, real-time interactive computer intended to be used inside the laboratory and controlled entirely by its biologist users. Once built, the LINC provided a viable alternative to the 1960s norm of large computers housed in computing centers. As such, the LINC not only became popular among biologists, but also served in later decades as an important precursor of today's computing norm in the sciences and far beyond, the personal computer.

  9. Functional Analysis and Preliminary Specifications for a Single Integrated Central Computer System for Secondary Schools and Junior Colleges. Interim Report.

    ERIC Educational Resources Information Center

    1968

    The present report proposes a central computing facility and presents the preliminary specifications for such a system. It is based, in part, on the results of earlier studies by two previous contractors on behalf of the U.S. Office of Education. The recommendations are based upon the present contractor's considered evaluation of the earlier…

  10. TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1994-01-01

    TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.
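
    A minimal sketch of the underlying two-boundary technique follows, assuming simple analytic boundary curves: Hermite cubic interpolation in the eta direction between bottom and top boundaries with prescribed off-boundary derivative vectors. TBGG's side-boundary blending and spline control functions are omitted here.

    ```python
    # Two-boundary grid generation via cubic Hermite interpolation between a
    # bottom and a top boundary curve (simplified relative to TBGG: no side
    # boundaries or control functions). Boundary shapes are illustrative.
    import numpy as np

    n_xi, n_eta = 41, 21
    xi = np.linspace(0.0, 1.0, n_xi)
    eta = np.linspace(0.0, 1.0, n_eta)

    bottom = np.stack([xi, 0.1 * np.sin(np.pi * xi)], axis=1)  # curved wall
    top = np.stack([xi, np.ones_like(xi)], axis=1)             # flat wall
    d_bot = np.tile([0.0, 0.6], (n_xi, 1))   # dr/deta prescribed at eta = 0
    d_top = np.tile([0.0, 0.6], (n_xi, 1))   # dr/deta prescribed at eta = 1

    # Cubic Hermite basis functions in eta.
    def h00(t): return 2*t**3 - 3*t**2 + 1
    def h10(t): return t**3 - 2*t**2 + t
    def h01(t): return -2*t**3 + 3*t**2
    def h11(t): return t**3 - t**2

    grid = np.empty((n_eta, n_xi, 2))
    for j, t in enumerate(eta):
        grid[j] = (h00(t) * bottom + h01(t) * top
                   + h10(t) * d_bot + h11(t) * d_top)

    print("grid shape:", grid.shape)         # (eta, xi, xy-coordinates)
    ```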

  11. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
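
    The two computationally intensive steps can be sketched as follows in plain NumPy (the paper offloads them to the GPU via CUDA): a spatial filter applied as a matrix-matrix product, then a Yule-Walker autoregressive spectrum per channel. The filter choice, AR order, and data below are illustrative assumptions, not the study's configuration.

    ```python
    # BCI signal chain sketch: spatial filter as a matrix product, then an
    # autoregressive (Yule-Walker) power spectral density per channel.
    import numpy as np

    rng = np.random.default_rng(1)
    n_ch, n_samp, fs = 16, 250, 1000.0     # channels, samples (250 ms), Hz
    X = rng.normal(size=(n_ch, n_samp))    # raw multichannel signal block
    W = np.eye(n_ch) - 1.0 / n_ch          # common-average-reference filter
    S = W @ X                              # spatial filtering step

    def ar_psd(sig, order=10, nfreq=128, fs=1000.0):
        """Yule-Walker AR spectrum of one channel."""
        sig = sig - sig.mean()
        r = np.correlate(sig, sig, "full")[len(sig) - 1:] / len(sig)
        R = np.array([[r[abs(i - j)] for j in range(order)]
                      for i in range(order)])
        a = np.linalg.solve(R, r[1:order + 1])   # AR coefficients
        sigma2 = r[0] - a @ r[1:order + 1]       # innovation variance
        f = np.linspace(0, fs / 2, nfreq)
        z = np.exp(-2j * np.pi * np.outer(f / fs, np.arange(1, order + 1)))
        return f, sigma2 / np.abs(1 - z @ a) ** 2

    f, psd = ar_psd(S[0], fs=fs)
    print("peak frequency of channel 0:", f[np.argmax(psd)], "Hz")
    ```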

  12. Geometric and topological characterization of porous media: insights from eigenvector centrality

    NASA Astrophysics Data System (ADS)

    Jimenez-Martinez, J.; Negre, C.

    2017-12-01

    Solving flow and transport through complex geometries such as porous media involves an extreme computational cost. Simplifications such as pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models have the ability to preserve the connectivity of the medium. However, they have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Network theory approaches, where the complex network is conceptualized as a graph, can help to simplify and better understand fluid dynamics and transport in porous media. To address this issue, we propose a method based on eigenvector centrality, corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction, which allows the anisotropy of flow and transport in porous media to be considered. The model predictions are compared with millifluidic transport experiments, showing that this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. Entropy computed from the eigenvector centrality probability distribution is proposed as an indicator of the "mixing capacity" of the system.

  13. Human factors in command and control for the Los Angeles Fire Department.

    PubMed

    Harper, W R

    1974-03-01

    Ergonomics owes much of its operations and systems heritage to military research. Since public safety systems such as police, fire departments and civil defence organisations are quasi-military in nature, one may reasonably use the findings from military ergonomics research to extrapolate design data for use in a decision-making system. This article discusses a case study concerning Human Factors in command and control for the Los Angeles Fire Department. The case involved the transfer from a manual dispatch system serving three geographic areas of metropolitan Los Angeles to one central computer-aided command and control system. Comments are made on console mock-ups, environmental factors in the Control Centre, and placement of the consoles. Because of extreme delays in procurement of the recommended hardware, it is doubtful that empirical testing of the ergonomics aspect of the system will take place.

  14. Reinforcement learning for a biped robot based on a CPG-actor-critic method.

    PubMed

    Nakamura, Yutaka; Mori, Takeshi; Sato, Masa-aki; Ishii, Shin

    2007-08-01

    Animals' rhythmic movements, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs), which generate oscillatory signals. Motivated by this biological mechanism, studies have been conducted on the rhythmic movements controlled by CPG. As an autonomous learning framework for a CPG controller, we propose in this article a reinforcement learning method we call the "CPG-actor-critic" method. This method introduces a new architecture to the actor, and its training is roughly based on a stochastic policy gradient algorithm presented recently. We apply this method to an automatic acquisition problem of control for a biped robot. Computer simulations show that training of the CPG can be successfully performed by our method, thus allowing the biped robot to not only walk stably but also adapt to environmental changes.
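
    As an illustration of the kind of oscillator a CPG controller is built from, here is a two-neuron Matsuoka oscillator with mutual inhibition and adaptation, integrated with explicit Euler. The parameters are common textbook values, not those of the CPG-actor-critic controller in the paper.

    ```python
    # Two-neuron Matsuoka oscillator: a standard CPG building block whose
    # mutual inhibition plus adaptation yields a sustained rhythmic output.
    import numpy as np

    tau, tau_a = 0.25, 0.5      # membrane and adaptation time constants
    beta, w, s = 2.5, 2.5, 1.0  # adaptation gain, mutual inhibition, drive

    dt, steps = 1e-3, 8000
    u = np.array([0.1, 0.0])    # membrane states (asymmetric start)
    v = np.zeros(2)             # adaptation states
    out = np.empty(steps)
    for k in range(steps):
        y = np.maximum(u, 0.0)                   # firing rates
        du = (-u - beta * v - w * y[::-1] + s) / tau
        dv = (-v + y) / tau_a
        u += du * dt
        v += dv * dt
        out[k] = y[0] - y[1]                     # oscillatory motor command

    print("output range after transient:", out[4000:].min(), out[4000:].max())
    ```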

  15. Control of fluxes in metabolic networks

    PubMed Central

    Basler, Georg; Nikoloski, Zoran; Larhlimi, Abdelhalim; Barabási, Albert-László; Liu, Yang-Yu

    2016-01-01

    Understanding the control of large-scale metabolic networks is central to biology and medicine. However, existing approaches either require specifying a cellular objective or can only be used for small networks. We introduce new coupling types describing the relations between reaction activities, and develop an efficient computational framework, which does not require any cellular objective for systematic studies of large-scale metabolism. We identify the driver reactions facilitating control of 23 metabolic networks from all kingdoms of life. We find that unicellular organisms require a smaller degree of control than multicellular organisms. Driver reactions are under complex cellular regulation in Escherichia coli, indicating their preeminent role in facilitating cellular control. In human cancer cells, driver reactions play pivotal roles in malignancy and represent potential therapeutic targets. The developed framework helps us gain insights into regulatory principles of diseases and facilitates design of engineering strategies at the interface of gene regulation, signaling, and metabolism. PMID:27197218

  16. Development of a forecasting model for brucellosis spreading in the Italian cattle trade network aimed to prioritise the field interventions.

    PubMed

    Savini, L; Candeloro, L; Conte, A; De Massis, F; Giovannini, A

    2017-01-01

    Brucellosis caused by Brucella abortus is an important zoonosis that constitutes a serious hazard to public health. Prevention of human brucellosis depends on the control of the disease in animals. Livestock movement data represent a valuable source of information to understand the pattern of contacts between holdings, which may determine the inter-herd and intra-herd spread of the disease. The manuscript addresses the use of computational epidemic models rooted in the knowledge of the cattle trade network to assess the probabilities of brucellosis spread and to design control strategies. Three different spread network-based models were proposed: the DFC (Disease Flow Centrality) model, based only on the temporal cattle network structure and unrelated to the epidemiological disease parameters; a deterministic SIR (Susceptible-Infectious-Recovered) model; and a stochastic SEIR (Susceptible-Exposed-Infectious-Recovered) model in which epidemiological and demographic within-farm aspects were also modelled. Containment strategies based on farm centrality in the cattle network were tested and discussed. All three models started from the identification of the entire sub-network originating from an infected farm, up to the fifth order of contacts. Their performances were assessed on data collected in Sicily in the framework of the national eradication plan of brucellosis in 2009. Results show that the proposed methods improve the efficacy and efficiency of the tracing activities in comparison with the procedure currently adopted by the veterinary services for brucellosis control in Italy. An overall assessment shows that the SIR model is the most suitable for the practical needs of the veterinary services, being the one with the highest sensitivity and the shortest computation time.
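
    A minimal sketch of disease spread over a trade network follows, assuming a discrete-time stochastic SIR in which infection travels along movement edges; the paper's models add within-farm demography and an exposed compartment on top of this skeleton. All rates and the toy network are invented.

    ```python
    # Stochastic SIR over a directed trade network: nodes are holdings,
    # edges are animal movements. Rates and topology are illustrative.
    import random

    random.seed(42)
    edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (2, 5)]
    out_nbrs = {}
    for a, b in edges:
        out_nbrs.setdefault(a, []).append(b)

    beta, gamma = 0.4, 0.2                 # per-contact infection / recovery
    state = {n: "S" for n in range(6)}
    state[0] = "I"                         # index holding

    for day in range(30):
        infected = [n for n, s in state.items() if s == "I"]
        for n in infected:
            for m in out_nbrs.get(n, []):  # movements seed new outbreaks
                if state[m] == "S" and random.random() < beta:
                    state[m] = "I"
            if random.random() < gamma:    # detection / stamping-out
                state[n] = "R"

    print("final states:", state)
    ```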

  17. Development of a forecasting model for brucellosis spreading in the Italian cattle trade network aimed to prioritise the field interventions

    PubMed Central

    Candeloro, L.; Conte, A.; De Massis, F.; Giovannini, A.

    2017-01-01

    Brucellosis caused by Brucella abortus is an important zoonosis that constitutes a serious hazard to public health. Prevention of human brucellosis depends on the control of the disease in animals. Livestock movement data represent a valuable source of information to understand the pattern of contacts between holdings, which may determine the inter-herd and intra-herd spread of the disease. The manuscript addresses the use of computational epidemic models rooted in the knowledge of the cattle trade network to assess the probabilities of brucellosis spread and to design control strategies. Three different spread network-based models were proposed: the DFC (Disease Flow Centrality) model, based only on the temporal cattle network structure and unrelated to the epidemiological disease parameters; a deterministic SIR (Susceptible-Infectious-Recovered) model; and a stochastic SEIR (Susceptible-Exposed-Infectious-Recovered) model in which epidemiological and demographic within-farm aspects were also modelled. Containment strategies based on farm centrality in the cattle network were tested and discussed. All three models started from the identification of the entire sub-network originating from an infected farm, up to the fifth order of contacts. Their performances were assessed on data collected in Sicily in the framework of the national eradication plan of brucellosis in 2009. Results show that the proposed methods improve the efficacy and efficiency of the tracing activities in comparison with the procedure currently adopted by the veterinary services for brucellosis control in Italy. An overall assessment shows that the SIR model is the most suitable for the practical needs of the veterinary services, being the one with the highest sensitivity and the shortest computation time. PMID:28654703

  18. Computed tomography demonstrates abnormalities of contralateral ear in subjects with unilateral sensorineural hearing loss.

    PubMed

    Marcus, Sonya; Whitlow, Christopher T; Koonce, James; Zapadka, Michael E; Chen, Michael Y; Williams, Daniel W; Lewis, Meagan; Evans, Adele K

    2014-02-01

    Prior studies have associated gross inner ear abnormalities with pediatric sensorineural hearing loss (SNHL) using computed tomography (CT). No studies to date have specifically investigated morphologic inner ear abnormalities involving the contralateral unaffected ear in patients with unilateral SNHL. The purpose of this study is to evaluate contralateral inner ear structures of subjects with unilateral SNHL but no grossly abnormal findings on CT. IRB-approved retrospective analysis of pediatric temporal bone CT scans. 97 temporal bone CT scans, previously interpreted as "normal" based upon previously accepted guidelines by board certified neuroradiologists, were assessed using 12 measurements of the semicircular canals, cochlea and vestibule. The control-group consisted of 72 "normal" temporal bone CTs with underlying SNHL in the subject excluded. The study-group consisted of 25 normal-hearing contralateral temporal bones in subjects with unilateral SNHL. Multivariate analysis of covariance (MANCOVA) was then conducted to evaluate for differences between the study and control group. Cochlea basal turn lumen width was significantly greater in magnitude and central lucency of the lateral semicircular canal bony island was significantly lower in density for audiometrically normal ears of subjects with unilateral SNHL compared to controls. Abnormalities of the inner ear were present in the contralateral audiometrically normal ears of subjects with unilateral SNHL. These data suggest that patients with unilateral SNHL may have a more pervasive disease process that results in abnormalities of both ears. The findings of a cochlea basal turn lumen width disparity >5% from "normal" and/or a lateral semicircular canal bony island central lucency disparity of >5% from "normal" may indicate inherent risk to the contralateral unaffected ear in pediatric patients with unilateral sensorineural hearing loss. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Brain-computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials.

    PubMed

    Trejo, Leonard J; Rosipal, Roman; Matthews, Bryan

    2006-06-01

    We have developed and tested two electroencephalogram (EEG)-based brain-computer interfaces (BCI) for users to control a cursor on a computer display. Our system uses an adaptive algorithm, based on kernel partial least squares classification (KPLS), to associate patterns in multichannel EEG frequency spectra with cursor controls. Our first BCI, Target Practice, is a system for one-dimensional device control, in which participants use biofeedback to learn voluntary control of their EEG spectra. Target Practice uses a KPLS classifier to map power spectra of 62-electrode EEG signals to rightward or leftward position of a moving cursor on a computer display. Three subjects learned to control motion of a cursor on a video display in multiple blocks of 60 trials over periods of up to six weeks. The best subject's average skill in correct selection of the cursor direction grew from 58% to 88% after 13 training sessions. Target Practice also implements online control of two artifact sources: 1) removal of ocular artifact by linear subtraction of wavelet-smoothed vertical and horizontal electrooculograms (EOG) signals, 2) control of muscle artifact by inhibition of BCI training during periods of relatively high power in the 40-64 Hz band. The second BCI, Think Pointer, is a system for two-dimensional cursor control. Steady-state visual evoked potentials (SSVEP) are triggered by four flickering checkerboard stimuli located in narrow strips at each edge of the display. The user attends to one of the four beacons to initiate motion in the desired direction. The SSVEP signals are recorded from 12 electrodes located over the occipital region. A KPLS classifier is individually calibrated to map multichannel frequency bands of the SSVEP signals to right-left or up-down motion of a cursor on a computer display. The display stops moving when the user attends to a central fixation point. As for Target Practice, Think Pointer also implements wavelet-based online removal of ocular artifact; however, in Think Pointer muscle artifact is controlled via adaptive normalization of the SSVEP. Training of the classifier requires about 3 min. We have tested our system in real-time operation in three human subjects. Across subjects and sessions, control accuracy ranged from 80% to 100% correct with lags of 1-5 s for movement initiation and turning. We have also developed a realistic demonstration of our system for control of a moving map display (http://ti.arc.nasa.gov/).
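
    A rough sketch of the classification idea follows, assuming a simplified "PLS on kernel features" variant: feature vectors are mapped through an RBF kernel against the training set and ordinary PLS regression is fitted on the kernel matrix. This is not the authors' exact KPLS algorithm, and the data are synthetic.

    ```python
    # Kernel-features + PLS regression as a stand-in for the KPLS classifier
    # mapping EEG spectra to cursor commands. Data and gamma are invented.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(7)
    X_train = rng.normal(size=(120, 62))           # trials x channel spectra
    y_train = np.sign(X_train[:, 0] + 0.5 * rng.normal(size=120))
    X_test = rng.normal(size=(20, 62))

    K_train = rbf_kernel(X_train, X_train, gamma=0.01)
    K_test = rbf_kernel(X_test, X_train, gamma=0.01)

    pls = PLSRegression(n_components=5).fit(K_train, y_train)
    direction = np.sign(pls.predict(K_test).ravel())  # left/right command
    print(direction)
    ```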

  20. Intelligent Control in Automation Based on Wireless Traffic Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Derr; Milos Manic

    2007-09-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies provide more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical in both maintaining the integrity of computer systems and increasing the efficient use of this technology in control type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, a significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neurofuzzy traffic analysis algorithm of this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented along with the comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control type applications, as well as making the use of it more secure.

  1. Intelligent Control in Automation Based on Wireless Traffic Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Derr; Milos Manic

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies provide more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical in both maintaining the integrity of computer systems and increasing the efficient use of this technology in control type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, a significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neurofuzzy traffic analysis algorithm of this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented along with the comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control type applications, as well as making the use of it more secure.

  2. Modulation of the Mesenchymal Stem Cell Secretome Using Computer-Controlled Bioreactors: Impact on Neuronal Cell Proliferation, Survival and Differentiation.

    PubMed

    Teixeira, Fábio G; Panchalingam, Krishna M; Assunção-Silva, Rita; Serra, Sofia C; Mendes-Pinheiro, Bárbara; Patrício, Patrícia; Jung, Sunghoon; Anjo, Sandra I; Manadas, Bruno; Pinto, Luísa; Sousa, Nuno; Behie, Leo A; Salgado, António J

    2016-06-15

    In recent years it has been shown that the therapeutic benefits of human mesenchymal stem/stromal cells (hMSCs) in the Central Nervous System (CNS) are mainly attributed to their secretome. The implementation of computer-controlled suspension bioreactors has been shown to be a viable route for the expansion of these cells to large numbers. As hMSCs actively respond to their culture environment, it has been hypothesized that their secretome can be modulated through the use of such bioreactors. Herein, we present data indicating that the use of computer-controlled suspension bioreactors enhanced the neuroregulatory profile of the hMSC secretome. Indeed, higher levels of in vitro neuronal differentiation and NOTCH1 expression were observed in human neural progenitor cells (hNPCs) incubated with the secretome of dynamically cultured hMSCs. A similar trend was observed in the hippocampal dentate gyrus (DG) of rat brains, where, upon injection, enhanced neuronal and astrocytic survival and differentiation were observed. Proteomic analysis also revealed that the dynamic culturing of hMSCs increased the secretion of several neuroregulatory molecules and miRNAs present in the hMSC secretome. In summary, the appropriate use of dynamic culture conditions can represent an important asset for the development of future neuro-regenerative strategies involving the use of the hMSC secretome.

  3. Central tarsal bone fractures in horses not used for racing: Computed tomographic configuration and long-term outcome of lag screw fixation.

    PubMed

    Gunst, S; Del Chicca, F; Fürst, A E; Kuemmerle, J M

    2016-09-01

    There are no reports on the configuration of equine central tarsal bone fractures based on cross-sectional imaging and clinical and radiographic long-term outcome after internal fixation. To report clinical, radiographic and computed tomographic findings of equine central tarsal bone fractures and to evaluate the long-term outcome of internal fixation. Retrospective case series. All horses diagnosed with a central tarsal bone fracture at our institution in 2009-2013 were included. Computed tomography and internal fixation using lag screw technique was performed in all patients. Medical records and diagnostic images were reviewed retrospectively. A clinical and radiographic follow-up examination was performed at least 1 year post operatively. A central tarsal bone fracture was diagnosed in 6 horses. Five were Warmbloods used for showjumping and one was a Quarter Horse used for reining. All horses had sagittal slab fractures that began dorsally, ran in a plantar or plantaromedial direction and exited the plantar cortex at the plantar or plantaromedial indentation of the central tarsal bone. Marked sclerosis of the central tarsal bone was diagnosed in all patients. At long-term follow-up, 5/6 horses were sound and used as intended although mild osteophyte formation at the distal intertarsal joint was commonly observed. Central tarsal bone fractures in nonracehorses had a distinct configuration but radiographically subtle additional fracture lines can occur. A chronic stress related aetiology seems likely. Internal fixation of these fractures based on an accurate diagnosis of the individual fracture configuration resulted in a very good prognosis. © 2015 EVJ Ltd.

  4. Cloud Based Educational Systems and Its Challenges and Opportunities and Issues

    ERIC Educational Resources Information Center

    Paul, Prantosh Kr.; Lata Dangwal, Kiran

    2014-01-01

    Cloud Computing (CC) is a set of hardware, software, networks, storage and services that an interface combines to deliver aspects of computing as a service. Cloud Computing (CC) uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independency and…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aegerter, P.A.

    Phillips Petroleum Company scientists and engineers have been operating petroleum refining and separations pilot plants for five years in the Process Development Center. The 26 pilot plants in this building, with one exception, operate under complete computer control, allowing maximum utilization of limited operating manpower. This centralization and automation of pilot plants has allowed Phillips to more than double the number of operating pilot plants in the petroleum refining area without an increase in manpower. At the same time, the quantity and quality of data has increased correspondingly. This paper discusses Phillips' philosophy of operation and management of these pilot plants. In addition, details of day-to-day operations and a brief description of the control system are also presented.

  6. A price mechanism for supply demand matching in local grid of households with micro-CHP

    NASA Astrophysics Data System (ADS)

    Larsen, G. K. H.; van Foreest, N. D.; Scherpen, J. M. A.

    2012-10-01

    This paper describes a dynamic price mechanism to coordinate electric power generation from micro Combined Heat and Power (micro-CHP) systems in a network of households. It is assumed that the households are prosumers, i.e. both producers and consumers of electricity. The control is done at the household level in a completely distributed manner. Avoiding a centralized controller both reduces computational complexity and preserves the communication structure in the network. Local information is used to decide whether to turn the micro-CHP on or off, but through price signals between the prosumers the network as a whole operates in a cooperative way.
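
    A generic tatonnement sketch of the price-coordination idea appears below; it is not the paper's exact mechanism. Each household switches its micro-CHP on when the price exceeds its marginal cost, and the price moves with the aggregate supply-demand imbalance. All numbers are invented.

    ```python
    # Distributed price-based supply-demand matching: local on/off decisions
    # from a shared price signal, price updated by the imbalance.
    import random

    random.seed(3)
    n = 20
    marginal_cost = [random.uniform(0.5, 1.5) for _ in range(n)]  # per house
    demand = [random.uniform(0.3, 1.2) for _ in range(n)]         # kW, fixed
    chp_output = 1.0                                              # kW when on

    price, alpha = 1.0, 0.02
    for step in range(200):
        # Each household decides from the price signal only.
        on = [price > mc for mc in marginal_cost]
        supply = chp_output * sum(on)
        imbalance = sum(demand) - supply
        price += alpha * imbalance     # price rises when demand exceeds supply

    print(f"price={price:.3f}, units on={sum(on)}, "
          f"supply={supply:.2f} kW, demand={sum(demand):.2f} kW")
    ```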

  7. Ultra-Compact Transputer-Based Controller for High-Level, Multi-Axis Coordination

    NASA Technical Reports Server (NTRS)

    Zenowich, Brian; Crowell, Adam; Townsend, William T.

    2013-01-01

    The design of machines that rely on arrays of servomotors such as robotic arms, orbital platforms, and combinations of both, imposes a heavy computational burden to coordinate their actions to perform coherent tasks. For example, the robotic equivalent of a person tracing a straight line in space requires enormously complex kinematics calculations, and complexity increases with the number of servo nodes. A new high-level architecture for coordinated servo-machine control enables a practical, distributed transputer alternative to conventional central processor electronics. The solution is inherently scalable, dramatically reduces bulkiness and number of conductor runs throughout the machine, requires only a fraction of the power, and is designed for cooling in a vacuum.

  8. Methods for computational disease surveillance in infection prevention and control: Statistical process control versus Twitter's anomaly and breakout detection algorithms.

    PubMed

    Wiemken, Timothy L; Furmanek, Stephen P; Mattingly, William A; Wright, Marc-Oliver; Persaud, Annuradha K; Guinn, Brian E; Carrico, Ruth M; Arnold, Forest W; Ramirez, Julio A

    2018-02-01

    Although not all health care-associated infections (HAIs) are preventable, reducing HAIs through targeted intervention is key to a successful infection prevention program. To identify areas in need of targeted intervention, robust statistical methods must be used when analyzing surveillance data. The objective of this study was to compare and contrast statistical process control (SPC) charts with Twitter's anomaly and breakout detection algorithms. SPC and anomaly/breakout detection (ABD) charts were created for vancomycin-resistant Enterococcus, Acinetobacter baumannii, catheter-associated urinary tract infection, and central line-associated bloodstream infection data. Both SPC and ABD charts detected similar data points as anomalous/out of control on most charts. The vancomycin-resistant Enterococcus ABD chart detected an extra anomalous point that appeared to be higher than the same time period in prior years. Using a small subset of the central line-associated bloodstream infection data, the ABD chart was able to detect anomalies where the SPC chart was not. SPC charts and ABD charts both performed well, although ABD charts appeared to work better in the context of seasonal variation and autocorrelation. Because they account for common statistical issues in HAI data, ABD charts may be useful for practitioners for analysis of HAI surveillance data. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
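
    A minimal sketch of one of the SPC tools compared in the study is shown below: a Shewhart c-chart with three-sigma limits under the Poisson assumption. The counts are fabricated for illustration. (The anomaly and breakout detectors referenced in the paper are separate open-source R packages from Twitter.)

    ```python
    # Shewhart c-chart for monthly infection counts: flag months outside
    # the 3-sigma control limits. Counts are fabricated.
    import numpy as np

    counts = np.array([4, 3, 5, 2, 6, 4, 3, 5, 4, 12, 3, 4])  # per month
    c_bar = counts.mean()
    ucl = c_bar + 3 * np.sqrt(c_bar)          # upper control limit
    lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))

    for month, c in enumerate(counts, 1):
        flag = " <-- out of control" if (c > ucl or c < lcl) else ""
        print(f"month {month:2d}: {c:2d}{flag}")
    print(f"center={c_bar:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
    ```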

  9. Self managing experiment resources

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Ubeda, M.; Tsaregorodtsev, A.; Romanovskiy, V.; Roiser, S.; Charpentier, P.; Graciani, R.

    2014-06-01

    Within this paper we present an autonomic Computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organizations' Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has generated the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the Computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques against all possible information sources available and assesses the status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  10. Pace: Privacy-Protection for Access Control Enforcement in P2P Networks

    NASA Astrophysics Data System (ADS)

    Sánchez-Artigas, Marc; García-López, Pedro

    In open environments such as peer-to-peer (P2P) systems, the decision to collaborate with multiple users — e.g., by granting access to a resource — is hard to achieve in practice due to extreme decentralization and the lack of trusted third parties. The literature contains a plethora of applications in which a scalable solution for distributed access control is crucial. This fact motivates us to propose a protocol to enforce access control, applicable to networks consisting entirely of untrusted nodes. The main feature of our protocol is that it protects both sensitive permissions and sensitive policies, and does not rely on any centralized authority. We analyze the efficiency (computational effort and communication overhead) as well as the security of our protocol.

  11. On the decentralized observer/controller strategy for disturbance rejection

    NASA Astrophysics Data System (ADS)

    Faria, Cassio T.; Dong, ZhongZhe; Zhang, Xueji

    2017-04-01

    Centralized controllers and state estimation strategies are the most common topology implemented for disturbance rejection in smart systems. Although this architecture has been proven feasible, advances in the computational power and size of logic devices now also enable the implementation of decentralized strategies which, on a large scale, can lead to an overall reliability increase of the system. The goal of this paper is to present this concept within the field of disturbance rejection for smart systems and to explore its capabilities and bottlenecks. A simple example is carried out for a composite plate with two controller/estimator centers. Simulation is used to validate the efficacy of the topology, and a design procedure is proposed to guarantee the consensus of the agents within the network.

  12. The Mariner Venus Mercury flight data subsystem.

    NASA Technical Reports Server (NTRS)

    Whitehead, P. B.

    1972-01-01

    The flight data subsystem (FDS) discussed handles both the engineering and scientific measurements performed on the MVM'73. It formats the data into serial data streams and sends them to the modulation/demodulation subsystem for transmission to earth or to the data storage subsystem for storage on a digital tape recorder. The FDS is controlled by serial digital words, called coded commands, received from the central computer sequencer or from the ground via the modulation/demodulation subsystem. The eight major blocks of the FDS are: power converter, timing and control, engineering data, memory, memory input/output and control, nonimaging data, imaging data, and data output. The FDS incorporates some 4000 components, weighs 17 kg, and uses 35 W of power. General data on the mission and spacecraft are given.

  13. Propulsion/flight control integration technology (PROFIT) design analysis status

    NASA Technical Reports Server (NTRS)

    Carlin, C. M.; Hastings, W. J.

    1978-01-01

    The propulsion flight control integration technology (PROFIT) program was designed to develop a flying testbed dedicated to controls research. The preliminary design, analysis, and feasibility studies conducted in support of the PROFIT program are reported. The PROFIT system was built around existing IPCS hardware. In order to achieve the desired system flexibility and capability, additional interfaces between the IPCS hardware and F-15 systems were required. The requirements for additions and modifications to the existing hardware were defined. Those interfaces involving the more significant changes were studied. The DCU memory expansion to 32K with flight qualified hardware was completed on a brassboard basis. The uplink interface breadboard and a brassboard of the central computer interface were also tested. Two preliminary designs and corresponding program plans are presented.

  14. Data management applications

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Kennedy Space Center's primary institutional computer is a 4 megabyte IBM 4341 with 3.175 billion characters of IBM 3350 disc storage. This system utilizes the Software AG product known as ADABAS with the on line user oriented features of NATURAL and COMPLETE as a Data Base Management System (DBMS). It is operational under OS/VS1 and is currently supporting batch/on line applications such as Personnel, Training, Physical Space Management, Procurement, Office Equipment Maintenance, and Equipment Visibility. A third and by far the largest DBMS application is known as the Shuttle Inventory Management System (SIMS) which is operational on a Honeywell 6660 (dedicated) computer system utilizing Honeywell Integrated Data Storage I (IDSI) as the DBMS. The SIMS application is designed to provide central supply system acquisition, inventory control, receipt, storage, and issue of spares, supplies, and materials.

  15. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provided flexibility for the Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process, the following discussions are presented: theory of operation; system architecture; using the prototype; software description; research tools; prototype evaluation; and outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs that facilitate the user's access to Display Sharing on that host machine.

  16. Asynchronous Data Retrieval from an Object-Oriented Database

    NASA Astrophysics Data System (ADS)

    Gilbert, Jonathan P.; Bic, Lubomir

    We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.
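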
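
    The message-driven scheme described above lends itself to a compact illustration. The sketch below is hypothetical (the node class, arc labels, and query shape are our inventions, not the paper's model): each database node is a logical processing element with its own state, and a query travels as a message along directed arcs, so no shared memory or central coordinator is needed.

      # Hypothetical sketch of message-driven query processing (not the
      # paper's code). Each node is a logical processing element; queries
      # travel as messages along directed arcs, with no shared memory.
      from collections import deque

      class Node:
          def __init__(self, name, **properties):
              self.name = name
              self.properties = properties
              self.arcs = {}                      # arc label -> target Node

          def connect(self, label, target):
              self.arcs[label] = target

      def query(start, path, wanted):
          """Follow a path of arc labels, then return the requested property."""
          inbox = deque([(start, list(path))])    # message queue replaces shared state
          while inbox:
              node, remaining = inbox.popleft()
              if not remaining:
                  return node.properties.get(wanted)
              label = remaining[0]
              if label in node.arcs:              # forward the message along the arc
                  inbox.append((node.arcs[label], remaining[1:]))
          return None

      dept = Node("Physics", building="Science Hall")
      alice = Node("Alice")
      alice.connect("works_in", dept)
      print(query(alice, ["works_in"], "building"))   # -> Science Hall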

  17. Gait Planning and Stability Control of a Quadruped Robot

    PubMed Central

    Li, Junmin; Wang, Jinge; Yang, Simon X.; Zhou, Kedong; Tang, Huijuan

    2016-01-01

    In order to realize smooth gait planning and stability control of a quadruped robot, a new controller algorithm based on CPG-ZMP (central pattern generator-zero moment point) is put forward in this paper. To generate smooth gaits and shorten the adjusting time of the model oscillation system, a new CPG model controller and its gait switching strategy based on the Wilson-Cowan model are presented. The control signals of the knee-hip joints are obtained by the improved multi-DOF reduced-order control theory. To realize stability control, adaptive speed adjustment and gait switching are accomplished by real-time computation of the ZMP. Experimental results show that the quadruped robot's gaits are efficiently generated and the gait switch is smooth under the CPG control algorithm. Meanwhile, the stability of the robot's movement is greatly improved with the CPG-ZMP algorithm. The algorithm has good practicability, which lays a foundation for the production of the robot prototype. PMID:27143959

  18. Gait Planning and Stability Control of a Quadruped Robot.

    PubMed

    Li, Junmin; Wang, Jinge; Yang, Simon X; Zhou, Kedong; Tang, Huijuan

    2016-01-01

    In order to realize smooth gait planning and stability control of a quadruped robot, a new controller algorithm based on CPG-ZMP (central pattern generator-zero moment point) is put forward in this paper. To generate smooth gaits and shorten the adjusting time of the model oscillation system, a new CPG model controller and its gait switching strategy based on the Wilson-Cowan model are presented. The control signals of the knee-hip joints are obtained by the improved multi-DOF reduced-order control theory. To realize stability control, adaptive speed adjustment and gait switching are accomplished by real-time computation of the ZMP. Experimental results show that the quadruped robot's gaits are efficiently generated and the gait switch is smooth under the CPG control algorithm. Meanwhile, the stability of the robot's movement is greatly improved with the CPG-ZMP algorithm. The algorithm has good practicability, which lays a foundation for the production of the robot prototype.
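
    For readers unfamiliar with Wilson-Cowan oscillators, a minimal sketch of a single excitatory/inhibitory CPG unit is given below. The parameters and the plain logistic firing-rate function are illustrative choices of ours, not the cited controller's values; whether the pair settles to a fixed point or a limit cycle depends on those choices.

      # Minimal Wilson-Cowan CPG unit (illustrative parameters, not the
      # cited controller's). Euler integration of coupled excitatory (E)
      # and inhibitory (I) rate equations; E could drive a joint command.
      import math

      def S(x):
          """Sigmoidal firing-rate function."""
          return 1.0 / (1.0 + math.exp(-x))

      def simulate(T=200.0, dt=0.01):
          E, I = 0.1, 0.05                  # excitatory / inhibitory activity
          tau_e, tau_i = 1.0, 2.0           # time constants
          w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0
          P = 1.25                          # external drive (gait command)
          trace, t = [], 0.0
          while t < T:
              dE = (-E + S(w_ee * E - w_ei * I + P)) / tau_e
              dI = (-I + S(w_ie * E - w_ii * I)) / tau_i
              E += dt * dE
              I += dt * dI
              trace.append(E)
              t += dt
          return trace

      rhythm = simulate()
      print(min(rhythm), max(rhythm))       # inspect the range of E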

  19. 77 FR 26660 - Guidelines for the Transfer of Excess Computers or Other Technical Equipment Pursuant to Section...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ....usda.gov . SUPPLEMENTARY INFORMATION: A. Background A proposed rule was published in the Federal.... Computers or other technical equipment means central processing units, laptops, desktops, computer mouses...

  20. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  1. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  2. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  3. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  4. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  5. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  6. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  7. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  8. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  9. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the Computer...

  10. Nuclear spin noise in the central spin model

    NASA Astrophysics Data System (ADS)

    Fröhling, Nina; Anders, Frithjof B.; Glazov, Mikhail

    2018-05-01

    We study theoretically the fluctuations of the nuclear spins in quantum dots employing the central spin model which accounts for the hyperfine interaction of the nuclei with the electron spin. These fluctuations are calculated both with an analytical approach using homogeneous hyperfine couplings (box model) and with a numerical simulation using a distribution of hyperfine coupling constants. The approaches are in good agreement. The box model serves as a benchmark with low computational cost that explains the basic features of the nuclear spin noise well. We also demonstrate that the nuclear spin noise spectra comprise a two-peak structure centered at the nuclear Zeeman frequency in high magnetic fields with the shape of the spectrum controlled by the distribution of the hyperfine constants. This allows for direct access to this distribution function through nuclear spin noise spectroscopy.
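
    For reference, the central spin Hamiltonian underlying such calculations has a standard form; the notation below is ours and the paper may include further terms, but the "box model" referred to above corresponds to taking all hyperfine constants equal.

      % Central spin model: electron spin S in a field (Zeeman frequency
      % omega_e) coupled to N nuclear spins I_k (Zeeman frequency omega_N)
      % via hyperfine constants A_k.
      \begin{align}
        H &= \hbar\,\omega_e S^z
           + \hbar\,\omega_N \sum_{k=1}^{N} I_k^z
           + \sum_{k=1}^{N} A_k\, \mathbf{S}\cdot\mathbf{I}_k ,\\
        \text{box model:}\quad A_k &= A \quad \text{for all } k .
      \end{align}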

  11. Computer-Enriched Instruction (CEI) Is Better for Preview Material Instead of Review Material: An Example of a Biostatistics Chapter, the Central Limit Theorem

    ERIC Educational Resources Information Center

    See, Lai-Chu; Huang, Yu-Hsun; Chang, Yi-Hu; Chiu, Yeo-Ju; Chen, Yi-Fen; Napper, Vicki S.

    2010-01-01

    This study examines the timing of using computer-enriched instruction (CEI), before or after a traditional lecture, to determine the cross-over effect, period effect, and learning effect arising from the sequencing of instruction. A 2 x 2 cross-over design was used with CEI to teach the central limit theorem (CLT). Two sequences of graduate students in nursing…

  12. 23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR HOUSED ADMINISTRATIVE OFFICES, THE CENTRAL COMPUTING, UTILITY SYSTEMS, ANALYTICAL LABORATORIES, AND MAINTENANCE SHOPS. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  13. Neural dynamics in reconfigurable silicon.

    PubMed

    Basu, A; Ramakrishnan, S; Petre, C; Koziol, S; Brink, S; Hasler, P E

    2010-10-01

    A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations and integrate-and-fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. This chip provides a platform not only for simulating detailed neuron dynamics but also for interfacing with actual cells in applications such as a dynamic clamp. There are 28 computational analog blocks (CABs), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. There are four other CABs which have programmable bias generators. The programmability is achieved using floating-gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50 000 possible 9-b accurate synapses in 9 mm².

  14. Two-way cable television project

    NASA Astrophysics Data System (ADS)

    Wilkens, H.; Guenther, P.; Kiel, F.; Kraus, F.; Mahnkopf, P.; Schnee, R.

    1982-02-01

    The market demand for a multiuser computer system with interactive services was studied. The mean system workload at peak-use hours was estimated and the complexity of dialog with a central computer was determined. Man-machine communication by broadband cable television transmission, using digital techniques, was assumed. The end-to-end system is described. It is user friendly, able to handle 10,000 subscribers, and provides color television display. The central computer system architecture with remote audiovisual terminals is depicted and the software is explained. Signal transmission requirements are dealt with. International availability of the test system, including sample programs, is indicated.

  15. Synthetic analog computation in living cells.

    PubMed

    Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K

    2013-05-30

    A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
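
    The "logarithmically linear sensing" described here can be summarised schematically (this is a generic transfer function of the kind the abstract describes, not the paper's biochemical model): the circuit output tracks the logarithm of the input, so equal fold-changes in input produce equal increments in output, which is exactly Weber's-law behaviour.

      % Schematic log-linear transfer function for an analog gene circuit:
      % output y responds linearly to the logarithm of input x over the
      % circuit's dynamic range.
      \begin{equation}
        y \;\approx\; a + b \ln x
        \qquad\Longrightarrow\qquad
        \Delta y \;=\; b \ln\!\frac{x_2}{x_1},
      \end{equation}
      % so a fixed fold-change x_2/x_1 yields a fixed increment in y
      % (Weber's law), independent of the absolute input level.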

  16. Replication of Space-Shuttle Computers in FPGAs and ASICs

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.

    2008-01-01

    A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.

  17. Self-management support using an Internet-linked tablet computer (the EDGE platform)-based intervention in chronic obstructive pulmonary disease: protocol for the EDGE-COPD randomised controlled trial.

    PubMed

    Farmer, Andrew; Toms, Christy; Hardinge, Maxine; Williams, Veronika; Rutter, Heather; Tarassenko, Lionel

    2014-01-08

    The potential for telehealth-based interventions to provide remote support, education and improve self-management for long-term conditions is increasingly recognised. This trial aims to determine whether an intervention delivered through an easy-to-use tablet computer can improve the quality of life of patients with chronic obstructive pulmonary disease (COPD) by providing personalised self-management information and education. The EDGE (sElf management anD support proGrammE) for COPD is a multicentre, randomised controlled trial designed to assess the efficacy of an Internet-linked tablet computer-based intervention (the EDGE platform) in improving quality of life in patients with moderate to very severe COPD compared with usual care. Eligible patients are randomly allocated to receive the tablet computer-based intervention or usual care in a 2:1 ratio using a web-based randomisation system. Participants are recruited from respiratory outpatient clinics and pulmonary rehabilitation courses as well as from those recently discharged from hospital with a COPD-related admission and from primary care clinics. Participants allocated to the tablet computer-based intervention complete a daily symptom diary and record clinical symptoms using a Bluetooth-linked pulse oximeter. Participants allocated to receive usual care are provided with all the information given to those allocated to the intervention but without the use of the tablet computer or the facility to monitor their symptoms or physiological variables. The primary outcome of quality of life is measured using the St George's Respiratory Questionnaire for COPD patients (SGRQ-C) baseline, 6 and 12 months. Secondary outcome measures are recorded at these intervals in addition to 3 months. The Research Ethics Committee for Berkshire-South Central has provided ethical approval for the conduct of the study in the recruiting regions. The results of the study will be disseminated through peer review publications and conference presentations. Current controlled trials ISRCTN40367841.

  18. Evolution of the Hubble Space Telescope Safing Systems

    NASA Technical Reports Server (NTRS)

    Pepe, Joyce; Myslinski, Michael

    2006-01-01

    The Hubble Space Telescope (HST) was launched on April 24, 1990, with an expected lifespan of 15 years. Central to the spacecraft design was the concept of a series of on-orbit shuttle servicing missions permitting astronauts to replace failed equipment, update the scientific instruments and keep the HST at the forefront of astronomical discoveries. One key to the success of the Hubble mission has been the robust Safing systems designed to monitor the performance of the observatory and to react to keep the spacecraft safe in the event of an equipment anomaly. The spacecraft Safing System consists of a range of software tests in the primary flight computer that evaluate the performance of mission critical hardware, safe modes that are activated when the primary control mode is deemed inadequate for protecting the vehicle, and special actions that the computer can take to autonomously reconfigure critical hardware. The HST Safing System was structured to autonomously detect electrical power system, data management system, and pointing control system malfunctions and to configure the vehicle to ensure safe operation without ground intervention for up to 72 hours. There is also a dedicated safe mode computer that constantly monitors a keep-alive signal from the primary computer. If this signal stops, the safe mode computer shuts down the primary computer and takes over control of the vehicle, putting it into a safe, low-power configuration. The HST Safing system has continued to evolve as equipment has aged, as new hardware has been installed on the vehicle, and as the operation modes have matured during the mission. Along with the continual refinement of the limits used in the safing tests, several new tests have been added to the monitoring system, and new safe modes have been added to the flight software. This paper will focus on the evolution of the HST Safing System and Safing tests, and the importance of this evolution to prolonging the science operations of the telescope.
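
    The keep-alive arrangement described above is essentially a hardware watchdog pattern. The sketch below is a generic illustration of that pattern; all names, timings, and actions are invented for illustration and are not HST flight software.

      # Generic keep-alive watchdog, loosely modeled on the safing
      # description above. Names and timings are invented.
      import time

      class SafeModeComputer:
          def __init__(self, timeout_s=3.0):
              self.timeout_s = timeout_s
              self.last_heartbeat = time.monotonic()

          def heartbeat(self):
              """Called periodically by the primary computer while healthy."""
              self.last_heartbeat = time.monotonic()

          def monitor_step(self):
              """Returns True if the primary is alive; otherwise takes over."""
              if time.monotonic() - self.last_heartbeat > self.timeout_s:
                  self.enter_safe_mode()
                  return False
              return True

          def enter_safe_mode(self):
              # Placeholder actions: shut down the primary and put the
              # vehicle into a safe, low-power configuration.
              print("keep-alive lost: assuming control, entering safe mode")

      watchdog = SafeModeComputer(timeout_s=3.0)
      watchdog.heartbeat()              # primary signals it is healthy
      assert watchdog.monitor_step()    # within timeout: no takeover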

  19. Embracing the Cloud: Six Ways to Look at the Shift to Cloud Computing

    ERIC Educational Resources Information Center

    Ullman, David F.; Haggerty, Blake

    2010-01-01

    Cloud computing is the latest paradigm shift for the delivery of IT services. Where previous paradigms (centralized, decentralized, distributed) were based on fairly straightforward approaches to technology and its management, cloud computing is radical in comparison. The literature on cloud computing, however, suffers from many divergent…

  20. Multispectral imaging system for contaminant detection

    NASA Technical Reports Server (NTRS)

    Poole, Gavin H. (Inventor)

    2003-01-01

    An automated inspection system for detecting digestive contaminants on food items as they are being processed for consumption includes a conveyor for transporting the food items, a light sealed enclosure which surrounds a portion of the conveyor, with a light source and a multispectral or hyperspectral digital imaging camera disposed within the enclosure. Operation of the conveyor, light source and camera are controlled by a central computer unit. Light reflected by the food items within the enclosure is detected in predetermined wavelength bands, and detected intensity values are analyzed to detect the presence of digestive contamination.
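
    As a rough illustration of the band-wise analysis the patent abstract describes, the sketch below flags pixels whose intensity ratio between two wavelength bands is anomalous. The specific bands, the ratio test, and the threshold are our assumptions; the abstract only states that detected intensities in predetermined bands are analyzed.

      # Illustrative per-band contaminant flagging (bands, ratio test and
      # threshold are invented assumptions, not the patented method).
      import numpy as np

      def flag_contamination(cube, band_a=0, band_b=1, ratio_threshold=1.4):
          """cube: H x W x B array of detected intensities per wavelength
          band. Flags pixels with an anomalous band_a/band_b ratio."""
          a = cube[:, :, band_a].astype(float)
          b = np.clip(cube[:, :, band_b].astype(float), 1e-6, None)
          return (a / b) > ratio_threshold      # boolean contamination mask

      cube = np.random.rand(4, 4, 3)            # stand-in for a camera frame
      mask = flag_contamination(cube)
      print(mask.sum(), "pixels flagged")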

  1. Noise Effects on Entangled Coherent State Generated via Atom-Field Interaction and Beam Splitter

    NASA Astrophysics Data System (ADS)

    Najarbashi, G.; Mirzaei, S.

    2016-05-01

    In this paper, we introduce a controllable method for producing two- and three-mode entangled coherent states (ECSs) using atom-field interaction in cavity QED and a beam splitter. The generated states play central roles in linear optics, quantum computation and teleportation. We especially focus on qubit-, qutrit- and qufit-like ECSs and investigate their entanglement using the concurrence measure. Moreover, we illustrate the decoherence properties of ECSs due to noisy channels, using the negativity measure. At the end, the effect of noise on the monogamy inequality is discussed.
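
    For concreteness, a two-mode entangled coherent state of the kind studied here has the standard textbook form below (standard notation; the paper's specific states and phases may differ).

      % Standard two-mode entangled coherent state built from coherent
      % states |alpha> and |-alpha>. The normalization uses the overlap
      % <alpha|-alpha> = exp(-2|alpha|^2).
      \begin{equation}
        |\Psi_{\pm}\rangle
          = N_{\pm}\bigl(|\alpha\rangle|\alpha\rangle
            \pm |{-\alpha}\rangle|{-\alpha}\rangle\bigr),
        \qquad
        N_{\pm} = \frac{1}{\sqrt{2\,\bigl(1 \pm e^{-4|\alpha|^{2}}\bigr)}} .
      \end{equation}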

  2. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    PubMed Central

    2010-01-01

    Background: The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures.

    Methods: Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves.

    Results: Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells.

    Conclusions: The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems. PMID:20092652

  3. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    PubMed

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems.

  4. A Fuzzy-Based Control Method for Smoothing Power Fluctuations in Substations along High-Speed Railways

    NASA Astrophysics Data System (ADS)

    Sugio, Tetsuya; Yamamoto, Masayoshi; Funabiki, Shigeyuki

    The use of an SMES (Superconducting Magnetic Energy Storage) for smoothing power fluctuations in a railway substation has been discussed. This paper proposes a smoothing control method based on fuzzy reasoning for reducing the SMES capacity at substations along high-speed railways. The proposed smoothing control method comprises three countermeasures for reduction of the SMES capacity. The first countermeasure involves modification of rule 1 for smoothing out the fluctuating electric power to its average value. The other countermeasures involve the modification of the central value of the stored energy control in the SMES and revision of the membership function in rule 2 for reduction of the SMES capacity. The SMES capacity in the proposed smoothing control method is reduced by 49.5% when compared to that in the nonrevised control method. It is confirmed by computer simulations that the proposed control method is suitable for smoothing out power fluctuations in substations along high-speed railways and for reducing the SMES capacity.
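
    The core of such smoothing, apart from the fuzzy tuning, is to have the storage device absorb the difference between the instantaneous load and its running average. The sketch below shows only that core balance in Python; the window length and units are invented, and the cited paper's fuzzy rules and capacity-reduction countermeasures are not modeled.

      # Minimal power-smoothing sketch: the SMES absorbs or injects the
      # difference between instantaneous substation load and its running
      # average. The fuzzy controller of the cited paper is NOT modeled.
      def smooth(load_series, window=30):
          history, smes_energy, grid_power = [], 0.0, []
          for p_load in load_series:
              history.append(p_load)
              if len(history) > window:
                  history.pop(0)
              p_avg = sum(history) / len(history)   # target grid-side power
              p_smes = p_load - p_avg               # + : SMES discharges
              smes_energy -= p_smes                 # stored energy (arbitrary units)
              grid_power.append(p_avg)
          return grid_power, smes_energy

      grid, residual = smooth([10, 50, 5, 45, 12, 48] * 5)
      print("smoothed swing:", max(grid) - min(grid), " raw swing:", 45)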

  5. Network Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    1996-05-01

    The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown into an enterprise-wide information system which is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. For people, NWIS provides source information to the enterprise person data repository for select contractors and visitors; generates and tracks unique usernames and Unix user IDs for every individual granted cyber access; and tracks accounts for centrally managed computing resources, monitoring and controlling the reauthorization of the accounts in accordance with the DOE-mandated interval. For computing devices, NWIS generates unique names for all devices registered in the system and tracks, for each device: manufacturer, make, model, Sandia property number, vendor serial number, operating system and operating system version, owner, device location, amount of memory, amount of disk space, and level of support provided for the machine. It also tracks the hardware address for network cards; tracks the IP addresses registered to computing devices along with the canonical and alias names for each address; updates the Dynamic Domain Name Service (DDNS) for canonical and alias names; creates the configuration files for DHCP to control the DHCP ranges and allow access to only properly registered computers; tracks and monitors classified security plans for stand-alone computers; tracks the configuration requirements used to set up the machine; tracks the roles people have on machines (system administrator, administrative access, user, etc.); and allows system administrators to track changes made on the machine (both hardware and software). NWIS also generates an adjustment history of changes on selected fields.

  6. Computer and photogrammetric general land use study of central north Alabama

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Larsen, P. A.; Campbell, C. W.

    1974-01-01

    The object of this report is to acquaint potential users with two computer programs developed at the NASA Marshall Space Flight Center. They were used in producing a land use survey and maps of central north Alabama from Earth Resources Technology Satellite (ERTS) digital data. The report describes in detail the thought processes and analysis procedures used from the initiation of the land use study to its completion, as well as a photogrammetric study that was used in conjunction with the computer analysis to produce similar land use maps. The results of the land use demonstration indicate that, with respect to computer time and cost, such a study may be economically and realistically feasible on a statewide basis.

  7. Baseline Architecture of ITER Control System

    NASA Astrophysics Data System (ADS)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  8. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  9. AnimatLab: a 3D graphics environment for neuromechanical simulations.

    PubMed

    Cofer, David; Cymbalyuk, Gennady; Reid, James; Zhu, Ying; Heitler, William J; Edwards, Donald H

    2010-03-30

    The nervous systems of animals evolved to exert dynamic control of behavior in response to the needs of the animal and changing signals from the environment. To understand the mechanisms of dynamic control requires a means of predicting how individual neural and body elements will interact to produce the performance of the entire system. AnimatLab is a software tool that provides an approach to this problem through computer simulation. AnimatLab enables a computational model of an animal's body to be constructed from simple building blocks, situated in a virtual 3D world subject to the laws of physics, and controlled by the activity of a multicellular, multicompartment neural circuit. Sensor receptors on the body surface and inside the body respond to external and internal signals and then excite central neurons, while motor neurons activate Hill muscle models that span the joints and generate movement. AnimatLab provides a common neuromechanical simulation environment in which to construct and test models of any skeletal animal, vertebrate or invertebrate. The use of AnimatLab is demonstrated in a neuromechanical simulation of human arm flexion and the myotactic and contact-withdrawal reflexes.

  10. Modeling Memory for Language Understanding.

    DTIC Science & Technology

    1982-02-01

    Research on natural language understanding by computer has shown that the nature and organization of memory plays a central role in the understanding mechanism. Further, we claim that such reminding is at the root of how we learn. Issues such as these have played an important part in shaping the…

  11. Measurement and control system for cryogenic helium gas bearing turbo-expander experimental platform based on Siemens PLC S7-300

    NASA Astrophysics Data System (ADS)

    Li, J.; Xiong, L. Y.; Peng, N.; Dong, B.; Wang, P.; Liu, L. Q.

    2014-01-01

    An experimental platform for cryogenic Helium gas bearing turbo-expanders is established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. This turbo-expander experimental platform is designed for performance testing and experimental research on Helium turbo-expanders of different sizes, from the liquid hydrogen temperature to the room temperature region. A measurement and control system based on Siemens PLC S7-300 for this turbo-expander experimental platform is developed. Proper sensors are selected to measure such parameters as temperature, pressure, rotation speed and air flow rate. All the collected data to be processed are transformed and transmitted to the S7-300 CPU. A Siemens S7-300 series PLC CPU315-2PN/DP serves as the master station, and two sets of ET200M DP remote expansion I/O serve as slave stations. Profibus-DP field communication is established between the master station and the slave stations. The upper-computer Human Machine Interface (HMI) is built with the Siemens configuration software WinCC V6.2. The upper computer communicates with the PLC over industrial Ethernet. Centralized monitoring and distributed control are achieved. Experimental results show that this measurement and control system fulfills the test requirements for the turbo-expander experimental platform.

  12. Measurement and control system for cryogenic helium gas bearing turbo-expander experimental platform based on Siemens PLC S7-300

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J.; Xiong, L. Y.; Peng, N.

    2014-01-29

    An experimental platform for cryogenic Helium gas bearing turbo-expanders is established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. This turbo-expander experimental platform is designed for performance testing and experimental research on Helium turbo-expanders of different sizes, from the liquid hydrogen temperature to the room temperature region. A measurement and control system based on Siemens PLC S7-300 for this turbo-expander experimental platform is developed. Proper sensors are selected to measure such parameters as temperature, pressure, rotation speed and air flow rate. All the collected data to be processed are transformed and transmitted to the S7-300 CPU. A Siemens S7-300 series PLC CPU315-2PN/DP serves as the master station, and two sets of ET200M DP remote expansion I/O serve as slave stations. Profibus-DP field communication is established between the master station and the slave stations. The upper-computer Human Machine Interface (HMI) is built with the Siemens configuration software WinCC V6.2. The upper computer communicates with the PLC over industrial Ethernet. Centralized monitoring and distributed control are achieved. Experimental results show that this measurement and control system fulfills the test requirements for the turbo-expander experimental platform.

  13. Computer input and output files associated with ground-water-flow simulations of the Albuquerque Basin, central New Mexico, 1901-95, with projections to 2020; (supplement three to U.S. Geological Survey Water-resources investigations report 94-4251)

    USGS Publications Warehouse

    Kernodle, J.M.

    1996-01-01

    This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.) and revised by Kernodle (Kernodle, J.M., 1998, Simulation of ground-water flow in the Albuquerque Basin, 1901-95, with projections to 2020 (supplement two to U.S. Geological Survey Water-Resources Investigations Report 94-4251): U.S. Geological Survey Open-File Report 96-209, 54 p.). Output files resulting from the computer simulations are included for reference.

  14. Iterative evaluation in a mobile counseling and testing program to reach people of color at risk for HIV--new strategies improve program acceptability, effectiveness, and evaluation capabilities.

    PubMed

    Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles

    2011-06-01

    This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program's results show that an increased focus on mobile outreach using rapid testing, incentives and health information technology tools may improve program acceptability, quality, productivity and timeliness of reports. This article describes program design decisions based on continuous quality assessment efforts. It also examines the impact of the Computer Assessment and Risk Reduction Education computer tool on HIV testing rates, staff perception of counseling quality, program productivity, and on the timeliness of evaluation reports. The article concludes with a discussion of implications for programmatic responses to the Centers for Disease Control and Prevention's HIV testing recommendations.

  15. ITERATIVE EVALUATION IN A MOBILE COUNSELING AND TESTING PROGRAM TO REACH PEOPLE OF COLOR AT RISK FOR HIV—NEW STRATEGIES IMPROVE PROGRAM ACCEPTABILITY, EFFECTIVENESS, AND EVALUATION CAPABILITIES

    PubMed Central

    Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles

    2016-01-01

    This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program’s results show that an increased focus on mobile outreach using rapid testing, incentives and health information technology tools may improve program acceptability, quality, productivity and timeliness of reports. This article describes program design decisions based on continuous quality assessment efforts. It also examines the impact of the Computer Assessment and Risk Reduction Education computer tool on HIV testing rates, staff perception of counseling quality, program productivity, and on the timeliness of evaluation reports. The article concludes with a discussion of implications for programmatic responses to the Centers for Disease Control and Prevention’s HIV testing recommendations. PMID:21689041

  16. Central FPGA-based destination and load control in the LHCb MHz event readout

    NASA Astrophysics Data System (ADS)

    Jacobsson, R.

    2012-10-01

    The readout strategy of the LHCb experiment is based on complete event readout at 1 MHz. A set of 320 sub-detector readout boards transmit event fragments at a total rate of 24.6 MHz, at a bandwidth usage of up to 70 GB/s, over a commercial switching network based on Gigabit Ethernet to a distributed event building and high-level trigger processing farm with 1470 individual multi-core computer nodes. In the original specifications, the readout was based on a pure push protocol. This paper describes the proposal, implementation, and experience of a non-conventional mixture of a push and a pull protocol, akin to credit-based flow control. An FPGA-based central master module, partly operating at the LHC bunch clock frequency of 40.08 MHz and partly at double that speed, is in charge of the entire trigger and readout control from the front-end electronics up to the high-level trigger farm. One FPGA is dedicated to controlling the event fragment packing in the readout boards, assigning the farm node destination for each event, and controlling the farm load based on an asynchronous pull mechanism from each farm node. This dynamic readout scheme relies on generic event requests and the concept of node credit, allowing load control and trigger rate regulation as a function of the global farm load. It also allows the vital task of fast central monitoring and automatic in-flight recovery of failing nodes while keeping dead-time and event loss to a minimum. This paper demonstrates the strength and suitability of implementing this real-time task for a very large distributed system in an FPGA, where no random delays are introduced and where extreme reliability and accurate event accounting are fundamental requirements. It was in use during the entire commissioning phase of LHCb and has been in faultless operation during the first two years of physics luminosity data taking.
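
    The credit-based pull mechanism can be sketched generically: farm nodes grant credits when they have spare capacity, and the master assigns each event only to a node holding credit, so the farm throttles the readout automatically. The sketch below is a schematic of the concept in Python, not the LHCb FPGA logic; names and the queueing discipline are our inventions.

      # Schematic credit-based event dispatch: farm nodes grant credits
      # (pull), the master pushes an event only where credit is available.
      # Illustrates the concept above, not the actual FPGA implementation.
      from collections import deque

      class Master:
          def __init__(self, nodes):
              self.credits = {n: 0 for n in nodes}
              self.ready = deque()                 # nodes with spare capacity

          def grant_credit(self, node, n=1):
              """Event request from a farm node: grant n credits."""
              self.credits[node] += n
              self.ready.append(node)

          def assign(self, event_id):
              """Pick a destination holding credit, or hold the event back."""
              while self.ready:
                  node = self.ready.popleft()
                  if self.credits[node] > 0:
                      self.credits[node] -= 1
                      if self.credits[node] > 0:
                          self.ready.append(node)
                      return node                  # destination for this event
              return None                          # throttle: no credit anywhere

      m = Master(["farm01", "farm02"])
      m.grant_credit("farm01", 2)
      print([m.assign(e) for e in range(3)])       # third event is held back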

  17. Design of the Wind Tunnel Model Communication Controller Board. Degree awarded by Christopher Newport Univ. on Dec. 1998

    NASA Technical Reports Server (NTRS)

    Wilson, William C.

    1999-01-01

    The NASA Langley Research Center's Wind Tunnel Reinvestment project plans to shrink the existing data acquisition electronics to fit inside a wind tunnel model. Space limitations within a model necessitate a distributed system of Application Specific Integrated Circuits (ASICs) rather than a centralized system based on PC boards. This thesis focuses on the design of the prototype of the Communication Controller board. A portion of the Communication Controller board is to be used as the basis of an ASIC design. The Communication Controller board will communicate between the internal model modules and the external data acquisition computer. This board is based around a Field Programmable Gate Array (FPGA) to allow for reconfigurability. In addition to the FPGA, this board contains buffer Random Access Memory (RAM), configuration memory (EEPROM), drivers for the communications ports, and passive components.

  18. Space station dynamics, attitude control and momentum management

    NASA Technical Reports Server (NTRS)

    Sunkel, John W.; Singh, Ramen P.; Vengopal, Ravi

    1989-01-01

    The Space Station Attitude Control System software test-bed provides a rigorous environment for the design, development and functional verification of GN and C algorithms and software. The approach taken for the simulation of the vehicle dynamics and environmental models using a computationally efficient algorithm is discussed. The simulation includes capabilities for docking/berthing dynamics, prescribed motion dynamics associated with the Mobile Remote Manipulator System (MRMS) and microgravity disturbances. The vehicle dynamics module interfaces with the test-bed through the central Communicator facility which is in turn driven by the Station Control Simulator (SCS) Executive. The Communicator addresses issues such as the interface between the discrete flight software and the continuous vehicle dynamics, and multi-programming aspects such as the complex flow of control in real-time programs. Combined with the flight software and redundancy management modules, the facility provides a flexible, user-oriented simulation platform.

  19. Robot Task Commander with Extensible Programming Environment

    NASA Technical Reports Server (NTRS)

    Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)

    2014-01-01

    A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.

  20. An overview of adaptive model theory: solving the problems of redundancy, resources, and nonlinear interactions in human movement control.

    PubMed

    Neilson, Peter D; Neilson, Megan D

    2005-09-01

    Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.

  1. BridgeRank: A novel fast centrality measure based on local structure of the network

    NASA Astrophysics Data System (ADS)

    Salavati, Chiman; Abdollahpouri, Alireza; Manbari, Zhaleh

    2018-04-01

    Ranking nodes in complex networks has become an important task in many application domains. In a complex network, influential nodes are those that have the greatest spreading ability. Thus, identifying influential nodes based on their spreading ability is a fundamental task in applications such as viral marketing. One of the most important centrality measures for ranking nodes is closeness centrality, which is effective but suffers from high computational complexity, O(n³). This paper improves on closeness centrality by utilizing the local structure of nodes and presents a new ranking algorithm, called BridgeRank centrality. The proposed method computes a local centrality value for each node. For this purpose, communities are first detected, and the relationships between communities are ignored entirely. Then, by applying a centrality measure within each community, one best critical node is extracted from each community. Finally, the nodes are ranked by computing the sum of the shortest path lengths from each node to the obtained critical nodes. We have also modified the proposed method by weighting the original BridgeRank and selecting several nodes from each community based on the density of that community. Our method finds the best nodes with high spreading ability and low time complexity, which makes it applicable to large-scale networks. To evaluate the performance of the proposed method, we use the SIR diffusion model. Experiments on real and artificial networks show that our method identifies influential nodes efficiently and achieves better performance than other recent methods.
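
    A compact rendering of the described pipeline (detect communities, pick one critical node per community, rank by summed distances to the critical nodes) might look as follows. The choice of greedy modularity communities and of degree as the within-community centrality are our assumptions for illustration; the paper's exact components may differ.

      # Sketch of the BridgeRank pipeline as outlined in the abstract:
      # (1) detect communities, (2) pick one critical node per community
      # via a local centrality (degree here -- an assumption), (3) score
      # each node by the sum of its shortest-path lengths to the critical
      # nodes; smaller sums rank higher (closeness-style).
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      def bridgerank(G):
          communities = greedy_modularity_communities(G)
          critical = [max(c, key=G.degree) for c in communities]
          scores = {}
          for v in G:
              lengths = nx.single_source_shortest_path_length(G, v)
              # unreachable critical nodes are penalized with len(G)
              scores[v] = sum(lengths.get(u, len(G)) for u in critical)
          return sorted(G, key=scores.get)       # most influential first

      G = nx.karate_club_graph()
      print(bridgerank(G)[:5])                   # top-5 candidate spreaders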

  2. Integration of Gravitational Torques in Cerebellar Pathways Allows for the Dynamic Inverse Computation of Vertical Pointing Movements of a Robot Arm

    PubMed Central

    Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian

    2009-01-01

    Background: Several authors suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor predictions of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model) whereas others suggested that it computes sensorimotor predictions (internal forward model).

    Methodology/Principal Findings: This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions without calculating an explicit inverse computation. By using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonist McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movements to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements.

    Conclusions/Significance: This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420
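
    For orientation, the gravitational torque such a controller must compensate has the familiar configuration-dependent form below; this is a single-link textbook illustration in our notation, not the paper's two-muscle arm model.

      % Gravitational torque on a single link of mass m, with center of
      % mass at distance l from the joint and elevation angle theta
      % measured from the horizontal:
      \begin{equation}
        \tau_g(\theta) = -\,m\,g\,l\cos\theta ,
      \end{equation}
      % so an inverse-dynamics controller must supply a posture-dependent
      % compensating torque tau_m = -tau_g(theta) to hold or move the arm
      % in the sagittal (vertical) plane.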

  3. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  4. Process control charts in infection prevention: Make it simple to make it happen.

    PubMed

    Wiemken, Timothy L; Furmanek, Stephen P; Carrico, Ruth M; Mattingly, William A; Persaud, Annuradha K; Guinn, Brian E; Kelley, Robert R; Ramirez, Julio A

    2017-03-01

    Quality improvement is central to Infection Prevention and Control (IPC) programs. Challenges may occur when applying quality improvement methodologies such as process control charts, often due to the limited exposure of typical IPs. Because of this, our team created an open-source database with a process control chart generator for IPC programs. The objectives of this report are to outline the development of the application and to demonstrate its use with simulated data. We used Research Electronic Data Capture (REDCap Consortium, Vanderbilt University, Nashville, TN), R (R Foundation for Statistical Computing, Vienna, Austria), and R Studio Shiny (R Foundation for Statistical Computing) to create an open-source data collection system with automated process control chart generation. We used simulated data to test and visualize both in-control and out-of-control processes for commonly used metrics in IPC programs. The R code for implementing the control charts and the Shiny application can be found on our Web site (https://github.com/ul-research-support/spcapp). Screen captures of the workflow and simulated data indicating both common cause and special cause variation are provided. Process control charts can be easily developed based on individual facility needs using freely available software. By providing our work free to all interested parties, we hope that others will be able to harness the power and ease of use of the application for improving the quality of care and patient safety in their facilities. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
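
    As an illustration of the control-chart arithmetic such a tool automates, the sketch below computes 3-sigma u-chart limits for an infection rate expressed per device-day; the monthly counts are simulated, and this Python sketch is independent of the authors' R/Shiny code referenced above.

      # Hypothetical u-chart sketch: 3-sigma limits for infections per device-day.
      import math

      events      = [4, 2, 5, 3, 6, 1, 4]                  # infections per month (simulated)
      device_days = [610, 590, 640, 600, 650, 580, 620]

      # Centerline: overall rate pooled across all months.
      u_bar = sum(events) / sum(device_days)

      for e, n in zip(events, device_days):
          u   = e / n                                      # this month's observed rate
          ucl = u_bar + 3 * math.sqrt(u_bar / n)           # upper 3-sigma limit
          lcl = max(0.0, u_bar - 3 * math.sqrt(u_bar / n)) # lower limit, floored at 0
          flag = "OUT" if (u > ucl or u < lcl) else "in"
          print(f"rate={1000*u:6.2f}  limits=({1000*lcl:5.2f}, {1000*ucl:5.2f}) per 1000  {flag}")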

  5. Control of electro-rheological fluid-based torque generation components for use in active rehabilitation devices

    NASA Astrophysics Data System (ADS)

    Nikitczuk, Jason; Weinberg, Brian; Mavroidis, Constantinos

    2006-03-01

    In this paper we present the design and control algorithms for novel electro-rheological fluid-based torque generation elements that will be used to drive the joint of a new type of portable and controllable Active Knee Rehabilitation Orthotic Device (AKROD) for gait retraining in stroke patients. The AKROD is composed of straps and rigid components for attachment to the leg, with a central hinge mechanism where a gear system is connected. The key features of AKROD include a compact, lightweight design with highly tunable torque capabilities through a variable damper component; full portability with on-board power, control circuitry, and sensors (encoder and torque); and real-time capabilities for closed-loop computer control for optimizing gait retraining. The variable damper component is achieved through an electro-rheological fluid (ERF) element that connects to the output of the gear system. Using the electrically controlled rheological properties of ERFs, compact brakes capable of supplying high, controllable resistive torques are developed. A preliminary prototype for AKROD v.2 has been developed and tested in our laboratory using our custom-made ERF Testing Apparatus (ETA). ETA provides a computer-controlled environment to test ERF brakes and actuators in various conditions and scenarios, including emulating the interaction between the human muscles involved with the knee and AKROD's ERF actuators/brakes. In our preliminary results, AKROD's ERF resistive actuator was tested in closed-loop torque control experiments, for which a hybrid (non-linear, adaptive) Proportional-Integral (PI) torque controller was implemented.
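
    For readers unfamiliar with the control law named in the closing sentence, the following is a minimal discrete-time PI torque loop with anti-windup; the gains, limits, and single-iteration usage are invented for illustration and do not reproduce the authors' hybrid adaptive controller.

      # Minimal discrete-time PI torque controller with anti-windup (illustrative only).
      def make_pi(kp, ki, dt, u_min, u_max):
          integral = 0.0
          def step(torque_ref, torque_meas):
              nonlocal integral
              err = torque_ref - torque_meas
              u = kp * err + ki * (integral + err * dt)
              if u_min <= u <= u_max:           # only integrate when not saturated
                  integral += err * dt
              return min(max(u, u_min), u_max)  # clamp command (e.g., ERF excitation)
          return step

      pi = make_pi(kp=2.0, ki=5.0, dt=0.001, u_min=0.0, u_max=10.0)
      cmd = pi(torque_ref=1.5, torque_meas=1.2)  # one control-loop iteration
      print(cmd)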

  6. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    ERIC Educational Resources Information Center

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  7. Separating temporal and topological effects in walk-based network centrality.

    PubMed

    Colman, Ewan R; Charlton, Nathaniel

    2016-07-01

    The recently introduced concept of dynamic communicability is a valuable tool for ranking the importance of nodes in a temporal network. Two metrics, broadcast score and receive score, were introduced to measure the centrality of a node with respect to a model of contagion based on time-respecting walks. This article examines the temporal and structural factors influencing these metrics by considering a versatile stochastic temporal network model. We analytically derive formulas to accurately predict the expectation of the broadcast and receive scores when one or more columns in a temporal edge-list are shuffled. These methods are then applied to two publicly available data sets and we quantify how much the centrality of each individual depends on structural or temporal influences. From our analysis, we highlight two practical contributions: a way to control for temporal variation when computing dynamic communicability and the conclusion that the broadcast and receive scores can, under a range of circumstances, be replaced by the row and column sums of the matrix exponential of a weighted adjacency matrix given by the data.
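
    The closing observation, that broadcast and receive scores can reduce to row and column sums of a matrix exponential, is straightforward to compute directly; the sketch below does so for a small weighted adjacency matrix (the matrix and the walk down-weighting factor are illustrative assumptions).

      # Row/column sums of a matrix exponential as broadcast/receive scores (illustrative).
      import numpy as np
      from scipy.linalg import expm

      A = np.array([[0., 2., 0.],    # weighted adjacency matrix (assumed data)
                    [2., 0., 1.],
                    [0., 1., 0.]])
      a = 0.5                        # walk down-weighting parameter (assumed)

      E = expm(a * A)                # sums weighted walks of all lengths
      broadcast = E.sum(axis=1)      # row sums: ability to reach others
      receive   = E.sum(axis=0)      # column sums: ability to be reached
      print(broadcast, receive)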

  8. Interdependent Network Recovery Games.

    PubMed

    Smith, Andrew M; González, Andrés D; Dueñas-Osorio, Leonardo; D'Souza, Raissa M

    2017-10-30

    Recovery of interdependent infrastructure networks in the presence of catastrophic failure is crucial to the economy and welfare of society. Recently, centralized methods have been developed to address optimal resource allocation in postdisaster recovery scenarios of interdependent infrastructure systems that minimize total cost. In real-world systems, however, multiple independent, possibly noncooperative, utility network controllers are responsible for making recovery decisions, resulting in suboptimal decentralized processes. With the goal of minimizing recovery cost, a best-case decentralized model allows controllers to develop a full recovery plan and negotiate until all parties are satisfied (an equilibrium is reached). Such a model is computationally intensive for planning and negotiating, and time is a crucial resource in postdisaster recovery scenarios. Furthermore, in this work, we prove this best-case decentralized negotiation process could continue indefinitely under certain conditions. Accounting for network controllers' urgency in repairing their system, we propose an ad hoc sequential game-theoretic model of interdependent infrastructure network recovery represented as a discrete time noncooperative game between network controllers that is guaranteed to converge to an equilibrium. We further reduce the computation time needed to find a solution by applying a best-response heuristic and prove bounds on ε-Nash equilibrium, where ε depends on problem inputs. We compare best-case and ad hoc models on an empirical interdependent infrastructure network in the presence of simulated earthquakes to demonstrate the extent of the tradeoff between optimality and computational efficiency. Our method provides a foundation for modeling sociotechnical systems in a way that mirrors restoration processes in practice. © 2017 Society for Risk Analysis.
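
    To illustrate the best-response idea the model builds on, the following is a toy two-player best-response iteration on small payoff matrices; the matrices and update order are invented and far simpler than the paper's sequential recovery game.

      # Toy best-response dynamics for a two-player game (illustrative only).
      import numpy as np

      # payoff_i[a1, a2]: payoff to player i when players choose actions a1, a2.
      payoff1 = np.array([[3, 0], [5, 1]])
      payoff2 = np.array([[3, 5], [0, 1]])

      a1, a2 = 0, 0
      for _ in range(20):                               # iterate until a fixed point
          new_a1 = int(np.argmax(payoff1[:, a2]))       # player 1 best-responds
          new_a2 = int(np.argmax(payoff2[new_a1, :]))   # player 2 best-responds
          if (new_a1, new_a2) == (a1, a2):
              break                                     # a pure Nash equilibrium
          a1, a2 = new_a1, new_a2
      print("equilibrium actions:", a1, a2)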

  9. Central Hypothyroidism in Miniature Schnauzers.

    PubMed

    Voorbij, Annemarie M W Y; Leegwater, Peter A J; Buijtels, Jenny J C W M; Daminet, Sylvie; Kooistra, Hans S

    2016-01-01

    Primary hypothyroidism is a common endocrinopathy in dogs. In contrast, central hypothyroidism is rare in this species. The objective of this article is to describe the occurrence and clinical presentation of central hypothyroidism in Miniature Schnauzers. Additionally, the possible role of the thyroid-stimulating hormone (TSH)-releasing hormone receptor (TRHR) gene and the TSHβ (TSHB) gene was investigated. The animals were Miniature Schnauzers with proven central hypothyroidism, based on scintigraphy and the results of a 3-day TSH-stimulation test, a TSH-releasing hormone (TRH)-stimulation test, or both, presented to the Department of Clinical Sciences of Companion Animals at Utrecht University or the Department of Medicine and Clinical Biology of Small Animals at Ghent University from 2008 to 2012; the design was a retrospective study. Pituitary function tests, thyroid scintigraphy, and computed tomography (CT) of the pituitary area were performed. Gene fragments of affected dogs and controls were amplified by polymerase chain reaction (PCR). Subsequently, the deoxyribonucleic acid (DNA) sequences of the products were analyzed. Central hypothyroidism was diagnosed in 7 Miniature Schnauzers. Three dogs had disproportionate dwarfism and at least one of them had a combined deficiency of TSH and prolactin. No disease-causing mutations were found in the TSHB gene or the exons of the TRHR gene of these Schnauzers. Central hypothyroidism could be underdiagnosed in Miniature Schnauzers with hypothyroidism, especially in those of normal stature. The fact that this rare disorder occurred in 7 dogs from the same breed suggests that central hypothyroidism could have a genetic background in Miniature Schnauzers. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  10. Development of a PC-based ground support system for a small satellite instrument

    NASA Astrophysics Data System (ADS)

    Deschambault, Robert L.; Gregory, Philip R.; Spenler, Stephen; Whalen, Brian A.

    1993-11-01

    The importance of effective ground support for the remote control and data retrieval of a satellite instrument cannot be overstated. Problems with ground support may include the need to base personnel at a ground tracking station for extended periods, and the delay between the instrument observation and the processing of the data by the science team. Flexible solutions to such problems in the case of small satellite systems are provided by using low-cost, powerful personal computers and off-the-shelf software for data acquisition and processing, and by using the Internet as a communication pathway to enable scientists to view and manipulate satellite data in real time at any ground location. The personal computer based ground support system is illustrated for the case of the cold plasma analyzer flown on the Freja satellite. Commercial software was used as building blocks for writing the ground support equipment software. Several levels of hardware support, including unit tests and development, functional tests, and integration, were provided by portable and desktop personal computers. Satellite stations in Saskatchewan and Sweden were linked to the science team via phone lines and the Internet, which provided remote control through a central point. These successful strategies will be used on future small satellite space programs.

  11. Organising a University Computer System: Analytical Notes.

    ERIC Educational Resources Information Center

    Jacquot, J. P.; Finance, J. P.

    1990-01-01

    Thirteen trends in university computer system development are identified, system user requirements are analyzed, critical system qualities are outlined, and three options for organizing a computer system are presented. The three systems include a centralized network, local network, and federation of local networks. (MSE)

  12. Best Practice Guidelines for Computer Technology in the Montessori Early Childhood Classroom.

    ERIC Educational Resources Information Center

    Montminy, Peter

    1999-01-01

    Presents a draft for a principle-centered position statement of a Montessori early childhood program in central Pennsylvania, on the pros and cons of computer use in a Montessori 3-6 classroom. Includes computer software rating form. (Author/KB)

  13. The role of soft computing in intelligent machines.

    PubMed

    de Silva, Clarence W

    2003-08-15

    An intelligent machine relies on computational intelligence in generating its intelligent behaviour. This requires a knowledge system in which representation and processing of knowledge are central functions. Approximation is a 'soft' concept, and the capability to approximate for the purposes of comparison, pattern recognition, reasoning, and decision making is a manifestation of intelligence. This paper examines the use of soft computing in intelligent machines. Soft computing is an important branch of computational intelligence, where fuzzy logic, probability theory, neural networks, and genetic algorithms are synergistically used to mimic the reasoning and decision making of a human. This paper explores several important characteristics and capabilities of machines that exhibit intelligent behaviour. It presents a general structure for an intelligent machine, giving particular emphasis to its primary components, such as sensors, actuators, controllers, and the communication backbone, and their interaction, and discusses the role of soft computing within the overall system. Common techniques and approaches that are useful in the development of an intelligent machine are introduced, and the main steps in the development of an intelligent machine for practical use are given. An industrial machine, which employs the concepts of soft computing in its operation, is presented, and one aspect of intelligent tuning, which is incorporated into the machine, is illustrated.
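
    As a taste of the soft-computing toolkit the paper surveys, the sketch below evaluates a one-input fuzzy rule base with triangular memberships and weighted-average defuzzification; the membership shapes and rules are invented for illustration and are not taken from the industrial machine described in the paper.

      # Tiny fuzzy-logic controller sketch (illustrative rules and memberships).
      def tri(x, a, b, c):
          """Triangular membership: rises a->b, falls b->c."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def fuzzy_output(error):
          # Rules: IF error is {negative, zero, positive} THEN output is {low, mid, high}.
          rules = [
              (tri(error, -2.0, -1.0, 0.0), 0.2),   # negative error -> low output
              (tri(error, -1.0,  0.0, 1.0), 0.5),   # zero error     -> mid output
              (tri(error,  0.0,  1.0, 2.0), 0.8),   # positive error -> high output
          ]
          num = sum(w * out for w, out in rules)    # weighted average of rule outputs
          den = sum(w for w, _ in rules)
          return num / den if den else 0.0

      print(fuzzy_output(0.3))   # a blend of the "mid" and "high" actions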

  14. CRYSNET manual. Informal report. [Hardware and software of crystallographic computing network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None,

    1976-07-01

    This manual describes the hardware and software which together make up the crystallographic computing network (CRYSNET). The manual is intended as a users' guide and also provides general information for persons without any experience with the system. CRYSNET is a network of intelligent remote graphics terminals that are used to communicate with the CDC Cyber 70/76 computing system at the Brookhaven National Laboratory (BNL) Central Scientific Computing Facility. Terminals are in active use by four research groups in the field of crystallography. A protein data bank has been established at BNL to store in machine-readable form atomic coordinates and other crystallographic data for macromolecules. The bank currently includes data for more than 20 proteins. This structural information can be accessed at BNL directly by the CRYSNET graphics terminals. More than two years of experience has been accumulated with CRYSNET. During this period, it has been demonstrated that the terminals, which provide access to a large, fast third-generation computer, plus stand-alone interactive graphics capability, are useful for computations in crystallography, and in a variety of other applications as well. The terminal hardware, the actual operations of the terminals, and the operations of the BNL Central Facility are described in some detail, and documentation of the terminal and central-site software is given. (RWR)

  15. Development of a prototype two-phase thermal bus system for Space Station

    NASA Technical Reports Server (NTRS)

    Myron, D. L.; Parish, R. C.

    1987-01-01

    This paper describes the basic elements of a pumped two-phase ammonia thermal control system designed for microgravity environments, the development of the concept into a Space Station flight design, and design details of the prototype to be ground-tested in the Johnson Space Center (JSC) Thermal Test Bed. The basic system concept is one of forced-flow heat transport through interface heat exchangers with anhydrous ammonia being pumped by a device expressly designed for two-phase fluid management in reduced gravity. Control of saturation conditions, and thus system interface temperatures, is accomplished with a single central pressure regulating valve. Flow control and liquid inventory are controlled by passive, nonelectromechanical devices. Use of these simple control elements results in minimal computer controls and high system reliability. Building on the basic system concept, a brief overview of a potential Space Station flight design is given. Primary verification of the system concept will involve testing at JSC of a 25-kW ground test article currently in fabrication.

  16. Development of an Active Flow Control Technique for an Airplane High-Lift Configuration

    NASA Technical Reports Server (NTRS)

    Shmilovich, Arvin; Yadlin, Yoram; Dickey, Eric D.; Hartwich, Peter M.; Khodadoust, Abdi

    2017-01-01

    This study focuses on Active Flow Control methods used in conjunction with airplane high-lift systems. The project is motivated by the simplified high-lift system, which offers enhanced airplane performance compared to conventional high-lift systems. Computational simulations are used to guide the implementation of preferred flow control methods, which require a fluidic supply. It is first demonstrated that flow control applied to a high-lift configuration that consists of simple hinge flaps is capable of attaining the performance of the conventional high-lift counterpart. A set of flow control techniques has been subsequently considered to identify promising candidates, where the central requirement is that the mass flow for actuation has to be within available resources onboard. The flow control methods are based on constant blowing, fluidic oscillators, and traverse actuation. The simulations indicate that the traverse actuation offers a substantial reduction in required mass flow, and it is especially effective when the frequency of actuation is consistent with the characteristic time scale of the flow.

  17. P wave velocity of Proterozoic upper mantle beneath central and southern Africa

    NASA Astrophysics Data System (ADS)

    Nyblade, Andrew A.; Vogfjord, Kristin S.; Langston, Charles A.

    1996-05-01

    P wave velocity structure of Proterozoic upper mantle beneath central and southern Africa was investigated by forward modeling of Pnl waveforms from four moderate-size earthquakes. The source-receiver path of one event crosses central Africa and lies outside the African superswell, while the source-receiver paths for the other events cross Proterozoic lithosphere within southern Africa, inside the African superswell. Three observables (Pn waveshape, PL-Pn time, and Pn/PL amplitude ratio) from the Pnl waveform were used to constrain upper mantle velocity models in a grid search procedure. For central Africa, synthetic seismograms were computed for 5880 upper mantle models using the generalized ray method and wavenumber integration; synthetic seismograms for 216 models were computed for southern Africa. Successful models were taken as those whose synthetic seismograms had similar waveshapes to the observed waveforms, as well as PL-Pn times within 3 s of the observed times and Pn/PL amplitude ratios within 30% of the observed ratio. Successful models for central Africa yield a range of uppermost mantle velocity between 7.9 and 8.3 km s^-1, velocities between 8.3 and 8.5 km s^-1 at a depth of 200 km, and velocity gradients that are constant or slightly positive. For southern Africa, successful models yield uppermost mantle velocities between 8.1 and 8.3 km s^-1, velocities between 7.9 and 8.4 km s^-1 at a depth of 130 km, and velocity gradients between -0.001 and 0.001 s^-1. Because velocity gradients are controlled strongly by structure at the bottoming depths for Pn waves, it is not easy to compare the velocity gradients obtained for central and southern Africa. For central Africa, Pn waves turn at depths of about 150-200 km, whereas for southern Africa they bottom at ~100-150 km depth. With regard to the origin of the African superswell, our results do not have sufficient resolution to test hypotheses that invoke simple lithospheric reheating. However, our models are not consistent with explanations for the African superswell invoking extensive amounts of lithospheric thinning. If extensive lithospheric thinning had occurred beneath southern Africa, as suggested previously, then upper mantle P wave velocities beneath southern Africa would likely be lower than those in our models.

  18. Sparsity enabled cluster reduced-order models for control

    NASA Astrophysics Data System (ADS)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
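
    To make the "probabilistic model for ensembles of trajectories" concrete, the sketch below clusters snapshot data with k-means and estimates the cluster-to-cluster transition matrix, a coarse discretization of the Perron-Frobenius operator; the synthetic data and cluster count are illustrative assumptions, and the sparsity-enabled sensing step is not shown.

      # Cluster-based reduced-order model sketch: k-means plus transition matrix.
      import numpy as np
      from scipy.cluster.vq import kmeans2

      rng = np.random.default_rng(0)
      snapshots = rng.normal(size=(500, 8))   # synthetic time-ordered state data
      k = 4                                   # number of clusters (assumed)

      _, labels = kmeans2(snapshots, k, minit='points')

      # Count transitions between consecutive snapshots' clusters, then
      # normalize rows: P[i, j] = probability of moving from cluster i to j.
      P = np.zeros((k, k))
      for i, j in zip(labels[:-1], labels[1:]):
          P[i, j] += 1
      row = P.sum(axis=1, keepdims=True)
      P = np.divide(P, row, out=np.zeros_like(P), where=row > 0)
      print(np.round(P, 2))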

  19. Assessment of regional management strategies for controlling seawater intrusion

    USGS Publications Warehouse

    Reichard, E.G.; Johnson, T.A.

    2005-01-01

    Simulation-optimization methods, applied with adequate sensitivity tests, can provide useful quantitative guidance for controlling seawater intrusion. This is demonstrated in an application to the West Coast Basin of coastal Los Angeles that considers two management options for improving hydraulic control of seawater intrusion: increased injection into barrier wells and in lieu delivery of surface water to replace current pumpage. For the base-case optimization analysis, assuming constant groundwater demand, in lieu delivery was determined to be most cost effective. Reduced-cost information from the optimization provided guidance for prioritizing locations for in lieu delivery. Model sensitivity to a suite of hydrologic, economic, and policy factors was tested. Raising the imposed average water-level constraint at the hydraulic-control locations resulted in nonlinear increases in cost. Systematically varying the relative costs of injection and in lieu water yielded a trade-off curve between relative costs and injection/in lieu amounts. Changing the assumed future scenario to one of increasing pumpage in the adjacent Central Basin caused a small increase in the computed costs of seawater intrusion control. Changing the assumed boundary condition representing interaction with an adjacent basin did not affect the optimization results. Reducing the assumed hydraulic conductivity of the main productive aquifer resulted in a large increase in the model-computed cost. Journal of Water Resources Planning and Management © ASCE.

  1. The genetics of shovel shape in maxillary central incisors in man.

    PubMed

    Blanco, R; Chakraborty, R

    1976-03-01

    From dental casts of 94 parent-offspring and 127 full-sib pairs, sampled from two Chilean populations, shovelling indices are computed to measure the degree of shovelling of maxillary central incisors quantitatively. Genetic correlations are computed to determine the role of genetic factors in explaining the variation in this trait. Assuming only hereditary factors to be responsible for the transmission of shovel shape, 68% of total variability is ascribed to the additive effect of genes.
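
    A minimal sketch of the quantitative-genetic step described here: under a purely additive model, the correlation between single-parent and offspring values estimates half the narrow-sense heritability, so h^2 = 2r. The toy index values below are invented; only the formula reflects the abstract.

      # Heritability from parent-offspring correlation: h^2 = 2 * r (additive model).
      import numpy as np

      parent    = np.array([4.1, 3.6, 5.0, 2.8, 4.4, 3.9])   # shovelling index (toy)
      offspring = np.array([3.8, 3.2, 4.6, 3.0, 4.0, 3.7])

      r = np.corrcoef(parent, offspring)[0, 1]   # parent-offspring correlation
      h2 = 2 * r                                 # doubled: a parent shares half its genes
      print(f"r = {r:.2f}, estimated h^2 = {h2:.2f}")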

  2. Slew maneuvers on the SCOLE Laboratory Facility

    NASA Technical Reports Server (NTRS)

    Williams, Jeffrey P.

    1987-01-01

    The Spacecraft Control Laboratory Experiment (SCOLE) was conceived to provide a physical test bed for the investigation of control techniques for large flexible spacecraft. The control problems studied are slewing maneuvers and pointing operations. The slew is defined as a minimum-time maneuver to bring the antenna line-of-sight (LOS) pointing to within an error limit of the pointing target. The second objective is to rotate about the LOS while remaining within the 0.02 degree error limit. The SCOLE problem poses two design challenges: developing control laws for a mathematical model of a large antenna attached to the Space Shuttle by a long flexible mast, and implementing a control scheme based on those laws on a laboratory representation of the structure. Control sensors and actuators are typical of those which the control designer would have to deal with on an actual spacecraft. Computational facilities consist of microcomputer-based central processing units with appropriate analog interfaces for implementation of the primary control system and the attitude estimation algorithm. Preliminary results of some slewing control experiments are given.

  3. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
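
    The serial kernel that such parallel implementations optimize is Brandes' algorithm; a compact unweighted-graph version is sketched below in its standard textbook form, not the authors' lock-free variant.

      # Brandes' algorithm for betweenness centrality on an unweighted graph.
      from collections import deque

      def betweenness(adj):
          bc = {v: 0.0 for v in adj}
          for s in adj:
              # BFS from s, counting shortest paths (sigma) and predecessors.
              sigma = {v: 0 for v in adj}; sigma[s] = 1
              dist  = {v: -1 for v in adj}; dist[s] = 0
              preds = {v: [] for v in adj}
              order, q = [], deque([s])
              while q:
                  v = q.popleft(); order.append(v)
                  for w in adj[v]:
                      if dist[w] < 0:
                          dist[w] = dist[v] + 1; q.append(w)
                      if dist[w] == dist[v] + 1:
                          sigma[w] += sigma[v]; preds[w].append(v)
              # Back-propagate pair dependencies in reverse BFS order.
              delta = {v: 0.0 for v in adj}
              for w in reversed(order):
                  for v in preds[w]:
                      delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                  if w != s:
                      bc[w] += delta[w]
          return bc   # halve the values for undirected graphs

      adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph 0-1-2-3
      print(betweenness(adj))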

  4. Preliminary Design and Evaluation of Portable Electronic Flight Progress Strips

    NASA Technical Reports Server (NTRS)

    Doble, Nathan A.; Hansman, R. John

    2002-01-01

    There has been growing interest in using electronic alternatives to the paper Flight Progress Strip (FPS) for air traffic control. However, most research has been centered on radar-based control environments, and has not considered the unique operational needs of the airport air traffic control tower. Based on an analysis of the human factors issues for control tower Decision Support Tool (DST) interfaces, a requirement has been identified for an interaction mechanism which replicates the advantages of the paper FPS (e.g., head-up operation, portability) but also enables input and output with DSTs. An approach has been developed which uses a Portable Electronic FPS that has attributes of both a paper strip and an electronic strip. The prototype flight strip system uses Personal Digital Assistants (PDAs) to replace individual paper strips in addition to a central management interface which is displayed on a desktop computer. Each PDA is connected to the management interface via a wireless local area network. The Portable Electronic FPSs replicate the core functionality of paper flight strips and have additional features which provide a heads-up interface to a DST. A departure DST is used as a motivating example. The central management interface is used for aircraft scheduling and sequencing and provides an overview of airport departure operations. This paper will present the design of the Portable Electronic FPS system as well as preliminary evaluation results.

  5. Extension of a streamwise upwind algorithm to a moving grid system

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.

    1990-01-01

    A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time-marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.
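
    For readers outside CFD, the distinction the abstract draws between upwind and central differencing can be seen in one dimension: for linear advection, an upwind update adds stabilizing numerical dissipation that a central update lacks. The sketch below is a generic first-order illustration, not the paper's streamwise LU-ADI scheme.

      # First-order upwind vs. central differencing for u_t + c u_x = 0 (illustrative).
      import numpy as np

      n, c = 100, 1.0
      dx = 1.0 / (n - 1)
      dt = 0.005                        # CFL = c*dt/dx = 0.5 (stable for upwind)
      u = np.exp(-200 * (np.linspace(0, 1, n) - 0.3) ** 2)   # Gaussian pulse
      uw, cd = u.copy(), u.copy()

      for _ in range(50):
          # Upwind: one-sided difference in the flow direction (dissipative, stable).
          uw[1:] -= c * dt / dx * (uw[1:] - uw[:-1])
          # Central: symmetric difference (no dissipation; oscillates near gradients).
          cd[1:-1] -= c * dt / (2 * dx) * (cd[2:] - cd[:-2])

      print("upwind max:", uw.max(), " central max:", cd.max())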

  6. [Personal computer-based computer monitoring system of the anesthesiologist (2-year experience in development and use)].

    PubMed

    Buniatian, A A; Sablin, I N; Flerov, E V; Mierbekov, E M; Broĭtman, O G; Shevchenko, V V; Shitikov, I I

    1995-01-01

    Creation of computer monitoring systems (CMS) for operating rooms is one of the most important spheres of personal computer employment in anesthesiology. The authors developed a PC RS/AT-based CMS and used it effectively for more than 2 years. This system permits comprehensive monitoring in cardiosurgical operations by real-time processing of the values of arterial and central venous pressure, pressure in the pulmonary artery, bioelectrical activity of the brain, and two temperature values. Use of this CMS helped appreciably improve patient safety during surgery. The possibility of assessing brain function by computer monitoring of the EEG simultaneously with central hemodynamics and body temperature permits the anesthesiologist to objectively assess the depth of anesthesia and to diagnose cerebral hypoxia. The automated anesthesiological chart issued by the CMS after surgery reliably reflects the patient's status and the measures taken by the anesthesiologist.

  7. 40 CFR 81.243 - Central Minnesota Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Central Minnesota Intrastate Air... Air Quality Control Regions § 81.243 Central Minnesota Intrastate Air Quality Control Region. The Central Minnesota Intrastate Air Quality Control Region consists of the territorial area encompassed by...

  8. 40 CFR 81.243 - Central Minnesota Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 17 2011-07-01 2011-07-01 false Central Minnesota Intrastate Air... Air Quality Control Regions § 81.243 Central Minnesota Intrastate Air Quality Control Region. The Central Minnesota Intrastate Air Quality Control Region consists of the territorial area encompassed by...

  9. 40 CFR 81.243 - Central Minnesota Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 18 2012-07-01 2012-07-01 false Central Minnesota Intrastate Air... Air Quality Control Regions § 81.243 Central Minnesota Intrastate Air Quality Control Region. The Central Minnesota Intrastate Air Quality Control Region consists of the territorial area encompassed by...

  10. 40 CFR 81.243 - Central Minnesota Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 18 2013-07-01 2013-07-01 false Central Minnesota Intrastate Air... Air Quality Control Regions § 81.243 Central Minnesota Intrastate Air Quality Control Region. The Central Minnesota Intrastate Air Quality Control Region consists of the territorial area encompassed by...

  11. 40 CFR 81.243 - Central Minnesota Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 18 2014-07-01 2014-07-01 false Central Minnesota Intrastate Air... Air Quality Control Regions § 81.243 Central Minnesota Intrastate Air Quality Control Region. The Central Minnesota Intrastate Air Quality Control Region consists of the territorial area encompassed by...

  12. 40 CFR 81.105 - South Central Pennsylvania Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false South Central Pennsylvania Intrastate... Designation of Air Quality Control Regions § 81.105 South Central Pennsylvania Intrastate Air Quality Control Region. The South Central Pennsylvania Intrastate Air Quality Control Region consists of the territorial...

  13. 40 CFR 81.157 - North Central Wisconsin Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 18 2012-07-01 2012-07-01 false North Central Wisconsin Intrastate Air... Air Quality Control Regions § 81.157 North Central Wisconsin Intrastate Air Quality Control Region. The North Central Wisconsin Intrastate Air Quality Control Region consists of the territorial area...

  14. 40 CFR 81.157 - North Central Wisconsin Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 18 2013-07-01 2013-07-01 false North Central Wisconsin Intrastate Air... Air Quality Control Regions § 81.157 North Central Wisconsin Intrastate Air Quality Control Region. The North Central Wisconsin Intrastate Air Quality Control Region consists of the territorial area...

  15. 40 CFR 81.157 - North Central Wisconsin Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false North Central Wisconsin Intrastate Air... Air Quality Control Regions § 81.157 North Central Wisconsin Intrastate Air Quality Control Region. The North Central Wisconsin Intrastate Air Quality Control Region consists of the territorial area...

  16. 40 CFR 81.157 - North Central Wisconsin Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 18 2014-07-01 2014-07-01 false North Central Wisconsin Intrastate Air... Air Quality Control Regions § 81.157 North Central Wisconsin Intrastate Air Quality Control Region. The North Central Wisconsin Intrastate Air Quality Control Region consists of the territorial area...

  17. 40 CFR 81.157 - North Central Wisconsin Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 17 2011-07-01 2011-07-01 false North Central Wisconsin Intrastate Air... Air Quality Control Regions § 81.157 North Central Wisconsin Intrastate Air Quality Control Region. The North Central Wisconsin Intrastate Air Quality Control Region consists of the territorial area...

  18. General-purpose interface bus for multiuser, multitasking computer system

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    The architecture of a multiuser, multitasking, virtual-memory computer system intended for use by a medium-size research group is described. There are three central processing units (CPUs) in the configuration, each with 16 MB memory and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast Fourier transformations; in addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.

  19. Optimized 4-bit Quantum Reversible Arithmetic Logic Unit

    NASA Astrophysics Data System (ADS)

    Ayyoub, Slimani; Achour, Benslama

    2017-08-01

    Reversible logic has received great attention in recent years due to its ability to reduce power dissipation. The main purposes of designing reversible logic are to decrease quantum cost, the depth of the circuits, and the number of garbage outputs. The arithmetic logic unit (ALU) is an important part of the central processing unit (CPU), serving as the execution unit. This paper presents a complete design of a new reversible arithmetic logic unit (ALU) that can be part of a programmable reversible computing device such as a quantum computer. The proposed ALU is based on a reversible low-power control unit and a full adder with small performance parameters, named the double Peres gate. The presented ALU can produce the largest number (28) of arithmetic and logic functions and has the lowest quantum cost and delay compared with existing designs.
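
    For context on the building block named here: a Peres gate maps (a, b, c) to (a, a XOR b, (a AND b) XOR c), and reversibility means this mapping is a bijection on the eight 3-bit input states. The check below uses that standard gate definition; the paper's double-Peres adder itself is not reproduced.

      # Peres gate and a brute-force reversibility check (standard gate definition).
      from itertools import product

      def peres(a, b, c):
          return a, a ^ b, (a & b) ^ c

      outputs = {peres(*bits) for bits in product((0, 1), repeat=3)}
      print("reversible:", len(outputs) == 8)   # bijection on 3-bit states

      # Peres as a 1-bit adder stage: with c = 0, outputs are (a, sum, carry).
      a, b = 1, 1
      _, s, carry = peres(a, b, 0)
      print(f"{a} + {b} -> sum={s}, carry={carry}")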

  20. A controlled variation scheme for convection treatment in pressure-based algorithm

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Thakur, Siddharth; Tucker, Kevin

    1993-01-01

    Convection effects and source terms are two primary sources of difficulty in computing the turbulent reacting flows typically encountered in propulsion devices. The present work intends to elucidate the individual as well as the collective roles of convection and source terms in the fluid flow equations, and to devise appropriate treatments and implementations to improve our current capability of predicting such flows. A controlled variation scheme (CVS) has been under development in the context of a pressure-based algorithm; it adaptively regulates the amount of numerical diffusivity, relative to the central difference scheme, according to the variation in the local flow field. Both the basic concepts and a pragmatic assessment will be presented to highlight the status of this work.

  1. Distributed Optimal Power Flow of AC/DC Interconnected Power Grid Using Synchronous ADMM

    NASA Astrophysics Data System (ADS)

    Liang, Zijun; Lin, Shunjiang; Liu, Mingbo

    2017-05-01

    Distributed optimal power flow (OPF) is both important and challenging for AC/DC interconnected power grids with different dispatching centres, considering the security and privacy of information transmission. In this paper, a fully distributed algorithm for the OPF problem of an AC/DC interconnected power grid, called synchronous ADMM, is proposed; it requires no central controller of any form. The algorithm is based on the alternating direction method of multipliers (ADMM): the average value of the boundary variables of adjacent regions obtained in the current iteration is used as the reference value for both regions in the next iteration, which enables parallel computation among the regions. The algorithm is tested on an IEEE 11-bus AC/DC interconnected power grid, and comparison with a centralized algorithm shows nearly no difference in the results, validating its correctness and effectiveness.
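
    A minimal sketch of the boundary-averaging step described above, for two regions coupled through one scalar boundary variable; the quadratic regional costs and all parameters are invented to keep the example self-contained.

      # Consensus ADMM on one shared boundary variable between two regions (toy).
      # Region i minimizes f_i(x) = 0.5 * a_i * (x - b_i)**2 subject to consensus.
      a = [1.0, 3.0]          # local cost curvatures (assumed)
      b = [2.0, 6.0]          # local unconstrained optima (assumed)
      rho = 1.0               # ADMM penalty parameter

      x = [0.0, 0.0]          # each region's copy of the boundary variable
      u = [0.0, 0.0]          # scaled dual variables
      z = 0.0                 # consensus value shared between regions

      for _ in range(50):
          # Local updates run in parallel: argmin of f_i(x) + rho/2*(x - z + u_i)^2.
          for i in range(2):
              x[i] = (a[i] * b[i] + rho * (z - u[i])) / (a[i] + rho)
          z = sum(x[i] + u[i] for i in range(2)) / 2   # average as the new reference
          for i in range(2):
              u[i] += x[i] - z                         # dual ascent on the mismatch

      print(round(z, 3))   # converges to (a1*b1 + a2*b2)/(a1 + a2) = 5.0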

  2. Hypercalculia in savant syndrome: central executive failure?

    PubMed

    González-Garrido, Andrés Antonio; Ruiz-Sandoval, José Luis; Gómez-Velázquez, Fabiola R; de Alba, José Luis Oropeza; Villaseñor-Cabrera, Teresa

    2002-01-01

    The existence of outstanding cognitive talent in mentally retarded subjects persists as a challenge to present knowledge. We report the case of a 16-year-old male patient with exceptional mental calculation abilities and moderate mental retardation. The patient was clinically evaluated. Data from standard magnetic resonance imaging (MRI) and two 99mTc-ethyl cysteine dimer (ECD) single photon emission computed tomography (SPECT) studies (one in the resting condition and one while performing a mental calculation task) were analyzed. The main neurologic findings were brachycephalia, right-side neurologic soft signs, an obsessive personality profile, a low color-word interference effect in the Stroop test, and diffusely increased cerebral blood flow during the calculation task on 99mTc-ECD SPECT. MRI showed inverse asymmetry of the temporal plane. Evidence appears to support the hypothesis that the savant skill is related to excessive and erroneous use of cognitive processing resources instigated by a probable failure in central executive control mechanisms.

  3. Space spider crane

    NASA Technical Reports Server (NTRS)

    Macconochie, Ian O. (Inventor); Mikulas, Martin M., Jr. (Inventor); Pennington, Jack E. (Inventor); Kinkead, Rebecca L. (Inventor); Bryan, Charles F., Jr. (Inventor)

    1988-01-01

    A space spider crane for the movement, placement, and/or assembly of various components on or in the vicinity of a space structure is described. As permanent space structures are utilized by the space program, a means will be required to transport cargo and perform various repair tasks. A space spider crane comprising a small central body with attached manipulators and legs fulfills this requirement. The manipulators may be equipped with constant-pressure gripping end effectors or tools to accomplish various repair tasks. The legs are also equipped with constant-pressure gripping end effectors to grip the space structure. Control of the space spider crane may be achieved either by computer software or by a remotely situated human operator, who maintains visual contact via television cameras mounted on the space spider crane. One possible walking program is a parallel-motion walking program whereby the small central body alternately leans forward and backward relative to the end effectors.

  4. A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2016-09-01

    It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena.
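
    As a pointer to how the LSA half of the integrated model represents meaning, the sketch below builds a tiny term-document matrix, truncates its SVD, and compares words by cosine similarity in the reduced space; the corpus and rank are toy assumptions, and the landscape-model half is not shown.

      # Miniature latent semantic analysis: truncated SVD plus cosine similarity.
      import numpy as np

      terms = ["doctor", "nurse", "hospital", "guitar", "music"]
      # Term-document counts for four toy documents (rows follow `terms`).
      X = np.array([[2, 1, 0, 0],
                    [1, 2, 0, 0],
                    [1, 1, 1, 0],
                    [0, 0, 2, 1],
                    [0, 0, 1, 2]], dtype=float)

      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      k = 2
      vecs = U[:, :k] * s[:k]          # rank-k term vectors

      def cos(i, j):
          return vecs[i] @ vecs[j] / (np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j]))

      print(cos(0, 1))   # doctor vs. nurse: related terms score high
      print(cos(0, 3))   # doctor vs. guitar: unrelated terms score low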

  5. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  6. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  7. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  8. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  9. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  10. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    PubMed

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
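
    Of the four approaches listed, discrete-event simulation is the most common for patient-flow questions; below is a bare-bones event-driven sketch of an ED with a single treatment bay (the arrival and service rates are invented), far simpler than a purpose-built modeling platform.

      # Bare-bones discrete-event simulation: single-server ED queue (M/M/1 toy).
      import random

      random.seed(42)
      arrival_rate, service_rate = 4.0, 5.0   # patients per hour (assumed)
      t, server_free_at = 0.0, 0.0
      waits = []

      for _ in range(10_000):                 # simulate 10,000 arrivals
          t += random.expovariate(arrival_rate)        # next arrival time
          start = max(t, server_free_at)               # wait if the bay is busy
          waits.append(start - t)
          server_free_at = start + random.expovariate(service_rate)

      print(f"mean wait: {sum(waits) / len(waits):.2f} h")  # M/M/1 theory: ~0.8 h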

  11. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2015-01-01

    A simple approach for computing the acceleration and velocity of a structure from strain measurements is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, acceleration and velocity can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the velocity computations. A cantilevered rectangular wing model is used to validate the approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy; therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
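
    The two numerical ideas named here are easy to show in isolation: under the simple-harmonic assumption, acceleration follows from deflection as a = -(2*pi*f)^2 * x, and velocity can be formed with a phase-free central difference. The signal below is synthetic, and the paper's two-step strain-to-deflection theory is not reproduced.

      # Harmonic acceleration from deflection, and central-difference velocity (toy signal).
      import numpy as np

      dt, f = 0.001, 5.0                       # sample step [s], modal frequency [Hz]
      t = np.arange(0, 1, dt)
      x = 0.01 * np.sin(2 * np.pi * f * t)     # deflection history (synthetic)

      acc = -(2 * np.pi * f) ** 2 * x          # simple harmonic motion assumption
      vel = np.gradient(x, dt)                 # central differences (no phase shift)

      # Check against the analytic velocity 0.01 * 2*pi*f * cos(2*pi*f*t).
      vel_true = 0.01 * 2 * np.pi * f * np.cos(2 * np.pi * f * t)
      print("max velocity error:", np.abs(vel - vel_true).max())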

  12. The effects of computer simulation versus hands-on dissection and the placement of computer simulation within the learning cycle on student achievement and attitude

    NASA Astrophysics Data System (ADS)

    Hopkins, Kathryn Susan

    The value of dissection as an instructional strategy has been debated but not evidenced in the research literature. The purpose of this study was to examine the efficacy of using computer-simulated frog dissection as a substitute for traditional hands-on frog dissection and to examine the possible enhancement of achievement by combining the two strategies in a specific sequence. In this study, 134 biology students at two Central Texas schools were divided into the following five treatment groups: computer simulation of frog dissection, computer simulation before dissection, traditional hands-on frog dissection, dissection before computer simulation, and textual worksheet materials. The effects on achievement were evaluated by labeling 10 structures on three diagrams, identifying 11 pinned structures on a prosected frog, and answering 9 multiple-choice questions about the dissection process. Attitude was evaluated using a thirty-item survey with a five-point Likert scale. The quasi-experimental design was a pretest/post-test/post-test nonequivalent group design for both control and experimental groups, a 2 x 2 x 5 completely randomized factorial design (gender, school, five treatments). The pretest/post-test design was incorporated to control for prior knowledge using analysis of covariance. The dissection-only group performed significantly better than all other treatments except dissection-then-computer on the post-test segment requiring students to label pinned anatomical parts on a prosected frog. Interactions between treatment and school, and between treatment and gender, were significant. The diagram and attitude post-tests showed no significant difference. Results on the nine multiple-choice questions about dissection procedures indicated a significant difference between schools, and the interaction between treatment and school was also significant. On a delayed post-test, a significant difference by gender was found on the diagram-labeling segment of the post-test, with males having the higher score. Since existing research conflicts with this study's results, additional research using authentic assessment is recommended. Instruction should be aligned with dissection content and process objectives for each treatment group, and the teacher variable should be controlled.

  13. Baroreflex regulation of blood pressure during dynamic exercise

    NASA Technical Reports Server (NTRS)

    Raven, P. B.; Potts, J. T.; Shi, X.; Blomqvist, C. G. (Principal Investigator)

    1997-01-01

    From the work of Potts et al., Papelier et al., and Shi et al., it is readily apparent that the arterial (aortic and carotid) baroreflexes are reset to function at the prevailing ABP of exercise. The blood pressure of exercise is the result of the hemodynamic (cardiac output and TPR) responses, which appear to be regulated by two redundant neural control systems, "Central Command" and the "exercise pressor reflex". Central Command is a feed-forward neural control system that operates in parallel with the neural regulation of the locomotor system and appears to establish the hemodynamic response to exercise. Within the central nervous system it appears that the HLR may be the operational site for Central Command. Specific neural sites within the HLR have been demonstrated in animals to be active during exercise. With the advent of positron emission tomography (PET) and single-photon emission computed tomography (SPECT), the anatomical areas of the human brain related to Central Command are being mapped. It also appears that the Nucleus Tractus Solitarius and the ventrolateral medulla may serve as integrating sites, as they receive neural information from the working muscles via the group III/IV muscle afferents as well as from higher brain centers. This anatomical site within the CNS is now the focus of many investigations in which arterial baroreflex function, Central Command, and the "exercise pressor reflex" appear to demonstrate inhibitory or facilitatory interaction. Whether Central Command is the prime mover in the resetting of the arterial baroreceptors to function at the exercising ABP, or whether the resetting is an integration of the "exercise pressor reflex" information with that of Central Command, is now under intense investigation. However, it would be justified to conclude, from the data of Bevegard and Shepherd, Dicarlo and Bishop, Potts et al., and Papelier et al., that the act of exercise results in the resetting of the arterial baroreflex. In addition, if, as we have proposed, the cardiopulmonary baroreceptors primarily monitor and reflexly regulate cardiac filling volume, it would seem from the data of Mack et al. and Potts et al. that the cardiopulmonary baroreceptors are also reset at the beginning of exercise. Therefore, investigations of the neural mechanisms of regulation involving Central Command and cardiopulmonary afferents, similar to those being undertaken for the arterial baroreflex, need to be established.

  14. [Groupamatic 360 C1 and automated blood donor processing in a transfusion center].

    PubMed

    Guimbretiere, J; Toscer, M; Harousseau, H

    1978-03-01

    Automation of the donor management flow path is controlled by: a three-slip "port-a-punch" card; the Groupamatic unit, with results output on punched paper tape; and the management computer, connected off-line to the Groupamatic. Data tracking at blood collection time is done by punching a card, with the donor card used as a master card. The Groupamatic performs standard blood grouping (one run for registered donors, two runs for new donors), phenotyping (two runs), and screening for irregular antibodies. The management computer checks the correlation between the data of the two runs, or between the data of a single run and that of the previous file. It updates the data resident in the central file and prints out: the controls of the different blood groups for the red cell panel; the listing of error messages; the listing of emergency call-ups; the listing of collected blood units on arrival at the blood center, with quantitative and qualitative information such as the number of blood units collected and donor addresses; statistics; donor cards; and diplomas.

  15. Global flexibility--shop floor flexibility: what's a worker to do?

    PubMed

    Forrant, R

    1999-01-01

    For several years, new forms of work organization have been introduced by U.S. management to cut labor costs, improve productivity, and increase shop floor control. Corporations have also invested in computer-controlled machinery in an effort to eliminate large numbers of skilled blue-collar workers and to decrease their reliance on the tacit knowledge of such workers. Once-secure jobs in industries as diverse as airplanes, jet engines, machine tools, and computer chips are no longer stable. In their effort to expand their global reach and reorganize the workplace, managers are able to capitalize on two conflicting attitudes among the workforce: the first, workers' most deep-seated fear, the loss of a permanent job; the second, their aspiration to contribute their knowledge and skills in a positive way on the shop floor. In this article, the reorganization of work at two western Massachusetts metalworking companies is described. What distinguishes these cases is the central role that the union played in the organized plant, and that the workers played in both plants, to improve production and, at least for now, preserve jobs.

  16. Computerized Library Serves Six Colleges

    ERIC Educational Resources Information Center

    Blankenship, Ted

    1973-01-01

    The Associated Colleges of Central Kansas have a cooperative library program that gives students access to 300,000 volumes and 2,800 periodicals. This is possible through a central computer book listing and a telephone hotline. (PG)

  17. 40 CFR 81.127 - Central New York Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Central New York Intrastate Air... Air Quality Control Regions § 81.127 Central New York Intrastate Air Quality Control Region. The Central New York Intrastate Air Quality Control Region consists of the territorial area encompassed by the...

  18. 40 CFR 81.95 - Central Florida Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Quality Control Regions § 81.95 Central Florida Intrastate Air Quality Control Region. The Central Florida Intrastate Air Quality Control Region consists of the territorial area encompassed by the boundaries of the... 40 Protection of Environment 17 2011-07-01 2011-07-01 false Central Florida Intrastate Air Quality...

  19. 40 CFR 81.95 - Central Florida Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Quality Control Regions § 81.95 Central Florida Intrastate Air Quality Control Region. The Central Florida Intrastate Air Quality Control Region consists of the territorial area encompassed by the boundaries of the... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Central Florida Intrastate Air Quality...

  20. Control Centrality and Hierarchical Structure in Complex Networks

    PubMed Central

    Liu, Yang-Yu; Slotine, Jean-Jacques; Barabási, Albert-László

    2012-01-01

    We introduce the concept of control centrality to quantify the ability of a single node to control a directed weighted network. We calculate the distribution of control centrality for several real networks and find that it is mainly determined by the network’s degree distribution. We show that in a directed network without loops the control centrality of a node is uniquely determined by its layer index or topological position in the underlying hierarchical structure of the network. Inspired by the deep relation between control centrality and hierarchical structure in a general directed network, we design an efficient attack strategy against the controllability of malicious networks. PMID:23028542
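
    Since the abstract ties a node's control centrality in a loop-free network to its topological layer, a small sketch may help make the layering concrete. The layering rule below (one plus the longest directed path from each node to a sink) and the use of networkx are illustrative assumptions, not the authors' exact construction.

        import networkx as nx

        def layer_indices(G: nx.DiGraph) -> dict:
            """Layer of each node in a DAG: 1 + longest path to a sink."""
            layers = {}
            # Process successors before predecessors (reverse topological order).
            for v in reversed(list(nx.topological_sort(G))):
                layers[v] = 1 + max((layers[s] for s in G.successors(v)), default=0)
            return layers

        G = nx.DiGraph([(1, 2), (1, 3), (2, 4), (3, 4)])
        print(layer_indices(G))  # {4: 1, 2: 2, 3: 2, 1: 3}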

  1. Studies in Mathematics, Volume 22. Studies in Computer Science.

    ERIC Educational Resources Information Center

    Pollack, Seymour V., Ed.

    The nine articles in this collection were selected because they represent concerns central to computer science, emphasize topics of particular interest to mathematicians, and underscore the wide range of areas deeply and continually affected by computer science. The contents consist of: "Introduction" (S. V. Pollack), "The…

  2. NSTX-U Control System Upgrades

    DOE PAGES

    Erickson, K. G.; Gates, D. A.; Gerhardt, S. P.; ...

    2014-06-01

    The National Spherical Tokamak Experiment (NSTX) is undergoing a wealth of upgrades (NSTX-U). These upgrades, especially the elongated pulse length, require broad changes to the control system that has served NSTX well. A new fiber serial Front Panel Data Port input and output (I/O) stream will supersede the aging copper parallel version. Driver support for the new I/O and cyber security concerns require updating the operating system from Redhat Enterprise Linux (RHEL) v4 to RedHawk (based on RHEL) v6. While the basic control system continues to use the General Atomics Plasma Control System (GA PCS), the effort to forward-port the entire software package to run under 64-bit Linux instead of 32-bit Linux included PCS modifications subsequently shared with GA and other PCS users. Software updates focused on three key areas: (1) code modernization through coding standards (C99/C11), (2) code portability and maintainability through use of the GA PCS code generator, and (3) support of 64-bit platforms. Central to the control system upgrade is the use of a complete real-time (RT) Linux platform provided by Concurrent Computer Corporation, consisting of a computer (iHawk), an operating system and drivers (RedHawk), and RT tools (NightStar). Strong vendor support coupled with an extensive RT toolset influenced this decision. The new real-time Linux platform, I/O, and software engineering will foster enhanced capability and performance for NSTX-U plasma control.

  3. Towards a general neural controller for quadrupedal locomotion.

    PubMed

    Maufroy, Christophe; Kimura, Hiroshi; Takase, Kunikatsu

    2008-05-01

    Our study aims at the design and implementation of a general controller for quadruped locomotion, allowing the robot to use the whole range of quadrupedal gaits (i.e. from low speed walking to fast running). A general legged locomotion controller must integrate both posture control and rhythmic motion control and have the ability to shift continuously from one control method to the other according to locomotion speed. We are developing such a general quadrupedal locomotion controller by using a neural model involving a CPG (Central Pattern Generator) utilizing ground reaction force sensory feedback. We used a biologically faithful musculoskeletal model with a spine and hind legs, and computationally simulated stable stepping motion at various speeds using the neuro-mechanical system combining the neural controller and the musculoskeletal model. We compared the changes of the most important locomotion characteristics (stepping period, duty ratio and support length) according to speed in our simulations with the data on real cat walking. We found similar tendencies for all of them. In particular, the swing period was approximately constant while the stance period decreased with speed, resulting in a decreasing stepping period and duty ratio. Moreover, the support length increased with speed due to the posterior extreme position that shifted progressively caudally, while the anterior extreme position was approximately constant. This indicates that we succeeded in reproducing to some extent the motion of a cat from the kinematical point of view, even though we used a 2D bipedal model. We expect that such computational models will become essential tools for legged locomotion neuroscience in the future.
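
    As a rough illustration of the CPG-with-force-feedback idea described above, the sketch below advances a single leg's oscillator phase and lets a (fake) ground reaction force prolong the stance phase; all constants, the stance/swing split, and the feedback law are invented for illustration and are not the authors' model.

        import math

        def step_cpg(phase, grf, dt, w_swing=8.0, w_stance=4.0, k_fb=2.0):
            """Advance one leg's CPG phase; loading (grf > 0) slows stance."""
            in_stance = math.sin(phase) < 0.0          # crude stance/swing split
            w = w_stance - k_fb * grf if in_stance else w_swing
            return phase + max(w, 0.1) * dt            # keep the oscillator moving

        phase = 0.0
        for step in range(2000):                       # 2 s at dt = 1 ms
            grf = 0.5 if math.sin(phase) < 0.0 else 0.0   # stand-in force signal
            phase = step_cpg(phase, grf, dt=0.001)
        print(round(phase, 2))                         # total phase advanced

    With these numbers the stance phase advances more slowly than swing, so stance occupies more of each cycle, qualitatively matching the decreasing duty ratio with speed discussed in the abstract.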

  4. Control of fluxes in metabolic networks.

    PubMed

    Basler, Georg; Nikoloski, Zoran; Larhlimi, Abdelhalim; Barabási, Albert-László; Liu, Yang-Yu

    2016-07-01

    Understanding the control of large-scale metabolic networks is central to biology and medicine. However, existing approaches either require specifying a cellular objective or can only be used for small networks. We introduce new coupling types describing the relations between reaction activities, and develop an efficient computational framework, which does not require any cellular objective for systematic studies of large-scale metabolism. We identify the driver reactions facilitating control of 23 metabolic networks from all kingdoms of life. We find that unicellular organisms require a smaller degree of control than multicellular organisms. Driver reactions are under complex cellular regulation in Escherichia coli, indicating their preeminent role in facilitating cellular control. In human cancer cells, driver reactions play pivotal roles in malignancy and represent potential therapeutic targets. The developed framework helps us gain insights into regulatory principles of diseases and facilitates design of engineering strategies at the interface of gene regulation, signaling, and metabolism. © 2016 Basler et al.; Published by Cold Spring Harbor Laboratory Press.
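
    To make the notion of coupled reaction activities concrete, here is a toy linear-programming check of whether one reaction's maximal steady-state flux collapses when another is knocked out. The three-reaction network and the use of scipy are illustrative assumptions, not the authors' framework; real analyses use dedicated flux-coupling tooling.

        import numpy as np
        from scipy.optimize import linprog

        # Toy pathway: r0 makes metabolite A, r1 converts A to B, r2 drains B.
        S = np.array([[1.0, -1.0, 0.0],
                      [0.0, 1.0, -1.0]])
        BOUNDS = [(0.0, 10.0)] * 3

        def max_flux(j, knockout=None):
            """Maximal steady-state flux through reaction j (S v = 0)."""
            bounds = list(BOUNDS)
            if knockout is not None:
                bounds[knockout] = (0.0, 0.0)   # force the knocked-out flux to zero
            c = np.zeros(3)
            c[j] = -1.0                         # maximize v_j == minimize -v_j
            res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
            return -res.fun

        print(max_flux(2))               # 10.0: r2 can run at full capacity
        print(max_flux(2, knockout=0))   # 0.0: r2 is fully coupled to r0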

  5. What you feel is what you see: inverse dynamics estimation underlies the resistive sensation of a delayed cursor

    PubMed Central

    Takamuku, Shinya; Gomi, Hiroaki

    2015-01-01

    How our central nervous system (CNS) learns and exploits relationships between force and motion is a fundamental issue in computational neuroscience. While several lines of evidence have suggested that the CNS predicts motion states and signals from motor commands for control and perception (forward dynamics), it remains controversial whether it also performs the ‘inverse’ computation, i.e. the estimation of force from motion (inverse dynamics). Here, we show that the resistive sensation we experience while moving a delayed cursor, perceived purely from the change in visual motion, provides evidence of the inverse computation. To clearly specify the computational process underlying the sensation, we systematically varied the visual feedback and examined its effect on the strength of the sensation. In contrast to the prevailing theory that sensory prediction errors modulate our perception, the sensation did not correlate with errors in cursor motion due to the delay. Instead, it correlated with the amount of exposure to the forward acceleration of the cursor. This indicates that the delayed cursor is interpreted as a mechanical load, and the sensation represents its visually implied reaction force. Namely, the CNS automatically computes inverse dynamics, using visually detected motions, to monitor the dynamic forces involved in our actions. PMID:26156766

  6. A comparison of decentralized, distributed, and centralized vibro-acoustic control.

    PubMed

    Frampton, Kenneth D; Baumann, Oliver N; Gardonio, Paolo

    2010-11-01

    Direct velocity feedback control of structures is well known to increase structural damping and thus reduce vibration. In multi-channel systems the way in which the velocity signals are used to inform the actuators ranges from decentralized control, through distributed or clustered control to fully centralized control. The objective of distributed controllers is to exploit the anticipated performance advantage of the centralized control while maintaining the scalability, ease of implementation, and robustness of decentralized control. However, and in seeming contradiction, some investigations have concluded that decentralized control performs as well as distributed and centralized control, while other results have indicated that distributed control has significant performance advantages over decentralized control. The purpose of this work is to explain this seeming contradiction in results, to explore the effectiveness of decentralized, distributed, and centralized vibro-acoustic control, and to expand the concept of distributed control to include the distribution of the optimization process and the cost function employed.
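
    The architectural distinction the abstract draws can be stated compactly in code: in a decentralized scheme the velocity-feedback gain matrix is diagonal (each actuator sees only its collocated sensor), while centralized control allows cross terms. The two-mass model and gains below are invented for illustration and make no claim about which architecture wins; they merely contrast the gain structures.

        import numpy as np

        M = np.eye(2)                                   # toy two-mass structure
        K = np.array([[2.0, -1.0], [-1.0, 2.0]])        # coupled stiffness

        G_decentralized = np.diag([0.8, 0.8])           # local loops only
        G_centralized = np.array([[0.8, 0.3],
                                  [0.3, 0.8]])          # cross-coupled feedback

        def settle_time(G, dt=0.001, tol=1e-3):
            """Steps until the free response of x'' = -M^-1(Kx + Gx') decays."""
            x, v = np.array([1.0, 0.0]), np.zeros(2)
            for n in range(200000):
                a = -np.linalg.solve(M, K @ x + G @ v)  # velocity feedback damps
                v += a * dt
                x += v * dt
                if np.linalg.norm(x) + np.linalg.norm(v) < tol:
                    return n * dt
            return float("inf")

        print(settle_time(G_decentralized), settle_time(G_centralized))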

  7. Improved usability of a multi-infusion setup using a centralized control interface: A task-based usability test

    PubMed Central

    Cnossen, Fokie; Dieperink, Willem; Bult, Wouter; de Smet, Anne Marie; Touw, Daan J.; Nijsten, Maarten W.

    2017-01-01

    The objective of this study was to assess the usability benefits of adding a bedside central control interface that controls all intravenous (IV) infusion pumps compared to the conventional individual control of multiple infusion pumps. Eighteen dedicated ICU nurses volunteered in a between-subjects task-based usability test. A newly developed central control interface was compared to conventional control of multiple infusion pumps in a simulated ICU setting. Task execution time, clicks, errors and questionnaire responses were evaluated. Overall the central control interface outperformed the conventional control in terms of fewer user actions (40±3 vs. 73±20 clicks, p<0.001) and fewer user errors (1±1 vs. 3±2 errors, p<0.05), with no difference in task execution times (421±108 vs. 406±119 seconds, not significant). Questionnaires indicated a significant preference for the central control interface. Despite being novice users of the central control interface, ICU nurses displayed improved performance with the central control interface compared to the conventional interface they were familiar with. We conclude that the new user interface has an overall better usability than the conventional interface. PMID:28800617
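
    For readers who want to sanity-check the reported click counts, the summary statistics above are enough for a Welch t-test. The group sizes (nine nurses per arm, inferred from the between-subjects design with eighteen participants) are an assumption, not stated in the record.

        from scipy.stats import ttest_ind_from_stats

        # 40±3 clicks (central interface) vs 73±20 clicks (conventional),
        # assuming 9 nurses per group; Welch's test for unequal variances.
        t, p = ttest_ind_from_stats(mean1=40, std1=3, nobs1=9,
                                    mean2=73, std2=20, nobs2=9,
                                    equal_var=False)
        print(t, p)   # t ~ -4.9, p ~ 0.001, consistent with the reported p < 0.001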

  8. Improved usability of a multi-infusion setup using a centralized control interface: A task-based usability test.

    PubMed

    Doesburg, Frank; Cnossen, Fokie; Dieperink, Willem; Bult, Wouter; de Smet, Anne Marie; Touw, Daan J; Nijsten, Maarten W

    2017-01-01

    The objective of this study was to assess the usability benefits of adding a bedside central control interface that controls all intravenous (IV) infusion pumps compared to the conventional individual control of multiple infusion pumps. Eighteen dedicated ICU nurses volunteered in a between-subjects task-based usability test. A newly developed central control interface was compared to conventional control of multiple infusion pumps in a simulated ICU setting. Task execution time, clicks, errors and questionnaire responses were evaluated. Overall the central control interface outperformed the conventional control in terms of fewer user actions (40±3 vs. 73±20 clicks, p<0.001) and fewer user errors (1±1 vs. 3±2 errors, p<0.05), with no difference in task execution times (421±108 vs. 406±119 seconds, not significant). Questionnaires indicated a significant preference for the central control interface. Despite being novice users of the central control interface, ICU nurses displayed improved performance with the central control interface compared to the conventional interface they were familiar with. We conclude that the new user interface has an overall better usability than the conventional interface.

  9. Everything You Always Wanted to Know about Computers but Were Afraid to Ask.

    ERIC Educational Resources Information Center

    DiSpezio, Michael A.

    1989-01-01

    An overview of the basics of computers is presented. Definitions and discussions of processing, programs, memory, DOS, anatomy and design, central processing unit (CPU), disk drives, floppy disks, and peripherals are included. This article was designed to help teachers to understand basic computer terminology. (CW)

  10. Digital Data Transmission Via CATV.

    ERIC Educational Resources Information Center

    Stifle, Jack; And Others

    A low cost communications network has been designed for use in the PLATO IV computer-assisted instruction system. Over 1,000 remote computer graphic terminals each requiring a 1200 bps channel are to be connected to one centrally located computer. Digital data are distributed to these terminals using standard commercial cable television (CATV)…

  11. Infrared-Proximity-Sensor Modules For Robot

    NASA Technical Reports Server (NTRS)

    Parton, William; Wegerif, Daniel; Rosinski, Douglas

    1995-01-01

    Collision-avoidance system for articulated robot manipulators uses infrared proximity sensors grouped together in an array of sensor modules. The sensor modules, called "sensorCells," are distributed-processing, board-level products for acquiring data from proximity sensors strategically mounted on robot manipulators. Each sensorCell is self-contained and consists of multiple sensing elements, discrete electronics, a microcontroller, and communications components. The modules are connected to the central control computer by a redundant serial digital communication subsystem that includes both serial links and a multi-drop bus. The system detects objects made of various materials at distances of up to 50 cm; for some materials, such as thermal protection system tiles, the detection range is reduced to approximately 20 cm.

  12. Microcomputer keeps watch at Emerald Mine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-04-01

    This paper reviews the computerized mine monitoring system set up at the Emerald Mine, SW Pennsylvania, USA. This coal mine has pioneered the automation of many production and safety features, and this article covers its work in fire detection and conveyor belt monitoring. A central computer control room can safely watch over the whole underground mining operation using one 25 inch colour monitor. The system provides multi-point monitoring of carbon monoxide, heat anomalies, and toxic gases, and supervises conveyor belt operation from start-up to closedown. These new data-acquisition systems will lead the way to safer, more efficient coal mining.

  13. Morphogenesis in Plants: Modeling the Shoot Apical Meristem, and Possible Applications

    NASA Technical Reports Server (NTRS)

    Mjolsness, Eric; Gor, Victoria; Meyerowitz, Elliot; Mann, Tobias

    1998-01-01

    A key determinant of overall morphogenesis in flowering plants such as Arabidopsis thaliana is the shoot apical meristem (growing tip of a shoot). Gene regulation networks can be used to model this system. We exhibit a very preliminary two-dimensional model including gene regulation and intercellular signaling, but omitting cell division and dynamical geometry. The model can be trained to have three stable regions of gene expression corresponding to the central zone, peripheral zone, and rib meristem. We also discuss a space-engineering motivation for studying and controlling the morphogenesis of plants using such computational models.

  14. 32 bit digital optical computer - A hardware update

    NASA Technical Reports Server (NTRS)

    Guilfoyle, Peter S.; Carter, James A., III; Stone, Richard V.; Pape, Dennis R.

    1990-01-01

    Such state-of-the-art devices as multielement linear laser diode arrays, multichannel acoustooptic modulators, optical relays, and avalanche photodiode arrays, are presently applied to the implementation of a 32-bit supercomputer's general-purpose optical central processing architecture. Shannon's theorem, Morozov's control operator method (in conjunction with combinatorial arithmetic), and DeMorgan's law have been used to design an architecture whose 100 MHz clock renders it fully competitive with emerging planar-semiconductor technology. Attention is given to the architecture's multichannel Bragg cells, thermal design and RF crosstalk considerations, and the first and second anamorphic relay legs.

  15. A multi-channel coronal spectrophotometer.

    NASA Technical Reports Server (NTRS)

    Landman, D. A.; Orrall, F. Q.; Zane, R.

    1973-01-01

    We describe a new multi-channel coronal spectrophotometer system, presently being installed at Mees Solar Observatory, Mount Haleakala, Maui. The apparatus is designed to record and interpret intensities from many sections of the visible and near-visible spectral regions simultaneously, with relatively high spatial and temporal resolution. The detector, a thermoelectrically cooled silicon vidicon camera tube, has its central target area divided into a rectangular array of about 100,000 pixels and is read out in a slow-scan (about 2 sec/frame) mode. Instrument functioning is entirely under PDP 11/45 computer control, and interfacing is via the CAMAC system.

  16. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  17. Characterizing Crowd Participation and Productivity of Foldit Through Web Scraping

    DTIC Science & Technology

    2016-03-01

    Berkeley Open Infrastructure for Network Computing CDF Cumulative Distribution Function CPU Central Processing Unit CSSG Crowdsourced Serious Game...computers at once can create a similar capacity. According to Anderson [6], principal investigator for the Berkeley Open Infrastructure for Network...extraterrestrial life. From this project, a software-based distributed computing platform called the Berkeley Open Infrastructure for Network Computing

  18. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite-differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
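
    The complex-variable technique named in the title is commonly realized as the complex-step derivative: evaluate the function at x + ih and take the imaginary part, which avoids the subtractive cancellation that makes step-size choice so delicate for finite differences. A minimal sketch with a made-up test function:

        import numpy as np

        def complex_step(f, x, h=1e-30):
            """f'(x) ~= Im f(x + ih) / h, accurate even for tiny h."""
            return np.imag(f(x + 1j * h)) / h

        f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

        x = 1.5
        d_complex = complex_step(f, x)
        d_central = (f(x + 1e-8) - f(x - 1e-8)) / 2e-8   # step-size sensitive
        print(d_complex, d_central)

    Unlike the central difference, the complex step involves no subtraction of nearly equal numbers, so h can be made arbitrarily small without losing precision.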

  19. Integrating computers in physics teaching: An Indian perspective

    NASA Astrophysics Data System (ADS)

    Jolly, Pratibha

    1997-03-01

    The University of Delhi has around twenty affiliated undergraduate colleges that offer a three-year physics major program to nearly five hundred students. All follow a common curriculum and submit to a centralized examination. This structure of tertiary education makes it relatively difficult to implement radical or rapid changes in the formal curriculum. The technology onslaught has, at last, irrevocably altered this; computers are carving new windows in old citadels and defining the agenda in teaching-learning environments the world over. In 1992, we formally introduced Computational Physics as a core paper in the second year of the Bachelor's program. As yet, the emphasis is on imparting familiarity with computers, a programming language and rudiments of numerical algorithms. In a parallel development, we also introduced a strong component of instrumentation with modern day electronic devices, including microprocessors. Many of us, however, would like to see not just computer presence in our curriculum but a totally new curriculum and teaching strategy that exploits, befittingly, the new technology. The current challenge is to realize in practice the full potential of the computer as the proverbial versatile tool: interfacing laboratory experiments for real-time acquisition and control of data; enabling rigorous analysis and data modeling; simulating micro-worlds and real life phenomena; establishing new cognitive linkages between theory and empirical observation; and between abstract constructs and visual representations.

  20. Experimental realization of universal geometric quantum gates with solid-state spins.

    PubMed

    Zu, C; Wang, W-B; He, L; Zhang, W-G; Dai, C-Y; Wang, F; Duan, L-M

    2014-10-02

    Experimental realization of a universal set of quantum logic gates is the central requirement for the implementation of a quantum computer. In an 'all-geometric' approach to quantum computation, the quantum gates are implemented using Berry phases and their non-Abelian extensions, holonomies, from geometric transformation of quantum states in the Hilbert space. Apart from its fundamental interest and rich mathematical structure, the geometric approach has some built-in noise-resilience features. On the experimental side, geometric phases and holonomies have been observed in thermal ensembles of liquid molecules using nuclear magnetic resonance; however, such systems are known to be non-scalable for the purposes of quantum computing. There are proposals to implement geometric quantum computation in scalable experimental platforms such as trapped ions, superconducting quantum bits and quantum dots, and a recent experiment has realized geometric single-bit gates in a superconducting system. Here we report the experimental realization of a universal set of geometric quantum gates using the solid-state spins of diamond nitrogen-vacancy centres. These diamond defects provide a scalable experimental platform with the potential for room-temperature quantum computing, which has attracted strong interest in recent years. Our experiment shows that all-geometric and potentially robust quantum computation can be realized with solid-state spin quantum bits, making use of recent advances in the coherent control of this system.
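
    Underlying the 'all-geometric' approach is the Berry phase, which can be sketched numerically: transport an eigenstate around a closed loop of Hamiltonians and read the phase off the product of successive overlaps. The spin-1/2 example and sign conventions below are illustrative, not the experiment's holonomic gates.

        import numpy as np

        def spin_up_state(theta, phi):
            """Eigenstate of n.sigma with eigenvalue +1, n given by (theta, phi)."""
            return np.array([np.cos(theta / 2),
                             np.exp(1j * phi) * np.sin(theta / 2)])

        theta, n = 0.8, 400                       # cone angle, loop discretization
        phis = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        states = [spin_up_state(theta, p) for p in phis]

        overlap_product = 1.0 + 0.0j
        for k in range(n):                        # discrete parallel transport
            overlap_product *= np.vdot(states[k], states[(k + 1) % n])

        berry = -np.angle(overlap_product)
        print(berry, -np.pi * (1.0 - np.cos(theta)))   # -(1/2) x solid angle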

  1. 40 CFR 81.104 - Central Pennsylvania Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Quality Control Region. 81.104 Section 81.104 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Air Quality Control Regions § 81.104 Central Pennsylvania Intrastate Air Quality Control Region. The Central Pennsylvania Intrastate Air Quality Control Region consists of the territorial area encompassed by...

  2. 40 CFR 81.104 - Central Pennsylvania Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Quality Control Region. 81.104 Section 81.104 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Air Quality Control Regions § 81.104 Central Pennsylvania Intrastate Air Quality Control Region. The Central Pennsylvania Intrastate Air Quality Control Region consists of the territorial area encompassed by...

  3. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research.

    PubMed

    Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila

    2015-11-01

    Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
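
    The core idea, iterative multivariate estimation without moving patient-level data, can be caricatured in a few lines: each site returns only an aggregate gradient of a local likelihood, and a coordinator updates the shared coefficients. The toy data and plain gradient descent below are illustrative assumptions, not the SCANNER implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        # Four "sites", each holding its own patients (toy data).
        sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50))
                 for _ in range(4)]

        def local_gradient(X, y, beta):
            """Logistic-regression gradient summed over local patients only."""
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            return X.T @ (p - y)               # an aggregate; no rows leave the site

        beta = np.zeros(3)
        for _ in range(200):                   # coordinator's iterative update
            beta -= 0.005 * sum(local_gradient(X, y, beta) for X, y in sites)
        print(beta)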

  4. Reaching Agreement in Quantum Hybrid Networks.

    PubMed

    Shi, Guodong; Li, Bo; Miao, Zibo; Dower, Peter M; James, Matthew R

    2017-07-20

    We consider a basic quantum hybrid network model consisting of a number of nodes, each holding a qubit, for which the aim is to drive the network to a consensus in the sense that all qubits reach a common state. Projective measurements, serving as the control means, are applied, and the measurement results are exchanged among the nodes via classical communication channels. In this way the quantum-operation/classical-communication nature of hybrid quantum networks is captured, although coherent states and joint operations are not taken into consideration, in order to facilitate a clear and explicit analysis. We show how to carry out centralized optimal path planning for this network with all-to-all classical communications, in which case the problem becomes a stochastic optimal control problem with a continuous action space. To overcome the computation and communication obstacles facing centralized solutions, we also develop a distributed Pairwise Qubit Projection (PQP) algorithm, in which pairs of nodes meet at a given time and respectively perform measurements at their geometric average. We show that the qubit states are driven to a consensus almost surely under the proposed PQP algorithm, and that the expected qubit density operators converge to the average of the network's initial values.

  5. Examining the architecture of cellular computing through a comparative study with a computer

    PubMed Central

    Wang, Degeng; Gribskov, Michael

    2005-01-01

    The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software–hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's ‘hardware’ equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the ‘bandwidth’ of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed. PMID:16849179

  6. Examining the architecture of cellular computing through a comparative study with a computer.

    PubMed

    Wang, Degeng; Gribskov, Michael

    2005-06-22

    The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software-hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's "hardware" equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the "bandwidth" of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed.

  7. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 32: A new era in international technical communication: American-Russian collaboration

    NASA Technical Reports Server (NTRS)

    Flammia, Madelyn; Barclay, Rebecca O.; Pinelli, Thomas E.; Keene, Michael L.; Burger, Robert H.; Kennedy, John M.

    1993-01-01

    Until the recent dissolution of the Soviet Union, the Communist Party exerted a strict control of access to and dissemination of scientific and technical information (STI). This article presents models of the Soviet-style information society and the Western-style information society and discusses the effects of centralized governmental control of information on Russian technical communication practices. The effects of political control on technical communication are then used to interpret the results of a survey of Russian and U.S. aerospace engineers and scientists concerning the time devoted to technical communication, their collaborative writing practices and their attitudes toward collaboration, the kinds of technical documents they produce and use, their views regarding the appropriate content for an undergraduate technical communication course, and their use of computer technology. Finally, the implications of these findings for future collaboration between Russian and U.S. engineers and scientists are examined.

  8. The CRAF/Cassini power subsystem - Preliminary design update

    NASA Technical Reports Server (NTRS)

    Atkins, Kenneth L.; Brisendine, Philip; Clark, Karla; Klein, John; Smith, Richard

    1991-01-01

    A chronology is provided of the rationale leading from the early Mariner spacecraft to the CRAF/Cassini Mariner Mark II power subsystem architecture. The design pathway began with a hybrid comprising a solar photovoltaic array, a radioisotope thermoelectric generator (RTG), and a battery, supplying a power profile with a peak loading of about 300 W. The initial concept was to distribute power through a new solid-state, programmable switch controlled by an embedded microprocessor. As the overall mission, science, and project design matured, the power requirements increased. The design evolved from the hybrid to two RTGs plus batteries to meet peak loadings of nearly 500 W in 1989. Later that year, circumstances led to abandonment of the distributed computer concept and a return to centralized control. Finally, as power requirements continued to grow, a third RTG was added to the design and the battery was removed, with a return to the discharge controller for transients during fault-recovery procedures.

  9. NASA/DoD Aerospace Knowledge Diffusion Research Project. XXXII - A new era in international technical communication: American-Russian collaboration

    NASA Technical Reports Server (NTRS)

    Flammia, Madelyn; Barclay, Rebecca O.; Pinelli, Thomas E.; Keene, Michael L.; Burger, Robert H.; Kennedy, John M.

    1993-01-01

    Until the recent dissolution of the Soviet Union, the Communist Party exerted a strict control of access to and dissemination of scientific and technical information. This article presents models of the Soviet-style information society and the Western-style information society and discusses the effects of centralized governmental control of information on Russian technical communication practices. The effects of political control on technical communication are then used to interpret the results of a survey of Russian and U.S. aerospace engineers and scientists concerning the time devoted to technical communication, their collaborative writing practices and their attitudes toward collaboration, the kinds of technical documents they produce and use, their views regarding the appropriate content for an undergraduate technical communication course, and their use of computer technology. Finally, the implications of these findings for future collaboration between Russian and U.S. engineers and scientists are examined.

  10. Quantifying the Impact of Unavailability in Cyber-Physical Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Federick T.

    2014-01-01

    The Supervisory Control and Data Acquisition (SCADA) system discussed in this work manages a distributed control network for the Tunisian Electric & Gas Utility. The network is dispersed over a large geographic area and monitors and controls the flow of electricity/gas from both remote and centralized locations. The availability of the SCADA system in this context is critical to ensuring the uninterrupted delivery of energy, including safety, security, continuity of operations and revenue. Such SCADA systems are the backbone of national critical cyber-physical infrastructures. Herein, we propose adapting the Mean Failure Cost (MFC) metric for quantifying the cost of unavailability. This new metric combines the classic availability formulation with MFC. The resulting metric, the so-called Econometric Availability (EA), offers a computational basis to evaluate a system in terms of the gain/loss ($/hour of operation) that affects each stakeholder due to unavailability.
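
    As a back-of-envelope illustration of combining classic availability with per-stakeholder failure costs (the exact EA formula is not reproduced in the record), one might weight each stakeholder's loss rate by expected downtime; every number below is invented.

        # Classic steady-state availability from mean time between failures
        # (MTBF) and mean time to repair (MTTR), both in hours.
        mtbf, mttr = 2000.0, 4.0
        availability = mtbf / (mtbf + mttr)

        # Hypothetical stakes: each stakeholder's loss rate ($/hour) while
        # the SCADA system is unavailable.
        stakes = {"utility": 5000.0, "industrial_customer": 1200.0}

        for stakeholder, loss_rate in stakes.items():
            expected_loss = (1.0 - availability) * loss_rate   # $/hour of operation
            print(f"{stakeholder}: {expected_loss:.2f} $/hour")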

  11. Factors influencing tests of auditory processing: a perspective on current issues and relevant concerns.

    PubMed

    Cacace, Anthony T; McFarland, Dennis J

    2013-01-01

    Tests of auditory perception, such as those used in the assessment of central auditory processing disorders ([C]APDs), represent a domain in audiological assessment where measurement of this theoretical construct is often confounded by nonauditory abilities due to methodological shortcomings. These confounds include the effects of cognitive variables such as memory and attention, and suboptimal testing paradigms, including the use of verbal reproduction as a form of response selection. We argue that these factors need to be controlled more carefully and/or modified so that their impact on tests of auditory and visual perception is minimal. The purpose of this article is to advocate for a stronger theoretical framework than currently exists and to suggest better methodological strategies to improve assessment of auditory processing disorders (APDs). Emphasis is placed on adaptive forced-choice psychophysical methods and the use of matched tasks in multiple sensory modalities to achieve these goals. Together, this approach has the potential to improve the construct validity of the diagnosis, enhance and develop theory, and evolve into a preferred method of testing. Methods commonly used in studies of APDs are examined and, where possible, compared to contemporary psychophysical methods that emphasize computer-controlled forced-choice paradigms. In many cases, the procedures used in studies of APD introduce confounding factors that could be minimized if computer-controlled forced-choice psychophysical methods were utilized. Ambiguities of interpretation, indeterminate diagnoses, and unwanted confounds can be avoided by minimizing memory and attentional demands on the input end and precluding the use of response-selection strategies that rely on complex motor processes on the output end. We advocate the use of computer-controlled forced-choice psychophysical paradigms in combination with matched tasks in multiple sensory modalities to enhance the prospect of obtaining a valid diagnosis. American Academy of Audiology.
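
    The computer-controlled forced-choice paradigms advocated here are typically run as adaptive staircases; below is a minimal two-down/one-up track (which converges near the 70.7% correct point) against a simulated listener. The observer model and step size are illustrative assumptions.

        import random

        def simulated_listener(level):
            """Fake observer: more likely correct at higher stimulus levels."""
            return random.random() < min(0.99, max(0.01, level / 50.0))

        level, step, run = 40.0, 4.0, 0
        for trial in range(80):
            if simulated_listener(level):
                run += 1
                if run == 2:             # two consecutive correct -> make it harder
                    level -= step
                    run = 0
            else:                        # any error -> make it easier
                level += step
                run = 0
        print("track ended near level", level)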

  12. Brain-computer interface technology: a review of the first international meeting.

    PubMed

    Wolpaw, J R; Birbaumer, N; Heetderks, W J; McFarland, D J; Peckham, P H; Schalk, G; Donchin, E; Quatrano, L A; Robinson, C J; Vaughan, T M

    2000-06-01

    Over the past decade, many laboratories have begun to explore brain-computer interface (BCI) technology as a radically new communication option for those with neuromuscular impairments that prevent them from using conventional augmentative communication methods. BCI's provide these users with communication channels that do not depend on peripheral nerves and muscles. This article summarizes the first international meeting devoted to BCI research and development. Current BCI's use electroencephalographic (EEG) activity recorded at the scalp or single-unit activity recorded from within cortex to control cursor movement, select letters or icons, or operate a neuroprosthesis. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI which recognizes the commands contained in the input and expresses them in device control. Current BCI's have maximum information transfer rates of 5-25 b/min. Achievement of greater speed and accuracy depends on improvements in signal processing, translation algorithms, and user training. These improvements depend on increased interdisciplinary cooperation between neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective methods for evaluating alternative methods. The practical use of BCI technology depends on the development of appropriate applications, identification of appropriate user groups, and careful attention to the needs and desires of individual users. BCI research and development will also benefit from greater emphasis on peer-reviewed publications, and from adoption of standard venues for presentations and discussion.
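
    The quoted 5-25 bits/min figures come from the standard information-transfer-rate calculation used in the BCI literature for an N-target selection task; a quick sketch, where the rate of ten selections per minute is an assumed example:

        import math

        def bits_per_selection(n_targets: int, accuracy: float) -> float:
            """Information transferred per selection (Wolpaw-style ITR)."""
            n, p = n_targets, accuracy
            bits = math.log2(n) + p * math.log2(p)
            if p < 1.0:
                bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
            return bits

        # Four targets at 90% accuracy, ten selections per minute:
        print(bits_per_selection(4, 0.90) * 10.0, "bits/min")   # ~13.7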

  13. Integrated Control Using the SOFFT Control Structure

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1996-01-01

    The need for integrated/constrained control systems has become clearer as advanced aircraft have introduced new coupled subsystems, such as propulsion subsystems with thrust vectoring and new aerodynamic designs. In this study, we develop an integrated control design methodology which accommodates constraints among subsystem variables while using the Stochastic Optimal Feedforward/Feedback Control Technique (SOFFT), thus maintaining all the advantages of the SOFFT approach. The Integrated SOFFT Control methodology uses a centralized feedforward control and a constrained feedback control law. The control thus takes advantage of the known coupling among the subsystems while maintaining the identity of subsystems for validation purposes and the simplicity of the feedback law for understanding the system response in complicated nonlinear scenarios. The Variable-Gain Output Feedback Control methodology (including constant-gain output feedback) is extended to accommodate equality constraints, and a gain computation algorithm is developed. The designer can set the cross-gains between two variables or subsystems to zero or another value and optimize the remaining gains subject to the constraint. An integrated control law is designed for a modified F-15 SMTD aircraft model with coupled airframe and propulsion subsystems using the Integrated SOFFT Control methodology to produce a set of desired flying qualities.
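
    One way to picture the constrained-gain idea (setting chosen cross-gains to zero and optimizing the rest) is a mask applied during a crude numerical gain search on a toy coupled system. This sketch is not the SOFFT algorithm, and every number in it is invented.

        import numpy as np

        A = np.array([[0.90, 0.10],
                      [0.05, 0.90]])            # two weakly coupled subsystems
        B = np.eye(2)
        MASK = np.eye(2)                        # constraint: cross-gains stay zero

        def cost(K):
            """Finite-horizon quadratic cost under feedback u = -(K*MASK) x."""
            x, J = np.array([1.0, -1.0]), 0.0
            for _ in range(60):
                u = -(K * MASK) @ x             # the mask enforces the structure
                J += x @ x + 0.1 * (u @ u)
                x = A @ x + B @ u
            return J

        K = np.zeros((2, 2))
        for _ in range(150):                    # crude numerical gradient descent
            grad = np.zeros_like(K)
            for idx in np.ndindex(2, 2):
                dK = np.zeros_like(K)
                dK[idx] = 1e-5
                grad[idx] = (cost(K + dK) - cost(K - dK)) / 2e-5
            K -= 0.002 * grad * MASK
        print(np.round(K, 3))                   # only the permitted gains move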

  14. Task conflict and proactive control: A computational theory of the Stroop task.

    PubMed

    Kalanthroff, Eyal; Davelaar, Eddy J; Henik, Avishai; Goldfarb, Liat; Usher, Marius

    2018-01-01

    The Stroop task is a central experimental paradigm used to probe cognitive control by measuring the ability of participants to selectively attend to task-relevant information and inhibit automatic task-irrelevant responses. Research has revealed variability in both experimental manipulations and individual differences. Here, we focus on a particular source of Stroop variability, the reverse-facilitation (RF; faster responses to nonword neutral stimuli than to congruent stimuli), which has recently been suggested as a signature of task conflict. We first review the literature that shows RF variability in the Stroop task, both with regard to experimental manipulations and to individual differences. We suggest that task conflict variability can be understood as resulting from the degree of proactive control that subjects recruit in advance of the Stroop stimulus. When the proactive control is high, task conflict does not arise (or is resolved very quickly), resulting in regular Stroop facilitation. When proactive control is low, task conflict emerges, leading to a slow-down in congruent and incongruent (but not in neutral) trials and thus to Stroop RF. To support this suggestion, we present a computational model of the Stroop task, which includes the resolution of task conflict and its modulation by proactive control. Results show that our model (a) accounts for the variability in Stroop-RF reported in the experimental literature, and (b) solves a challenge to previous Stroop models-their ability to account for reaction time distributional properties. Finally, we discuss theoretical implications to Stroop measures and control deficits observed in some psychopathologies. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
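
    A toy reaction-time calculation can illustrate the paper's qualitative claim, that low proactive control lets task conflict slow congruent and incongruent (but not neutral) trials, producing reverse facilitation. The numbers and the additive form are invented for illustration and are far simpler than the authors' computational model.

        def toy_rt(trial, proactive):
            """Illustrative additive RT model (ms); not the authors' model."""
            base = 500.0
            task_conflict = 80.0 * (1.0 - proactive) if trial != "neutral" else 0.0
            info_conflict = {"congruent": -40.0, "neutral": 0.0,
                             "incongruent": 60.0}[trial]
            return base + task_conflict + info_conflict

        for proactive in (1.0, 0.2):
            rts = {t: toy_rt(t, proactive)
                   for t in ("congruent", "neutral", "incongruent")}
            print(proactive, rts)
        # High proactive control: congruent < neutral (regular facilitation).
        # Low proactive control: congruent > neutral (reverse facilitation).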

  15. Guide to computing at ANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peavler, J.

    1979-06-01

    This publication gives details about the hardware, software, procedures, and services of the Central Computing Facility, as well as information about how to become an authorized user. The languages, compilers, libraries, and applications packages available are described. 17 tables. (RWR)

  16. Data-driven sensor placement from coherent fluid structures

    NASA Astrophysics Data System (ADS)

    Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
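
    The pivoted-QR selection described in the abstract takes only a few lines in practice: compute modes from snapshot data, then let QR column pivoting on the mode matrix pick measurement locations. Toy rank-r data stands in for flow snapshots here; the approach sketched is the POD-plus-pivoted-QR variant, not the DMD-based one.

        import numpy as np
        from scipy.linalg import qr

        rng = np.random.default_rng(1)
        r, n_states, n_snap = 5, 100, 40
        # Rank-r "flow" data: a few coherent structures mixed over snapshots.
        X = rng.normal(size=(n_states, r)) @ rng.normal(size=(r, n_snap))

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Psi = U[:, :r]                          # POD modes (coherent structures)

        _, _, piv = qr(Psi.T, pivoting=True)    # column pivots rank sensor sites
        sensors = piv[:r]
        print("sensor locations:", sensors)

        # Reconstruct a full state from r point measurements:
        x = X[:, 0]
        a = np.linalg.solve(Psi[sensors, :], x[sensors])   # fit modal amplitudes
        x_hat = Psi @ a
        print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))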

  17. High Available COTS Based Computer for Space

    NASA Astrophysics Data System (ADS)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of the target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application's boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures that fulfill the availability and reliability requirements as well as the required increase in data processing power. Alongside the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems has not always been possible because of the obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  18. IAIMS development at Harvard Medical School.

    PubMed Central

    Barnett, G O; Greenes, R A; Zielstorff, R D

    1988-01-01

    The long-range goal of this IAIMS development project is to achieve an Integrated Academic Information Management System for the Harvard Medical School, the Francis A. Countway Library of Medicine, and Harvard's affiliated institutions and their respective libraries. An "opportunistic, incremental" approach to planning has been devised. The projects selected for the initial phase are to implement an increasingly powerful electronic communications network, to encourage the use of a variety of bibliographic and information access techniques, and to begin an ambitious program of faculty and student education in computer science and its applications to medical education, medical care, and research. In addition, we will explore means to promote better collaboration among the separate computer science units in the various schools and hospitals. We believe that our planning approach will have relevance to other educational institutions where lack of strong central organizational control prevents a "top-down" approach to planning. PMID:3416098

  19. Security Framework for Pervasive Healthcare Architectures Utilizing MPEG-21 IPMP Components.

    PubMed

    Fragopoulos, Anastasios; Gialelis, John; Serpanos, Dimitrios

    2009-01-01

    Nowadays, in modern and ubiquitous computing environments, the deployment of pervasive healthcare architectures is more imperative than ever. In these architectures the patient is the central point, surrounded by different types of small embedded computing devices that measure sensitive physical indications and interact with hospital databases, thus allowing an urgent medical response when critical situations occur. Such environments must be developed to satisfy the basic security requirements: real-time secure data communication, protection of sensitive medical data and measurements, data integrity and confidentiality, and protection of the monitored patient's privacy. In this work, we argue that the MPEG-21 Intellectual Property Management and Protection (IPMP) components can be used to protect transmitted medical information and enhance the patient's privacy, since they provide selective and controlled access to the medical data sent toward the hospital's servers.

  20. The revolution in data gathering systems. [mini and microcomputers in NASA wind tunnels

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Trover, W. F.

    1975-01-01

    This paper gives a review of the data-acquisition systems used in NASA's wind tunnels from the 1950's to the present as a basis for assessing the impact of minicomputers and microcomputers on data acquisition and processing. The operation and disadvantages of wind-tunnel data systems are summarized for the period before 1950, the early 1950's, the early and late 1960's, and the early 1970's. Some significant advances discussed include the use or development of solid-state components, minicomputer systems, large central computers, on-line data processing, autoranging DC amplifiers, MOS-FET multiplexers, MSI and LSI logic, computer-controlled programmable amplifiers, solid-state remote multiplexing, integrated circuits, and microprocessors. The distributed system currently in use with the 40-ft by 80-ft wind tunnel at Ames Research Center is described in detail. The expected employment of distributed systems and microprocessors in the next decade is noted.

  1. SALLY LEVEL II- COMPUTE AND INTEGRATE DISTURBANCE AMPLIFICATION RATES ON SWEPT AND TAPERED LAMINAR FLOW CONTROL WINGS WITH SUCTION

    NASA Technical Reports Server (NTRS)

    Srokowski, A. J.

    1994-01-01

    The computer program SALLY was developed to compute the incompressible linear stability characteristics and integrate the amplification rates of boundary layer disturbances on swept and tapered wings. For some wing designs, boundary layer disturbances can significantly alter the wing performance characteristics. This is particularly true for swept and tapered laminar flow control wings, which incorporate suction to prevent boundary layer separation. SALLY should prove to be a useful tool in the analysis of these wing performance characteristics. The first step in calculating the disturbance amplification rates is to numerically solve the compressible laminar boundary-layer equations with suction for the swept and tapered wing. A two-point finite-difference method is used to solve the governing continuity, momentum, and energy equations. A similarity transformation removes the wall normal velocity as a boundary condition and places it in the governing equations as a parameter, so the awkward nonlinear boundary condition is avoided. The resulting compressible boundary layer data are used by SALLY to compute the incompressible linear stability characteristics. The local disturbance growth is obtained from temporal stability theory and converted into a local growth rate for integration. The direction of the local group velocity is taken as the direction of integration. The amplification rate, or logarithmic disturbance amplitude ratio, is obtained by integrating the local disturbance growth over distance; it measures the growth of linear disturbances within the boundary layer and can serve as a guide in transition prediction. This program is written in FORTRAN IV and ASSEMBLER for batch execution and has been implemented on a CDC CYBER 70 series computer with a central memory requirement of approximately 67K (octal) 60-bit words. SALLY was developed in 1979.
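
    The "logarithmic disturbance amplitude ratio" computed here matches the standard e^N formulation of linear stability analysis; as a hedged reading (the program documentation defines the exact quantities), the integrated amplification factor takes the form

        N(s) \;=\; \ln\frac{A(s)}{A_0} \;=\; \int_{s_0}^{s} \sigma(s')\,\mathrm{d}s'

    where \sigma is the local spatial growth rate converted from the temporal eigenvalue via the group velocity, s is arc length along the local group-velocity direction, and s_0 is the neutral point at which the disturbance first becomes unstable.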

  2. Development and acceleration of unstructured mesh-based cfd solver

    NASA Astrophysics Data System (ADS)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

    The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of the Riemann problem for each face of the control volume, and advancement of the solution by one time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low-Mach-number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). Speedup of the solution on GPUs with respect to solution on central processing units (CPUs) is compared for different meshes and different methods of distributing input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
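
    As a concrete illustration of the explicit three-step Runge-Kutta marching named above, the sketch below uses the three-stage strong-stability-preserving coefficients of Shu and Osher; whether the authors used exactly these coefficients is an assumption, and the residual function (per-face reconstruction plus Riemann solves) is reduced to a toy upwind flux.

        import numpy as np

        def ssprk3_step(u, dt, residual):
            """One explicit step of the three-stage SSP Runge-Kutta scheme
            (Shu-Osher); residual(u) plays the role of the assembled face
            fluxes from reconstruction and Riemann solutions."""
            u1 = u + dt * residual(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * residual(u1))
            return u / 3.0 + (2.0 / 3.0) * (u2 + dt * residual(u2))

        # Toy stand-in residual: first-order upwind flux for du/dt = -a du/dx
        # on a periodic grid, instead of a full unstructured-mesh assembly.
        a, dx = 1.0, 0.01
        def residual(u):
            return -a * (u - np.roll(u, 1)) / dx

        x = np.linspace(0.0, 1.0, 100, endpoint=False)
        u = np.exp(-((x - 0.5) / 0.1) ** 2)
        for _ in range(50):
            u = ssprk3_step(u, dt=0.5 * dx / a, residual=residual)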

  3. Modeling of the Human - Operator in a Complex System Functioning Under Extreme Conditions

    NASA Astrophysics Data System (ADS)

    Getzov, Peter; Hubenova, Zoia; Yordanov, Dimitar; Popov, Wiliam

    2013-12-01

    Problems related to the operation of sophisticated control systems for objects functioning under extreme conditions are examined, together with the impact of the operator's effectiveness on the system as a whole. The necessity of creating complex simulation models that reflect the operator's activity is discussed. The organizational and technical system of an unmanned aviation complex is described as a sophisticated ergatic system. A computer realization of the main subsystems of the algorithmic model of the human as a controlling system is implemented, and specialized software for data processing and analysis is developed. An original computer model of the human as a tracking system has been implemented. A model of the unmanned complex for operator training and for the formation of a mental model in emergency situations, implemented in the MATLAB/Simulink environment, has been synthesized. As a unit of the control loop, the pilot (operator) is viewed in simplified form as an automatic control system consisting of three main interconnected subsystems: sensory organs (perception sensors); the central nervous system; and executive organs (the muscles of the arms, legs, and back). A theoretical data model for predicting the operator's information load in ergatic systems is proposed; it allows the assessment and prediction of the effectiveness of a real working operator. A simulation model of the operator's activity during takeoff, based on Petri nets, has been synthesized.
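
    The three interconnected subsystems named above lend themselves to a compact closed-loop sketch: a perception delay, a central-nervous-system gain, and a first-order neuromuscular lag driving a simple plant. This is a generic operator model under assumed parameter values, not the paper's calibrated one.

        import numpy as np

        # Generic operator-in-the-loop tracking sketch. All parameter values
        # (delay, gain, lag, plant) are illustrative assumptions.
        dt, T = 0.01, 10.0
        tau_delay, K_cns, T_muscle = 0.2, 2.0, 0.3   # perception/CNS/muscle
        n, delay_steps = int(T / dt), int(0.2 / dt)

        target = np.sin(np.pi * np.arange(n) * dt)    # signal to track
        errors = np.zeros(n)
        output = muscle = 0.0

        for k in range(n):
            errors[k] = target[k] - output
            e_seen = errors[max(k - delay_steps, 0)]      # delayed perception
            command = K_cns * e_seen                      # CNS response
            muscle += dt / T_muscle * (command - muscle)  # neuromuscular lag
            output += dt * muscle                         # plant: integrator

        print(f"RMS tracking error: {np.sqrt(np.mean(errors ** 2)):.3f}")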

  4. All biology is computational biology.

    PubMed

    Markowetz, Florian

    2017-03-01

    Here, I argue that computational thinking and techniques are so central to the quest of understanding life that today all biology is computational biology. Computational biology brings order into our understanding of life; it makes biological concepts rigorous and testable, and it provides a reference map that holds together individual insights. The next modern synthesis in biology will be driven by mathematical, statistical, and computational methods being absorbed into mainstream biological training, turning biology into a quantitative science.

  5. Integrating Xgrid into the HENP distributed computing model

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication of each node to a central controller. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), putting task and job submission effortlessly within reach of users already using the tool for traditional Grid or local cluster job submission. We discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.
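
    For readers curious what "a minimum of additional configuration" looked like in practice, job submission went through the xgrid(1) command-line tool that shipped with OS X. The flags below are recalled from the Tiger/Leopard-era man page and may differ between releases, and the controller hostname is a placeholder.

        import subprocess

        # Hedged sketch: submit a trivial job to an Xgrid controller via the
        # OS X xgrid(1) CLI. Hostname is hypothetical; flag spellings follow
        # the Tiger/Leopard-era man page and may vary by release.
        controller = "xgrid.example.edu"
        result = subprocess.run(
            ["xgrid", "-h", controller, "-job", "submit", "/usr/bin/cal"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)  # controller replies with a job-identifier plist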

  6. Real-time dynamics simulation of the Cassini spacecraft using DARTS. Part 1: Functional capabilities and the spatial algebra algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A.; Man, G. K.

    1993-01-01

    This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements of the spacecraft control system call for a high-fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.

  7. Corpus callosal atrophy and associations with cognitive impairment in Parkinson disease

    PubMed Central

    Bledsoe, Ian O.; Merkitch, Doug; Dinh, Vy; Bernard, Bryan; Stebbins, Glenn T.

    2017-01-01

    Objective: To investigate atrophy of the corpus callosum on MRI in Parkinson disease (PD) and its relationship to cognitive impairment. Methods: One hundred patients with PD and 24 healthy control participants underwent clinical and neuropsychological evaluations and structural MRI brain scans. Participants with PD were classified as cognitively normal (PD-NC; n = 28), having mild cognitive impairment (PD-MCI; n = 47), or having dementia (PDD; n = 25) by Movement Disorder Society criteria. Cognitive domain (attention/working memory, executive function, memory, language, visuospatial function) z scores were calculated. With the use of FreeSurfer image processing, volumes for total corpus callosum and its subsections (anterior, midanterior, central, midposterior, posterior) were computed and normalized by total intracranial volume. Callosal volumes were compared between participants with PD and controls and among PD cognitive groups, covarying for age, sex, and PD duration and with multiple comparison corrections. Regression analyses were performed to evaluate relationships between callosal volumes and performance in cognitive domains. Results: Participants with PD had reduced corpus callosum volumes in midanterior and central regions compared to healthy controls. Participants with PDD demonstrated decreased callosal volumes involving multiple subsections spanning anterior to posterior compared to participants with PD-MCI and PD-NC. Regional callosal atrophy predicted cognitive domain performance such that central volumes were associated with the attention/working memory domain; midposterior volumes with executive function, language, and memory domains; and posterior volumes with memory and visuospatial domains. Conclusions: Notable volume loss occurs in the corpus callosum in PD, with specific neuroanatomic distributions in PDD and relationships of regional atrophy to different cognitive domains. Callosal volume loss may contribute to clinical manifestations of PD cognitive impairment. PMID:28235816
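
    The normalization-and-covariate pattern in these methods (volumes divided by total intracranial volume, then related to cognitive scores while covarying age, sex, and disease duration) can be sketched as an ordinary least-squares model; the data and variable names below are synthetic placeholders, not the study's data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 100
        tiv = rng.normal(1500.0, 120.0, n)        # total intracranial volume, cm^3
        callosal = rng.normal(3.2, 0.4, n) * tiv / 1500.0
        norm_vol = callosal / tiv                 # TIV-normalized callosal volume
        age = rng.normal(68.0, 8.0, n)
        sex = rng.integers(0, 2, n)
        duration = rng.normal(9.0, 4.0, n)        # disease duration, years
        memory_z = 5.0 * norm_vol + rng.normal(0.0, 0.5, n)  # synthetic outcome

        # Regress the cognitive domain score on normalized volume + covariates.
        X = sm.add_constant(np.column_stack([norm_vol, age, sex, duration]))
        fit = sm.OLS(memory_z, X).fit()
        print(fit.summary())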

  8. Quantification of Peripapillary Sparing and Macular Involvement in Stargardt Disease (STGD1)

    PubMed Central

    Rhee, David W.; Smith, R. Theodore; Tsang, Stephen H.; Allikmets, Rando; Chang, Stanley; Lazow, Margot A.; Hood, Donald C.; Greenstein, Vivienne C.

    2011-01-01

    Purpose. To quantify and compare structure and function across the macula and peripapillary area in Stargardt disease (STGD1). Methods. Twenty-seven patients (27 eyes) and 12 age-similar controls (12 eyes) were studied. Patients were classified on the basis of full-field electroretinogram (ERG) results. Fundus autofluorescence (FAF) and spectral domain-optical coherence tomography (SD-OCT) horizontal line scans were obtained through the fovea and peripapillary area. The thicknesses of the outer nuclear layer plus outer plexiform layer (ONL+), outer segment (OS), and retinal pigment epithelium (RPE) were measured through the fovea, and peripapillary areas from 1° to 4° temporal to the optic disc edge using a computer-aided, manual segmentation technique. Visual sensitivities in the central 10° were assessed using microperimetry and related to retinal layer thicknesses. Results. Compared to the central macula, the differences between controls and patients in ONL+, OS, and RPE layer thicknesses were less in the nasal and temporal macula. Relative sparing of the ONL+ and/or OS layers was detected in the nasal (i.e., peripapillary) macula in 8 of 13 patients with extramacular disease on FAF; relative functional sparing was also detected in this subgroup. All 14 patients with disease confined to the central macula, as detected on FAF, showed ONL+ and OS layer thinning in regions of normal RPE thickness. Conclusions. Relative peripapillary sparing was detected in STGD1 patients with extramacular disease on FAF. Photoreceptor thinning may precede RPE degeneration in STGD1. PMID:21873672

  9. A Series of Molecular Dynamics and Homology Modeling Computer Labs for an Undergraduate Molecular Modeling Course

    ERIC Educational Resources Information Center

    Elmore, Donald E.; Guayasamin, Ryann C.; Kieffer, Madeleine E.

    2010-01-01

    As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations,…

  10. All about Reading and Technology.

    ERIC Educational Resources Information Center

    Karbal, Harold, Ed.

    1985-01-01

    The central theme in this journal issue is the use of the computer in teaching reading. The following articles are included: "The Use of Computers in the Reading Program: A District Approach" by Nora Forester; "Reading and Computers: A Partnership" by Dr. Martha Irwin; "Rom, Ram and Reason" by Candice Carlile; "Word Processing: Practical Ideas and…

  11. Long Range Planning for Computer Use--A Task Force Model.

    ERIC Educational Resources Information Center

    Raucher, S. M.; Koehler, T. J.

    A Management Operations Review and Evaluation (MORE) study of the Department of Management Information and Computer Services, which was completed in the fall of 1980, strongly recommended that the Montgomery County Public Schools (MCPS) develop a long-range plan to meet the computer needs of schools and central offices. In response to this…

  12. French Plans for Fifth Generation Computer Systems.

    DTIC Science & Technology

    1984-12-07

    ... centrally managed project in France that covers all facets of the ... Centre National de Recherche Scientifique (CNRS) Cooperative Research ... The National Projects: The French Ministry of Research and ... French industry in electronics, computers, software, and services and to make the ... of Japan's Fifth Generation Project, the French scientific and industrial com... ... systems, man-computer interaction, novel computer structures, knowledge-based computer systems

  13. 40 CFR 81.219 - Central Oregon Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Central Oregon Intrastate Air Quality... Quality Control Regions § 81.219 Central Oregon Intrastate Air Quality Control Region. The Central Oregon... outermost boundaries of the area so delimited): In the State of Oregon: Crook County, Deschutes County, Hood...

  14. 40 CFR 81.219 - Central Oregon Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 18 2013-07-01 2013-07-01 false Central Oregon Intrastate Air Quality... Quality Control Regions § 81.219 Central Oregon Intrastate Air Quality Control Region. The Central Oregon... outermost boundaries of the area so delimited): In the State of Oregon: Crook County, Deschutes County, Hood...

  15. 40 CFR 81.219 - Central Oregon Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 18 2012-07-01 2012-07-01 false Central Oregon Intrastate Air Quality... Quality Control Regions § 81.219 Central Oregon Intrastate Air Quality Control Region. The Central Oregon... outermost boundaries of the area so delimited): In the State of Oregon: Crook County, Deschutes County, Hood...

  16. 40 CFR 81.219 - Central Oregon Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 18 2014-07-01 2014-07-01 false Central Oregon Intrastate Air Quality... Quality Control Regions § 81.219 Central Oregon Intrastate Air Quality Control Region. The Central Oregon... outermost boundaries of the area so delimited): In the State of Oregon: Crook County, Deschutes County, Hood...

  17. 40 CFR 81.219 - Central Oregon Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 17 2011-07-01 2011-07-01 false Central Oregon Intrastate Air Quality... Quality Control Regions § 81.219 Central Oregon Intrastate Air Quality Control Region. The Central Oregon... outermost boundaries of the area so delimited): In the State of Oregon: Crook County, Deschutes County, Hood...

  18. The Effects of Closed-Loop Medical Devices on the Autonomy and Accountability of Persons and Systems.

    PubMed

    Kellmeyer, Philipp; Cochrane, Thomas; Müller, Oliver; Mitchell, Christine; Ball, Tonio; Fins, Joseph J; Biller-Andorno, Nikola

    2016-10-01

    Closed-loop medical devices such as brain-computer interfaces are an emerging and rapidly advancing neurotechnology. The target patients for brain-computer interfaces (BCIs) are often severely paralyzed, and thus particularly vulnerable in terms of personal autonomy, decisionmaking capacity, and agency. Here we analyze the effects of closed-loop medical devices on the autonomy and accountability of both persons (as patients or research participants) and neurotechnological closed-loop medical systems. We show that although BCIs can strengthen patient autonomy by preserving or restoring communicative abilities and/or motor control, closed-loop devices may also create challenges for moral and legal accountability. We advocate the development of a comprehensive ethical and legal framework to address the challenges of emerging closed-loop neurotechnologies like BCIs and stress the centrality of informed consent and refusal as a means to foster accountability. We propose the creation of an international neuroethics task force with members from medical neuroscience, neuroengineering, computer science, medical law, and medical ethics, as well as representatives of patient advocacy groups and the public.

  19. [The laboratory of tomorrow. Particular reference to hematology].

    PubMed

    Cazal, P

    1985-01-01

    A serious prediction can only be an extrapolation of recent developments. To be exact, the development has to continue in the same direction, which is only a probability. Probable development of hematological technology: progress in methods. Development of new labelling methods: radio-elements, antibodies. Monoclonal antibodies. Progress in equipment: cell counters and their adaptation to routine hemograms are a certainty. Analyzers: a promise that will perhaps become reality. Coagulometers: progress still to be made. Hemagglutination detectors and their application to grouping: good achievements, but the market is too limited. Computerization and automation: What form will the computerization take? What will the computer do? Whom will the computer control? What should the automatic analyzers be? Two current levels. Relationships between the automatic analyzers and the computer: rapidity, fidelity and, above all, reliability. Memory: large capacity and easy access. Disadvantages: conservatism and technical dependency. How can they be avoided? Development of the environment: laboratory input (outside supplies, electricity, reagents, consumables); samples and their identification. Output: distribution of results and communication problems. Centralization or decentralization? What will tomorrow's laboratory be? Three hypotheses: optimistic, pessimistic, and balanced.

  20. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1991-01-01

    The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
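
    The switching idea is easy to see in one dimension: difference with a high-order central stencil everywhere, and fall back to a one-sided stencil only where a local divided-difference test flags an oscillation. In the sketch below, the fallback is a plain first-order upwind difference standing in for the full ENO reconstruction, and the detector threshold is an assumed tuning parameter.

        import numpy as np

        def filtered_derivative(u, dx, a=1.0, tol=0.5):
            """4th-order central du/dx on a periodic grid, with a local
            fallback to first-order upwinding where the ratio of second
            to first divided differences signals an oscillation. The
            upwind branch stands in for the full ENO apparatus."""
            up2, up1 = np.roll(u, -2), np.roll(u, -1)
            um1, um2 = np.roll(u, 1), np.roll(u, 2)
            central = (-up2 + 8.0 * up1 - 8.0 * um1 + um2) / (12.0 * dx)
            upwind = (u - um1) / dx if a > 0 else (up1 - u) / dx
            d1 = np.abs(up1 - um1) + 1e-12        # first divided difference
            d2 = np.abs(up1 - 2.0 * u + um1)      # second divided difference
            smooth = d2 <= tol * d1               # ratio is O(h) when smooth
            return np.where(smooth, central, upwind)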
