Sample records for large computer installation

  1. Installing computers in older adults' homes and teaching them to access a patient education web site: a systematic approach.

    PubMed

    Dauz, Emily; Moore, Jan; Smith, Carol E; Puno, Florence; Schaag, Helen

    2004-01-01

    This article describes the experiences of nurses who, as part of a large clinical trial, brought the Internet into older adults' homes by installing a computer, if needed, and connecting to a patient education Web site. Most of these patients had not previously used the Internet and were taught even basic computer skills when necessary. Because of increasing use of the Internet in patient education, assessment, and home monitoring, nurses in various roles currently connect with patients to monitor their progress, teach about medications, and answer questions about appointments and treatments. Thus, nurses find themselves playing the role of technology managers for patients with home-based Internet connections. This article provides step-by-step procedures for computer installation and training in the form of protocols, checklists, and patient user guides. By following these procedures, nurses can install computers, arrange Internet access, teach and connect to their patients, and prepare themselves to install future generations of technological devices.

  2. Implementing Accessible Workstations in a Large Diverse University Community.

    ERIC Educational Resources Information Center

    Christierson, Eric; Marota, Cindy; Radwan, Neveen; Wydeven, Julie

    This paper describes how San Jose State University installed adaptive and accessible computer workstations for students with disabilities. It begins by discussing factors crucial to the installation of such workstations, including the importance of understanding legal and budgetary constraints, applying standards which meet diverse disability…

  3. VALIDATION OF ANSYS FINITE ELEMENT ANALYSIS SOFTWARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HAMM, E.R.

    2003-06-27

    This document provides a record of the verification and validation of the ANSYS Version 7.0 software installed on selected CH2M HILL computers. The issues addressed include software verification, installation, validation, configuration management, and error reporting. The ANSYS® computer program is a large-scale, multi-purpose finite element program that may be used for solving several classes of engineering analyses. The analysis capabilities of ANSYS Full Mechanical Version 7.0, installed on selected CH2M Hill Hanford Group (CH2M HILL) Intel processor-based computers, include the ability to solve static and dynamic structural analyses, steady-state and transient heat transfer problems, mode-frequency and buckling eigenvalue problems, static or time-varying magnetic analyses, and various types of field and coupled-field applications. The program contains many special features which allow nonlinearities or secondary effects to be included in the solution, such as plasticity, large strain, hyperelasticity, creep, swelling, large deflections, contact, stress stiffening, temperature dependency, material anisotropy, and thermal radiation. The ANSYS program has been in commercial use since 1970, and has been used extensively in the aerospace, automotive, construction, electronic, energy services, manufacturing, nuclear, plastics, oil, and steel industries.

  4. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  5. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and uses of the DIALOG executive system are described.
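    The executive/data-base pattern the abstract describes can be sketched in a few lines of present-day Python. This is a hypothetical illustration only, not the CDC 6000 implementation; the program names and numerical values are invented:

```python
# Each "independent program" reads and writes only the common data base the
# executive maintains; programs are unaware of one another, so any program
# is a candidate for the library.
def program_a(db):
    db["wing_area"] = 25.0                      # produces common information

def program_b(db):
    # Consumes what an earlier program produced (invented lift formula).
    db["lift"] = 0.5 * 1.2 * 60.0**2 * db["wing_area"] * 0.4

library = [program_a, program_b]                # library of independent programs

database = {}                                   # dynamically constructed data base
for program in library:                         # the executive controls sequencing
    program(database)
print(sorted(database))                         # -> ['lift', 'wing_area']
```

    The design point is that coupling lives entirely in the shared data base, so adding a program to the library requires no change to the others.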

  6. Email networks and the spread of computer viruses

    NASA Astrophysics Data System (ADS)

    Newman, M. E.; Forrest, Stephanie; Balthrop, Justin

    2002-09-01

    Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
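    The address-book structure can be illustrated with a toy directed graph (the `address_books` data and node names are invented for illustration): each stored address is a directed edge, and the mean out-degree is the expected number of addresses a freshly infected machine will mail itself to.

```python
# Invented toy data: node -> addresses stored in its book (out-edges).
address_books = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
    "carol": ["alice", "dave"],
    "dave": [],
}

# Out-degree: how many machines an infected node can attack directly.
out_degree = {u: len(vs) for u, vs in address_books.items()}

# In-degree: how many books a node appears in, i.e. its exposure.
in_degree = {u: 0 for u in address_books}
for vs in address_books.values():
    for v in vs:
        in_degree[v] += 1

mean_out = sum(out_degree.values()) / len(out_degree)
print(mean_out)  # -> 1.25
```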

  7. Squid - a simple bioinformatics grid.

    PubMed

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-08-03

    BLAST is a widely used genetic research tool for the analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation to other computing-intensive repetitive tasks can easily be accomplished in the open-source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large-scale applications. Squid also has an efficient fault-tolerance and crash-recovery system that protects against data loss, being able to re-route jobs upon node failure and to recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation with a pre-configured example.
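    The near-N-fold speedup follows from a simple master/worker split of the query set. A minimal sketch, not Squid's actual code (the `chunk` helper and query names are hypothetical):

```python
# Round-robin a large batch of queries into n_nodes roughly equal chunks;
# each worker processes its chunk independently and the master merges the
# results, so wall-clock time drops to roughly total_work / n_nodes.
def chunk(queries, n_nodes):
    return [queries[i::n_nodes] for i in range(n_nodes)]

queries = [f"seq_{i}" for i in range(10)]   # invented query identifiers
jobs = chunk(queries, 3)
print([len(j) for j in jobs])               # -> [4, 3, 3]
```

    In practice the speedup is "almost" N rather than exactly N because scheduling, network transfer, and result merging add overhead that does not parallelize.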

  8. Crowd-Sourcing Seismic Data: Lessons Learned from the Quake-Catcher Network

    NASA Astrophysics Data System (ADS)

    Cochran, E. S.; Sumy, D. F.; DeGroot, R. M.; Clayton, R. W.

    2015-12-01

    The Quake Catcher Network (QCN; qcn.caltech.edu) uses low-cost micro-electro-mechanical system (MEMS) sensors hosted by volunteers to collect seismic data. Volunteers use accelerometers internal to laptop computers, phones, and tablets, or small (matchbox-sized) MEMS sensors plugged into desktop computers via a USB connector, to collect scientifically useful data. Data are collected and sent to a central server using the Berkeley Open Infrastructure for Network Computing (BOINC) distributed computing software. Since 2008, when the first citizen scientists joined the QCN project, sensors installed in museums, schools, offices, and residences have collected thousands of earthquake records. We present and describe the rapid installations of very dense sensor networks that have been undertaken following several large earthquakes, including the 2010 M8.8 Maule, Chile; 2010 M7.1 Darfield, New Zealand; and 2015 M7.8 Gorkha, Nepal earthquakes. These large data sets allowed seismologists to develop new rapid earthquake detection capabilities and to closely examine the source, path, and site properties that affect ground shaking at a site. We show how QCN has engaged a wide sector of the public in scientific data collection, providing the public with insights into how seismic data are collected and used. Furthermore, we describe how students use data recorded by QCN sensors installed in their classrooms to explore and investigate earthquakes that they felt, as part of 'teachable moment' exercises.

  9. Numerical methods for engine-airframe integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment of the 1980s, an overview of the prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.

  10. Third Congress on Information System Science and Technology

    DTIC Science & Technology

    1968-04-01

    versions of the same compiler. The "fast compile-slow execute" and the "slow compile-fast execute" gimmick is the greatest hoax ever perpetrated on the… fast such natural language analysis and translation can be accomplished. If the fairly superficial syntactic analysis of a sentence which is… two kinds of computer: a fast computer with large immediate access and bulk memory for rear echelon and large installation employment, and a

  11. Thermal and fluid-dynamics behavior of circulating systems in the case of pressure relief

    NASA Astrophysics Data System (ADS)

    Moeller, L.

    Aspects of safety in large-scale installations operating under high-pressure conditions must be an important consideration as early as the design stage, taking into account all conceivable disturbances. Within an analysis of such disturbances, studies of pressure relief processes occupy a central position. For such studies, it is convenient to combine experiments involving small-scale models of the actual installation with suitable computational programs. The experiments can be carried out at lower pressures and temperatures if the actual fluid is replaced by another medium, such as a refrigerant. This approach has been used in the present investigation. The experimental data obtained are employed as a basis for verifying the results provided by the computational model 'Frelap-UK', which was developed expressly for the analysis of system behavior during pressure relief. It is found that the computed fluid-dynamics characteristics agree with the experimental results.

  12. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System are described at Johnson Space Center.

  13. Finding Space for Technology: Pedagogical Observations on the Organization of Computers in School Environments

    ERIC Educational Resources Information Center

    Jenson, Jennifer; Rose, Chloë Brushwood

    2006-01-01

    With the large-scale acquisition and installation of computer and networking hardware in schools across Canada, a major concern has been where to locate these new technologies and whether and how the structure of the school might itself be made to accommodate these new technologies. In this paper, we suggest that the physical location and…

  14. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was performed natively (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments.
The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  15. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.

  16. Simulation Study of the Helical Superconducting Undulator Installation at the Advanced Photon Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sajaev, V.; Borland, M.; Sun, Y.

    A helical superconducting undulator is planned for installation at the APS. Such an installation would be the first of its kind: helical devices have never before been installed in synchrotron light sources. Due to its reduced horizontal aperture, a lattice modification is required to accommodate the large horizontal oscillations during injection. We describe the details of the lattice change and show the experimental test results for the new lattice. To understand the effect of the undulator on single-particle dynamics, its kick maps were first computed using different methods. We have found that the often-used Elleaume formula for kick maps gives wrong results for this undulator. We then used the kick maps obtained by other methods to simulate the effect of the undulator on injection and lifetime.

  17. Potential climatic impacts and reliability of very large-scale wind farms

    NASA Astrophysics Data System (ADS)

    Wang, C.; Prinn, R. G.

    2010-02-01

    Meeting future world energy needs while addressing climate change requires large-scale deployment of low or zero greenhouse gas (GHG) emission technologies such as wind energy. The widespread availability of wind power has fueled substantial interest in this renewable energy source as one of the needed technologies. For very large-scale utilization of this resource, there are however potential environmental impacts, and also problems arising from its inherent intermittency, in addition to the present need to lower unit costs. To explore some of these issues, we use a three-dimensional climate model to simulate the potential climate effects associated with installation of wind-powered generators over vast areas of land or coastal ocean. Using wind turbines to meet 10% or more of global energy demand in 2100 could cause surface warming exceeding 1 °C over land installations. In contrast, surface cooling exceeding 1 °C is computed over ocean installations, but the validity of simulating the impacts of wind turbines by simply increasing the ocean surface drag needs further study. Significant warming or cooling remote from both the land and ocean installations, and alterations of the global distributions of rainfall and clouds also occur. These results are influenced by the competing effects of increases in roughness and decreases in wind speed on near-surface turbulent heat fluxes, the differing nature of land and ocean surface friction, and the dimensions of the installations parallel and perpendicular to the prevailing winds. These results are also dependent on the accuracy of the model used, and the realism of the methods applied to simulate wind turbines. Additional theory and new field observations will be required for their ultimate validation. 
    Intermittency of wind power on daily, monthly, and longer time scales, as computed in these simulations and inferred from meteorological observations, poses a demand for one or more options to ensure reliability, including backup generation capacity, very long distance power transmission lines, and onsite energy storage, each with specific economic and/or technological challenges.

  18. Potential climatic impacts and reliability of very large-scale wind farms

    NASA Astrophysics Data System (ADS)

    Wang, C.; Prinn, R. G.

    2009-09-01

    Meeting future world energy needs while addressing climate change requires large-scale deployment of low or zero greenhouse gas (GHG) emission technologies such as wind energy. The widespread availability of wind power has fueled legitimate interest in this renewable energy source as one of the needed technologies. For very large-scale utilization of this resource, there are however potential environmental impacts, and also problems arising from its inherent intermittency, in addition to the present need to lower unit costs. To explore some of these issues, we use a three-dimensional climate model to simulate the potential climate effects associated with installation of wind-powered generators over vast areas of land or coastal ocean. Using wind turbines to meet 10% or more of global energy demand in 2100 could cause surface warming exceeding 1°C over land installations. In contrast, surface cooling exceeding 1°C is computed over ocean installations, but the validity of simulating the impacts of wind turbines by simply increasing the ocean surface drag needs further study. Significant warming or cooling remote from both the land and ocean installations, and alterations of the global distributions of rainfall and clouds also occur. These results are influenced by the competing effects of increases in roughness and decreases in wind speed on near-surface turbulent heat fluxes, the differing nature of land and ocean surface friction, and the dimensions of the installations parallel and perpendicular to the prevailing winds. These results are also dependent on the accuracy of the model used, and the realism of the methods applied to simulate wind turbines. Additional theory and new field observations will be required for their ultimate validation. 
    Intermittency of wind power on daily, monthly, and longer time scales, as computed in these simulations and inferred from meteorological observations, poses a demand for one or more options to ensure reliability, including backup generation capacity, very long distance power transmission lines, and onsite energy storage, each with specific economic and/or technological challenges.

  19. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    PubMed

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost-effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
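    The cost-effectiveness argument reduces to comparing per-hour cloud billing against the amortized purchase price of an in-house machine. A back-of-the-envelope sketch with invented numbers (the hourly rate, machine price, and lifetime below are assumptions, not figures from the paper):

```python
def cloud_cost(jobs, hours_per_job, rate_per_hour):
    # Cloud: pay only for the instance-hours actually consumed.
    return jobs * hours_per_job * rate_per_hour

def inhouse_cost(jobs, hours_per_job, machine_price, lifetime_hours):
    # In-house: attribute an amortized share of the purchase price to each
    # job (power, cooling, and admin time are ignored in this sketch).
    return jobs * hours_per_job * machine_price / lifetime_hours

jobs, hours_per_job = 200, 2.0                       # invented workload
cloud = cloud_cost(jobs, hours_per_job, rate_per_hour=0.10)
inhouse = inhouse_cost(jobs, hours_per_job,
                       machine_price=3000.0,
                       lifetime_hours=3 * 8760.0)    # 3-year service life
print(cloud, inhouse)
```

    With these toy numbers the cloud comes out cheaper, but the comparison flips once the in-house machine is kept busy enough that its amortized cost per hour falls below the cloud rate, which matches the abstract's conclusion that large sustained workloads favor traditional resources.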

  20. Implementation of interconnect simulation tools in spice

    NASA Technical Reports Server (NTRS)

    Satsangi, H.; Schutt-Aine, J. E.

    1993-01-01

    Accurate computer simulation of high speed digital computer circuits and communication circuits requires a multimode approach to simulate both the devices and the interconnects between devices. Classical circuit analysis algorithms (lumped parameter) are needed for circuit devices and the network formed by the interconnected devices. The interconnects, however, have to be modeled as transmission lines which incorporate electromagnetic field analysis. An approach to writing a multimode simulator is to take an existing software package which performs either lumped parameter analysis or field analysis and add the missing type of analysis routines to the package. In this work a traditionally lumped parameter simulator, SPICE, is modified so that it will perform lossy transmission line analysis using a different modeling approach. Modifying SPICE3E2 or any other large software package is not a trivial task. An understanding of the programming conventions used, the simulation software, and the simulation algorithms is required. This thesis was written to clarify the procedure for installing a device into SPICE3E2. The installation of three devices is documented; the installations of the first two provide a foundation for the installation of the lossy line, which is the third device. The details of the discussion are specific to SPICE, but the concepts will be helpful when performing installations into other circuit analysis packages.

  1. Contributing opportunistic resources to the grid with HTCondor-CE-Bosco

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian

    2017-10-01

    The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller, WLCG Tier-3 sites or opportunistic clusters, it can be a difficult task to install, configure, and maintain the HTCondor-CE. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to set up SSH public-key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE which combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized model for running grid services, which complements the traditional distributed model. We will show the architecture of a HTCondor-CE-Bosco enabled site, as well as feedback from multiple sites that have deployed it.

  2. Information Weighted Consensus for Distributed Estimation in Vision Networks

    ERIC Educational Resources Information Center

    Kamal, Ahmed Tashrif

    2013-01-01

    Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…

  3. Organization and Management of Project Athena.

    ERIC Educational Resources Information Center

    Champine, George A.

    1991-01-01

    Project Athena is a $100 million, eight-year project to install a large network of high performance computer work stations for education and research at the Massachusetts Institute of Technology (MIT). Organizational, legal, and administrative aspects of the project allow two competitors (Digital Equipment Corporation and IBM) to work together…

  4. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into the resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is used in many mission-critical systems today, such as aerospace and command & control systems, to provide resilience. The primary argument against modular redundancy for resilience in HPC has always been that the capability of an HPC system, and the respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability, as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
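    The MTTF/MTTR argument can be made concrete with the standard steady-state availability formula, A = MTTF / (MTTF + MTTR), and the probability that at least one of n independent replicas is up. The numbers below are illustrative, not taken from the paper:

```python
def availability(mttf_hours, mttr_hours):
    # Fraction of time a single node is up in steady state.
    return mttf_hours / (mttf_hours + mttr_hours)

def redundant_availability(a_single, n):
    # n-way modular redundancy: the replicated node is up whenever at
    # least one of its n independent replicas is up.
    return 1 - (1 - a_single) ** n

# Invented example: a cheap node failing about every 1000 h, 10 h to repair.
a = availability(mttf_hours=1000.0, mttr_hours=10.0)
print(round(a, 4), round(redundant_availability(a, 2), 6))
```

    The sketch shows the paper's point: duplicating an unreliable (hence cheap) node drives availability far closer to 1 than improving the single node's MTTF ever could at comparable cost.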

  5. USMC Installations Command Information Environment: Opportunities and Analysis for Integration of First Responder Communications

    DTIC Science & Technology

    2014-09-01

    becoming a more and more prevalent technology in the business world today. According to Syal and Goswami (2012), cloud technology is seen as a… use of computing resources, applications, and personal files without reliance on a single computer or system (Syal & Goswami, 2012). By operating in… cloud services largely being web-based, which can be retrieved through most systems with access to the Internet (Syal & Goswami, 2012). The end user can

  6. Study on installation of the submersible mixer

    NASA Astrophysics Data System (ADS)

    Tian, F.; Shi, W. D.; He, X. H.; Jiang, H.; Xu, Y. H.

    2013-12-01

    Research on the installation of submersible mixers for sewage treatment has been limited. In this article, the large-scale computational fluid dynamics software FLUENT 6.3 was adopted. ICEM software was used to build an unstructured grid of the sewage treatment pool. The pool was then simulated numerically using dynamic coordinate system technology, the RNG k-ε turbulence model, and the PISO algorithm. Agitation pools with four different installation locations were simulated, and the external characteristics of the submersible mixer and the velocity contours of the axial section were comparatively analyzed. The best stirring effect is achieved with the installation location of case C, in which the mixer is 600 mm from the bottom of the pool, the blades are at least 200 mm from the bottom, and the ratio of the wide-edge to the narrow-edge distance is 4:3. These conclusions can guide engineering practice.

  7. Personal computer security: part 1. Firewalls, antivirus software, and Internet security suites.

    PubMed

    Caruso, Ronald D

    2003-01-01

    Personal computer (PC) security in the era of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) involves two interrelated elements: safeguarding the basic computer system itself and protecting the information it contains and transmits, including personal files. HIPAA regulations have toughened the requirements for securing patient information, requiring every radiologist with such data to take further precautions. Security starts with physically securing the computer. Account passwords and a password-protected screen saver should also be set up. A modern antivirus program can easily be installed and configured. File scanning and updating of virus definitions are simple processes that can largely be automated and should be performed at least weekly. A software firewall is also essential for protection from outside intrusion, and an inexpensive hardware firewall can provide yet another layer of protection. An Internet security suite yields additional safety. Regular updating of the security features of installed programs is important. Obtaining a moderate degree of PC safety and security is somewhat inconvenient but is necessary and well worth the effort. Copyright RSNA, 2003

  8. The CD-ROM Services of SilverPlatter Information, Inc.

    ERIC Educational Resources Information Center

    Allen, Robert J.

    1985-01-01

    The SilverPlatter system is a complete, stand-alone system, consisting of an IBM (or compatible) personal computer, compact disc with read-only memory (CD-ROM) drive, software, and one or more databases. Large databases (e.g., ERIC, PsycLIT) will soon be available on the system for "local" installation in schools, libraries, and…

  9. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in many types of equipment. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed. The large-aperture test system for infrared and visible CCD cameras shares a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design reduces the shift of the collimator's focal position as the environmental temperature changes, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of comparable foreign systems at much lower cost, giving it good market prospects.
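
    The noise-suppression step can be illustrated with synthetic data: averaging N frames of zero-mean random noise reduces its standard deviation by roughly √N. A minimal sketch (the frame size and noise level are made up, not the test system's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
true_image = np.full((64, 64), 100.0)   # ideal, noise-free scene
sigma = 5.0                             # per-frame random noise (std. dev.)

# Simulate capturing N frames of the same scene and averaging them.
n_frames = 25
frames = true_image + rng.normal(0.0, sigma, size=(n_frames, 64, 64))
averaged = frames.mean(axis=0)

single_noise = np.std(frames[0] - true_image)    # about sigma
averaged_noise = np.std(averaged - true_image)   # about sigma / sqrt(N)
```

    With 25 frames the residual noise drops to roughly one fifth of the single-frame level, which is why frame averaging is a cheap and effective first step in camera test benches.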

  10. Online System for Faster Multipoint Linkage Analysis via Parallel Execution on Thousands of Personal Computers

    PubMed Central

    Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.

    2006-01-01

    Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644
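
    The splitting idea can be illustrated in miniature. The sketch below is not SUPERLINK's algorithm; it evaluates a textbook two-point LOD score over a grid of recombination fractions and farms the grid out to parallel workers, the same embarrassingly parallel pattern of dividing a likelihood scan into independent pieces:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def lod(theta, recombinants, nonrecombinants):
    """Two-point LOD score: log10 of the likelihood of linkage at
    recombination fraction theta versus free recombination (theta = 0.5)."""
    n = recombinants + nonrecombinants
    likelihood = (theta ** recombinants) * ((1.0 - theta) ** nonrecombinants)
    return math.log10(likelihood / 0.5 ** n)

def max_lod_parallel(recombinants, nonrecombinants, workers=4):
    """Scan a grid of theta values, splitting the grid across workers --
    each grid point is an independent likelihood evaluation."""
    grid = [0.01 * k for k in range(1, 51)]   # theta = 0.01 .. 0.50
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(
            lambda t: lod(t, recombinants, nonrecombinants), grid))
    best = max(range(len(grid)), key=scores.__getitem__)
    return grid[best], scores[best]

# Toy pedigree data: 2 recombinants out of 10 informative meioses.
theta_hat, best_lod = max_lod_parallel(recombinants=2, nonrecombinants=8)
```

    Real multipoint likelihoods over large inbred pedigrees are far costlier per task, which is what makes harvesting idle cycles of thousands of machines worthwhile.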

  11. Western State Hospital: implementing a MUMPS-based PC network.

    PubMed

    Russ, D C

    1991-06-01

    Western State Hospital, a state-administered 1,200-bed mental health institution near Tacoma, Wash., confronted the challenge of automating its large campus through the application of the Healthcare Integrated Information System (HIIS). It is the first adaptation of the Veterans Administration's Decentralized Hospital Computer Program software in a mental health institution of this size, and the first DHCP application to be installed on a PC client/server network in a large U.S. hospital.

  12. Computer literacy for life sciences: helping the digital-era biology undergraduates face today's research.

    PubMed

    Smolinski, Tomasz G

    2010-01-01

    Computer literacy plays a critical role in today's life sciences research. Without the ability to use computers to efficiently manipulate and analyze large amounts of data resulting from biological experiments and simulations, many of the pressing questions in the life sciences could not be answered. Today's undergraduates, despite the ubiquity of computers in their lives, seem to be largely unfamiliar with how computers are being used to pursue and answer such questions. This article describes an innovative undergraduate-level course, titled Computer Literacy for Life Sciences, that aims to teach students the basics of a computerized scientific research pursuit. The purpose of the course is for students to develop a hands-on working experience in using standard computer software tools as well as computer techniques and methodologies used in life sciences research. This paper provides a detailed description of the didactical tools and assessment methods used in and outside of the classroom as well as a discussion of the lessons learned during the first installment of the course taught at Emory University in fall semester 2009.

  13. Computing Cluster for Large Scale Turbulence Simulations and Applications in Computational Aeroacoustics

    NASA Astrophysics Data System (ADS)

    Lele, Sanjiva K.

    2002-08-01

    Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.

  14. New rules of thumb maximizing energy efficiency in street lighting with discharge lamps: The general equations for lighting design

    NASA Astrophysics Data System (ADS)

    Peña-García, A.; Gómez-Lorente, D.; Espín, A.; Rabaza, O.

    2016-06-01

    New relationships between energy efficiency, illuminance uniformity, spacing, and mounting height in public lighting installations were derived from the analysis of a large sample of outputs generated with a widely used software application for lighting design. These new relationships greatly facilitate the calculation of basic lighting installation parameters. The results are also premised on maximal energy efficiency and illuminance uniformity, factors not included in more conventional methods yet crucial to the sustainability of the installations. This research formulated, applied, and analysed these new equations. The results highlight their usefulness for rapid lighting and urban planning in developing countries or areas affected by natural disasters, where engineering facilities and computer applications for this purpose are often unavailable.

  15. Using Residential Solar PV Quote Data to Analyze the Relationship Between Installer Pricing and Firm Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Shaughnessy, Eric; Margolis, Robert

    2017-04-01

    The vast majority of U.S. residential solar PV installers are small, local-scale companies; however, the industry is relatively concentrated in a few large national-scale installers. We develop a novel approach using solar PV quote data to study the price behavior of large solar PV installers in the United States. Through a paired differences approach, we find that large installer quotes are higher, on average, than non-large installer quotes made to the same customer. The difference is statistically significant and robust after controlling for factors such as system size, equipment quality, and time effects. The results suggest that low prices are not the primary value proposition of large installer systems. We explore several hypotheses for this finding, including that large installers are able to exercise some market power and/or earn returns from their reputations.
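
    A paired differences comparison matches each large-installer quote against a non-large quote received by the same customer, so everything the customer holds constant across quotes cancels out. A minimal sketch with invented quote data (all prices are illustrative only):

```python
# Hypothetical quotes in $/W, keyed by customer:
# (large-installer quote, non-large-installer quote to the same customer)
paired_quotes = {
    "cust_a": (3.80, 3.45),
    "cust_b": (3.60, 3.50),
    "cust_c": (4.10, 3.70),
    "cust_d": (3.95, 3.85),
}

# Within-customer differences control for location, roof, shopping
# behavior, and timing, since both quotes share the same customer.
diffs = [large - other for large, other in paired_quotes.values()]
mean_diff = sum(diffs) / len(diffs)   # average large-installer premium, $/W
```

    A positive mean difference, as in this toy data, is the pattern the study reports; the actual analysis adds controls for system size, equipment quality, and time effects.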

  16. A simplified analysis of propulsion installation losses for computerized aircraft design

    NASA Technical Reports Server (NTRS)

    Morris, S. J., Jr.; Nelms, W. P., Jr.; Bailey, R. O.

    1976-01-01

    A simplified method is presented for computing the installation losses of aircraft gas turbine propulsion systems. The method has been programmed for use in computer aided conceptual aircraft design studies that cover a broad range of Mach numbers and altitudes. The items computed are: inlet size, pressure recovery, additive drag, subsonic spillage drag, bleed and bypass drags, auxiliary air systems drag, boundary-layer diverter drag, nozzle boattail drag, and the interference drag on the region adjacent to multiple nozzle installations. The methods for computing each of these installation effects are described and computer codes for the calculation of these effects are furnished. The results of these methods are compared with selected data for the F-5A and other aircraft. The computer program can be used with uninstalled engine performance information which is currently supplied by a cycle analysis program. The program, including comments, is about 600 FORTRAN statements long, and uses both theoretical and empirical techniques.
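
    The bookkeeping the program performs can be sketched as subtracting each installation drag component from the uninstalled thrust supplied by the cycle analysis. The component names follow the abstract; all magnitudes below are hypothetical:

```python
# Uninstalled thrust from the engine cycle deck (hypothetical value, lbf).
uninstalled_thrust = 10000.0

# Installation drag components enumerated in the abstract (hypothetical lbf).
installation_drags = {
    "additive_and_spillage": 180.0,
    "bleed_and_bypass": 120.0,
    "auxiliary_air_systems": 40.0,
    "boundary_layer_diverter": 60.0,
    "nozzle_boattail": 150.0,
    "multi_nozzle_interference": 50.0,
}

installed_thrust = uninstalled_thrust - sum(installation_drags.values())
loss_fraction = 1.0 - installed_thrust / uninstalled_thrust   # 6% here
```

    In the real program each component is itself a function of Mach number, altitude, and inlet/nozzle geometry rather than a constant.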

  17. Information and Communications Technology (ICT) Infrastructure for the ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Gianotti, F.; Tacchini, A.; Leto, G.; Martinetti, E.; Bruno, P.; Bellassai, G.; Conforti, V.; Gallozzi, S.; Mastropietro, M.; Tanci, C.; Malaguti, G.; Trifoglio, M.

    2016-08-01

    The Cherenkov Telescope Array (CTA) represents the next generation of ground-based observatories for very high energy gamma-ray astronomy. The CTA will consist of two arrays at two different sites, one in the northern and one in the southern hemisphere. The current CTA design foresees, at the southern site, the installation of many tens of imaging atmospheric Cherenkov telescopes of three different classes, namely large, medium, and small, so defined in relation to their mirror area; the northern hemisphere array would consist of a few tens of the two larger telescope types. The Italian National Institute for Astrophysics (INAF) is developing the ASTRI SST-2M end-to-end prototype of the Cherenkov small-size telescope within the framework of the international CTA project. The ASTRI prototype has been installed at the INAF observing station located at Serra La Nave on Mt. Etna, Italy. Furthermore, a mini-array composed of nine ASTRI telescopes has been proposed for installation at the southern CTA site. Among the several infrastructures belonging to the ASTRI project, the Information and Communication Technology (ICT) equipment is dedicated to computing and data storage operations, as well as to control of the entire telescope, and is designed to achieve maximum efficiency against all performance requirements. A complete, stand-alone computer centre has therefore been designed and implemented. The goal is optimal ICT equipment, with an adequate level of redundancy, that can be scaled up for the ASTRI mini-array, taking into account the necessary control, monitoring, and alarm system requirements. In this contribution we present the ICT equipment currently installed at the Serra La Nave observing station, where the ASTRI SST-2M prototype will be operated. The computer centre and the control room are described, with particular emphasis on the Local Area Network scheme, the computing and data storage system, and telescope control and monitoring.

  18. Cloud Computing: A Free Technology Option to Promote Collaborative Learning

    ERIC Educational Resources Information Center

    Siegle, Del

    2010-01-01

    In a time of budget cuts and limited funding, purchasing and installing the latest software on classroom computers can be prohibitive for schools. Many educators are unaware that a variety of free software options exist, and some of them do not actually require installing software on the user's computer. One such option is cloud computing. This…

  19. Coal conversion systems design and process modeling. Volume 2: Installation of MPPM on the Sigma 9 computer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Relevant differences between the MPPM-resident IBM 370 computer and the NASA Sigma 9 computer are described, as well as the MPPM system itself and its development. Problems encountered, and the solutions used to overcome them, during installation of the MPPM system at MSFC are discussed. Remaining work on the installation effort is summarized. The relevant hardware features incorporated in the program are described, and their implications for the transportability of the MPPM source code are examined.

  20. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a portable parallel FORTRAN, on shared-memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same on all computers, specific details are included for the Cray-2, Cray Y-MP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  1. System analysis for the Huntsville Operation Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.

    1986-01-01

    A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin-Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model, using data sets provided by NASA, is described. An analytical analysis of the Ethernet LAN and of the video terminal (VT) distribution system is presented, along with an interface analysis of the smart terminal network model, which allows the data flow requirements due to VTs on the Ethernet LAN to be estimated.

  2. Application research of Ganglia in Hadoop monitoring and management

    NASA Astrophysics Data System (ADS)

    Li, Gang; Ding, Jing; Zhou, Lixia; Yang, Yi; Liu, Lei; Wang, Xiaolei

    2017-03-01

    Hadoop has many applications in the fields of big data and cloud computing. The storage and application test bench for the seismic network at the Earthquake Administration of Tianjin runs on a Hadoop system, which is operated and monitored with the open-source software Ganglia. This paper reviews the functions, the installation and configuration process, and the operational and monitoring results of Ganglia on the Hadoop system. It also briefly introduces the idea and effect of monitoring the Hadoop system with the Nagios software. This is valuable to the industry for monitoring systems on cloud computing platforms.
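
    Hadoop daemons are commonly pointed at a Ganglia gmond collector through hadoop-metrics2.properties. The fragment below is a generic sketch: the sink class is Hadoop's standard Ganglia sink, while the host name and polling period are placeholders to adapt to the local installation:

```properties
# hadoop-metrics2.properties -- send Hadoop daemon metrics to Ganglia.
# Host name and period below are placeholders for the local gmond collector.
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
namenode.sink.ganglia.servers=gmond.example.org:8649
datanode.sink.ganglia.servers=gmond.example.org:8649
```

    Ganglia's gmetad then aggregates these metrics for the web front-end; Nagios, mentioned in the abstract, is usually layered on separately for alerting rather than metric collection.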

  3. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  4. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 7 2012-10-01 2012-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  5. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  6. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  7. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 7 2014-10-01 2014-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  8. An evaluation of superminicomputers for thermal analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Vidal, J. B.; Jones, G. K.

    1982-01-01

    The use of superminicomputers for solving a series of increasingly complex thermal analysis problems is investigated. The approach involved (1) installation and verification of the SPAR thermal analyzer software on superminicomputers at Langley Research Center and Goddard Space Flight Center, (2) solution of six increasingly complex thermal problems on this equipment, and (3) comparison of the solutions (accuracy, CPU time, turnaround time, and cost) with solutions on large mainframe computers.

  9. Project description: design and operational energy studies in a new high-rise office building. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1983-12-01

    A large, privately owned office building was thoroughly equipped with sensors to measure energy consumption by specific systems over an extended period of time. The building is a 26-story glass office tower sited on an open plaza and connected by a pedestrian bridge to a second, smaller building (primarily servicing PSE and G's customers) located on the plaza. A large computer installation, which controls the utility's entire electric grid, is located in the lower levels of the building below the plaza. The building is occupied by approximately 3400 people, including executive, professional, and clerical staff. The building was designed to be energy efficient, with an annual energy consumption of about 60,000 Btu per square foot, in accordance with the nationally recognized ASHRAE 90-75 standard. Extensive energy data was collected from August 1981 through July 1983. A weather station was installed on the roof to record data on the microclimate. Data from these sensors was continuously recorded by the building automation computer over a period of two years to provide a profile through several seasonal cycles. Additional weather data was obtained from the PSE and G Maplewood station located nearby. A summary of this data is included.

  10. NGScloud: RNA-seq analysis of non-model species using cloud computing.

    PubMed

    Mora-Márquez, Fernando; Vázquez-Poletti, José Luis; López de Heredia, Unai

    2018-05-03

    RNA-seq analysis usually requires large computing infrastructures. NGScloud is a bioinformatic system developed to analyze RNA-seq data using Amazon's cloud computing services, which permit access to ad hoc computing infrastructure scaled to the complexity of the experiment, so that costs and run times can be optimized. The application provides a user-friendly front-end to operate Amazon's hardware resources and to control a workflow of RNA-seq analysis oriented to non-model species, incorporating the cluster concept, which allows parallel runs of common RNA-seq analysis programs in several virtual machines for faster analysis. NGScloud is freely available at https://github.com/GGFHF/NGScloud/. A manual detailing installation and how-to-use instructions is available with the distribution. Contact: unai.lopezdeheredia@upm.es.

  11. Supporting medical communication for older patients with a shared touch-screen computer.

    PubMed

    Piper, Anne Marie; Hollan, James D

    2013-11-01

    Increasingly health care facilities are adopting electronic medical record systems and installing computer workstations in patient exam rooms. The introduction of computer workstations into the medical interview process makes it important to consider the impact of such technology on older patients as well as new types of interfaces that may better suit the needs of older adults. While many older adults are comfortable with a traditional computer workstation with a keyboard and mouse, this article explores how a large horizontal touch-screen (i.e., a surface computer) may suit the needs of older patients and facilitates the doctor-patient interview process. Twenty older adults (age 60 to 88) used a prototype multiuser, multitouch system in our research laboratory to examine seven health care scenarios. Behavioral observations as well as results from questionnaires and a structured interview were analyzed. The older adults quickly adapted to the prototype system and reported that it was easy to use. Participants also suggested that having a shared view of one's medical records, especially charts and images, would enhance communication with their doctor and aid understanding. While this study is exploratory and some areas of interaction with a surface computer need to be refined, the technology is promising for sharing electronic patient information during medical interviews involving older adults. Future work must examine doctors' and nurses' interaction with the technology as well as logistical issues of installing such a system in a real world medical setting. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  12. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, which use digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and adds no mass to the structure. In this study, a high-speed camera system is developed to perform displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The camera can capture images at a rate of hundreds of frames per second. To process the captured images in the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms, without having to install any pre-designed target panel on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier of a suspension viaduct. Experimental results show that the proposed algorithm extracts an accurate displacement signal and accomplishes vibration measurement of large-scale structures.
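
    The speed advantage of the inverse compositional variant comes from precomputing the template gradient and the Gauss-Newton Hessian once, outside the iteration loop. The sketch below is a minimal translation-only version of inverse compositional Lucas-Kanade tracking, not the authors' modified algorithm:

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Sample img at fractional (ys, xs) coordinates with bilinear interpolation."""
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    wy, wx = ys - y0, xs - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0] + wy * wx * img[y0 + 1, x0 + 1])

def ic_lk_translation(template, image, origin, n_iter=30):
    """Inverse compositional Lucas-Kanade for a pure-translation warp.

    The template gradient and the Gauss-Newton Hessian are computed once,
    before the loop -- the source of the method's efficiency. Returns the
    estimated (dy, dx) displacement of the template inside `image`,
    relative to `origin` (the template's top-left corner in `image`).
    """
    gy, gx = np.gradient(template)                  # precomputed once
    H = np.array([[np.sum(gy * gy), np.sum(gy * gx)],
                  [np.sum(gy * gx), np.sum(gx * gx)]])
    H_inv = np.linalg.inv(H)                        # precomputed once
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]].astype(float)
    p = np.zeros(2)                                 # current (dy, dx) estimate
    for _ in range(n_iter):
        warped = bilinear_sample(image, ys + origin[0] + p[0],
                                 xs + origin[1] + p[1])
        error = warped - template
        b = np.array([np.sum(gy * error), np.sum(gx * error)])
        dp = H_inv @ b
        p -= dp        # inverse compositional update for a translation warp
        if np.hypot(dp[0], dp[1]) < 1e-6:
            break
    return p
```

    For a translation warp the inverse compositional update reduces to p ← p − Δp; the general method composes the inverse of the incremental warp with the current warp, which is what allows the Hessian to stay fixed across iterations.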

  13. Diffraction studies applicable to 60-foot microwave research facilities

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1973-01-01

    The principal features of this document are the analysis of a large dual-reflector antenna system by vector Kirchhoff theory, the evaluation of subreflector aperture-blocking, determination of the diffraction and blockage effects of a subreflector mounting structure, and an estimate of strut-blockage effects. Most of the computations are for a frequency of 15.3 GHz, and were carried out using the IBM 360/91 and 360/95 systems at Goddard Space Flight Center. The FORTRAN 4 computer program used to perform the computations is of a general and modular type so that various system parameters such as frequency, eccentricity, diameter, focal-length, etc. can be varied at will. The parameters of the 60-foot NRL Ku-band installation at Waldorf, Maryland, were entered into the program for purposes of this report. Similar calculations could be performed for the NELC installation at La Posta, California, the NASA Wallops Station facility in Virginia, and other antenna systems, by a simple change in IBM control cards. A comparison is made between secondary radiation patterns of the NRL antenna measured by DOD Satellite and those obtained by analytical/numerical methods at a frequency of 7.3 GHz.

  14. 46 CFR 111.15-5 - Battery installation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Battery installation. 111.15-5 Section 111.15-5 Shipping... REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation § 111.15-5 Battery installation. (a) Large batteries. Each large battery installation must be in a room that is only for batteries...

  15. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  16. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  17. 46 CFR 111.15-5 - Battery installation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Battery installation. 111.15-5 Section 111.15-5 Shipping... REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation § 111.15-5 Battery installation. (a) Large batteries. Each large battery installation must be in a room that is only for batteries...

  18. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  19. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  20. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  1. 46 CFR 111.15-5 - Battery installation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Battery installation. 111.15-5 Section 111.15-5 Shipping... REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation § 111.15-5 Battery installation. (a) Large batteries. Each large battery installation must be in a room that is only for batteries...

  2. 46 CFR 111.15-5 - Battery installation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Battery installation. 111.15-5 Section 111.15-5 Shipping... REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation § 111.15-5 Battery installation. (a) Large batteries. Each large battery installation must be in a room that is only for batteries...

  3. 46 CFR 111.15-5 - Battery installation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Battery installation. 111.15-5 Section 111.15-5 Shipping... REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation § 111.15-5 Battery installation. (a) Large batteries. Each large battery installation must be in a room that is only for batteries...

  4. Really Large Scale Computer Graphic Projection Using Lasers and Laser Substitutes

    NASA Astrophysics Data System (ADS)

    Rother, Paul

    1989-07-01

    This paper reflects on past laser projects that displayed vector-scanned computer graphic images onto very large and irregular surfaces. Since the availability of microprocessors and high-powered visible lasers, very large scale computer graphics projection has become a reality. Because it is independent of a focusing lens, a laser easily projects onto distant and irregular surfaces, and lasers have been used for amusement parks, theatrical performances, concerts, industrial trade shows, and dance clubs. Lasers have been used to project onto mountains, buildings, 360° globes, clouds of smoke, and water. These methods have proven successful in installations at Epcot Theme Park in Florida, Stone Mountain Park in Georgia, and the 1984 Olympics in Los Angeles, as well as at hundreds of corporate trade shows and thousands of musical performances. With the new ColorRay™ technology, the use of costly and fragile lasers is no longer necessary: using fiber optics, the functionality of lasers can be duplicated for new and exciting projection possibilities. ColorRay™ technology has enjoyed worldwide recognition in conjunction with Pink Floyd's and George Michael's world tours.

  5. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  6. Distributed information system (water fact sheet)

    USGS Publications Warehouse

    Harbaugh, A.W.

    1986-01-01

    During 1982-85, the Water Resources Division (WRD) of the U.S. Geological Survey (USGS) installed over 70 large minicomputers in offices across the country to support its mission in the science of hydrology. These computers are connected by a communications network that allows information to be shared among computers in each office. The computers and network together are known as the Distributed Information System (DIS). The computers are accessed through the use of more than 1500 terminals and minicomputers. The WRD has three fundamentally different needs for computing: data management; hydrologic analysis; and administration. Data management accounts for 50% of the computational workload of WRD because hydrologic data are collected in all 50 states, Puerto Rico, and the Pacific trust territories. Hydrologic analysis accounts for 40% of the computational workload of WRD. Cost accounting, payroll, personnel records, and planning for WRD programs occupy an estimated 10% of the computer workload. The DIS communications network is shown on a map. (Lantz-PTT)

  7. Core network infrastructure supporting the VLT at ESO Paranal in Chile

    NASA Astrophysics Data System (ADS)

    Reay, Harold

    2000-06-01

    In October 1997 a number of projects were started at ESO's Paranal Observatory at Cerro Paranal in Chile to upgrade the communications infrastructure in place at the time. The planned upgrades were to internal systems such as computer data networks and telephone installations and also to data links connecting Paranal to other ESO sites. This paper details the installation work carried out on the Paranal Core Network (PCN) during the period of October 1997 to December 1999. These installations were to provide both short-term solutions to the requirement for reliable high-bandwidth network connectivity between Paranal and ESO HQ in Garching, Germany, in time for UT1 (Antu) first light and, perhaps more importantly, to provide the core systems necessary for a site moving towards operational status. This paper explains the reasons for using particular cable types, network topology, and fiber backbone design and implementation. We explain why it was decided to install the PCN in two distinct stages and how equipment used in temporary installations was re-used in the Very Large Telescope networks. Finally, we describe the tools used to monitor network and satellite link performance and discuss whether network backbone bandwidth meets the expected utilization and how this bandwidth can easily be increased in the future should there be a requirement.

  8. IP-Based Video Modem Extender Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierson, L G; Boorman, T M; Howe, R E

    2003-12-16

    Visualization is one of the keys to understanding large complex data sets such as those generated by the large computing resources purchased and developed by the Advanced Simulation and Computing program (aka ASCI). In order to be convenient to researchers, visualization data must be distributed to offices and large complex visualization theaters. Currently, local distribution of the visual data is accomplished by distance limited modems and RGB switches that simply do not scale to hundreds of users across the local, metropolitan, and WAN distances without incurring large costs in fiber plant installation and maintenance. Wide Area application over the DOE Complex is infeasible using these limited distance RGB extenders. On the other hand, Internet Protocols (IP) over Ethernet is a scalable well-proven technology that can distribute large volumes of data over these distances. Visual data has been distributed at lower resolutions over IP in industrial applications. This document describes requirements of the ASCI program in visual signal distribution for the purpose of identifying industrial partners willing to develop products to meet ASCI's needs.
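
    As a rough illustration of why distance-limited RGB extenders and uncompressed distribution struggle at scale, consider the raw bit rate of a single display stream. The resolution, color depth, and refresh rate below are illustrative assumptions, not figures from the requirements document:

```python
# Back-of-envelope bandwidth for one uncompressed visual data stream.
# Resolution, color depth, and refresh rate are illustrative assumptions.
width, height = 1280, 1024      # assumed display resolution (pixels)
bits_per_pixel = 24             # assumed RGB color depth
refresh_hz = 60                 # assumed refresh rate

bits_per_second = width * height * bits_per_pixel * refresh_hz
gbps = bits_per_second / 1e9
print(f"Uncompressed stream: {gbps:.2f} Gb/s per viewer")
```

    At nearly 2 Gb/s per viewer, serving hundreds of users motivates either compression over IP or a dedicated (and costly) fiber plant, which is the trade-off the document describes.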

  9. The NASA computer aided design and test system

    NASA Technical Reports Server (NTRS)

    Gould, J. M.; Juergensen, K.

    1973-01-01

    A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described. CADAT (computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost effective microelectronic subsystems based on custom designed metal oxide semiconductor (MOS) large scale integrated circuits (LSIC's). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described stressing the interaction of programs rather than detail of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.

  10. Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets

    PubMed Central

    Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L

    2014-01-01

    Background As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze them. Methods Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Results Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Conclusions Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. PMID:24464852
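
    Using the per-sample figures quoted above (eight CPUs for about 12 h per alignment, 5-10 GB BAM files), a back-of-envelope capacity estimate can be sketched; the cohort size and virtual-machine allocation below are hypothetical:

```python
# Rough capacity planning for the alignment step, using the per-sample
# figures from the abstract; cohort and cluster sizes are hypothetical.
cpus_per_sample = 8             # from the abstract
hours_per_sample = 12           # from the abstract
samples = 100                   # hypothetical cohort
cluster_cpus = 256              # hypothetical on-demand VM allocation

cpu_hours = cpus_per_sample * hours_per_sample * samples
wall_clock_hours = cpu_hours / cluster_cpus   # assumes perfect parallel packing
storage_gb = samples * 10                     # upper bound: 10 GB BAM per sample
print(cpu_hours, wall_clock_hours, storage_gb)
```

    Even this crude estimate shows why a fixed workstation is impractical: a modest cohort consumes thousands of CPU-hours and around a terabyte of BAM storage, which is the gap that elastic platforms like Bionimbus aim to fill.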

  11. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  12. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  13. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  14. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  15. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  16. Using Residential Solar PV Quote Data to Analyze the Relationship Between Installer Pricing and Firm Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Shaughnessy, Eric; Margolis, Robert

    2017-05-18

    We use residential solar photovoltaic (PV) quote data to study the role of firm size in PV installer pricing. We find that large installers (those that installed more than 1,000 PV systems in any year from 2013 to 2015) quote higher prices for customer-owned systems, on average, than do other installers. The results suggest that low prices are not the primary value proposition of large installers.

  18. Catalog of Computer Programs Used in Undergraduate Geology Education (Second Edition): Installment 3.

    ERIC Educational Resources Information Center

    Burger, H. Robert

    1983-01-01

    Presents annotated list of computer programs related to geophysics, geomorphology, paleontology, economic geology, petroleum geology, and miscellaneous topics. Entries include description, instructional use(s), programing language, and availability. Programs described in previous installments (found in SE 533 635 and 534 182) focused on…

  19. Aircraft Engine Noise Scattering By Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Stanescu, D.; Hussaini, M. Y.; Farassat, F.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  1. Computers in the examination room and the electronic health record: physicians' perceived impact on clinical encounters before and after full installation and implementation.

    PubMed

    Doyle, Richard J; Wang, Nina; Anthony, David; Borkan, Jeffrey; Shield, Renee R; Goldman, Roberta E

    2012-10-01

    We compared physicians' self-reported attitudes and behaviours regarding electronic health record (EHR) use before and after installation of computers in patient examination rooms and transition to full implementation of an EHR in a family medicine training practice to identify anticipated and observed effects these changes would have on physicians' practices and clinical encounters. We conducted two individual qualitative interviews with family physicians: the first before, and the second eight months after, full implementation of the EHR and installation of computers in the examination rooms. Data were analysed through project team discussions and subsequent coding with qualitative analysis software. At the first interviews, physicians frequently expressed concerns about the potential negative effect of the EHR on quality of care and physician-patient interaction, adequacy of their skills in EHR use and privacy and confidentiality concerns. Nevertheless, most physicians also anticipated multiple benefits, including improved accessibility of patient data and online health information. In the second interviews, physicians reported that their concerns did not persist. Many anticipated benefits were realized, appearing to facilitate collaborative physician-patient relationships. Physicians reported a greater teaching role with patients and sharing online medical information and treatment plan decisions. Before computer installation and full EHR implementation, physicians expressed concerns about the impact of computer use on patient care. After installation and implementation, however, many concerns were mitigated. Using computers in the examination rooms to document and access patients' records along with online medical information and decision-making tools appears to contribute to improved physician-patient communication and collaboration.

  2. Integrated Baseline System (IBS). Version 1.03, System Management Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, J.R.; Bailey, S.; Bower, J.C.

    This IBS System Management Guide explains how to install or upgrade the Integrated Baseline System (IBS) software package. The IBS is an emergency management planning and analysis tool that was developed under the direction of the Federal Emergency Management Agency (FEMA). This guide includes detailed instructions for installing the IBS software package on a Digital Equipment Corporation (DEC) VAX computer from the IBS distribution tapes. The installation instructions include procedures for both first-time installations and upgrades to existing IBS installations. To ensure that the system manager has the background necessary for successful installation of the IBS package, this guide also includes information on IBS computer requirements, software organization, and the generation of IBS distribution tapes. When special utility programs are used during IBS installation and setups, this guide refers you to the IBS Utilities Guide for specific instructions. This guide also refers you to the IBS Data Management Guide for detailed descriptions of some IBS data files and structures. Any special requirements for installation are not documented here but should be included in a set of installation notes that come with the distribution tapes.

  3. Effects of Large-Scale Solar Installations on Dust Mobilization and Air Quality

    NASA Astrophysics Data System (ADS)

    Pratt, J. T.; Singh, D.; Diffenbaugh, N. S.

    2012-12-01

    Large-scale solar projects are increasingly being developed worldwide and many of these installations are located in arid, desert regions. To examine the effects of these projects on regional dust mobilization and air quality, we analyze aerosol product data from NASA's Multi-angle Imaging Spectroradiometer (MISR) at annual and seasonal time intervals near fifteen photovoltaic and solar thermal stations ranging from 5-200 MW (12-4,942 acres) in size. The stations are distributed over eight different countries and were chosen based on size, location and installation date; most of the installations are large-scale, took place in desert climates and were installed between 2006 and 2010. We also consider air quality measurements of particulate matter between 2.5 and 10 micrometers (PM10) from the Environmental Protection Agency (EPA) monitoring sites near and downwind from the project installations in the U.S. We use monthly wind data from NOAA's National Center for Atmospheric Prediction (NCEP) Global Reanalysis to select the stations downwind from the installations, and then perform statistical analysis on the data to identify any significant changes in these quantities. We find that fourteen of the fifteen regions show lower aerosol product after the start of the installations, and all six PM10 monitoring stations show lower particulate matter measurements after construction commenced. However, the results fail to show any statistically significant differences in aerosol optical index or PM10 measurements before and after the large-scale solar installations. Many of the large installations are very recent, and there is insufficient data to fully understand the long-term effects on air quality. More data and higher resolution analysis are necessary to better understand the relationship between large-scale solar, dust and air quality.
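
    A minimal sketch of the kind of before/after comparison such a study runs is shown below. It computes a Welch two-sample t statistic on made-up aerosol values (not MISR data); a lower mean with |t| below roughly 2 mirrors the paper's pattern of lower but not statistically significant readings:

```python
import math

# Hypothetical before/after aerosol samples (illustrative numbers, not MISR data).
before = [0.21, 0.25, 0.19, 0.23, 0.28, 0.22, 0.24, 0.20]
after  = [0.20, 0.23, 0.18, 0.22, 0.26, 0.21, 0.22, 0.19]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # unbiased sample variance
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch t statistic: difference of means over its standard error
se = math.sqrt(var(before) / len(before) + var(after) / len(after))
t = (mean(before) - mean(after)) / se
print(round(t, 2))
```

    Here the "after" mean is lower, but the t statistic is well below the usual ~2 threshold, so the difference would not be judged significant at these sample sizes.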

  4. Neural Network approach to assess the thermal affected zone around the injection well in a groundwater heat pump system

    NASA Astrophysics Data System (ADS)

    Lo Russo, Stefano; Taddia, Glenda; Verda, Vittorio

    2014-05-01

    The common use of well doublets for groundwater-sourced heating or cooling results in a thermal plume of colder or warmer re-injected groundwater known as the Thermal Affected Zone (TAZ). The plumes may be regarded either as a potential anthropogenic geothermal resource or as pollution, depending on downstream aquifer usage. A fundamental aspect in groundwater heat pump (GWHP) plant design is the correct evaluation of the thermally affected zone that develops around the injection well. Temperature anomalies are detected through numerical methods. Crucial elements in the process of thermal impact assessment are the sizes of installations, their position, the heating/cooling load of the building, and the temperature drop/increase imposed on the re-injected water flow. For multiple-well schemes, heterogeneous aquifers, or variable heating and cooling loads, numerical models that simulate groundwater and heat transport are needed. These tools should consider numerous scenarios obtained by considering different heating/cooling loads, positions, and operating modes. Computational fluid dynamic (CFD) models are widely used in this field because they offer the opportunity to calculate the time evolution of the thermal plume produced by a heat pump, depending on the characteristics of the subsurface and the heat pump. Nevertheless, these models require large computational efforts, and therefore their use may be limited to a reasonable number of scenarios. Neural networks could represent an alternative to CFD for assessing the TAZ under different scenarios referring to a specific site. The use of neural networks is proposed to determine the time evolution of the groundwater temperature downstream of an installation as a function of the possible utilization profiles of the heat pump.
The main advantage of neural network modeling is the possibility of evaluating a large number of scenarios in a very short time, which is very useful for the preliminary analysis of future multiple installations. The neural network is trained using the results from a CFD model (FEFLOW) applied to the installation at Politecnico di Torino (Italy) under several operating conditions.
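
    The surrogate idea above can be sketched in a few lines: train a small network on input/output pairs from the CFD model, then query the network instead of re-running the CFD for each scenario. In this self-contained sketch a synthetic function stands in for the FEFLOW results, and the network size, input scaling, and learning rate are illustrative assumptions:

```python
import math
import random

random.seed(0)

# Synthetic stand-in for CFD output: downstream temperature rise (K)
# as a function of heat load (kW) and elapsed time (days).
def cfd_stand_in(load_kw, t_days):
    return 0.5 * load_kw * (1.0 - math.exp(-t_days / 10.0))

# "Training scenarios": a grid of loads and times, labelled by the CFD stand-in.
data = [(load, t, cfd_stand_in(load, t))
        for load in (2.0, 4.0, 6.0, 8.0) for t in range(1, 31)]

# One-hidden-layer tanh network, trained by plain stochastic gradient descent.
H = 8
w1 = [[random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(load, t):
    hid = [math.tanh(w[0] * load / 8.0 + w[1] * t / 30.0 + b)
           for w, b in zip(w1, b1)]
    return sum(v * h for v, h in zip(w2, hid)) + b2, hid

def mse():
    return sum((forward(l, t)[0] - y) ** 2 for l, t, y in data) / len(data)

mse_before = mse()
lr = 0.02
for _ in range(300):
    for load, t, y in data:
        out, hid = forward(load, t)
        err = out - y
        for j in range(H):
            gh = err * w2[j] * (1.0 - hid[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * hid[j]
            w1[j][0] -= lr * gh * load / 8.0
            w1[j][1] -= lr * gh * t / 30.0
            b1[j] -= lr * gh
        b2 -= lr * err
mse_after = mse()
```

    Once trained, evaluating `forward` for a new utilization profile costs microseconds, whereas each CFD run costs hours, which is exactly the speed-up the abstract claims for screening many future installation scenarios.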

  5. Helicopter rotor and engine sizing for preliminary performance estimation

    NASA Technical Reports Server (NTRS)

    Talbot, P. D.; Bowles, J. V.; Lee, H. C.

    1986-01-01

    Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
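
    For the hover case, the momentum-theory relations underlying such first-order estimates can be written out directly. This sketch uses the classical ideal-power formula with an assumed figure of merit, not the paper's empirical wind-tunnel correlations, and all numbers are illustrative:

```python
import math

# Illustrative single-rotor helicopter (not from the paper).
weight_n = 40000.0        # gross weight in newtons (~4,080 kg)
rotor_radius_m = 7.0
rho = 1.225               # sea-level air density, kg/m^3
figure_of_merit = 0.7     # assumed hover efficiency

disk_area = math.pi * rotor_radius_m ** 2
disk_loading = weight_n / disk_area                               # N/m^2
induced_velocity = math.sqrt(weight_n / (2.0 * rho * disk_area))  # m/s
ideal_power_w = weight_n * induced_velocity      # momentum-theory minimum
actual_power_w = ideal_power_w / figure_of_merit # installed-power estimate
print(round(disk_loading, 1), round(actual_power_w / 1000.0, 1))
```

    Higher disk loading drives induced velocity, and hence required power, up as its square root, which is why disk loading appears among the fundamental design variables listed above.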

  6. Analysis of severe storm data

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1983-01-01

    The Mesoscale Analysis and Space Sensor (MASS) Data Management and Analysis System developed by Atsuko Computing International (ACI) on the MASS HP-1000 Computer System within the Systems Dynamics Laboratory of the Marshall Space Flight Center is described. The MASS Data Management and Analysis System was successfully implemented and utilized daily by atmospheric scientists to graphically display and analyze large volumes of conventional and satellite derived meteorological data. The scientists can interactively process various atmospheric data (Sounding, Single Level, Grid, and Image) by utilizing the MASS (AVE80) software, which shares common data and user inputs, thereby reducing overhead, optimizing execution time, and thus enhancing user flexibility, usability, and understandability of the total system/software capabilities. In addition, ACI installed eight APPLE III graphics/imaging computer terminals in individual scientists' offices and integrated them into the MASS HP-1000 Computer System, thus providing a significant enhancement to the overall research environment.

  7. Centralized Authentication with Kerberos 5, Part I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, A

    Account administration in a distributed Unix/Linux environment can become very complicated and messy if done by hand. Large sites use special tools to deal with this problem. I will describe how even very small installations, like your three-computer network at home, can take advantage of the very same tools. The problem in a distributed environment is that password and shadow files need to be changed individually on each machine if an account change occurs. Account changes include: password change, addition/removal of accounts, name change of an account (UID/GID changes are a big problem in any case), additional or removed login privileges to a (group of) computer(s), etc. In this article, I will show how Kerberos 5 solves the authentication problem in a distributed computing environment. A second article will describe a solution for the authorization problem.
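
    As an illustration of the client side of such a setup, a minimal krb5.conf might look like the following; the realm and host names here are hypothetical placeholders, not values from the article:

```ini
# /etc/krb5.conf -- minimal Kerberos 5 client configuration (hypothetical realm)
[libdefaults]
    default_realm = EXAMPLE.ORG

[realms]
    EXAMPLE.ORG = {
        kdc = kerberos.example.org
        admin_server = kerberos.example.org
    }

[domain_realm]
    .example.org = EXAMPLE.ORG
```

    With this file distributed to every machine, a single KDC holds the passwords, so an account change happens once on the KDC instead of once per host.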

  8. Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system

    NASA Astrophysics Data System (ADS)

    Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.

    2014-06-01

    In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on their use, in an automated way. For several years the Tier1 has used Quattor [2], a server provisioning tool that is still in production. Nevertheless, we have recently started a comparison study of other tools that provide specific server installation and configuration features and offer a fully customizable solution as an alternative to Quattor. Our choice at the moment fell on the integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for server provisioning and management. The replacement should provide the following properties in order to replicate and gradually improve the current system features: a check for storage-specific constraints, such as a kernel module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configuration; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the new system's suitability in the INFN-T1 environment.
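
    The kernel-module blacklist requirement mentioned above could, for example, be expressed as a modprobe fragment deployed by the provisioning tool. The driver names below are common Fibre Channel HBA modules chosen for illustration, not necessarily those used at CNAF:

```
# /etc/modprobe.d/san-blacklist.conf (illustrative file name and contents)
# Keep Fibre Channel HBA drivers from loading at install/boot time so the
# installer cannot see, and accidentally partition, SAN LUNs.
blacklist qla2xxx
blacklist lpfc
```

    Managing such a fragment from the provisioning system, rather than by hand, is precisely what makes the constraint checkable across thousands of nodes.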

  9. The NASA Energy Conservation Program

    NASA Technical Reports Server (NTRS)

    Gaffney, G. P.

    1977-01-01

    Large energy-intensive research and test equipment at NASA installations is identified, and methods for reducing energy consumption outlined. However, some of the research facilities are involved in developing more efficient, fuel-conserving aircraft, and tradeoffs between immediate and long-term conservation may be necessary. Major programs for conservation include: computer-based systems to automatically monitor and control utility consumption; a steam-producing solid waste incinerator; and a computer-based cost analysis technique to engineer more efficient heating and cooling of buildings. Alternate energy sources in operation or under evaluation include: solar collectors; electric vehicles; and ultrasonically emulsified fuel to attain higher combustion efficiency. Management support, cooperative participation by employees, and effective reporting systems for conservation programs, are also discussed.

  10. Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets.

    PubMed

    Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L

    2014-01-01

    As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze them. Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. Published by the BMJ Publishing Group Limited.

  11. Distributed solar radiation fast dynamic measurement for PV cells

    NASA Astrophysics Data System (ADS)

    Wan, Xuefen; Yang, Yi; Cui, Jian; Du, Xingjing; Zheng, Tao; Sardar, Muhammad Sohail

    2017-10-01

    To study the operating characteristics of PV cells, attention must be given to the dynamic behavior of solar radiation. The dynamic behaviors of annual, monthly, daily and hourly averages of solar radiation have been studied in detail, but faster dynamic behaviors of solar radiation need further research. Random fluctuations of solar radiation over minute-long or second-long ranges, which produce alternating radiation and frequently cool down or warm up the PV cell, decrease conversion efficiency. Fast dynamic processes of solar radiation are mainly related to the stochastic movement of clouds; even under clear sky conditions, solar irradiation shows a certain degree of fast variation. To evaluate the operating characteristics of PV cells under fast dynamic irradiation, a solar radiation measuring array (SRMA) based on large active area photodiodes, LoRa spread spectrum communication and a nanoWatt MCU is proposed. This crossed photodiode structure tracks the fast stochastic movement of clouds. To compensate for the response time of the pyranometer and reduce system cost, terminal nodes with low-cost, fast-response large active area photodiodes are placed beside the positions of the tested PV cells. A central node, consisting of a pyranometer, a large active area photodiode, a wind detector and a host computer, is placed at the center of the array topology to scale the temporal envelope of solar irradiation and obtain calibration information between the pyranometer and the large active area photodiodes. In our SRMA system, the terminal nodes are designed around Microchip's nanoWatt XLP PIC16F1947. The FDS-100 is adopted as the large active area photodiode in the terminal nodes and host computer. The output current and voltage of each PV cell are monitored by I/V measurement. AS62-T27/SX1278 LoRa communication modules are used for communication between the terminal nodes and the host computer.
Because the LoRa LPWAN (Low Power Wide Area Network) specification provides seamless interoperability among smart devices without complex local installations, configuring our SRMA system is very easy. LoRa also gives the SRMA a means to overcome the short communication distances and weather-related signal propagation losses seen in technologies such as ZigBee and WiFi. The host computer in the SRMA system uses the low-power single-board PC EMB-3870 produced by NORCO. A wind direction sensor SM5386B and a wind-force sensor SM5387B are connected to the host computer through an RS-485 bus for wind reference data collection, and a Davis 6450 solar radiation sensor, a precision instrument that detects radiation at wavelengths of 300 to 1100 nanometers, allows the host computer to follow real-time solar radiation. A LoRa polling scheme is adopted for communication between the host computer and the terminal nodes. An experimental SRMA has been established and was tested in Ganyu, Jiangsu Province, from May to August 2016, with distances between the nodes and the host computer of 100 m to 1900 m. In operation, the SRMA system showed high reliability: terminal nodes followed the instructions from the host computer and collected solar radiation data from the distributed PV cells effectively, the host computer managed the SRMA and gathered reference parameters well, and communications between the host computer and the terminal nodes were almost unaffected by the weather. In conclusion, the testing results show that the SRMA can be a capable method for fast dynamic measurement of solar radiation and the related PV cell operating characteristics.
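The host-to-node polling scheme described above can be sketched as a simple round-robin loop. This is a minimal illustration, not the SRMA firmware: `query_node` is a hypothetical stand-in for the actual AS62-T27/SX1278 driver calls.

```python
import time

# Hypothetical stand-in for the LoRa radio driver: in the real SRMA the host
# would transmit a request frame and wait for the addressed node's reply.
def query_node(node_id):
    # Placeholder returning a fake irradiance sample (W/m^2) for node_id.
    return {"node": node_id, "irradiance_wm2": 800.0 + node_id}

def poll_terminal_nodes(node_ids, interval_s=0.0):
    """Round-robin poll: the host addresses one terminal node at a time,
    so replies never collide on the shared radio channel."""
    samples = []
    for node_id in node_ids:
        samples.append(query_node(node_id))
        time.sleep(interval_s)  # pacing between successive polls
    return samples

samples = poll_terminal_nodes([1, 2, 3])
```

Polling the nodes one at a time keeps only one transmitter active on the shared LoRa channel at any moment, which is why a master-driven polling scheme suits a star topology like this one.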

  12. Design and Implementation of an MC68020-Based Educational Computer Board

    DTIC Science & Technology

    1989-12-01

device and the other for a Macintosh personal computer. A stored program can be installed in 8K bytes of Programmable Read Only Memory (PROM) to initialize...MHz. It includes four Static Random Access Memory (SRAM) chips which provide a storage of 32K bytes. Two Programmable Array Logic (PAL) chips...

  13. Application-Program-Installer Builder

    NASA Technical Reports Server (NTRS)

    Wolgast, Paul; Demore, Martha; Lowik, Paul

    2007-01-01

A computer program builds application programming interfaces (APIs) and related software components for installing and uninstalling application programs on any of a variety of computers and operating systems that support the Java programming language in its binary form. This program is partly similar in function to commercial installer-building software such as InstallShield. It is intended to satisfy a quasi-industry-standard set of requirements for APIs that enable such installation and uninstallation while avoiding the pitfalls commonly encountered during software installation. The requirements include the following: 1) properly detecting prerequisites of an application program before performing the installation; 2) properly registering component requirements; 3) correctly measuring the required hard-disk space, including accounting for prerequisite components that have already been installed; and 4) correctly uninstalling an application program. Correct uninstallation includes (1) detecting whether any component of the program to be removed is required by another program, (2) not removing that component, and (3) deleting references to requirements of the to-be-removed program for components of other programs so that those components can be properly removed at a later time.
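The uninstallation logic (requirement 4) amounts to reference counting of shared components. Below is a toy sketch of that bookkeeping; the `ComponentRegistry` class and its data layout are invented for illustration and are not the program's actual API.

```python
class ComponentRegistry:
    """Toy model of shared-component bookkeeping during uninstall:
    a component is deleted only when no installed program still needs it."""

    def __init__(self):
        self.requirements = {}  # program name -> set of component names

    def register(self, program, components):
        self.requirements[program] = set(components)

    def uninstall(self, program):
        needed = self.requirements.pop(program, set())
        # Components still referenced by any remaining program must survive.
        still_used = set().union(*self.requirements.values()) if self.requirements else set()
        removed = sorted(needed - still_used)   # safe to delete now
        retained = sorted(needed & still_used)  # another program depends on these
        return removed, retained

reg = ComponentRegistry()
reg.register("app_a", ["runtime", "codec"])
reg.register("app_b", ["runtime"])
removed, retained = reg.uninstall("app_a")  # "codec" removed, "runtime" retained
```

Deferring deletion of `runtime` until `app_b` is also uninstalled is exactly the behavior described in points (1)-(3) above.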

  14. Evaluation of the utility and energy monitoring and control system installed at the US Army, Europe, 409th Base Support Battalion, Military Community at Grafenwoehr, Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broders, M.A.; Ruppel, F.R.

    1993-05-01

Under the provisions of Interagency Agreement DOE 1938-B090-A1 between the US Department of Energy (DOE) and the US Army Europe (USAREUR), Martin Marietta Energy Systems, Inc., is providing technical assistance to USAREUR in the areas of computer science, information engineering, energy studies, and engineering and systems development. One of the initial projects authorized under this interagency agreement is the evaluation of utility and energy monitoring and control systems (UEMCSs) installed at selected US Army installations in Europe. This report is an evaluation of the overall energy-conservation effectiveness and use of the UEMCS at the 409th Base Support Battalion located in Grafenwoehr, Germany. The 409th Base Support Battalion is a large USAREUR military training facility that comprises a large training area, leased housing, the main post area, and the camp areas that include Camps Aachen, Algier, Normandy, Cheb, and Kasserine. All of these facilities are consumers of electrical and thermal energy. However, only buildings and facilities in the main post area and Camps Aachen, Algier, and Normandy are under the control of the UEMCS. The focus of this evaluation report is on these specific areas. Recommendations to further increase energy and cost savings and to improve operation of the UEMCS are proposed.

  15. S-1 project. Volume I. Architecture. 1979 annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-01-01

The US Navy is one of the world's largest users of digital computing equipment having a procurement cost of at least $50,000, and is the single largest such computer customer in the Department of Defense. Its projected acquisition plan for embedded computer systems during the first half of the 80s contemplates the installation of over 10,000 such systems at an estimated cost of several billions of dollars. This expenditure, though large, is dwarfed by the 85 billion dollars which DOD is projected to spend during the next half-decade on computer software, the near-majority of which will be spent by the Navy; the life-cycle costs of the 700,000+ lines of software for a single large Navy weapons systems application (e.g., AEGIS) have been conservatively estimated at most of a billion dollars. The S-1 Project is dedicated to realizing potentially large improvements in the efficiency with which such very large sums may be spent, so that greater military effectiveness may be secured earlier, and with smaller expenditures. The fundamental objectives of the S-1 Project's work are first to enable the Navy to be able to quickly, reliably and inexpensively evaluate at any time what is available from the state-of-the-art in digital processing systems and what the relevance of such systems may be to Navy data processing applications; and second to provide reference prototype systems to support possible competitive procurement action leading to deployment of such systems.

  16. Experience of public procurement of Open Compute servers

    NASA Astrophysics Data System (ADS)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate if the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  17. Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Stanescu, D.; Hussaini, M. Y.

    2003-01-01

The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far field. The effects of non-uniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing. © 2002 Elsevier Science Ltd. All rights reserved.

  18. Quantitative Microbial Risk Assessment Tutorial Installation of Software for Watershed Modeling in Support of QMRA - Updated 2017

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Microbial ...

  19. Seismic site-response characterization of high-velocity sites using advanced geophysical techniques: application to the NAGRA-Net

    NASA Astrophysics Data System (ADS)

    Poggi, V.; Burjanek, J.; Michel, C.; Fäh, D.

    2017-08-01

The Swiss Seismological Service (SED) has recently finalised the installation of ten new seismological broadband stations in northern Switzerland. The project was led in cooperation with the National Cooperative for the Disposal of Radioactive Waste (Nagra) and Swissnuclear to monitor microseismicity at potential locations of nuclear-waste repositories. To further improve the quality and usability of the seismic recordings, an extensive characterization of the sites surrounding the installation area was performed following a standardised investigation protocol. State-of-the-art geophysical techniques have been used, including advanced active and passive seismic methods. The results of all analyses converged to the definition of a set of best-representative 1-D velocity profiles for each site, which are the input for the computation of engineering soil proxies (travel-time averaged velocity and quarter-wavelength parameters) and numerical amplification models. Computed site response is then validated through comparison with empirical site amplification, which is currently available for any station connected to the Swiss seismic networks. With the goal of a high-sensitivity network, most of the NAGRA stations have been installed on stiff-soil sites of rather high seismic velocity. Seismic characterization of such sites has always been considered challenging, due to the lack of a significant velocity contrast and the large wavelengths required to investigate the frequency range of engineering interest. We describe how ambient vibration techniques can successfully be applied in these particular conditions, providing practical recommendations for best practice in the seismic site characterization of high-velocity sites.

  20. Computer code for estimating installed performance of aircraft gas turbine engines. Volume 1: Final report

    NASA Technical Reports Server (NTRS)

    Kowalski, E. J.

    1979-01-01

A computerized method that utilizes engine performance data to estimate the installed performance of aircraft gas turbine engines is described. The installation effects accounted for include engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag.

  1. Optimal reactive planning with security constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.

    1995-12-31

The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.
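The sequential strategy can be illustrated with a toy model: process one system state at a time, and for each bus keep the largest reactive shortfall seen so far, so a single pass over all states yields one installation pattern sized for the worst case. The bus names and MVAr figures are invented; the real SCORPION solves a constrained voltage problem per state rather than reading shortfalls directly.

```python
def plan_reactive_sources(states):
    """Process system states one at a time, keeping for each bus the largest
    reactive shortfall (MVAr) seen so far; installing that running maximum at
    each bus satisfies every state. Sizes are continuous, as in SCORPION."""
    required = {}
    for state in states:            # each state maps bus -> shortfall in MVAr
        for bus, mvar in state.items():
            required[bus] = max(required.get(bus, 0.0), mvar)
    return required

# Hypothetical shortfalls for a normal state and two contingency states.
states = [
    {"bus1": 10.0, "bus2": 0.0},
    {"bus1": 25.0, "bus2": 5.0},
    {"bus1": 15.0, "bus2": 12.0},
]
plan = plan_reactive_sources(states)  # {"bus1": 25.0, "bus2": 12.0}
```

Processing states one at a time is what keeps memory and run time acceptable: only one state's data is ever resident, yet the accumulated maxima cover all of them.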

  2. Engineering survey planning for the alignment of a particle accelerator: part II. Design of a reference network and measurement strategy

    NASA Astrophysics Data System (ADS)

    Junqueira Leão, Rodrigo; Raffaelo Baldo, Crhistian; Collucci da Costa Reis, Maria Luisa; Alves Trabanco, Jorge Luiz

    2018-03-01

    The building blocks of particle accelerators are magnets responsible for keeping beams of charged particles at a desired trajectory. Magnets are commonly grouped in support structures named girders, which are mounted on vertical and horizontal stages. The performance of this type of machine is highly dependent on the relative alignment between its main components. The length of particle accelerators ranges from small machines to large-scale national or international facilities, with typical lengths of hundreds of meters to a few kilometers. This relatively large volume together with micrometric positioning tolerances make the alignment activity a classical large-scale dimensional metrology problem. The alignment concept relies on networks of fixed monuments installed on the building structure to which all accelerator components are referred. In this work, the Sirius accelerator is taken as a case study, and an alignment network is optimized via computational methods in terms of geometry, densification, and surveying procedure. Laser trackers are employed to guide the installation and measure the girders’ positions, using the optimized network as a reference and applying the metric developed in part I of this paper. Simulations demonstrate the feasibility of aligning the 220 girders of the Sirius synchrotron to better than 0.080 mm, at a coverage probability of 95%.
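A coverage-probability figure like the quoted "better than 0.080 mm at 95%" is typically obtained by Monte Carlo simulation: draw the independent error contributions, combine them, and take the 95th percentile of the resulting misalignment magnitude. The sketch below uses invented per-source standard deviations, not Sirius data.

```python
import random

def coverage_bound(sigmas_mm, trials=20000, coverage=0.95, seed=1):
    """95th-percentile bound on the combined alignment error magnitude,
    assuming independent zero-mean Gaussian error sources (e.g. network,
    tracker, mount)."""
    random.seed(seed)
    errors = []
    for _ in range(trials):
        # Root-sum-square of one random draw from each error source.
        e = sum(random.gauss(0.0, s) ** 2 for s in sigmas_mm) ** 0.5
        errors.append(e)
    errors.sort()
    return errors[int(coverage * trials)]

# Hypothetical per-source standard deviations in mm.
bound = coverage_bound([0.020, 0.025, 0.015])
```

With these assumed error budgets the simulated 95% bound comes out well under 0.1 mm, which is the kind of check a survey planner would run before committing to an alignment tolerance.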

  3. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr

    2010-10-28

Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters; also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including ligand preparation, parallel AutoDock4/Vina job distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the operating system originally installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system.
With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA achieved a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
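The reported figures follow the standard definitions of speed-up and parallel efficiency; a quick check of the arithmetic:

```python
def speedup(serial_time, parallel_time):
    """Ratio of single-processor run time to parallel run time."""
    return serial_time / parallel_time

def efficiency(speedup_value, processors):
    """Fraction of the ideal linear speed-up actually achieved."""
    return speedup_value / processors

# MOLA's measured 8.64x (AutoDock4) and 8.60x (Vina) on 10 processors
# correspond to 86.4% and 86% parallel efficiency, respectively.
eff_autodock = efficiency(8.64, 10)
eff_vina = efficiency(8.60, 10)
```

Efficiencies in the mid-80% range on heterogeneous, non-dedicated hardware indicate that the per-ligand docking jobs dominate the scheduling and I/O overhead.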

  4. Investigation of the Mechanism of Generation of Acoustic Oscillations inside Complicated Curvilinear Channels

    NASA Astrophysics Data System (ADS)

    Mitrofanova, O. V.; Bayramukov, A. S.; Fedorinov, A. V.

    2017-11-01

Results of computational and theoretical research are presented on the thermophysical features and topology of high-velocity curved and swirling flows that occur inside the complicated channels of collector systems, reactor cores and other equipment of nuclear power installations with pressurized water reactors. Cylindrical curved channels of different configurations, with various combinations of bends and cross-sectional areas, were considered as model objects. Computational experiments to determine the velocity, pressure, vorticity and temperature fields in transverse and longitudinal sections of the pipeline showed that the complicated geometry of the channels can cause large-scale swirling of the flow, cavitation effects and the generation of acoustic fluctuations with a wide spectrum of sound frequencies in the coolant under dynamic operating modes.

  5. Installation of new Generation General Purpose Computer (GPC) compact unit

    NASA Technical Reports Server (NTRS)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  6. Learning to build large structures in space

    NASA Technical Reports Server (NTRS)

    Hagler, T.; Patterson, H. G.; Nathan, C. A.

    1977-01-01

    The paper examines some of the key technologies and forms of construction know-how that will have to be developed and tested for eventual application to building large structures in space. Construction of a shuttle-tended space construction/demonstration platform would comprehensively demonstrate large structure technology, develop construction capability, and furnish a construction platform for a variety of operational large structures. Completion of this platform would lead to demonstrations of the Satellite Power System (SPS) concept, including microwave transmission, fabrication of 20-m-deep beams, conductor installation, rotary joint installation, and solar blanket installation.

  7. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    PubMed

    Katz, Jonathan E

    2017-01-01

Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstallation is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  8. Laboratory and software applications for clinical trials: the global laboratory environment.

    PubMed

    Briscoe, Chad

    2011-11-01

    The Applied Pharmaceutical Software Meeting is held annually. It is sponsored by The Boston Society, a not-for-profit organization that coordinates a series of meetings within the global pharmaceutical industry. The meeting generally focuses on laboratory applications, but in recent years has expanded to include some software applications for clinical trials. The 2011 meeting emphasized the global laboratory environment. Global clinical trials generate massive amounts of data in many locations that must be centralized and processed for efficient analysis. Thus, the meeting had a strong focus on establishing networks and systems for dealing with the computer infrastructure to support such environments. In addition to the globally installed laboratory information management system, electronic laboratory notebook and other traditional laboratory applications, cloud computing is quickly becoming the answer to provide efficient, inexpensive options for managing the large volumes of data and computing power, and thus it served as a central theme for the meeting.

  9. A modular (almost) automatic set-up for elastic multi-tenants cloud (micro)infrastructures

    NASA Astrophysics Data System (ADS)

    Amoroso, A.; Astorino, F.; Bagnasco, S.; Balashov, N. A.; Bianchi, F.; Destefanis, M.; Lusso, S.; Maggiora, M.; Pellegrino, J.; Yan, L.; Yan, T.; Zhang, X.; Zhao, X.

    2017-10-01

An auto-installing tool on a USB drive allows quick and easy automatic deployment of OpenNebula-based cloud infrastructures remotely managed by a central VMDIRAC instance. A single team, at the main site of an HEP collaboration or elsewhere, can manage and run a relatively large network of federated (micro-)cloud infrastructures, making highly dynamic and elastic use of computing resources. Exploiting such an approach can lead to modular systems of cloud-bursting infrastructures addressing complex real-life scenarios.

  10. Installing and Setting Up the Git Software Tool on OS X | High-Performance Computing | NREL

    Science.gov Websites

Learn how to install the Git software tool on OS X for use with the Peregrine system. You can download the latest version of git from http://git-scm.com; the binary installer for OS X is the easiest option.

  11. SOURCE EXPLORER: Towards Web Browser Based Tools for Astronomical Source Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.

    2014-05-01

As a new generation of large format, high-resolution imagers come online (ODI, DECAM, LSST, etc.) we are faced with the daunting prospect of astronomical images containing upwards of hundreds of thousands of identifiable sources. Visualizing and interacting with such large datasets using traditional astronomical tools appears to be unfeasible, and a new approach is required. We present here a method for the display and analysis of arbitrarily large source datasets using dynamically scaling levels of detail, enabling scientists to rapidly move from large-scale spatial overviews down to the level of individual sources and everything in-between. Based on the recognized standards of HTML5+JavaScript, we enable observers and archival users to interact with their images and sources from any modern computer without having to install specialized software. We demonstrate the ability to produce large-scale source lists from the images themselves, as well as overlaying data from publicly available catalogs (2MASS, GALEX, SDSS, etc.) or user-provided source lists. A high-availability cluster of computational nodes allows us to produce these source maps on demand, customized based on user input. User-generated source lists and maps are persistent across sessions and are available for further plotting, analysis, refinement, and culling.
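The dynamically scaling level-of-detail idea can be sketched as magnitude-based culling: wide zooms draw only the brightest sources, and each deeper zoom level admits fainter ones. The thresholds and catalog fields below are invented for illustration.

```python
def visible_sources(sources, zoom_level, mags_per_level=2.0, base_mag=14.0):
    """Return the sources bright enough to draw at this zoom level: each
    deeper zoom level admits `mags_per_level` fainter magnitudes, so the
    number of rendered sources stays manageable at every scale."""
    limit = base_mag + mags_per_level * zoom_level
    return [s for s in sources if s["mag"] <= limit]

catalog = [{"id": 1, "mag": 12.0}, {"id": 2, "mag": 15.5}, {"id": 3, "mag": 19.0}]
overview = visible_sources(catalog, zoom_level=0)  # brightest sources only
detail = visible_sources(catalog, zoom_level=3)    # everything down to mag 20
```

In a browser-based viewer the same culling would typically run server-side per map tile, so the client never downloads more sources than it can render.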

  12. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  13. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.

  14. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters.

    PubMed

    Dahlö, Martin; Scofield, Douglas G; Schaal, Wesley; Spjuth, Ola

    2018-05-01

    Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases.
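A booked-versus-used efficiency metric of the kind developed in the paper can be computed directly from job accounting records; the field names here are illustrative, not the actual UPPMAX schema.

```python
def core_hour_efficiency(jobs):
    """Ratio of CPU core-hours actually consumed to core-hours booked,
    aggregated over a project's jobs; low values flag over-booking."""
    booked = sum(j["cores_booked"] * j["wall_hours"] for j in jobs)
    used = sum(j["cpu_core_hours_used"] for j in jobs)
    return used / booked if booked else 0.0

# Two hypothetical jobs: 16 cores booked for 10 h but only 40 core-hours
# used, and 8 cores booked for 5 h with 30 core-hours used.
jobs = [
    {"cores_booked": 16, "wall_hours": 10.0, "cpu_core_hours_used": 40.0},
    {"cores_booked": 8, "wall_hours": 5.0, "cpu_core_hours_used": 30.0},
]
eff = core_hour_efficiency(jobs)  # 70 / 200 = 0.35
```

A project-level number like 0.35 is the kind of signal that active monitoring, as the authors note, can use to prompt users to book fewer cores or restructure jobs.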

  15. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters

    PubMed Central

    2018-01-01

    Abstract Background Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. Results The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Conclusions Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases. PMID:29659792

  16. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    PubMed

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

    To prevent a large interconnected power system from a cascading failure, brownout, or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) system mean it can only deliver delayed information. The deployment of synchrophasor measurement devices, by contrast, makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing units of the CCN framework make it particularly flexible to customize for a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), which provide multi-timescale frequency predictions ranging from 16.67 ms to 2 s. The developed CCGNN and CCMLPN systems were then implemented on power systems of two different scales, one of which included a large photovoltaic plant. A real-time power system simulator in the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, was then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Big Data, Deep Learning and Tianhe-2 at Sun Yat-Sen University, Guangzhou

    NASA Astrophysics Data System (ADS)

    Yuen, D. A.; Dzwinel, W.; Liu, J.; Zhang, K.

    2014-12-01

    In this decade the big data revolution has permeated many fields, ranging from financial transactions and medical surveys to scientific endeavors, because of the big opportunities people see ahead. What to do with all this data remains an intriguing question. This is where computer scientists, together with applied mathematicians, have made significant inroads in developing deep learning techniques for unraveling new relationships among different variables by means of correlation analysis and data-assimilation methods. Deep learning and big data taken together form a grand-challenge task in high-performance computing, demanding both ultrafast speed and large memory. The Tianhe-2, recently installed at Sun Yat-Sen University in Guangzhou, is well positioned to take up this challenge because it is currently the world's fastest computer at 34 petaflops. Each compute node of Tianhe-2 has two Intel Xeon E5-2600 CPUs and three Xeon Phi accelerators, along with a very large fast RAM of 88 gigabytes per node; the system has a total memory of 1,375 terabytes. All of these technical features will allow very high-dimensional (more than 10) problems in deep learning to be explored carefully on the Tianhe-2. Problems in seismology that can be solved include three-dimensional seismic wave simulations of the whole Earth with a few kilometers' resolution and the recognition of new phases in seismic waveforms from assemblages of large data sets.
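    As a quick sanity check, the per-node and total memory figures quoted above imply the machine's node count (variable names are ours; decimal units, 1 TB = 1000 GB, are assumed):

```python
# Consistency check on the Tianhe-2 memory figures quoted above.
# Assumes decimal units (1 TB = 1000 GB); variable names are illustrative.
ram_per_node_gb = 88
total_ram_tb = 1375
nodes = total_ram_tb * 1000 / ram_per_node_gb
print(round(nodes))  # 15625
```

    About 15,600 nodes, consistent with Tianhe-2's roughly 16,000 compute nodes.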

  18. Neural network approach to prediction of temperatures around groundwater heat pump systems

    NASA Astrophysics Data System (ADS)

    Lo Russo, Stefano; Taddia, Glenda; Gnavi, Loretta; Verda, Vittorio

    2014-01-01

    A fundamental aspect in groundwater heat pump (GWHP) plant design is the correct evaluation of the thermally affected zone that develops around the injection well. This is particularly important to avoid interference with previously existing groundwater uses (wells) and underground structures. Temperature anomalies are detected through numerical methods. Computational fluid dynamic (CFD) models are widely used in this field because they offer the opportunity to calculate the time evolution of the thermal plume produced by a heat pump. The use of neural networks is proposed to determine the time evolution of the groundwater temperature downstream of an installation as a function of the possible utilization profiles of the heat pump. The main advantage of neural network modeling is the possibility of evaluating a large number of scenarios in a very short time, which is very useful for the preliminary analysis of future multiple installations. The neural network is trained using the results from a CFD model (FEFLOW) applied to the installation at Politecnico di Torino (Italy) under several operating conditions. The final results appeared reliable, and the temperature anomalies around the injection well were well predicted.
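    The surrogate-modeling idea described above (train a cheap model on CFD results so that many utilization scenarios can be screened quickly) can be sketched as follows. The synthetic data and the linear least-squares fit are illustrative stand-ins for the paper's FEFLOW output and trained neural network:

```python
import numpy as np

# Sketch of a surrogate model trained on precomputed CFD results, so that
# many heat-pump utilization scenarios can be screened in seconds.
# Synthetic data and a linear least-squares fit stand in for the paper's
# FEFLOW output and trained neural network.
rng = np.random.default_rng(0)
load_kw = rng.uniform(10, 100, size=50)             # heat-pump utilization profile
delta_t = 0.05 * load_kw + rng.normal(0, 0.1, 50)   # downstream temperature anomaly (K)

# Fit anomaly ~ a * load + b from the "CFD" training data.
A = np.column_stack([load_kw, np.ones_like(load_kw)])
coef, *_ = np.linalg.lstsq(A, delta_t, rcond=None)

def predict_anomaly(load):
    """Predicted downstream temperature anomaly (K) for a given load (kW)."""
    return coef[0] * load + coef[1]

print(predict_anomaly(60.0))  # roughly 3 K under these synthetic data
```

    Once fitted, each scenario evaluation is a single dot product, which is what makes screening many future installations fast.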

  19. Intelligent electrical harness connector assembly using Bell Helicopter Textron's 'Wire Harness Automated Manufacturing System'

    NASA Astrophysics Data System (ADS)

    Springer, D. W.

    Bell Helicopter Textron, Incorporated (BHTI) installed two Digital Equipment Corporation PDP-11 computers and an American Can Inc. ink jet printer in 1980 as the cornerstone of the Wire Harness Automated Manufacturing System (WHAMS). WHAMS is based upon the electrical assembly philosophy of continuous filament harness forming. This installation provided BHTI with a 3-to-1 return on investment by reducing wire and cable identification cycle time by 80 percent and harness forming, on dedicated layout tooling, by 40 percent. Yet this improvement in harness forming created a bottleneck in connector assembly. To remove this bottleneck, BHTI has installed a prototype connector assembly cell that integrates the WHAMS database and innovative computer technologies to cut harness connector assembly cycle time. This novel connector assembly cell uses voice recognition, laser identification, and animated computer graphics to guide the electrician through correct assembly of harness connectors.

  20. Protecting the patient by promoting end-user competence in health informatics systems-moves towards a generic health computer user "driving license".

    PubMed

    Rigby, Michael

    2004-03-18

    The effectiveness and quality of health informatics systems' support to healthcare delivery are largely determined by two factors: the suitability of the system installed and the competence of the users. However, the profile of users of large-scale clinical health systems is significantly different from the profile of end-users in other enterprises such as the finance sector, insurance, travel, or retail sales. Work with a mental health provider in Ireland that was introducing a customized electronic patient record (EPR) system identified the strong legal and ethical importance of adequate skills for the health professionals and others who would be the system users. The experience identified the need for a clear and comprehensive generic user qualification at a basic but robust level. The European computer driving license (ECDL) has gained wide recognition as a basic generic qualification for users of computer systems. However, health systems and data have a series of characteristics that differentiate them from other data systems. The logical conclusion was the recognition of a need for an additional domain-specific qualification: an "ECDL Health Supplement". Development of this is now being progressed.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Eric J

    The ResStock analysis tool is helping states, municipalities, utilities, and manufacturers identify which home upgrades save the most energy and money. Across the country there's a vast diversity in the age, size, construction practices, installed equipment, appliances, and resident behavior of the housing stock, not to mention the range of climates. These variations have hindered the accuracy of predicting savings for existing homes. Researchers at the National Renewable Energy Laboratory (NREL) developed ResStock. It's a versatile tool that takes a new approach to large-scale residential energy analysis by combining large public and private data sources, statistical sampling, detailed subhourly building simulations, and high-performance computing. This combination achieves unprecedented granularity and, most importantly, accuracy in modeling the diversity of the single-family housing stock.

  2. Scalable cluster administration - Chiba City I approach and lessons learned.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Navarro, J. P.; Evard, R.; Nurmi, D.

    2002-07-01

    Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster; and we analyze the scalability, flexibility, and reliability benefits and limitations of that approach.
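    The automation described above ultimately means fanning a command out to hundreds of nodes at once. A minimal sketch of such a fan-out, assuming password-less ssh and hypothetical node names (not Chiba City's actual tooling):

```python
# Minimal sketch of fanning one administrative command out to every node in
# parallel, in the spirit of the automated cluster administration described
# above. Node names and the use of password-less ssh are illustrative
# assumptions, not Chiba City's actual tooling.
import subprocess
from concurrent.futures import ThreadPoolExecutor

NODES = [f"ccn{i:03d}" for i in range(1, 315)]  # Chiba City I: 314 nodes

def run_on_node(node, command):
    """Run a shell command on one node via ssh; return (node, exit status)."""
    result = subprocess.run(["ssh", node, command],
                            capture_output=True, text=True, timeout=60)
    return node, result.returncode

def run_everywhere(command, workers=32):
    """Run the command on all nodes concurrently; map node -> exit status."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda n: run_on_node(n, command), NODES))
```

    The bounded thread pool keeps the administrative host from opening hundreds of simultaneous connections, one of the scalability concerns the paper analyzes.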

  3. Antenna pattern study, task 2

    NASA Technical Reports Server (NTRS)

    Harper, Warren

    1989-01-01

    Two electromagnetic scattering codes, NEC-BSC and ESP3, were delivered and installed on a NASA VAX computer for use by Marshall Space Flight Center antenna design personnel. The work included updating the existing codes and certain supplementary software, installing the codes on a computer to be delivered to the customer, providing a capability for graphic display of the data computed by the codes, and assisting the customer in solving specific problems that demonstrate the use of the codes. With the exception of one code revision, all of these tasks were performed.

  4. CFD-CAA Coupled Calculations of a Tandem Cylinder Configuration to Assess Facility Installation Effects

    NASA Technical Reports Server (NTRS)

    Redonnet, Stephane; Lockard, David P.; Khorrami, Mehdi R.; Choudhari, Meelan M.

    2011-01-01

    This paper presents a numerical assessment of acoustic installation effects in the tandem cylinder (TC) experiments conducted in the NASA Langley Quiet Flow Facility (QFF), an open-jet, anechoic wind tunnel. Calculations that couple the Computational Fluid Dynamics (CFD) and Computational Aeroacoustics (CAA) of the TC configuration within the QFF are conducted using the CFD simulation results previously obtained at NASA LaRC. The coupled simulations enable the assessment of installation effects associated with several specific features in the QFF facility that may have impacted the measured acoustic signature during the experiment. The CFD-CAA coupling is based on CFD data along a suitably chosen surface, and employs a technique that was recently improved to account for installed configurations involving acoustic backscatter into the CFD domain. First, a CFD-CAA calculation is conducted for an isolated TC configuration to assess the coupling approach, as well as to generate a reference solution for subsequent assessments of QFF installation effects. Direct comparisons between the CFD-CAA calculations associated with the various installed configurations allow the assessment of the effects of each component (nozzle, collector, etc.) or feature (confined vs. free jet flow, etc.) characterizing the NASA LaRC QFF facility.

  5. SSR_pipeline--computer software for the identification of microsatellite sequences from paired-end Illumina high-throughput DNA sequence data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
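    The task of module (3) above, finding sequences whose microsatellites conform to user-specified parameters, can be illustrated with a short regular-expression search. The motif-length range and minimum-repeat threshold below are illustrative parameters, not SSR_pipeline's defaults:

```python
import re

# Illustrative microsatellite (SSR) search of the kind module (3) performs.
# Motif lengths (2-6 bp) and the 4+ repeat threshold are example parameters,
# not SSR_pipeline's defaults.
SSR_RE = re.compile(r"([ACGT]{2,6}?)\1{3,}")  # short motif repeated 4+ times

def find_ssrs(sequence):
    """Return (motif, repeat_count, start) tuples for SSRs in a DNA sequence."""
    hits = []
    for m in SSR_RE.finditer(sequence.upper()):
        motif = m.group(1)
        count = len(m.group(0)) // len(motif)
        hits.append((motif, count, m.start()))
    return hits

print(find_ssrs("TTGACACACACACAGGT"))  # [('AC', 5, 3)]
```

    The lazy quantifier on the motif group makes the expression prefer the shortest repeating unit, so a dinucleotide repeat is reported as such rather than as a longer compound motif.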

  6. Going Dotty: a practical guide for installing new hand hygiene products.

    PubMed

    Bush, Kathryn; Mah, Manuel W; Meyers, Gwyneth; Armstrong, Pamela; Stoesz, Janice; Strople, Sally

    2007-12-01

    This report distills our experiences coordinating the installation of a new commercial line of hand hygiene products in a large, integrated health care region in Western Canada into a practical guide that can benefit infection control professionals. Key considerations when managing such a large hand hygiene product installation include stakeholder collaboration, management of occupational hand dermatitis, housekeeping support, and communication.

  7. Do personal computers make doctors less personal?

    PubMed Central

    Rethans, Jan-Joost; Höppener, Paul; Wolfs, George; Diederiks, Jos

    1988-01-01

    Ten months after the installation of a computer in a general practice surgery a postal survey (piloted questionnaire) was sent to 390 patients. The patients' views of their relationship with their doctor after the computer was introduced were compared with their view of their relationship before the installation of the computer. More than 96% of the patients (n=263) stated that contact with their doctor was as easy and as personal as before. Most stated that the computer did not influence the duration of the consultation. Eighty one patients (30%) stated, however, that they thought that their privacy was reduced. Unlike studies of patients' attitudes performed before any actual experience of use of a computer in general practice, this study found that patients have little difficulty in accepting the presence of a computer in the consultation room. Nevertheless, doctors should inform their patients about any connections between their computer and other, external computers to allay fears about a decrease in privacy. PMID:3132287

  8. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged and pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole-genome, and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources, and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lower the barrier to entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.

  9. Mission Driven Scene Understanding: Candidate Model Training and Validation

    DTIC Science & Technology

    2016-09-01

    driven scene understanding. One of the candidate engines that we are evaluating is a convolutional neural network (CNN) program installed on a Windows 10...Theano-AlexNet) installed on a Windows 10 notebook computer. To the best of our knowledge, an implementation of the open-source, Python-based...AlexNet CNN on a Windows notebook computer has not been previously reported. In this report, we present progress toward the proof-of-principle testing

  10. Computer code for estimating installed performance of aircraft gas turbine engines. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Kowalski, E. J.

    1979-01-01

    A computerized method which utilizes engine performance data to estimate the installed performance of aircraft gas turbine engines is presented. This installation accounting includes engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. A user-oriented description of the program input requirements, program output, deck setup, and operating instructions is presented.

  11. Bolt installation tool for tightening large nuts and bolts

    NASA Technical Reports Server (NTRS)

    Mcdougal, A. R.; Norman, R. M.

    1974-01-01

    Large bolts and nuts are accurately tightened to structures without damaging torque stresses. There are two models of the bolt installation tool: one rigidly mounted and one hand held. Each model includes a torque-multiplier unit.

  12. TOWARD A COMPUTER BASED INSTRUCTIONAL SYSTEM.

    ERIC Educational Resources Information Center

    GARIGLIO, LAWRENCE M.; RODGERS, WILLIAM A.

    The information for this report was obtained from various computer assisted instruction installations. Computer based instruction refers to a system aimed at individualized instruction, with the computer as central control. Such a system has 3 major subsystems--instructional, research, and managerial. This report emphasizes the instructional…

  13. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith were also purchased. A reading room was converted into a research computer lab by adding furniture and an air conditioning unit, to provide an appropriate working environment for the researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  14. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  15. FUN3D Manual: 13.2

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.2, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  16. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  17. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  18. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  19. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  20. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  1. FUN3D Manual: 13.1

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.1, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  2. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  3. Rio: a dynamic self-healing services architecture using Jini networking technology

    NASA Astrophysics Data System (ADS)

    Clarke, James B.

    2002-06-01

    Current mainstream distributed Java architectures offer great capabilities embracing conventional enterprise architecture patterns and designs. These traditional systems provide robust transaction-oriented environments that are in large part focused on data and host processors. Typically, these implementations require that an entire application be deployed on every machine that will be used as a compute resource. For this to happen, the application is usually taken down, then installed and started with all systems in sync and aware of each other. Static environments such as these are extremely difficult to set up, deploy, and administer.

  4. FUN3D Manual: 13.3

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2018-01-01

    This manual describes the installation and execution of FUN3D version 13.3, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  5. LightLeaves: computer controlled kinetic reflection hologram installation and a brief discussion of earlier work

    NASA Astrophysics Data System (ADS)

    Connors Chen, Betsy

    2013-02-01

    LightLeaves is an installation combining leaf shaped, white light reflection holograms of landscape images with a special kinetic lighting device that houses a lamp and moving leaf shaped masks. The masks are controlled by an Arduino microcontroller and servomotors that position the masks in front of the illumination source of the holograms. The work is the most recent in a long series of landscapes that combine multi-hologram installations with computer controlled devices that play with the motion of the holograms, the light, sound or other elements in the work. LightLeaves was first exhibited at the Peabody Essex Museum in Salem, Massachusetts in a show titled "Eye Spy: Playing with Perception".

  6. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1980-01-01

    The computational techniques are described which are utilized at Lewis Research Center to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. Cycle performance, and engine weight can be calculated along with costs and installation effects as opposed to fuel consumption alone. Almost any conceivable turbine engine cycle can be studied. These computer codes are: NNEP, WATE, LIFCYC, INSTAL, and POD DRG. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight and cost for representative types of aircraft and missions.

  7. Aerodynamics of heat exchangers for high-altitude aircraft

    NASA Technical Reports Server (NTRS)

    Drela, Mark

    1996-01-01

    Reduction of convective heat transfer with altitude dictates unusually large heat exchangers for piston-engined high-altitude aircraft. The relatively large aircraft drag fraction associated with cooling at high altitudes makes the efficient design of the entire heat exchanger installation an essential part of the aircraft's aerodynamic design. The parameters that directly influence cooling drag are developed in the context of high-altitude flight. Candidate wing airfoils that incorporate heat exchangers are examined. Such integrated wing-airfoil/heat-exchanger installations appear to be attractive alternatives to isolated heat-exchanger installations. Examples are drawn from integrated installations on existing or planned high-altitude aircraft.

  8. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed with High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there is tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application of it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics and cross-disciplinary studies.

  9. Wind power for the electric-utility industry: Policy incentives for fuel conservation

    NASA Astrophysics Data System (ADS)

    March, F.; Dlott, E. H.; Korn, D. H.; Madio, F. R.; McArthur, R. C.; Vachon, W. A.

    1982-06-01

    A systematic method for evaluating the economics of solar-electric/conservation technologies as fuel-savings investments for electric utilities in the presence of changing federal incentive policies is presented. The focus is on wind energy conversion systems (WECS) as the solar technology closest to near-term large scale implementation. Commercially available large WECS are described, along with computer models to calculate the economic impact of the inclusion of WECS as 10% of the base-load generating capacity on a grid. A guide to legal structures and relationships which impinge on large-scale WECS utilization is developed, together with a quantitative examination of the installation of 1000 MWe of WECS capacity by a utility in the northeast states. Engineering and financial analyses were performed, with results indicating government policy changes necessary to encourage the entrance of utilities into the field of windpower utilization.

  10. Evaluation of computer usage in healthcare among private practitioners of NCT Delhi.

    PubMed

    Ganeshkumar, P; Arun Kumar, Sharma; Rajoura, O P

    2011-01-01

    1. To evaluate the usage and knowledge of computers and Information and Communication Technology in health care delivery by private practitioners. 2. To understand the determinants of computer usage by them. A cross-sectional study was conducted among private practitioners practising in three districts of the NCT of Delhi between November 2007 and December 2008, selected by a stratified random sampling method; knowledge and usage of computers in health care, and the determinants of computer usage, were evaluated with a pre-coded, semi-open-ended questionnaire. About 77% of the practitioners reported having a computer and access to the internet. Computer availability and internet accessibility were highest among super-speciality practitioners. Practitioners who had attended a computer course were 13.8 times [OR: 13.8 (7.3 - 25.8)] more likely to have installed an EHR in the clinic. Technical issues were the major perceived barrier to installing a computer in the clinic. Practice speciality, previous attendance of a computer course, and the age at which practitioners started using a computer influenced knowledge about computers. Speciality of the practice, presence of a computer professional, and gender were the determinants of computer usage.
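An odds ratio with a 95% confidence interval, as reported above, is conventionally computed from a 2x2 table with the Woolf (log) method. A minimal sketch, using hypothetical counts invented purely for illustration (the study's raw counts are not given in the abstract):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR), Woolf method.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40 of 60 course attendees installed an EHR,
# versus 30 of 240 non-attendees.
or_, lo, hi = odds_ratio_ci(40, 20, 30, 210)
print(round(or_, 1))  # -> 14.0
```

With these made-up counts the point estimate happens to land near the reported 13.8; that is a coincidence of the chosen numbers, not a reconstruction of the study's data.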

  11. Reproducible Earth observation analytics: challenges, ideas, and a study case on containerized land use change detection

    NASA Astrophysics Data System (ADS)

    Appel, Marius; Nüst, Daniel; Pebesma, Edzer

    2017-04-01

    Geoscientific analyses of Earth observation data typically involve a long path from data acquisition to scientific results and conclusions. Before starting the actual processing, scenes must be downloaded from the providers' platforms and the computing infrastructure needs to be prepared. The computing environment often requires specialized software, which in turn might have lots of dependencies. The software is often highly customized and provided without commercial support, which leads to rather ad-hoc systems and irreproducible results. To let other scientists reproduce the analyses, the full workspace including data, code, the computing environment, and documentation must be bundled and shared. Technologies such as virtualization or containerization allow for the creation of identical computing environments with relatively little effort. Challenges, however, arise when the volume of the data is too large, when computations are done in a cluster environment, or when complex software components such as databases are used. We discuss these challenges for the example of scalable Land use change detection on Landsat imagery. We present a reproducible implementation that runs R and the scalable data management and analytical system SciDB within a Docker container. Thanks to an explicit container recipe (the Dockerfile), this enables the all-in-one reproduction including the installation of software components, the ingestion of the data, and the execution of the analysis in a well-defined environment. We furthermore discuss possibilities how the implementation could be transferred to multi-container environments in order to support reproducibility on large cluster environments.
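The all-in-one container reproduction described above (build from a Dockerfile, then run the analysis in a well-defined environment) can be pictured with a small command wrapper. A minimal sketch, in which the image tag, data path, and analysis script name are hypothetical stand-ins; the study's actual Dockerfile and entry points are not reproduced here:

```python
import shlex

def docker_build_cmd(tag, context="."):
    # Build the image from a Dockerfile in `context` (the explicit recipe
    # that makes the computing environment reproducible).
    return ["docker", "build", "-t", tag, context]

def docker_run_cmd(tag, host_data_dir, script):
    # Mount the input data read-only so the analysis cannot mutate it;
    # --rm discards the container afterwards, leaving a clean slate.
    return ["docker", "run", "--rm",
            "-v", f"{host_data_dir}:/data:ro",
            tag, "Rscript", script]

build = docker_build_cmd("lucc-scidb:1.0")               # hypothetical tag
run = docker_run_cmd("lucc-scidb:1.0", "/srv/landsat",   # hypothetical path
                     "detect_change.R")                  # hypothetical script
print(" ".join(shlex.quote(part) for part in run))
```

Passing these argument lists to `subprocess.run` would execute the build and the analysis on a host with Docker installed; keeping them as data makes the workflow easy to log and rerun.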

  12. BioSMACK: a linux live CD for genome-wide association analyses.

    PubMed

    Hong, Chang Bum; Kim, Young Jin; Moon, Sanghoon; Shin, Young-Ah; Go, Min Jin; Kim, Dong-Joon; Lee, Jong-Young; Cho, Yoon Shin

    2012-01-01

    Recent advances in high-throughput genotyping technologies have enabled us to conduct a genome-wide association study (GWAS) on a large cohort. However, analyzing millions of single nucleotide polymorphisms (SNPs) is still a difficult task for researchers conducting a GWAS. Several difficulties, such as compatibility and dependency issues, are often encountered by researchers during the installation of analysis software. This is a huge obstacle to any research institute without computing facilities and specialists. Therefore, a proper research environment is an urgent need for researchers working on GWAS. We developed BioSMACK to provide a research environment for GWAS that requires no configuration and is easy to use. BioSMACK is based on the Ubuntu Live CD, which offers a complete Linux-based operating system environment without installation. Moreover, we provide users with a GWAS manual consisting of a series of guidelines for GWAS and useful examples. BioSMACK is freely available at http://ksnp.cdc.go.kr/biosmack.

  13. Assessment of distributed solar power systems: Issues and impacts

    NASA Astrophysics Data System (ADS)

    Moyle, R. A.; Chernoff, H.; Schweizer, T. C.; Patton, J. B.

    1982-11-01

    The installation of distributed solar-power systems presents electric utilities with a host of questions. Some of the technical and economic impacts of these systems are discussed. Among the technical interconnect issues are isolated operation, power quality, line safety, and metering options. Economic issues include user purchase criteria, structures and installation costs, marketing and product distribution costs, and interconnect costs. An interactive computer program that allows easy calculation of allowable system prices and allowable generation-equipment prices was developed as part of this project. It is concluded that the technical problems raised by distributed solar systems are surmountable, but their resolution may be costly. The stringent purchase criteria likely to be imposed by many potential system users and the economies of large-scale systems make small systems (less than 10 to 20 kW) less attractive than larger systems. Utilities that consider life-cycle costs in making investment decisions and third-party investors who have tax and financial advantages are likely to place the highest value on solar-power systems.

  14. CloudDOE: a user-friendly tool for deploying Hadoop clouds and analyzing high-throughput sequencing data with MapReduce.

    PubMed

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D T; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. 
Interested users may collaborate to improve the source code of CloudDOE to further incorporate more MapReduce bioinformatics tools into CloudDOE and support next-generation big data open source tools, e.g., Hadoop BigTop and Spark. CloudDOE is distributed under Apache License 2.0 and is freely available at http://clouddoe.iis.sinica.edu.tw/.

  15. CloudDOE: A User-Friendly Tool for Deploying Hadoop Clouds and Analyzing High-Throughput Sequencing Data with MapReduce

    PubMed Central

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D. T.; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Background Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. Results We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. Conclusions CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. 
Interested users may collaborate to improve the source code of CloudDOE to further incorporate more MapReduce bioinformatics tools into CloudDOE and support next-generation big data open source tools, e.g., Hadoop BigTop and Spark. Availability: CloudDOE is distributed under Apache License 2.0 and is freely available at http://clouddoe.iis.sinica.edu.tw/. PMID:24897343

  16. Readout Electronics for the Central Drift Chamber of the Belle-II Detector

    NASA Astrophysics Data System (ADS)

    Uchida, Tomohisa; Taniguchi, Takashi; Ikeno, Masahiro; Iwasaki, Yoshihito; Saito, Masatoshi; Shimazaki, Shoichi; Tanaka, Manobu M.; Taniguchi, Nanae; Uno, Shoji

    2015-08-01

    We have developed readout electronics for the central drift chamber (CDC) of the Belle-II detector. The space near the endplate of the CDC for installation of the electronics was limited by the detector structure. Due to the large amount of data generated by the CDC, a high-speed data link with a transfer rate greater than one gigabit was required to transfer the data to a back-end computer. A new readout module was required to satisfy these requirements. This module processes 48 signals from the CDC, converts them to digital data, and transfers the data directly to the computer. All functions that transfer digital data via the high-speed link were implemented on a single module. We have measured its electrical characteristics and confirmed that the results satisfy the requirements of the Belle-II experiment.

  17. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric data base management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random access formats is implemented and integrated with the MASS AVE80 Series general purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems in individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.

  18. The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.

    2012-04-01

    Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
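The decomposition described above can be sketched as precomputed Green's-function response matrices chained by matrix multiplication: each component (e.g. cargo/shielding, then detector) maps an incident photon spectrum to an outgoing or detected one, so a new scenario costs one matrix product rather than a fresh transport run. A toy numerical example, with matrix sizes and values invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_e = 96                                   # energy bins (arbitrary resolution)

source = np.zeros(n_e)
source[60] = 1e6                           # monoenergetic source line (toy)

# Hypothetical component responses: column j holds the spectrum produced
# by one photon emitted in energy bin j (non-negative by construction).
G_shield = np.abs(rng.normal(size=(n_e, n_e)))   # shielding/cargo component
G_det = np.abs(rng.normal(size=(n_e, n_e)))      # detector component

# Detected spectrum for this scenario: chain the components.
signal = G_det @ (G_shield @ source)

# Swapping in a different shielding configuration reuses G_det unchanged;
# the combined response can also be precomputed once per scenario family.
combined = G_det @ G_shield
assert np.allclose(signal, combined @ source)
```

Real response matrices would come from independent transport simulations of each component; the point of the sketch is only the associativity that makes the reuse possible.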

  19. StarTrax --- The Next Generation User Interface

    NASA Astrophysics Data System (ADS)

    Richmond, Alan; White, Nick

    StarTrax is a software package to be distributed to end users for installation on their local computing infrastructure. It will provide access to many services of the HEASARC, i.e. bulletins, catalogs, proposal and analysis tools, initially for the ROSAT MIPS (Mission Information and Planning System), later for the Next Generation Browse. A user activating the GUI will reach all HEASARC capabilities through a uniform view of the system, independent of the local computing environment and of the networking method of accessing StarTrax. Use it if you prefer the point-and-click metaphor of modern GUI technology to the classical command-line interface (CLI). Notable strengths include: easy to use; excellent portability; very robust server support; feedback button on every dialog; painstakingly crafted User Guide. It is designed to support a large number of input devices including terminals, workstations, and personal computers. XVT's Portability Toolkit is used to build the GUI in C/C++ to run on OSF/Motif (UNIX or VMS), OPEN LOOK (UNIX), Macintosh, MS-Windows (DOS), or character-based systems.

  20. Quantitative Microbial Risk Assessment Tutorial: Installation of Software for Watershed Modeling in Support of QMRA

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling:• SDMProjectBuilder (which includes the Microbial Source Module as part...

  1. Measurement of luminance and color uniformity of displays using the large-format scanner

    NASA Astrophysics Data System (ADS)

    Mazikowski, Adam

    2017-08-01

    Uniformity of display luminance and color is important for comfort and good perception of the information presented on the display. Although display technology has developed and improved a lot over the past years, different types of displays still present a challenge in selected applications, e.g. in medical use or in multi-screen installations. A simplified 9-point method of determining uniformity does not always produce satisfactory results, so a different solution is proposed in this paper. The developed system consists of a large-format X-Y-Z ISEL scanner (isel Germany AG), a Konica Minolta high-sensitivity spot photometer-colorimeter (e.g. CS-200, Konica Minolta, Inc.), and a PC. Dedicated software in the LabVIEW environment for control of the scanner, transfer of the measured data to the computer, and visualization of measurement results was also prepared. Using the developed setup, measurements of a plasma display and an LCD-LED display were performed. A heavily worn-out plasma TV unit with several visible artifacts was selected. These tests show the advantages and drawbacks of the described scanning method in comparison with the simplified 9-point uniformity method.
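The simplified 9-point method mentioned above reduces to a min/max ratio over nine luminance measurements taken at fixed screen positions. A minimal sketch with hypothetical readings (the paper's measured values are not given in the abstract):

```python
def uniformity(luminances):
    """Luminance uniformity in percent: 100 * Lmin / Lmax."""
    return 100.0 * min(luminances) / max(luminances)

# Hypothetical 9-point luminance readings in cd/m^2 (3x3 grid, row by row).
nine_points = [182.0, 190.5, 185.2,
               188.1, 195.0, 189.7,
               180.4, 186.3, 183.9]

print(round(uniformity(nine_points), 1))  # -> 92.5
```

A dense scanner sweep simply replaces the nine fixed samples with a full grid of measurements, which is why localized artifacts invisible to the 9-point metric show up in the scanned map.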

  2. Study of Polyolefines Waste Thermo-Destruction in Large Laboratory and in Industrial Installations

    DTIC Science & Technology

    2014-12-15

    "coke" (waste after thermo-destruction) carried out on module No. 2 showed an ash content of 46.1% [20]. This ash content indicates a very large... coke (post-production waste) from the wastes thermo-destruction on 2 modules of a vertical modular installation for thermo-destruction of used polymer...of received waste water, the quantity of received coke, the quantity of gaseous product in periods of carrying out installation work before (first

  3. Physical Computing and Its Scope--Towards a Constructionist Computer Science Curriculum with Physical Computing

    ERIC Educational Resources Information Center

    Przybylla, Mareen; Romeike, Ralf

    2014-01-01

    Physical computing covers the design and realization of interactive objects and installations and allows students to develop concrete, tangible products of the real world, which arise from the learners' imagination. This can be used in computer science education to provide students with interesting and motivating access to the different topic…

  4. Research and realization of monitoring technology on illegal external link of classified computer

    NASA Astrophysics Data System (ADS)

    Zhang, Hong

    2017-06-01

    In recent years, with the continuous development and application of network technology, network security has gradually entered people's field of vision. Unauthorized external connections from hosts on an internal network are an important source of network security threats. At present, most work units pay a certain degree of attention to network security and have adopted many means and methods to prevent network security problems, such as physically isolating the internal network and installing a firewall at the exit. However, these measures to improve network security are often undermined by human behavior that does not comply with the safety rules. For example, a host that accesses the Internet over a wireless connection, or through a dual network card, inadvertently forms a two-way connection between the external network and the internal computers [1]. As a result, important or confidential documents may be leaked, even entirely without the user's awareness. Out-of-band monitoring of classified computers can largely prevent such violations by monitoring the behavior of the offending connection. In this paper, we mainly research and discuss this monitoring technology for classified computers.

  5. Effects of installation caused flow distortion on noise from a fan designed for turbofan engines

    NASA Technical Reports Server (NTRS)

    Povinelli, F. P.; Dittmar, J. H.; Woodward, R. P.

    1972-01-01

    Far-field noise measurements were taken for three different installations of essentially the same fan. The installation with the most uniform inlet flow resulted in fan-blade-passage tone sound pressure levels more than 10 dB lower than the installation with more nonuniform inflow. Perceived noise levels were computed for the various installations and compared. Some measurements of inlet flow distortion were made and used in a blade-passage noise generation theory to predict the effects of distortion on noise. Good agreement was obtained between the prediction and the measured effect. Possible origins of the distortion were identified by observation of tuft action in the vicinity of the inlet.

  6. Apollo Ring Optical Switch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maestas, J.H.

    1987-03-01

    An optical switch was designed, built, and installed at Sandia National Laboratories in Albuquerque, New Mexico, to facilitate the integration of two Apollo computer networks into a single network. This report presents an overview of the optical switch as well as its layout, switch testing procedure and test data, and installation.

  7. ERA 1103 UNIVAC 2 Calculating Machine

    NASA Image and Video Library

    1955-09-21

    The new 10-by 10-Foot Supersonic Wind Tunnel at the Lewis Flight Propulsion Laboratory included high tech data acquisition and analysis systems. The reliable gathering of pressure, speed, temperature, and other data from test runs in the facilities was critical to the research process. Throughout the 1940s and early 1950s female employees, known as computers, recorded all test data and performed initial calculations by hand. The introduction of punch card computers in the late 1940s gradually reduced the number of hands-on calculations. In the mid-1950s new computational machines were installed in the office building of the 10-by 10-Foot tunnel. The new systems included this UNIVAC 1103 vacuum tube computer—the lab’s first centralized computer system. The programming was done on paper tape and fed into the machine. The 10-by 10 computer center also included the Lewis-designed Computer Automated Digital Encoder (CADDE) and Digital Automated Multiple Pressure Recorder (DAMPR) systems which converted test data to binary-coded decimal numbers and recorded test pressures automatically, respectively. The systems primarily served the 10-by 10, but were also applied to the other large facilities. Engineering Research Associates (ERA) developed the initial UNIVAC computer for the Navy in the late 1940s. In 1952 the company designed a commercial version, the UNIVAC 1103. The 1103 was the first computer designed by Seymour Cray and the first commercially successful computer.

  8. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines distributed pre-packaged with pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing. PMID:21878105

  9. Tracking the PhD Students' Daily Computer Use

    ERIC Educational Resources Information Center

    Sim, Kwong Nui; van der Meer, Jacques

    2015-01-01

    This study investigated PhD students' computer activities in their daily research practice. Software that tracks computer usage (Manic Time) was installed on the computers of nine PhD students, who were at their early, mid and final stage in doing their doctoral research in four different discipline areas (Commerce, Humanities, Health Sciences and…

  10. Demystifying the GMAT: Computer-Based Testing Terms

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    2012-01-01

    Computer-based testing can be a powerful means to make all aspects of test administration not only faster and more efficient, but also more accurate and more secure. While the Graduate Management Admission Test (GMAT) exam is a computer adaptive test, there are other approaches. This installment presents a primer of computer-based testing terms.

  11. Examining the Feasibility and Effect of Transitioning GED Tests to Computer

    ERIC Educational Resources Information Center

    Higgins, Jennifer; Patterson, Margaret Becker; Bozman, Martha; Katz, Michael

    2010-01-01

    This study examined the feasibility of administering GED Tests using a computer based testing system with embedded accessibility tools and the impact on test scores and test-taker experience when GED Tests are transitioned from paper to computer. Nineteen test centers across five states successfully installed the computer based testing program,…

  12. A Computer Lab that Students Use but Never See

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    North Carolina State University may never build another computer lab. Instead the university has installed racks of equipment in windowless rooms where students and professors never go. This article describes a project called the Virtual Computing Lab. Users enter it remotely from their own computers in dormitory rooms or libraries. They get all…

  13. New Technology and the Curriculum.

    ERIC Educational Resources Information Center

    Conklin, Joyce

    1987-01-01

    Hillsdale High School, in San Mateo, California, installed the nation's first 15-computer Macintosh laboratory donated by Apple Computer, Inc. This article describes the lab and the uses to which it has been put, including computer education, word processing, preparation of student publications, and creative writing instruction. (PGD)

  14. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  15. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  16. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  17. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  18. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  19. Computer Networking with the Victorian Correspondence School.

    ERIC Educational Resources Information Center

    Conboy, Ian

    During 1985 the Education Department installed two-way radios in 44 remote secondary schools in Victoria, Australia, to improve turn-around time for correspondence assignments. Subsequently, teacher supervisors at Melbourne's Correspondence School sought ways to further augment audio interactivity with computer networking. Computer equipment was…

  20. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1979-01-01

    The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.

  1. Astronomy Education using the Web and a Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Flurchick, K. M.; Culver, Roger B.; Griego, Ben

    2013-04-01

    The combination of a web server and a Computer Algebra System gives students the ability to explore and investigate astronomical concepts presented in a class, which can help student understanding. This combination of technologies provides a framework to extend the classroom experience with independent student exploration. In this presentation we report on the development of this web-based material and some initial results of students making use of the computational tools through webMathematica^TM. The material developed allows the student to analyze and investigate a variety of astronomical phenomena, including topics such as the Runge-Lenz vector, descriptions of the orbits of some of the exo-planets, Bode's law, and other topics related to celestial mechanics. The server-based Computer Algebra System allows for computations without installing software on the student's computer while providing a powerful environment to explore the various concepts. The current system is installed at North Carolina A&T State University and has been used in several undergraduate classes.
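
    The Runge-Lenz vector mentioned in this record is a natural target for the kind of independent numerical exploration it describes. The sketch below is plain Python/NumPy rather than the webMathematica environment the record uses, with arbitrary initial conditions chosen for illustration; it checks that the vector A = p × L − m k r̂ stays constant along a Kepler orbit integrated with leapfrog:

```python
import numpy as np

def accel(r, k=1.0, m=1.0):
    # Newtonian gravity: a = -k r / (m |r|^3)
    return -k * r / (m * np.linalg.norm(r) ** 3)

def runge_lenz(r, v, k=1.0, m=1.0):
    # A = p x L - m k r_hat, conserved along a Kepler orbit
    p = m * v
    L = np.cross(r, p)
    return np.cross(p, L) - m * k * r / np.linalg.norm(r)

# Bound orbit (sub-circular speed -> ellipse), leapfrog integration
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.8, 0.0])
A0 = runge_lenz(r, v)
dt = 1e-3
for _ in range(20000):            # about five orbital periods
    v_half = v + 0.5 * dt * accel(r)
    r = r + dt * v_half
    v = v_half + 0.5 * dt * accel(r)
A1 = runge_lenz(r, v)
print(np.linalg.norm(A1 - A0))    # near zero: the vector is conserved
```

    For these initial conditions |A| = m k e = 0.36, i.e. the vector's length directly encodes the orbital eccentricity, which is one reason it makes a good classroom exercise.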

  2. The influence of installation angle of GGIs on full-tensor gravity gradient measurement

    NASA Astrophysics Data System (ADS)

    Wei, Hongwei; Wu, Meiping

    2018-03-01

    Gravity gradient plays an important role in many disciplines as a fundamental signal reflecting information about the earth. Full-tensor gravity gradient measurement (FGGM) is an effective way to obtain the gravity gradient signal. In this paper, the installation mode of GGIs in FGGM is studied, with the expectation that the accuracy of FGGM can be improved by optimizing the installation mode of the GGIs. We analysed the relationship between the GGIs' installation angle and FGGM by establishing the measurement model of FGGM. It was then proved that the measurement result does not depend on the GGIs' installation angle. This conclusion shows that there is no optimal angle for the GGIs' installation in FGGM, and the installation angle only needs to satisfy the relationship given in the conclusion section of this paper. Finally, this conclusion was demonstrated by computer simulations.

  3. Home and School Technology: Wired versus Wireless.

    ERIC Educational Resources Information Center

    Van Horn, Royal

    2001-01-01

    Presents results of informal research on smart homes and appliances, structured home wiring, whole-house audio/video distribution, hybrid cable, and wireless networks. Computer network wiring is tricky to install unless all-in-one jacketed cable is used. Wireless phones help installers avoid pre-wiring problems in homes and schools. (MLH)

  4. nu-TRLan User Guide Version 1.0: A High-Performance Software Package for Large-Scale Hermitian Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Ichitaro; Wu, Kesheng; Simon, Horst

    2008-10-27

    The original software package TRLan, [TRLan User Guide], page 24, implements the thick restart Lanczos method, [Wu and Simon 2001], page 24, for computing eigenvalues {lambda} and their corresponding eigenvectors v of a symmetric matrix A: Av = {lambda}v. Its effectiveness in computing the exterior eigenvalues of a large matrix has been demonstrated, [LBNL-42982], page 24. However, its performance strongly depends on the user-specified dimension of a projection subspace. If the dimension is too small, TRLan suffers from slow convergence. If it is too large, the computational and memory costs become expensive. Therefore, to balance the solution convergence and costs, users must select an appropriate subspace dimension for each eigenvalue problem at hand. To free users from this difficult task, nu-TRLan, [LBNL-1059E], page 23, adjusts the subspace dimension at every restart such that optimal performance in solving the eigenvalue problem is automatically obtained. This document provides a user guide to the nu-TRLan software package. The original TRLan software package was implemented in Fortran 90 to solve symmetric eigenvalue problems using static projection subspace dimensions. nu-TRLan was developed in C and extended to solve Hermitian eigenvalue problems. It can be invoked using either a static or an adaptive subspace dimension. In order to simplify its use for TRLan users, nu-TRLan has interfaces and features similar to those of TRLan: (1) Solver parameters are stored in a single data structure called trl-info, Chapter 4 [trl-info structure], page 7. (2) Most of the numerical computations are performed by BLAS, [BLAS], page 23, and LAPACK, [LAPACK], page 23, subroutines, which allow nu-TRLan to achieve optimized performance across a wide range of platforms. (3) To solve eigenvalue problems on distributed memory systems, the message passing interface (MPI), [MPI forum], page 23, is used. The rest of this document is organized as follows.
    In Chapter 2 [Installation], page 2, we provide an installation guide to the nu-TRLan software package. In Chapter 3 [Example], page 3, we present a simple nu-TRLan example program. In Chapter 4 [trl-info structure], page 7, and Chapter 5 [trlan subroutine], page 14, we describe the solver parameters and interfaces in detail. In Chapter 6 [Solver parameters], page 21, we discuss the selection of the user-specified parameters. In Chapter 7 [Contact information], page 22, we give the acknowledgements and contact information of the authors. In Chapter 8 [References], page 23, we list references to related works.
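
    The core idea this package builds on can be sketched minimally. The following is a plain, non-restarted Lanczos iteration in Python/NumPy (hypothetical illustration code, not part of the TRLan/nu-TRLan packages): the symmetric matrix is projected onto a small Krylov subspace, and the Ritz values of the resulting tridiagonal matrix converge fastest to the exterior eigenvalues — exactly the quantities whose subspace-dimension trade-off the record discusses.

```python
import numpy as np

def lanczos_extreme(A, m=30, seed=0):
    """Project symmetric A onto an m-dimensional Krylov subspace and
    return the Ritz values of the tridiagonal projection."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    Q = [q / np.linalg.norm(q)]
    alpha, beta = [], []
    for j in range(m):
        w = A @ Q[-1]
        if beta:
            w -= beta[-1] * Q[-2]
        a = Q[-1] @ w
        w -= a * Q[-1]
        for qi in Q:                # full reorthogonalization for stability
            w -= (qi @ w) * qi
        alpha.append(a)
        b = np.linalg.norm(w)
        if j == m - 1 or b < 1e-12:
            break
        beta.append(b)
        Q.append(w / b)
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)    # Ritz values, ascending

# The largest Ritz value approximates the largest eigenvalue of A
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2                   # symmetric test matrix
ritz = lanczos_extreme(A, m=40)
print(abs(ritz[-1] - np.linalg.eigvalsh(A)[-1]))  # small
```

    The restart machinery in TRLan/nu-TRLan exists precisely because storing and reorthogonalizing against all of Q, as done naively here, becomes too expensive as m grows.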

  5. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  6. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  7. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  8. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  9. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  10. An On-Line Nutrition Information System for the Clinical Dietitian

    PubMed Central

    Petot, Grace J.; Houser, Harold B.; Uhrich, Roberta V.

    1980-01-01

    A university based computerized nutrient data base has been integrated into an on-line nutrition information system in a large acute care hospital. Key elements described in the design and installation of the system are the addition of hospital menu items to the existing nutrient data base, the creation of a unique recipe file in the computer, production of a customized menu/nutrient handbook, preparation of forms and establishment of output formats. Standardization of nutrient calculations in the clinical and food production areas, variety and purposes of various format options, the advantages of timesharing and plans for expansion of the system are discussed.

  11. Large Scale Portability of Hospital Information System Software

    PubMed Central

    Munnecke, Thomas H.; Kuhn, Ingeborg M.

    1986-01-01

    As part of its Decentralized Hospital Computer Program (DHCP) the Veterans Administration installed new hospital information systems in 169 of its facilities during 1984 and 1985. The application software for these systems is based on the ANS MUMPS language, is public domain, and is designed to be operating system and hardware independent. The software, developed by VA employees, is built upon a layered approach, where application packages layer on a common data dictionary which is supported by a Kernel of software. Communications between facilities are based on public domain Department of Defense ARPA net standards for domain naming, mail transfer protocols, and message formats, layered on a variety of communications technologies.

  12. Climate Modeling with a Million CPUs

    NASA Astrophysics Data System (ADS)

    Tobis, M.; Jackson, C. S.

    2010-12-01

    Meteorological, oceanographic, and climatological applications have been at the forefront of scientific computing since its inception. The trend toward ever larger and more capable computing installations is unabated. However, much of the increase in capacity is accompanied by an increase in parallelism and a concomitant increase in complexity. An increase of at least four additional orders of magnitude in the computational power of scientific platforms is anticipated. It is unclear how individual climate simulations can continue to make effective use of the largest platforms. Conversion of existing community codes to higher resolution, or to more complex phenomenology, or both, presents daunting design and validation challenges. Our alternative approach is to use the expected resources to run very large ensembles of simulations of modest size, rather than to await the emergence of very large simulations. We are already doing this in exploring the parameter space of existing models using the Multiple Very Fast Simulated Annealing algorithm, which was developed for seismic imaging. Our experiments have the dual intentions of tuning the model and identifying ranges of parameter uncertainty. Our approach is less strongly constrained by the dimensionality of the parameter space than are competing methods. Nevertheless, scaling up remains costly. Much could be achieved by increasing the dimensionality of the search and adding complexity to the search algorithms. Such ensemble approaches scale naturally to very large platforms. Extensions of the approach are anticipated. For example, structurally different models can be tuned to comparable effectiveness. This can provide an objective test for which there is no realistic precedent with smaller computations.
We find ourselves inventing new code to manage our ensembles. Component computations involve tens to hundreds of CPUs and tens to hundreds of hours. The results of these moderately large parallel jobs influence the scheduling of subsequent jobs, and complex algorithms may be easily contemplated for this. The operating system concept of a "thread" re-emerges at a very coarse level, where each thread manages atomic computations of thousands of CPU-hours. That is, rather than multiple threads operating on a processor, at this level, multiple processors operate within a single thread. In collaboration with the Texas Advanced Computing Center, we are developing a software library at the system level, which should facilitate the development of computations involving complex strategies which invoke large numbers of moderately large multi-processor jobs. While this may have applications in other sciences, our key intent is to better characterize the coupled behavior of a very large set of climate model configurations.
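
    The coarse-grained "thread" idea described in this record can be illustrated with a toy driver. The sketch below is hypothetical Python standing in for the authors' system-level library; the cost function and cooling schedule are invented for illustration and are not the Multiple Very Fast Simulated Annealing algorithm itself. Each round launches a batch of mock ensemble members in parallel, and the results steer the parameters proposed in the next round:

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_member(params):
    """Stand-in for one moderately large parallel climate run; in
    production this would submit a multi-CPU batch job and wait."""
    x = params["x"]
    return (x - 0.3) ** 2           # mock cost: distance from "truth"

def anneal(n_rounds=50, width=1.0, seed=42):
    """Tiny annealing-style ensemble driver: each round proposes a batch
    of candidates around the current best, keeps the best result, and
    narrows the proposal width (the cooling schedule)."""
    rng = random.Random(seed)
    best_x, best_cost = 0.0, float("inf")
    for _ in range(n_rounds):
        candidates = [{"x": best_x + rng.uniform(-width, width)}
                      for _ in range(8)]
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = {pool.submit(run_member, c): c for c in candidates}
            for fut in as_completed(futures):
                cost = fut.result()
                if cost < best_cost:
                    best_cost, best_x = cost, futures[fut]["x"]
        width *= 0.9                # cool: later rounds search more locally
    return best_x, best_cost

x, cost = anneal()
print(f"best x = {x:.3f}")          # converges toward 0.3
```

    The point of the sketch is the scheduling structure, not the search: each future here is one "thread" whose atomic unit of work would be thousands of CPU-hours, and the driver logic between rounds is where arbitrarily complex strategies can be plugged in.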

  13. Care and Handling of Computer Magnetic Storage Media.

    ERIC Educational Resources Information Center

    Geller, Sidney B.

    Intended for use by data processing installation managers, operating personnel, and technical staff, this publication provides a comprehensive set of care and handling guidelines for the physical/chemical preservation of computer magnetic storage media--principally computer magnetic tapes--and their stored data. Emphasis is placed on media…

  14. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important to extending the life of a reservoir. The most effective and cost-reducing way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed: performing a 3D conformal transformation from the coordinates of the cameras, and computing the relative orientation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, while the relative orientation computation offers flexibility for dynamic motion analysis and is easier and more efficient.
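
    The 3D conformal (seven-parameter similarity) transformation used in the first approach can be estimated in closed form from corresponding points. A minimal sketch, using the standard Kabsch/Umeyama SVD solution in Python/NumPy rather than the authors' code, recovers scale s, rotation R, and translation t from point pairs:

```python
import numpy as np

def conformal_3d(src, dst):
    """Least-squares 3D conformal (similarity) transform dst ~ s*R@src + t
    from corresponding points, via the Kabsch/Umeyama SVD solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d            # centered point sets
    U, S, Vt = np.linalg.svd(B.T @ A)        # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))       # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known transform exactly (noise-free points)
rng = np.random.default_rng(0)
src = rng.standard_normal((10, 3))
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
dst = 2.0 * src @ Rz.T + np.array([1.0, -2.0, 0.5])
s, R, t = conformal_3d(src, dst)
print(round(s, 6))  # 2.0
```

    With noise-free correspondences the recovery is exact; with measured target coordinates the same formula gives the least-squares estimate, which is why the approach reaches sub-mm accuracy in the experiments above.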

  15. Layer-oriented simulation tool.

    PubMed

    Arcidiacono, Carmelo; Diolaiti, Emiliano; Tordi, Massimiliano; Ragazzoni, Roberto; Farinato, Jacopo; Vernet, Elise; Marchetti, Enrico

    2004-08-01

    The Layer-Oriented Simulation Tool (LOST) is a numerical simulation code developed for analysis of the performance of multiconjugate adaptive optics modules following a layer-oriented approach. The LOST code computes the atmospheric layers in terms of phase screens and then propagates the phase delays introduced in the natural guide stars' wave fronts by using geometrical optics approximations. These wave fronts are combined in an optical or numerical way, including the effects of wave-front sensors on measurements in terms of phase noise. The LOST code is described, and two applications to layer-oriented modules are briefly presented. We focus on the Multiconjugate Adaptive Optics Demonstrator to be mounted on the Very Large Telescope and on the Near-IR-Visible Adaptive Interferometer for Astronomy (NIRVANA) interferometric system to be installed at the combined focus of the Large Binocular Telescope.

  16. Direct conversion of solar energy to thermal energy

    NASA Astrophysics Data System (ADS)

    Sizmann, Rudolf

    1986-12-01

    Selective coatings (cermets) were produced by simultaneous evaporation of copper and silicon dioxide, and analyzed by computer assisted spectral photometers and ellipsometers; hemispherical emittance was measured. Steady state test procedures for covered and uncovered collectors were investigated. A method for evaluating the transient behavior of collectors was developed. The derived transfer functions describe their transient behavior. A stochastic approach was used for reducing the meteorological data volume. Data sets which are statistically equivalent to the original data can be synthesized. A simulation program for solar systems using analytical solutions of differential equations was developed. A large solar DHW system was optimized by a detailed modular simulation program. A microprocessor-assisted data acquisition system records the four characteristics of solar cells and solar cell systems in less than 10 msec. Measurements of a large photovoltaic installation (50 sqm) are reported.

  17. Nephele: a cloud platform for simplified, standardized and reproducible microbiome data analysis.

    PubMed

    Weber, Nick; Liou, David; Dommer, Jennifer; MacMenamin, Philip; Quiñones, Mariam; Misner, Ian; Oler, Andrew J; Wan, Joe; Kim, Lewis; Coakley McCarthy, Meghan; Ezeji, Samuel; Noble, Karlynn; Hurt, Darrell E

    2018-04-15

    Widespread interest in the study of the microbiome has resulted in data proliferation and the development of powerful computational tools. However, many scientific researchers lack the time, training, or infrastructure to work with large datasets or to install and use command line tools. The National Institute of Allergy and Infectious Diseases (NIAID) has created Nephele, a cloud-based microbiome data analysis platform with standardized pipelines and a simple web interface for transforming raw data into biological insights. Nephele integrates common microbiome analysis tools as well as valuable reference datasets like the healthy human subjects cohort of the Human Microbiome Project (HMP). Nephele is built on the Amazon Web Services cloud, which provides centralized and automated storage and compute capacity, thereby reducing the burden on researchers and their institutions. https://nephele.niaid.nih.gov and https://github.com/niaid/Nephele. darrell.hurt@nih.gov.

  18. Nephele: a cloud platform for simplified, standardized and reproducible microbiome data analysis

    PubMed Central

    Weber, Nick; Liou, David; Dommer, Jennifer; MacMenamin, Philip; Quiñones, Mariam; Misner, Ian; Oler, Andrew J; Wan, Joe; Kim, Lewis; Coakley McCarthy, Meghan; Ezeji, Samuel; Noble, Karlynn; Hurt, Darrell E

    2018-01-01

    Abstract Motivation Widespread interest in the study of the microbiome has resulted in data proliferation and the development of powerful computational tools. However, many scientific researchers lack the time, training, or infrastructure to work with large datasets or to install and use command line tools. Results The National Institute of Allergy and Infectious Diseases (NIAID) has created Nephele, a cloud-based microbiome data analysis platform with standardized pipelines and a simple web interface for transforming raw data into biological insights. Nephele integrates common microbiome analysis tools as well as valuable reference datasets like the healthy human subjects cohort of the Human Microbiome Project (HMP). Nephele is built on the Amazon Web Services cloud, which provides centralized and automated storage and compute capacity, thereby reducing the burden on researchers and their institutions. Availability and implementation https://nephele.niaid.nih.gov and https://github.com/niaid/Nephele Contact darrell.hurt@nih.gov PMID:29028892

  19. Optimization analysis of thermal management system for electric vehicle battery pack

    NASA Astrophysics Data System (ADS)

    Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong

    2018-04-01

    Temperature rise in an electric vehicle battery pack can affect the power battery system's cycle life, chargeability, power, energy, security, and reliability. Computational Fluid Dynamics simulation and experiments on the charging and discharging process of the battery pack were carried out for the thermal management system under continuous charging. The simulation results and the experimental data were used to verify the rationality of the Computational Fluid Dynamics calculation model. In view of the large temperature difference across the battery module in high-temperature environments, three optimizations of the existing thermal management system were put forward: adjusting the installation position of the fan, optimizing the arrangement of the battery pack, and reducing the fan opening temperature threshold. The feasibility of the optimization methods is demonstrated by simulation and experiment on the thermal management system of the optimized battery pack.

  20. 11 Foot Unitary Plan Tunnel Facility Optical Improvement Large Window Analysis

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica M.

    2015-01-01

    The test section of the 11 by 11-foot Unitary Plan Transonic Wind Tunnel (11-foot UPWT) may receive an upgrade of larger optical windows on both the North and South sides. These new larger windows will provide better access for optical imaging of test article flow phenomena including surface and off body flow characteristics. The installation of these new larger windows will likely produce a change to the aerodynamic characteristics of the flow in the test section. In an effort to understand the effect of this change, a computational model was employed to predict the flows through the slotted walls, in the test section, and around the model before and after the tunnel modification. This report documents the solid CAD model that was created and the inviscid computational analysis that was completed as a preliminary estimate of the effect of the changes.

  1. An installed nacelle design code using a multiblock Euler solver. Volume 1: Theory document

    NASA Technical Reports Server (NTRS)

    Chen, H. C.

    1992-01-01

    An efficient multiblock Euler design code was developed for designing a nacelle installed on geometrically complex airplane configurations. This approach employed a design driver based on a direct iterative surface curvature method developed at LaRC. A general multiblock Euler flow solver was used for computing flow around complex geometries. The flow solver used a finite-volume formulation with explicit time-stepping to solve the Euler equations. It used a multiblock version of the multigrid method to accelerate the convergence of the calculations. The design driver successively updated the surface geometry to reduce the difference between the computed and target pressure distributions. In the flow solver, the change in surface geometry was simulated by applying surface transpiration boundary conditions to avoid repeated grid generation during design iterations. Smoothness of the designed surface was ensured by alternate application of streamwise and circumferential smoothings. The capability and efficiency of the code were demonstrated through the design of both an isolated nacelle and an installed nacelle at various flow conditions. Information on the execution of the computer program is provided in Volume 2.

  2. Automated installation methods for photovoltaic arrays

    NASA Astrophysics Data System (ADS)

    Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.

    1982-11-01

    Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reduction of these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary designs of hardware for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process most effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.

  3. Thesaurus/Glossary System. User's Guide. Improved Systems for Managing the Control of Paperwork.

    ERIC Educational Resources Information Center

    Hurley, Jeanne S.; And Others

    Intended primarily for the use of NCES (National Center for Education Statistics) staff, this document contains installation-specific information for the Thesaurus/Glossary computer system as installed at the HEW (Health, Education and Welfare) Data Management Center. The first of three sections provides an overview of system objectives,…

  4. FORCHECK -- A Fortran Verifier and Programming Aid

    NASA Astrophysics Data System (ADS)

    Lawden, M. D.

    FORCHECK is a Fortran verifier and programming aid which has been purchased from Polyhedron software and installed on the Starlink Database computer (STADAT) for the use of all Starlink users. It was developed by Erik W. Kruyt at Leiden University. It is only available on STADAT and is not installed on any other Starlink nodes.

  5. Composite rotor blades for large wind energy installations

    NASA Technical Reports Server (NTRS)

    Kussmann, A.; Molly, J.; Muser, D.

    1980-01-01

    The design of large wind power systems in Germany is reviewed with attention given to elaboration of the total wind energy system, aerodynamic design of the rotor blade, and wind loading effects. Particular consideration is given to the development of composite glass fiber/plastic or carbon fiber/plastic rotor blades for such installations.

  6. Airburst height computation method of Sea-Impact Test

    NASA Astrophysics Data System (ADS)

    Kim, Jinho; Kim, Hyungsup; Chae, Sungwoo; Park, Sungho

    2017-05-01

    This paper describes ways to measure the airburst height of projectiles and rockets. In general, airburst height can be determined by triangulation or from the images of a camera installed on the radar. These previous methods have limitations when the missiles impact the sea surface. To apply triangulation, the cameras should be installed so that the lines of sight intersect at angles from 60 to 120 degrees, and there may be no suitable observation towers on which to install the optical system. If the range of the missile exceeds 50 km, the images from the radar's camera can be unusable. This paper proposes a method to measure the airburst height of a sea-impact projectile using a single camera. The camera is installed on an island near the impact area, and the distance is computed using the position and attitude of the camera and the sea level. To demonstrate the proposed method, its results are compared with those from the previous method.
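
    The single-camera geometry can be sketched as follows. The formulation here is an assumption for illustration, not the paper's exact method: the known sea level fixes the horizontal range via the depression angle to the splash point, and the burst is taken to lie on the vertical through that point.

```python
import math

def airburst_height(cam_h, depression_splash_deg, elevation_burst_deg):
    """Estimate burst height above sea level from a single camera of
    known height: the depression angle to the splash gives the range,
    the elevation angle to the burst then gives the height."""
    # Horizontal range from camera height and depression angle to splash
    d = cam_h / math.tan(math.radians(depression_splash_deg))
    # Burst height above sea level on the vertical through the splash
    return cam_h + d * math.tan(math.radians(elevation_burst_deg))

# Camera 100 m above sea level; splash 1 degree below the horizontal,
# burst 0.5 degrees above it -> range ~5.7 km, height ~150 m.
h = airburst_height(100.0, 1.0, 0.5)
print(f"burst height ~ {h:.1f} m")
```

    The accuracy of such a scheme is limited by how well the camera's attitude is known, since at kilometre ranges a small angular error translates into metres of height error.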

  7. Development of process control capability through the Browns Ferry Integrated Computer System using Reactor Water Clanup System as an example. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, J.; Mowrey, J.

    1995-12-01

    This report describes the design, development and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU) using a Computer Simulation Platform which simulates the RWCU System and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of the soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow for operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 Thermal/Hydraulic Simulation Model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report, as are the testing and acceptance program and its results. A discussion of potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, this report contains a section on industry issues associated with installation of process control systems in nuclear power plants.

  8. View northeast of a microchip based computer control system installed ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View northeast of a microchip-based computer control system installed in the early 1980s to replace Lamokin Tower, at center of photograph; panels 1 and 2 at right of photograph are part of main supervisory board; panel 1 controlled Allen Lane sub-station #7; responsibility for this portion of the system was transferred to the Southeastern Pennsylvania Transportation Authority (SEPTA) in 1985; panel 2 at extreme right controls catenary switches in a coach storage yard adjacent to the station - Thirtieth Street Station, Power Director Center, Thirtieth & Market Streets in Amtrak Railroad Station, Philadelphia, Philadelphia County, PA

  9. A Patient Record-Filing System for Family Practice

    PubMed Central

    Levitt, Cheryl

    1988-01-01

    The efficient storage and easy retrieval of quality records are a central concern of good family practice. Many physicians starting out in practice have difficulty choosing a practical and lasting system for storing their records. Some who have established practices are installing computers in their offices and finding that their filing systems are worn, outdated, and incompatible with computerized systems. This article describes a new filing system installed simultaneously with a new computer system in a family-practice teaching centre. The approach adopted solved all identifiable problems and is applicable in family practices of all sizes.

  10. CANFAR + Skytree: Mining Massive Datasets as an Essential Part of the Future of Astronomy

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.

    2013-01-01

    The future study of large astronomical datasets, consisting of hundreds of millions to billions of objects, will be dominated by large computing resources and by analysis tools of the necessary scalability and sophistication to extract useful information. Significant effort will be required for these datasets to fulfil their potential as providers of the next generation of science results. To date, computing systems have allowed either sophisticated analysis of small datasets, e.g., most astronomy software, or simple analysis of large datasets, e.g., database queries. At the Canadian Astronomy Data Centre, we have combined our cloud computing system, the Canadian Advanced Network for Astronomical Research (CANFAR), with the world's most advanced machine learning software, Skytree, to create the world's first cloud computing system for data mining in astronomy. This allows the full sophistication of the huge fields of data mining and machine learning to be applied to the hundreds of millions of objects that make up current large datasets. CANFAR works by utilizing virtual machines, which appear to the user as equivalent to a desktop. Each machine is replicated as desired to perform large-scale parallel processing. Such an arrangement carries far more flexibility than other cloud systems, because it enables the user to immediately install and run the same code that they already utilize for science on their desktop. We demonstrate the utility of the CANFAR + Skytree system by showing science results obtained, including assigning photometric redshifts with full probability density functions (PDFs) to a catalog of approximately 133 million galaxies from the MegaPipe reductions of the Canada-France-Hawaii Telescope Legacy Wide and Deep surveys. Each PDF is produced nonparametrically from 100 instances of the photometric parameters for each galaxy, generated by perturbing within the errors on the measurements. Hence, we produce, store, and assign redshifts to a catalog of over 13 billion object instances. This catalog is comparable in size to those expected from next-generation surveys, such as the Large Synoptic Survey Telescope. The CANFAR + Skytree system is open for use by any interested member of the astronomical community.
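The PDF construction described above is a small Monte Carlo loop: perturb each galaxy's photometry within its quoted errors, re-run a point estimator on every instance, and histogram the resulting redshifts. A sketch of that idea, with `toy_estimator` as a hypothetical stand-in for the trained machine-learning regressor:

```python
import numpy as np

rng = np.random.default_rng(42)

def redshift_pdf(mags, mag_errs, estimator, n_draws=100,
                 bins=np.linspace(0.0, 4.0, 81)):
    """Nonparametric photo-z PDF: draw n_draws Gaussian perturbations of
    the magnitudes, estimate a redshift for each, histogram the results."""
    draws = rng.normal(mags, mag_errs, size=(n_draws, len(mags)))
    zs = np.array([estimator(d) for d in draws])
    pdf, edges = np.histogram(zs, bins=bins, density=True)
    return pdf, edges

def toy_estimator(m):
    # Hypothetical point estimator: linear in a single colour index.
    return max(0.0, 0.5 + 0.1 * (m[0] - m[1]))

pdf, edges = redshift_pdf(np.array([22.0, 21.5]), np.array([0.1, 0.1]),
                          toy_estimator)
```
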

  11. Residential solar-heating system-design package

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The design package for the modular solar heating system includes performance specifications, design data, installation guidelines, and other information that should be valuable to those interested in this system (or similar systems) for a projected installation. When installed in an insulated "energy saver" home, the system can supply a large percentage of the total energy needs of the building.

  12. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  13. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  14. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  15. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  16. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  17. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    ERIC Educational Resources Information Center

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  18. Long-term acceptability, durability and bio-efficacy of ZeroVector® durable lining for vector control in Papua New Guinea.

    PubMed

    Kuadima, Joseph J; Timinao, Lincoln; Naidi, Laura; Tandrapah, Anthony; Hetzel, Manuel W; Czeher, Cyrille; Pulford, Justin

    2017-02-28

    This study examined the acceptability, durability and bio-efficacy of pyrethroid-impregnated durable lining (DL) over a three-year period post-installation in residential homes across Papua New Guinea (PNG). ZeroVector® ITPS had previously been installed in 40 homes across four study sites representing a cross section of malaria transmission risk and housing style. Structured questionnaires, DL visual inspections and group interviews (GIs) were completed with household heads at 12 and 36 months post-installation. Three DL samples were collected from all households in which it remained 36 months post-installation to evaluate the bio-efficacy of DL on Anopheles mosquitoes. Bio-efficacy testing followed WHO guidelines for the evaluation of indoor residual spraying. The DL was still intact in 86% and 39% of study homes at the two time points, respectively. In homes in which the DL was still intact, 92% of household heads considered the appearance at 12 months post-installation to be the same as, or better than, that at installation, compared to 59% at 36 months post-installation. GIs at both time points confirmed continuing high acceptance of DL, based in large part on the perceived attractiveness and functionality of the material. However, participants frequently asserted that they, or their family members, had ceased or reduced their use of mosquito nets as a result of the DL installation. A total of 16 houses were sampled for bio-efficacy testing across the 4 study sites at 36 months post-installation. Overall, combining all sites and samples, both knock-down at 30 min and mortality at 24 h were 100%. The ZeroVector® DL installation remained highly acceptable at 36 months post-installation, the material and fixtures proved durable, and the efficacy against malaria vectors did not decrease. However, the DL material had been removed from over 50% of the original study homes 3 years post-installation, largely due to deteriorating housing infrastructure. Furthermore, the presence of the DL installation appeared to reduce ITN use among many participating householders. The study findings suggest DL may not be an appropriate vector control method for large-scale use in the contemporary PNG malaria control programme.

  19. Bank Terminals

    NASA Technical Reports Server (NTRS)

    1978-01-01

    In the photo, employees of the UAB Bank, Knoxville, Tennessee, are using Teller Transaction Terminals manufactured by SCI Systems, Inc., Huntsville, Alabama, an electronics firm which has worked on a number of space projects under contract with NASA. The terminals are part of an advanced, computerized financial transaction system that offers high efficiency in bank operations. The key to the system's efficiency is a "multiplexing" technique developed for NASA's Space Shuttle. Multiplexing is simultaneous transmission of large amounts of data over a single transmission link at very high rates of speed. In the banking application, a small multiplex "data bus" interconnects all the terminals and a central computer which stores information on clients' accounts. The data bus replaces the maze of wiring that would be needed to connect each terminal separately, and it affords greater speed in recording transactions. The SCI system offers banks real-time data management through constant updating of the central computer. For example, a check is immediately cancelled at the teller's terminal and the computer is simultaneously advised of the transaction; under other methods, the check would be cancelled and the transaction recorded at the close of business. Teller checkout at the end of the day, conventionally a time-consuming matter of processing paper, can be accomplished in minutes by calling up a summary of the day's transactions. SCI manufactures other types of terminals for use in the system, such as an administrative terminal that provides an immediate printout of a client's account, and another for printing and recording savings account deposits and withdrawals. SCI systems have been installed in several banks in Tennessee, Arizona, and Oregon, and additional installations are scheduled this year.

  20. Lightning and surge protection of large ground facilities

    NASA Astrophysics Data System (ADS)

    Stringfellow, Michael F.

    1988-04-01

    The vulnerability of large ground facilities to direct lightning strikes and to lightning-induced overvoltages on power distribution, telephone and data communication lines is discussed. Advanced electrogeometric modeling is used for the calculation of direct strikes to overhead power lines, buildings, vehicles and objects within the facility. Possible modes of damage, injury and loss are discussed. Some appropriate protection methods for overhead power lines, structures, vehicles and aircraft are suggested. Methods to mitigate the effects of transients on overhead and underground power systems, as well as within buildings and other structures, are recommended. The specification and location of low-voltage surge suppressors for the protection of vulnerable hardware such as computers, telecommunication equipment and radar installations are considered. The advantages and disadvantages of commonly used grounding techniques, such as single-point, multiple and isolated grounds, are compared. An example is given of the expected distribution of lightning flashes to a large airport, its buildings, structures and facilities, as well as to vehicles on the ground.

  1. Toward an Engineering Model for the Aerodynamic Forces Acting on Wind Turbine Blades in Quasisteady Standstill and Blade Installation Situations

    NASA Astrophysics Data System (ADS)

    Gaunaa, Mac; Heinz, Joachim; Skrzypiński, Witold

    2016-09-01

    The crossflow principle is one of the key elements used in engineering models for prediction of the aerodynamic loads on wind turbine blades in standstill or blade installation situations, where the flow direction relative to the wind turbine blade has a component in the spanwise direction. In the present work, the performance of the crossflow principle is assessed on the DTU 10 MW reference blade using extensive 3D CFD calculations. Analysis of the computational results shows that there is only a relatively narrow region in which the crossflow principle describes the aerodynamic loading well. In some conditions the deviation of the predicted loadings can be quite significant, having a large influence on, for instance, the integral aerodynamic moments around the blade centre of mass, which are very important for single-blade installation applications. The main features of these deviations, however, show a systematic behaviour across all force components, which in this paper is employed to formulate the first version of an engineering correction method to the crossflow principle applicable to wind turbine blades. The new correction model improves the agreement with CFD results for the key aerodynamic loads in crossflow situations. The general validity of this model for other blade shapes should be investigated in subsequent works.
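In its basic form, the crossflow principle states that the sectional loads are set by the velocity component normal to the blade span, with the spanwise component discarded. A minimal sketch of that assumption (illustrative names and a fixed air density; the 2D lift coefficient `cl` would come from airfoil polars at the normal-flow angle of attack):

```python
import math

def sectional_lift(v_inf, sweep_deg, chord, cl, rho=1.225):
    """Lift per unit span under the crossflow principle: only the velocity
    component normal to the blade span enters the dynamic pressure."""
    v_n = v_inf * math.cos(math.radians(sweep_deg))  # spanwise part discarded
    q = 0.5 * rho * v_n ** 2                         # effective dynamic pressure
    return q * chord * cl
```
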

  2. AnnotateGenomicRegions: a web application.

    PubMed

    Zammataro, Luca; DeMolfetta, Rita; Bucci, Gabriele; Ceol, Arnaud; Muller, Heiko

    2014-01-01

    Modern genomic technologies produce large amounts of data that can be mapped to specific regions in the genome. Among the first steps in interpreting the results is annotation of genomic regions with known features such as genes, promoters, CpG islands etc. Several tools have been published to perform this task. However, using these tools often requires a significant amount of bioinformatics skills and/or downloading and installing dedicated software. Here we present AnnotateGenomicRegions, a web application that accepts genomic regions as input and outputs a selection of overlapping and/or neighboring genome annotations. Supported organisms include human (hg18, hg19), mouse (mm8, mm9, mm10), zebrafish (danRer7), and Saccharomyces cerevisiae (sacCer2, sacCer3). AnnotateGenomicRegions is accessible online on a public server or can be installed locally. Some frequently used annotations and genomes are embedded in the application while custom annotations may be added by the user. The increasing spread of genomic technologies generates the need for a simple-to-use annotation tool for genomic regions that can be used by biologists and bioinformaticians alike. AnnotateGenomicRegions meets this demand. AnnotateGenomicRegions is an open-source web application that can be installed on any personal computer or institute server. AnnotateGenomicRegions is available at: http://cru.genomics.iit.it/AnnotateGenomicRegions.
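The core of such annotation is an interval-overlap test between query regions and known features. A few-line sketch of that test (half-open coordinates; function name and gene coordinates are illustrative placeholders, not the actual AnnotateGenomicRegions code):

```python
def overlapping_annotations(region, annotations):
    """Return the names of annotations (name, chrom, start, end) that
    overlap a half-open query region (chrom, start, end)."""
    chrom, start, end = region
    return [name for name, c, s, e in annotations
            if c == chrom and s < end and e > start]

# Placeholder gene coordinates, for illustration only.
genes = [("GENE_A", "chr17", 7668402, 7687550),
         ("GENE_B", "chr17", 43044295, 43125483)]
hits = overlapping_annotations(("chr17", 7668000, 7669000), genes)
```
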

  3. AnnotateGenomicRegions: a web application

    PubMed Central

    2014-01-01

    Background Modern genomic technologies produce large amounts of data that can be mapped to specific regions in the genome. Among the first steps in interpreting the results is annotation of genomic regions with known features such as genes, promoters, CpG islands etc. Several tools have been published to perform this task. However, using these tools often requires a significant amount of bioinformatics skills and/or downloading and installing dedicated software. Results Here we present AnnotateGenomicRegions, a web application that accepts genomic regions as input and outputs a selection of overlapping and/or neighboring genome annotations. Supported organisms include human (hg18, hg19), mouse (mm8, mm9, mm10), zebrafish (danRer7), and Saccharomyces cerevisiae (sacCer2, sacCer3). AnnotateGenomicRegions is accessible online on a public server or can be installed locally. Some frequently used annotations and genomes are embedded in the application while custom annotations may be added by the user. Conclusions The increasing spread of genomic technologies generates the need for a simple-to-use annotation tool for genomic regions that can be used by biologists and bioinformaticians alike. AnnotateGenomicRegions meets this demand. AnnotateGenomicRegions is an open-source web application that can be installed on any personal computer or institute server. AnnotateGenomicRegions is available at: http://cru.genomics.iit.it/AnnotateGenomicRegions. PMID:24564446

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E Wes; Brugger, Eric

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources - the 'Big Iron.' Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys don't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of the visualization support staff: How big should a visualization program be - that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  5. Making Informed Decisions: Management Issues Influencing Computers in the Classroom.

    ERIC Educational Resources Information Center

    Strickland, James

    A number of noninstructional factors appear to determine the extent to which computers make a difference in writing instruction. Once computers have been purchased and installed, it is generally school administrators who make management decisions, often from an uninformed pedagogical orientation. Issues such as what hardware and software to buy,…

  6. Analysis of rocket engine injection combustion processes

    NASA Technical Reports Server (NTRS)

    Salmon, J. W.; Saltzman, D. H.

    1977-01-01

    The mixing methodology of the JANNAF DER and CICM injection/combustion analysis computer programs was improved. Development of the ZOM plane prediction model was advanced for installation into the new standardized DER computer program. An intra-element mixing model development approach was recommended for gas/liquid coaxial injection elements, for possible future incorporation into the CICM computer program.

  7. Reliability of Computer Systems ODRA 1305 and R-32,

    DTIC Science & Technology

    1983-03-25

    RELIABILITY OF COMPUTER SYSTEMS ODRA 1305 AND R-32. By: Wit Drewniak. English pages: 12. Source: Informatyka, Vol. 14, Nr. 7, 1979, pp. 5-8. Country of...JS EMC computers installed in ZETO, Katowice", Informatyka, No. 7-8/78, deals with various reliability classes within the family of the machines of

  8. Catalog of Computer Programs Used in Undergraduate Geological Education. Second Edition. Installment 4.

    ERIC Educational Resources Information Center

    Burger, H. Robert

    1984-01-01

    Describes 70 computer programs related to (1) structural geology; (2) sedimentology and stratigraphy; and (3) the environment, groundwater, glacial geology, and oceanography. Potential use(s), language, required hardware, and sources are included. (JM)

  9. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1979-01-01

    The paper describes the computational techniques employed to determine the optimal propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. The computer programs used to perform calculations for all the factors that enter into the selection of the optimum combinations of airplanes and engines are examined. Attention is given to the description of the computer codes, including NNEP, WATE, LIFCYC, INSTAL, and POD DRG. A process is illustrated by which turbine engines can be evaluated as to fuel consumption, engine weight, cost, and installation effects. Examples are shown of the benefits of variable geometry and of the tradeoff between fuel burned and engine weight. Future plans for further improvements in the analytical modeling of engine systems are also described.

  10. 38 CFR 9.5 - Payment of proceeds.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... rate. (b) If, following the death of an insured member who has designated both principal and contingent..., discounted to the date of his or her death at the same rate used for inclusion of interest in the computation... be paid in installments, the first installment will be payable as of the date of death. The amount of...

  11. 38 CFR 9.5 - Payment of proceeds.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... rate. (b) If, following the death of an insured member who has designated both principal and contingent..., discounted to the date of his or her death at the same rate used for inclusion of interest in the computation... be paid in installments, the first installment will be payable as of the date of death. The amount of...

  12. Specification for installation of the crew activity planning system coaxial cable communication system

    NASA Technical Reports Server (NTRS)

    Allen, M. A.; Roman, G. S.

    1979-01-01

    The specification used to install a broadband coaxial cable communication system to support remote terminal operations of the Crew Activity Planning System at the Lyndon B. Johnson Space Center is reported. The system supports high-speed communications between a Harris Slash 8 computer and one or more Sanders Graphic 7 displays.

  13. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-source, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.
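The distributed genome analysis maps naturally onto the MapReduce template the client library provides. A toy single-machine sketch of the pattern (hypothetical helper names, and plain fixed-length substrings standing in for the suffix structures QM actually distributed):

```python
from collections import Counter

def map_kmers(sequence, k):
    """Map step: the set of length-k substrings present in one genome
    (an illustrative stand-in for the per-browser worker task)."""
    return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

def reduce_shared(per_genome_sets):
    """Reduce step: merge the per-genome sets and keep substrings
    that occur in at least two genomes."""
    counts = Counter()
    for kmer_set in per_genome_sets:
        counts.update(kmer_set)
    return {kmer for kmer, n in counts.items() if n >= 2}

genomes = ["ACGTACGTAC", "TTACGTACGG"]  # toy sequences, not real data
shared = reduce_shared(map_kmers(g, 4) for g in genomes)
```
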

  14. SiMon: Simulation Monitor for Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming

    2017-09-01

    Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage, with processes interrupted by unexpected events in the software or the hardware. In those cases, the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is light-weight, fully automates the entire workflow management, operates concurrently across multiple platforms and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running a simulation becomes analogous to growing crops. With the development of SiMon we relax the technical aspects of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.

  15. Development and verification of local/global analysis techniques for laminated composites

    NASA Technical Reports Server (NTRS)

    Griffin, O. Hayden, Jr.

    1989-01-01

    Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique for global/local analysis, where a global analysis is run and its results are applied to a smaller region as boundary conditions, in as many iterations as are required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement-formulation elements which have well-known behavior when used for analysis of laminated composites.

  16. ‘My Virtual Dream’: Collective Neurofeedback in an Immersive Art Environment

    PubMed Central

    Kovacevic, Natasha; Ritter, Petra; Tays, William; Moreno, Sylvain; McIntosh, Anthony Randal

    2015-01-01

    While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants' baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions. PMID:26154513
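The neurofeedback target described here, relative spectral power in the alpha and beta ranges, reduces to a ratio of band-limited power to total power. A minimal periodogram sketch (illustrative function names; a real-time BCI would use windowed short-time estimates rather than one FFT over the whole record):

```python
import numpy as np

def relative_band_power(x, fs, band):
    """Relative spectral power of signal x in the band (lo, hi) Hz,
    from a plain periodogram; DC is excluded from the total."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[in_band].sum() / psd[1:].sum())

# A pure 10 Hz tone: essentially all power falls in the alpha (8-12 Hz) band.
fs = 256
t = np.arange(2 * fs) / fs
tone = np.sin(2 * np.pi * 10.0 * t)
alpha = relative_band_power(tone, fs, (8.0, 12.0))
beta = relative_band_power(tone, fs, (13.0, 30.0))
```
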

  17. Integrated Library Systems in Canadian Public, Academic and Special Libraries: The Sixth Annual Survey.

    ERIC Educational Resources Information Center

    Merilees, Bobbie

    1992-01-01

    Reports results of a survey of vendors of large and microcomputer-based integrated library systems. Data presented on Canadian installations include total systems installed, comparisons with earlier years, market segments, and installations by type of library (excluding school). International sales and automation requirements for music are…

  18. A General Purpose High Performance Linux Installation Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, Alf

    2002-06-17

    With more and larger Linux clusters, the question arises of how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus, it is not designed for cluster installations in particular but is nevertheless highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to get IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.

  19. About an Extreme Achievable Current in Plasma Focus Installation of Mather Type

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikulin, V. Ya.; Polukhin, S. N.; Vikhrev, V. V.

    A computer simulation and analytical analysis of the discharge process in Plasma Focus has shown that there is an upper limit to the current which can be achieved in a Plasma Focus installation of Mather type by only increasing the capacity of the condenser bank. The maximum current achieved for various plasma focus installations at the 1 MJ level is discussed. For example, for the PF-1000 (IFPiLM) and the 1 MJ Frascati PF, the maximum current is near 2 MA. Thus, the commonly used method of increasing the energy of the PF installation by increasing the capacity has no merit. Alternative options for increasing the current are discussed.

  20. Analysis of the harmonics and power-factor effects at a utility-intertied photovoltaic system

    NASA Astrophysics Data System (ADS)

    Campen, G. L.

    The harmonics and power factor characteristics and effects of a single residential photovoltaic (PV) installation using a line commutated inverter are outlined. The data were taken during a 5 day measurement program at a prototype residential PV installation in Arizona. The magnitude and phase of various currents and voltages from the fundamental to the 13th harmonic were recorded both with and without the operation of the PV system. A candidate method of modeling the installation for computer studies of larger concentrations is given.
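
    Harmonic magnitudes like those recorded in this study (fundamental through the 13th) are commonly summarized as total harmonic distortion (THD). The record does not give the authors' analysis code; the following is a generic sketch of the standard THD formula applied to hypothetical magnitudes:

    ```python
    import math

    def total_harmonic_distortion(magnitudes):
        """THD = sqrt(sum of squared harmonic magnitudes) / fundamental.

        magnitudes[0] is the fundamental; the rest are the 2nd, 3rd, ...
        harmonics, all in the same units (e.g. amperes).
        """
        fundamental, *harmonics = magnitudes
        return math.sqrt(sum(h * h for h in harmonics)) / fundamental

    # Hypothetical current spectrum: 10 A fundamental plus small higher harmonics.
    thd = total_harmonic_distortion([10.0, 1.0, 0.5])
    print(f"THD = {thd:.1%}")  # sqrt(1 + 0.25) / 10, about 11.2%
    ```

    A measurement campaign like the one described would evaluate this with and without the PV inverter operating to isolate its harmonic contribution.
    
    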

  1. Computer code for estimating installed performance of aircraft gas turbine engines. Volume 3: Library of maps

    NASA Technical Reports Server (NTRS)

    Kowalski, E. J.

    1979-01-01

    A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. The use of two data base files to represent the engine and the inlet/nozzle/aftbody performance characteristics is discussed. The existing library of performance characteristics for inlets and nozzle/aftbodies and an example of the 1000 series of engine data tables is presented.

  2. System Would Predictively Preempt Traffic Lights for Emergency Vehicles

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron; Foster, Conrad

    2004-01-01

    Two electronic communication-and-control systems have been proposed as means of modifying the switching of traffic lights to give priority to emergency vehicles. Both systems would utilize the inductive loops already installed in the streets of many municipalities to detect vehicles for timing the switching of traffic lights. The proposed systems could be used alone or to augment other automated emergency traffic-light preemption systems that are already present in some municipalities, including systems that recognize flashing lights or siren sounds or that utilize information on the positions of emergency vehicles derived from the Global Positioning System (GPS). Systems that detect flashing lights and siren sounds are limited in range, cannot "see" or "hear" well around corners, and are highly vulnerable to noise. GPS-based systems are effective in rural areas and small cities, but are often ineffective in large cities because of frequent occultation of GPS satellite signals by large structures. In contrast, the proposed traffic-loop forward prediction system would be relatively invulnerable to noise, would not be subject to significant range limitations, and would function well in large cities -- even in such places as underneath bridges and in tunnels, where GPS-based systems do not work. One proposed system has been characterized as "car-active" because each participating emergency vehicle would be equipped with a computer and a radio transceiver that would communicate with stationary transceivers at the traffic loops. The other proposed system has been characterized as "car-passive" because a passive radio transponder would be installed on the underside of a participating vehicle.

  3. Overview of Opportunities for Co-Location of Solar Energy Technologies and Vegetation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macknick, Jordan; Beatty, Brenda; Hill, Graham

    2013-12-01

    Large-scale solar facilities have the potential to contribute significantly to national electricity production. Many solar installations are large-scale or utility-scale, with a capacity over 1 MW and connected directly to the electric grid. Large-scale solar facilities offer an opportunity to achieve economies of scale in solar deployment, yet there have been concerns about the amount of land required for solar projects and the impact of solar projects on local habitat. During the site preparation phase for utility-scale solar facilities, developers often grade land and remove all vegetation to minimize installation and operational costs, prevent plants from shading panels, and minimize potential fire or wildlife risks. However, the common site preparation practice of removing vegetation can be avoided in certain circumstances, and there have been successful examples where solar facilities have been co-located with agricultural operations or have native vegetation growing beneath the panels. In this study we outline some of the impacts that large-scale solar facilities can have on the local environment, provide examples of installations where impacts have been minimized through co-location with vegetation, characterize the types of co-location, and give an overview of the potential benefits from co-location of solar energy projects and vegetation. The varieties of co-location can be replicated or modified for site-specific use at other solar energy installations around the world. We conclude with opportunities to improve upon our understanding of ways to reduce the environmental impacts of large-scale solar installations.

  4. Experiences Building Globus Genomics: A Next-Generation Sequencing Analysis Service using Galaxy, Globus, and Amazon Web Services

    PubMed Central

    Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.

    2014-01-01

    We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933

  5. Experiences Building Globus Genomics: A Next-Generation Sequencing Analysis Service using Galaxy, Globus, and Amazon Web Services.

    PubMed

    Madduri, Ravi K; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J; Foster, Ian T

    2014-09-10

    We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads.

  6. Biennial Wind Energy Conference and Workshop, 5th, Washington, DC, October 5-7, 1981, Proceedings

    NASA Astrophysics Data System (ADS)

    1982-05-01

    The results of studies funded by the Federal government to advance the state of the art of wind energy conversion systems (WECS) construction, operation, applications, and financial viability are presented. The economics of WECS were considered in terms of applicable tax laws, computer simulations of the net value of WECS to utilities, and the installation of Mod-2 2.5 MW and WTS-4 4 MW wind turbines near Medicine Bow, WY to test the operation of two different large WECS on the same utility grid. Potential problems of increasing penetration of WECS-produced electricity on a utility grid were explored and remedies suggested. The structural dynamics of wind turbines were analyzed, along with means to predict potential noise pollution from large WECS and to make blade fatigue life assessments. Finally, Darrieus rotor aerodynamics were investigated, as were dynamic stall in small WECS and lightning protection for wind turbines and components.

  7. S/Ka Dichroic Plate with Rounded Corners for NASA's 34-m Beam-Waveguide Antenna

    NASA Astrophysics Data System (ADS)

    Veruttipong, W.; Khayatian, B.; Imbriale, W.

    2016-02-01

    An S-/Ka-band frequency selective surface (FSS) or a dichroic plate is designed, manufactured, and tested for use in NASA's Deep Space Network (DSN) 34-m beam-waveguide (BWG) antennas. Due to its large size, the proposed dichroic incorporates a new design feature: waveguides with rounded corners to cut cost and allow ease of manufacturing the plate. The dichroic is designed using an analysis that combines the finite-element method (FEM) for arbitrarily shaped guides with the method of moments and Floquet mode theory for periodic structures. The software was verified by comparison with previously measured and computed dichroic plates. The large plate was manufactured with end-mill machining. The RF performance was measured and is in excellent agreement with the analytical results. The dichroic has been successfully installed and is operational at DSS-24, DSS-34, and DSS-54.

  8. Denver RTD's computer aided dispatch/automatic vehicle location system : the human factors consequences

    DOT National Transportation Integrated Search

    1999-09-01

    This report documents what happened to employees' work procedures when their employer installed Computer Aided Dispatch/Automatic Vehicle Locator (CAD/AVL) technology to provide real-time surveillance of vehicles and to upgrade ra...

  9. 7 CFR 2.24 - Assistant Secretary for Administration.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... determining whether to continue, modify, or terminate an information technology program or project. (iii... technology to improve productivity in the Department. (P) Plan, develop, install, and operate computer-based systems for message exchange, scheduling, computer conferencing, televideo technologies, and other...

  10. A NASTRAN model of a large flexible swing-wing bomber. Volume 2: NASTRAN model development-horizontal stabilizer, vertical stabilizer and nacelle structures

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.; Tisher, E. D.

    1982-01-01

    The NASTRAN model plans for the horizontal stabilizer, vertical stabilizer, and nacelle structure were expanded in detail to generate the NASTRAN model for each of these substructures. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. Each substructure model was thoroughly checked out for continuity, connectivity, and constraints. These substructures were processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail models. Finally, a demonstration and validation processing of these substructures was accomplished using the NASTRAN finite element program installed at NASA/DFRC facility.

  11. RF optics study for DSS-43 ultracone implementation

    NASA Technical Reports Server (NTRS)

    Lee, P.; Veruttipong, W.

    1994-01-01

    The Ultracone feed system will be implemented on DSS 43 to support the S-band (2.3 GHz) Galileo contingency mission. The feed system will be installed in the host country's cone, which is normally used for radio astronomy, VLBI, and holography. The design must retain existing radio-astronomy capabilities, which could be impaired by shadowing from the large S-band feed horn. Computer calculations were completed to estimate system performance and shadowing effects for various configurations of the host country's cone feed systems. Also, the DSS-43 system performance using higher gain S-band horns was analyzed. A new S-band horn design with improved return loss and cross-polarization characteristics is presented.

  12. Hunting for cosmic neutrinos under the deep sea: the ANTARES experiment

    NASA Astrophysics Data System (ADS)

    Flaminio, Vincenzo

    2013-06-01

    Attempts to detect high energy neutrinos originating in violent Galactic or extragalactic processes have been carried out for many years, using both the polar-cap ice and the sea as a target/detection medium. The first large detector built and operated for several years was the AMANDA Čerenkov array, installed under about two km of ice at the South Pole. More recently a much larger detector, ICECUBE, has been successfully installed and operated at the same location. Attempts by several groups to install similar arrays at large sea depths have followed the original pioneering effort of the DUMAND collaboration, initiated in 1990 and terminated only six years later. ANTARES has so far been the only detector deployed at large sea depths and successfully operated for several years. It was installed in the Mediterranean by a large international collaboration and has been in operation since 2007. In the following I describe the experimental technique, the sensitivity of the experiment, the detector performance, and the first results obtained in the search for neutrinos from cosmic point sources and on the oscillations of atmospheric neutrinos.

  13. NMRbox: A Resource for Biomolecular NMR Computation.

    PubMed

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds of software packages. Discovery, acquisition, installation, and maintenance of all these packages is a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users. Copyright © 2017 Biophysical Society. All rights reserved.

  14. Influence of Installation Effects on Pile Bearing Capacity in Cohesive Soils - Large Deformation Analysis Via Finite Element Method

    NASA Astrophysics Data System (ADS)

    Konkol, Jakub; Bałachowski, Lech

    2017-03-01

    In this paper, the whole process of pile construction and performance during loading is modelled via large deformation finite element methods such as Coupled Eulerian Lagrangian (CEL) and Updated Lagrangian (UL). Numerical study consists of installation process, consolidation phase and following pile static load test (SLT). The Poznań site is chosen as the reference location for the numerical analysis, where series of pile SLTs have been performed in highly overconsolidated clay (OCR ≈ 12). The results of numerical analysis are compared with corresponding field tests and with so-called "wish-in-place" numerical model of pile, where no installation effects are taken into account. The advantages of using large deformation numerical analysis are presented and its application to the pile designing is shown.

  15. Computer programs for forward and inverse modeling of acoustic and electromagnetic data

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2011-01-01

    A suite of computer programs was developed by U.S. Geological Survey personnel for forward and inverse modeling of acoustic and electromagnetic data. This report describes the computer resources that are needed to execute the programs, the installation of the programs, the program designs, some tests of their accuracy, and some suggested improvements.

  16. New Ways of Using Computers in Language Teaching. New Ways in TESOL Series II. Innovative Classroom Techniques.

    ERIC Educational Resources Information Center

    Boswood, Tim, Ed.

    A collection of classroom approaches and activities using computers for language learning is presented. Some require sophisticated installations, but most do not, and most use software readily available on most workplace computer systems. The activities were chosen because they use sound language learning strategies. The book is divided into five…

  17. Reading Teachers' Beliefs and Utilization of Computer and Technology: A Case Study

    ERIC Educational Resources Information Center

    Remetio, Jessica Espinas

    2014-01-01

    Many researchers believe that computers have the ability to help improve the reading skills of students. In an effort to improve the poor reading scores of students on state tests, as well as improve students' overall academic performance, computers and other technologies have been installed in Frozen Bay School classrooms. As the success of these…

  18. Paradigm Paralysis and the Plight of the PC in Education.

    ERIC Educational Resources Information Center

    O'Neil, Mick

    1998-01-01

    Examines the varied factors involved in providing Internet access in K-12 education, including expense, computer installation and maintenance, and security, and explores how the network computer could be useful in this context. Operating systems and servers are discussed. (MSE)

  19. Lock It Up! Computer Security.

    ERIC Educational Resources Information Center

    Wodarz, Nan

    1997-01-01

    The data contained on desktop computer systems and networks pose security issues for virtually every district. Sensitive information can be protected by educating users, altering the physical layout, using password protection, designating access levels, backing up data, reformatting floppy disks, using antivirus software, and installing encryption…

  20. Computer laboratory in medical education for medical students.

    PubMed

    Hercigonja-Szekeres, Mira; Marinović, Darko; Kern, Josipa

    2009-01-01

    Five generations of second-year students at the Zagreb University School of Medicine were interviewed through an anonymous questionnaire on their use of personal computers, the Internet, computer laboratories, and computer-assisted education in general. Results show an advance in students' usage of information and communication technology during the period from 1998/99 to 2002/03. However, their positive opinion of the computer laboratory depends on the installed capacity: the better the computer laboratory technology, the better the students' acceptance and use of it.

  1. Simple tools for assembling and searching high-density picolitre pyrophosphate sequence data.

    PubMed

    Parker, Nicolas J; Parker, Andrew G

    2008-04-18

    The advent of pyrophosphate sequencing makes large volumes of sequencing data available at a lower cost than previously possible. However, the short read lengths are difficult to assemble and the large datasets are difficult to handle. During the sequencing of a virus from the tsetse fly, Glossina pallidipes, we found the need for tools to quickly search a set of reads for near-exact text matches. A set of tools is provided to search a large dataset of pyrophosphate sequence reads under a "live" CD version of Linux on a standard PC, usable by anyone without prior knowledge of Linux and without having to install a Linux setup on the computer. The tools permit short lengths of de novo assembly, checking of existing assembled sequences, selection and display of reads from the dataset, and gathering counts of sequences in the reads. Demonstrations are given of the use of the tools to help with checking an assembly against the fragment dataset; investigating homopolymer lengths, repeat regions, and polymorphisms; and resolving inserted bases caused by incomplete chain extension. The additional information contained in a pyrophosphate sequencing dataset beyond a basic assembly is difficult to access due to a lack of tools. The set of simple tools presented here allows anyone with basic computer skills and a standard PC to access this information.
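
    The tools themselves are distributed on the live CD; as an illustration of the core idea (scanning reads for near-exact matches of a short query), here is a minimal Python sketch. It is not the authors' implementation, and the function names are illustrative:

    ```python
    def count_mismatches(a, b):
        """Hamming distance between two equal-length strings."""
        return sum(x != y for x, y in zip(a, b))

    def search_reads(reads, query, max_mismatches=1):
        """Return (read_index, offset) of the first window in each read
        that matches `query` with at most `max_mismatches` substitutions."""
        hits = []
        for i, read in enumerate(reads):
            for j in range(len(read) - len(query) + 1):
                if count_mismatches(read[j:j + len(query)], query) <= max_mismatches:
                    hits.append((i, j))
                    break  # report only the first hit per read
        return hits

    reads = ["ACGTACGTAA", "TTTTACGAAC", "GGGGGGGG"]
    print(search_reads(reads, "ACGT"))  # [(0, 0), (1, 4)]
    ```

    An exhaustive window scan like this is slow for millions of reads, but it captures what a near-exact text search over a read set must do; the published tools add indexing and a simple interface on top.
    
    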

  2. Bootstrapping and Maintaining Trust in the Cloud

    DTIC Science & Technology

    2016-12-01

    The proliferation and popularity of infrastructure-as-a-service (IaaS) cloud computing services such as Amazon Web Services and Google Compute Engine means... IaaS trusted computing system: • Secure Bootstrapping – the system should enable the tenant to securely install an initial root secret into each cloud... elastically instantiated and terminated. Prior cloud trusted computing solutions address a subset of these features, but none achieve all. Excalibur [31] sup

  3. USSR Report, Military Affairs Foreign Military Review No 6, June 1986

    DTIC Science & Technology

    1986-11-20

    Computers used for an objective accounting of the difference in current firing conditions from standard hold an important place in integrated fire control systems of modern tanks of capitalist countries. Mechanical ballistic computers gave way in the early 1970's to electronic computers, initially made with analog components. Then digital ballistic computers were created, installed in particular in the M1 Abrams and Leopard-2 tanks. The basic

  4. Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.

    PubMed

    Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M

    2006-01-01

    The vast amount and complexity of data generated in genomic research imply that new dedicated and powerful computational tools need to be developed to meet the analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are middle-sized genomics labs carrying out sequencing, EST, and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data mining are somewhat restricted by the computational power behind each particular installation. In order to offer enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up the Blast searches to obtain fast results for large datasets. Many efforts have been reported in the literature concerning the speeding up of Blast searches, but few of them deal with the use of large heterogeneous production Grid infrastructures. These are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid service under development will analyse requests based on the number of sequences, splitting them according to the available resources. Lower-level computation will be performed through MPIBLAST. The software architecture is based on the WSRF standard.
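
    The splitting strategy this record describes, dividing a request's sequences across the available Grid resources, can be illustrated with a short, generic Python sketch. The function name and the even-split policy are illustrative assumptions, not the actual Blast2GO scheduler:

    ```python
    def split_across_workers(seqs, n_workers):
        """Split a list of sequences into n_workers contiguous, near-even chunks.

        The first (len(seqs) % n_workers) chunks get one extra sequence, so
        chunk sizes differ by at most one -- a simple static load balance.
        """
        k, m = divmod(len(seqs), n_workers)
        return [seqs[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
                for i in range(n_workers)]

    # Seven query sequences over three workers -> chunks of sizes 3, 2 and 2,
    # each of which could be submitted to a separate Grid node for BLAST.
    chunks = split_across_workers([f"seq{i}" for i in range(7)], 3)
    print([len(c) for c in chunks])  # [3, 2, 2]
    ```

    A production scheduler would weight chunk sizes by node capacity rather than splitting evenly, but the partition-then-submit structure is the same.
    
    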

  5. Influence of forces acting on side of machine on precision machining of large diameter holes

    NASA Astrophysics Data System (ADS)

    Fedorenko, M. A.; Bondarenko, J. A.; Sanina, T. M.

    2018-03-01

    One of the most important factors that increase the efficiency, durability, and reliability of rotating units is precise installation, preventive maintenance, and timely replacement of failed or worn components and assemblies. These works should be carried out during the operation of the equipment, as downtime in many cases leads to large financial losses. Stopping one unit of an industrial enterprise can interrupt the technological chain of production, possibly resulting in a stop of the entire plant. Improving the efficiency of the repair process and the accuracy of installation work when restoring equipment under operating conditions is relevant for enterprises of different industries, because it eliminates dismantling the equipment, sending it out for maintenance, waiting for its return, and a new installation, while retaining the required quality and accuracy of repair.

  6. 26 CFR 20.6166-1 - Election of alternate extension of time for payment of estate tax where estate consists largely...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... consists largely of interest in closely held business. (a) In general. Section 6166 allows an executor to... executor's conclusion that the estate qualifies for payment of the estate tax in installments. In the... under section 6166(a) to pay any tax in installments, the executor may elect under section 6166(h) to...

  7. 5 CFR 532.315 - Additional survey jobs.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... data obtained in special industries WG-10 Communications Telephone Installer-Repairer WG-9 Central... Repairer WG-11 Electronic Computer Mechanic WG-11 Television Station Mechanic WG-11 Guided missiles Electronic Computer Mechanic WG-11 Guided Missile Mechanical Repairer WG-11 Heavy duty equipment Heavy Mobile...

  8. An Automated Approach to Departmental Grant Management.

    ERIC Educational Resources Information Center

    Kressly, Gaby; Kanov, Arnold L.

    1986-01-01

    Installation of a small computer and the use of specially designed programs has proven a cost-effective solution to the data processing needs of a university medical center's ophthalmology department, providing immediate access to grants accounting information and avoiding dependence on the institution's mainframe computer. (MSE)

  9. A Planning Guide for Instructional Networks, Part II.

    ERIC Educational Resources Information Center

    Daly, Kevin F.

    1994-01-01

    This second in a series of articles on planning for instructional computer networks focuses on site preparation, installation, service, and support. Highlights include an implementation schedule; classroom and computer lab layouts; electrical power needs; workstations; network cable; telephones; furniture; climate control; and security. (LRW)

  10. CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak

    NASA Astrophysics Data System (ADS)

    Vanderlaan, J. F.; Cummings, J. W.

    1993-10-01

    The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is the MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software are chronicled, including observations of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and faster computer I/O speeds are also expected to increase data rates.

  11. Optimization of hybrid power system composed of SMES and flywheel MG for large pulsed load

    NASA Astrophysics Data System (ADS)

    Niiyama, K.; Yagai, T.; Tsuda, M.; Hamajima, T.

    2008-09-01

    Superconducting magnetic energy storage (SMES) has some advantages, such as rapid large-power response and high storage efficiency, which are superior to other energy storage systems. A flywheel motor generator (FWMG) has large capacity and high reliability, and hence is broadly utilized for large pulsed loads, while it has comparatively low storage efficiency due to high mechanical loss compared with SMES. A fusion power plant such as the International Thermonuclear Experimental Reactor (ITER) presents a large and long pulsed load, which causes a frequency deviation in a utility power system. In order to keep the frequency within an allowable deviation, we propose a hybrid power system for the pulsed load that combines the SMES and the FWMG with the utility power system. We evaluate the installation cost and frequency control performance of three power systems combined with energy storage devices: (i) SMES with the utility power, (ii) FWMG with the utility power, and (iii) both SMES and FWMG with the utility power. The first power system has excellent frequency control performance, but its installation cost is high. The second system has inferior frequency control performance, but its installation cost is the lowest. The third system has good frequency control performance, and its installation cost is kept lower than that of the first power system by adjusting the ratio between SMES and FWMG.

  12. Real-time POD-CFD Wind-Load Calculator for PV Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huayamave, Victor; Divo, Eduardo; Ceballos, Andres

    The primary objective of this project is to create an accurate web-based real-time wind-load calculator. This is of paramount importance for (1) the rapid and accurate assessments of the uplift and downforce loads on a PV mounting system, (2) identifying viable solutions from available mounting systems, and therefore helping reduce the cost of mounting hardware and installation. Wind loading calculations for structures are currently performed according to the American Society of Civil Engineers/Structural Engineering Institute Standard ASCE/SEI 7; the values in this standard were calculated from simplified models that do not necessarily take into account relevant characteristics such as those from full 3D effects, end effects, turbulence generation and dissipation, as well as minor effects derived from shear forces on installation brackets and other accessories. This standard does not include provisions that address the special requirements of rooftop PV systems, and attempts to apply this standard may lead to significant design errors as wind loads are incorrectly estimated. Therefore, an accurate calculator would be of paramount importance for the preliminary assessments of the uplift and downforce loads on a PV mounting system, identifying viable solutions from available mounting systems, and therefore helping reduce the cost of the mounting system and installation. The challenge is that although a full-fledged three-dimensional computational fluid dynamics (CFD) analysis would properly and accurately capture the complete physical effects of air flow over PV systems, it would be impractical for this tool, which is intended to be a real-time web-based calculator. CFD routinely requires enormous computation times to arrive at solutions that can be deemed accurate and grid-independent, even on powerful and massively parallel computer platforms.
This work is expected not only to accelerate solar deployment nationwide, but also to help reach the SunShot Initiative goal of reducing the total installed cost of solar energy systems by 75%. The largest share of the total installed cost of a solar energy system is associated with balance-of-system cost, with up to 40% going to “soft” costs, which include customer acquisition, financing, contracting, permitting, interconnection, inspection, installation, performance, operations, and maintenance. The calculator being developed will provide wind loads in real time for any solar system design and suggest the proper installation configuration and hardware; it is therefore anticipated to reduce system design, installation and permitting costs.
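The simplified model the abstract criticizes reduces wind loading to closed-form coefficient formulas. As a rough illustration (not the project's CFD-backed calculator), the ASCE/SEI 7 velocity-pressure relation q_z = 0.00256·Kz·Kzt·Kd·V² (imperial units) can be sketched as below; the default coefficient values and the uplift coefficient used here are illustrative assumptions, not values taken from the standard:

```python
def velocity_pressure_psf(v_mph, kz=0.85, kzt=1.0, kd=0.85):
    """ASCE/SEI 7 velocity pressure q_z = 0.00256 * Kz * Kzt * Kd * V^2
    (V in mph, result in psf). The default coefficients here are
    illustrative placeholders, not prescriptive values."""
    return 0.00256 * kz * kzt * kd * v_mph ** 2

def design_pressure_psf(q_psf, gcp):
    """Net component pressure p = q * (GCp); GCp is an external pressure
    coefficient (negative values indicate uplift)."""
    return q_psf * gcp

q = velocity_pressure_psf(115.0)        # basic wind speed of 115 mph
uplift = design_pressure_psf(q, -1.1)   # hypothetical uplift coefficient
```

A real design check would select Kz, Kzt, Kd and GCp from the standard's tables for the actual exposure category, topography and roof zone; the abstract's point is precisely that such table lookups cannot capture 3D flow effects around rooftop PV arrays.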

  13. Enhancements to TauDEM to support Rapid Watershed Delineation Services

    NASA Astrophysics Data System (ADS)

    Sazib, N. S.; Tarboton, D. G.

    2015-12-01

    Watersheds are widely recognized as the basic functional unit for water resources management studies and are important for a variety of problems in hydrology, ecology, and geomorphology. Nevertheless, delineating a watershed spread across a large region is still cumbersome due to the processing burden of working with large Digital Elevation Models. Terrain Analysis Using Digital Elevation Models (TauDEM) software supports the delineation of watersheds and stream networks from within desktop Geographic Information Systems, and a rich set of watershed and stream network attributes is computed. However, the TauDEM desktop tools have limitations: (1) they support only one raster data format (TIFF), (2) they require installation of software for parallel processing, and (3) data must be in a projected coordinate system. This paper presents enhancements to TauDEM that have been developed to extend its generality and support web based watershed delineation services. The enhancements include (1) reading and writing raster data with the open-source Geospatial Data Abstraction Library (GDAL), no longer limited to the TIFF format, and (2) support for both geographic and projected coordinates. To support web services for rapid watershed delineation, a procedure has been developed for subsetting the domain based on sub-catchments, with preprocessed data prepared and stored for each catchment. This allows the watershed delineation to function locally, while extending to the full extent of watersheds using preprocessed information. Additional capabilities of this program include computation of average watershed properties and geomorphic and channel network variables such as drainage density, shape factor, relief ratio and stream ordering. The updated version of TauDEM increases its practical applicability in terms of raster data type, size and coordinate system.
The watershed delineation web service functionality is useful for web based software as service deployments that alleviate the need for users to install and work with desktop GIS software.
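TauDEM-style watershed and stream-network tools start from per-cell flow directions derived from the DEM. A minimal stdlib-only sketch of the classic D8 rule (each cell drains to the steepest of its eight neighbors) is shown below; this is an illustrative toy re-implementation, not TauDEM's actual code, and the tiny DEM is invented:

```python
# Toy D8 flow-direction computation: the first step in TauDEM-style
# watershed delineation. Illustrative only, not TauDEM's implementation.
import math

# The 8 neighbor offsets (dx, dy); diagonal neighbors are sqrt(2) away.
NEIGHBORS = [(-1, -1), (0, -1), (1, -1), (-1, 0),
             (1, 0), (-1, 1), (0, 1), (1, 1)]

def d8_flow_direction(dem):
    """For each interior cell, return the index into NEIGHBORS of the
    steepest downslope neighbor, or None for pits and flats."""
    rows, cols = len(dem), len(dem[0])
    flowdir = [[None] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            best, best_slope = None, 0.0
            for i, (dx, dy) in enumerate(NEIGHBORS):
                dist = math.hypot(dx, dy)
                slope = (dem[y][x] - dem[y + dy][x + dx]) / dist
                if slope > best_slope:
                    best, best_slope = i, slope
            flowdir[y][x] = best
    return flowdir

# A tiny DEM sloping down toward its eastern edge.
dem = [[9, 8, 7],
       [9, 5, 1],
       [9, 8, 7]]
print(d8_flow_direction(dem)[1][1])  # 4: the center cell drains east
```

Stream networks, drainage density and stream ordering are then computed by accumulating flow along these directions; the paper's GDAL enhancement means the input grid can come from any raster format GDAL reads, not just TIFF.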

  14. Ecohydrologic coevolution in drylands: relative roles of vegetation, soil depth and runoff connectivity on ecosystem shifts.

    NASA Astrophysics Data System (ADS)

    Saco, P. M.; Moreno de las Heras, M.; Willgoose, G. R.

    2014-12-01


  15. PACS industry in Korea

    NASA Astrophysics Data System (ADS)

    Kim, Hee-Joung

    2002-05-01

    The PACS industry in Korea has grown rapidly since the government supported a collaborative PACS project between industry and a university hospital. In the beginning, the industry focused on developing peripheral PACS solutions while the Korean PACS society was being formed. A few companies then started developing and installing domestic large-scale full-PACS systems for teaching hospitals. Several years later, many hospitals installed full-PACS systems following the national policy of reimbursement for PACS exams introduced in November 1999. Both the experience of full-PACS installation and the national policy generated tremendous intellectual and technological expertise about PACS at all levels: clinical, hospital management, education, and industry. There are now more than 20 domestic PACS companies with enough experience to install a truly full-PACS system for a large-scale teaching hospital. As an example, one domestic company installed more than 40 full-PACS systems within 2-3 years. This depth of installation experience has led the Korean PACS industry to begin exporting its full-PACS solutions. However, further understanding and timely implementation of continuously evolving international standards and integrated healthcare enterprise concepts may be necessary for international leadership in PACS technologies in the future.

  16. The Use of Computers in High Schools. Technical Report Number Eight.

    ERIC Educational Resources Information Center

    Crick, Joe E.; Stolurow, Lawrence M.

    This paper reports on one high school's experience with a project to teach students how to program and solve problems in mathematics using a computer. Part I is intended as a general guide for any high school administrator or mathematics instructor who is interested in exploring the installation of a computer terminal in his high school and wants…

  17. Controls Over Copyrighted Computer Software

    DTIC Science & Technology

    1993-02-19

    The Army Audit Agency issued five installation reports as a result of one multilocation audit. The audit found that 41 percent of the computers...ARMY AUDIT AGENCY REPORTS ON COMPUTER SOFTWARE MANAGEMENT The U.S. Army Audit Agency conducted three multilocation audits from March 1988 through...December 1990, covering the acquisition, use, control, and accountability of commercial software. One multilocation audit resulted in five

  18. XpressWare Installation User guide

    NASA Astrophysics Data System (ADS)

    Duffey, K. P.

    XpressWare is a set of X terminal software, released by Tektronix Inc., that accommodates the X Window system on a range of host computers. The software comprises boot files (the X server image), configuration files, fonts, and font tools to support the X terminal. The files can be installed on one host or distributed across multiple hosts. The purpose of this guide is to present the system or network administrator with a step-by-step account of how to install XpressWare, and how subsequently to configure the X terminals appropriately for the environment in which they operate.

  19. Report of the Army Science Board Summer Study on Installations 2025

    DTIC Science & Technology

    2009-12-01

    stresses, behavioral health problems, and injuries associated with war. Transform: IMCOM is modernizing installation management processes, policies...well. For example, "Prediction is very difficult, especially about the future" (Niels Bohr). Others stress that the future will be a lot like the..."homogenization" of society, Endangered species, Continuous and ubiquitous computing, Islanding, Telecommuting, Wireless proliferation across appliances

  20. Treasure Transformers: Novel Interpretative Installations for the National Palace Museum

    NASA Astrophysics Data System (ADS)

    Hsieh, Chun-Ko; Liu, I.-Ling; Lin, Quo-Ping; Chan, Li-Wen; Hsiao, Chuan-Heng; Hung, Yi-Ping

    Museums have missions to increase accessibility and share cultural assets with the public. The National Palace Museum intends to be a pioneer in utilizing novel interpretative installations to reach more diverse and potential audiences, and Human-Computer Interaction (HCI) technology has been selected as the new interpretative approach. The pilot project, in partnership with National Taiwan University, has successfully completed four interactive installations. To suit the different nature of the collections, the four systems, each designed against a different interpretation strategy, are uPoster, i-m-Top, Magic Crystal Ball and Virtual Panel. To assess the feasibility of the project, the interactive installations were exhibited at the Taipei World Trade Center in 2008. The purpose of this paper is to present the development of the "Treasure Transformers" exhibition, its design principles, and the effectiveness of the installations according to the evaluation. It is our ambition that these contributions will inform innovative media approaches in museum settings.

  1. Navigating protected genomics data with UCSC Genome Browser in a Box.

    PubMed

    Haeussler, Maximilian; Raney, Brian J; Hinrichs, Angie S; Clawson, Hiram; Zweig, Ann S; Karolchik, Donna; Casper, Jonathan; Speir, Matthew L; Haussler, David; Kent, W James

    2015-03-01

    Genome Browser in a Box (GBiB) is a small virtual machine version of the popular University of California Santa Cruz (UCSC) Genome Browser that can be run on a researcher's own computer. Once GBiB is installed, a standard web browser is used to access the virtual server and add personal data files from the local hard disk. Annotation data are loaded on demand through the Internet from UCSC or can be downloaded to the local computer for faster access. Software downloads and installation instructions are freely available for non-commercial use at https://genome-store.ucsc.edu/. GBiB requires the installation of open-source software VirtualBox, available for all major operating systems, and the UCSC Genome Browser, which is open source and free for non-commercial use. Commercial use of GBiB and the Genome Browser requires a license (http://genome.ucsc.edu/license/). © The Author 2014. Published by Oxford University Press.

  2. Remote detection of explosives using field asymmetric ion mobility spectrometer installed on multicopter.

    PubMed

    Kostyukevich, Yury; Efremov, Denis; Ionov, Vladimir; Kukaev, Eugene; Nikolaev, Eugene

    2017-11-01

    The detection of explosives and drugs in hard-to-reach places is a considerable challenge. We report the development and initial experimental characterization of an air analysis system that includes a Field Asymmetric Ion Mobility Spectrometer and an array of semiconductor gas sensors and is installed on a multicopter. The system was developed based on the commercially available DJI Matrix 100 platform. For data collection and communication with the operator, a compact computer (Intel Compute Stick) was installed onboard. The total weight of the system is 3.3 kg. The system allows a 15-minute flight and provides remote access to the obtained data. The developed system can be effectively used for the detection of impurities in the air, ecological monitoring, and detection of chemical warfare agents and explosives, which is especially important in light of recent terrorist attacks. The capabilities of the system were tested on several explosives such as trinitrotoluene and nitro powder. Copyright © 2017 John Wiley & Sons, Ltd.

  3. 47 CFR 73.9006 - Add-in covered demodulator products.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Section 73.9006 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... passed to an output (e.g., where a demodulator add-in card in a personal computer passes such content to an associated software application installed in the same computer), it shall pass such content: (1...

  4. Desk-top publishing using IBM-compatible computers.

    PubMed

    Grencis, P W

    1991-01-01

    This paper sets out to describe one Medical Illustration Department's experience of the introduction of computers for desk-top publishing. In this particular case, after careful consideration of all the options open, an IBM-compatible system was installed rather than the often popular choice of an Apple Macintosh.

  5. Operation of the computer model for microenvironment solar exposure

    NASA Technical Reports Server (NTRS)

    Gillis, J. R.; Bourassa, R. J.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironmental solar exposure was developed to predict solar exposure to satellite surfaces which may shadow or reflect on one another. This document describes the technical features of the model as well as instructions for the installation and use of the program.

  6. Application of computational fluid dynamics to the study of vortex flow control for the management of inlet distortion

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Gibb, James

    1992-01-01

    A study is presented to demonstrate that the Reduced Navier-Stokes code RNS3D can be employed effectively to develop a vortex generator installation that minimizes engine face circumferential distortion by controlling the development of secondary flow. The necessary computing times are small enough to show that similar studies are feasible within an analysis-design environment with all its constraints of costs and time. This study establishes the nature of the performance enhancements that can be realized with vortex flow control, and indicates a set of aerodynamic properties that can be utilized to arrive at a successful vortex generator installation design.

  7. Evaluation of Microcomputer-Based Operation and Maintenance Management Systems for Army Water/Wastewater Treatment Plant Operation.

    DTIC Science & Technology

    1986-07-01

    COMPUTER-AIDED OPERATION MANAGEMENT SYSTEM ... Functions of an Off-Line Computer-Aided Operation Management System Applications of...System Comparisons ... FIGURES: Hardware Components; Basic Functions of a Computer-Aided Operation Management System...Plant Visits; Computer-Aided Operation Management Systems Reviewed for Analysis of Basic Functions; Progress of Software System Installation and

  8. Implementation of Information Technology in the Free Trade Era for Indonesia

    DTIC Science & Technology

    1998-06-01

    computer usage, had been organized before Thailand, Malaysia, and China. Also, use of computers for crude oil process applications, and marketing and...seismic computing in Pertamina had been installed and in operation ahead of Taiwan, Malaysia, and Brunei. There are many examples of computer usage at...such as: Malaysia, Thailand, USA, China, Germany, and many others. Although IT development is utilized in Indonesia’s development program, it should

  9. Market-based control strategy for long-span structures considering the multi-time delay issue

    NASA Astrophysics Data System (ADS)

    Li, Hongnan; Song, Jianzhu; Li, Gang

    2017-01-01

    To address the different time delays that exist among the control devices installed on spatial structures, in this study, discrete analysis using a 2^N precise algorithm was selected to solve the multi-time-delay issue for long-span structures based on the market-based control (MBC) method. The concept of interval mixed energy was introduced from the computational structural mechanics and optimal control research areas; it translates the design of the MBC multi-time-delay controller into a solution for the segment matrix. This approach transforms the serial algorithm in time into parallel computing in space, greatly improving solving efficiency and numerical stability. The designed controller is able to handle time delay with a linear combination of control forces and is especially effective under large time-delay conditions. A numerical example of a long-span structure was selected to demonstrate the effectiveness of the presented controller, and the time delay was found to have a significant impact on the results.
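The difficulty the controller addresses can be seen in a toy discrete simulation in which the control force reaches the actuator several time steps after it is computed. The buffer-based sketch below is purely illustrative, with invented parameter values, and does not reproduce the paper's precise-integration MBC formulation:

```python
# Toy 1-DOF controlled oscillator with a delayed actuation force,
# illustrating why multi-time-delay control matters. Not the paper's
# algorithm; all parameters are invented for illustration.
from collections import deque

def residual_amplitude(delay_steps, n=4000, dt=0.01, k=4.0, c=0.2, gain=1.5):
    """Simulate x'' = -k x - c x' + u(t - delay) with velocity-feedback
    control, and return the peak |x| over the last quarter of the run."""
    x, v = 1.0, 0.0                      # initial displacement, velocity
    pending = deque([0.0] * delay_steps) # control commands in transit
    tail = 0.0
    for step in range(n):
        pending.append(-gain * v)        # command computed now...
        u = pending.popleft()            # ...applied delay_steps later
        a = -k * x - c * v + u
        v += a * dt                      # semi-implicit Euler update
        x += v * dt
        if step >= 3 * n // 4:
            tail = max(tail, abs(x))
    return tail

# A longer actuation delay leaves more residual vibration.
print(residual_amplitude(50) > residual_amplitude(0))
```

In a long-span structure each device has its own delay, so a single buffer is not enough; handling those heterogeneous delays stably is what motivates the paper's precise discrete formulation.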

  10. Investigation on installation of offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Bai, Yong

    2010-06-01

    Wind power has made rapid progress and should gain significance as an energy resource, given growing interest in renewable and clean energy. Offshore wind energy resources have attracted significant attention because, compared with land-based resources, they are more promising candidates for development: sea winds are generally stronger and more reliable, and with improvements in technology, the sea has become a hot spot for new designs and installation methods for wind turbines. In the present paper, based on experience building offshore wind farms, recommended foundation styles are examined and wave effects are investigated. Both split installation and overall installation are illustrated, and methods appropriate for installing a small number of turbines as well as those useful for installing large numbers of turbines are analyzed. This investigation of installation methods for wind turbines should provide practical technical guidance for their installation.

  11. Comparison of Cryogenic Temperature Sensor Installation Inside or Outside the Piping

    NASA Astrophysics Data System (ADS)

    Müller, R.; Süßer, M.

    2010-04-01

    Cryogenic thermometers for large cryogenic facilities, like superconducting particle accelerators or fusion devices, must be able to withstand very severe conditions over the lifetime of the facility. In addition to the proper selection of the sensor, the choice of the appropriate installation method plays an important role in satisfactory operation. Several characteristics must be taken into account, for instance: large numbers of sensors, differing accuracy requirements, qualified preparation methods and, not least, qualified attachment of the sensor holder to the piping. One remedy for getting satisfactory results is the development of simple thermometer mounting fixtures, because thermometer mounting must often be carried out by personnel with limited experience. This contribution presents two different methods for sensor installation, namely inside or outside installation on the piping. These have been the standard applications in the superconducting coil test facility TOSKA for many years. The characteristics of each of these methods will be discussed and compared.

  12. Using the Spatial Distribution of Installers to Define Solar Photovoltaic Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Shaughnessy, Eric; Nemet, Gregory F.; Darghouth, Naim

    2016-09-01

    Solar PV market research to date has largely relied on arbitrary jurisdictional boundaries, such as counties, to study solar PV market dynamics. This paper seeks to improve solar PV market research by developing a methodology to define solar PV markets. The methodology is based on the spatial distribution of solar PV installers. An algorithm is developed and applied to a rich dataset of solar PV installations to study the outcomes of the installer-based market definitions. The installer-based approach exhibits several desirable properties. Specifically, the higher market granularity of the installer-based approach will allow future PV market research to study the relationship between market dynamics and pricing with more precision.
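One simple way to realize an installer-based market definition is to link any two locations served by a common installer and take the connected components as markets. The union-find sketch below is a hypothetical illustration of that idea, not the paper's actual algorithm, and the records are invented:

```python
# Hedged sketch: group locations into "markets" by linking locations
# that share an installer (connected components via union-find).
# Illustrative only; not the algorithm from the paper.
def installer_markets(installations):
    """installations: iterable of (installer_id, location_id) records.
    Returns a list of sets of location_ids forming connected markets."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    # Link every location an installer serves to that installer's first one.
    by_installer = {}
    for installer, loc in installations:
        by_installer.setdefault(installer, []).append(loc)
    for locs in by_installer.values():
        for other in locs[1:]:
            union(locs[0], other)

    # Collect locations by their component root.
    markets = {}
    for _, loc in installations:
        markets.setdefault(find(loc), set()).add(loc)
    return list(markets.values())

records = [("A", "94103"), ("A", "94110"), ("B", "94110"),
           ("B", "95014"), ("C", "10001")]
print(sorted(len(m) for m in installer_markets(records)))  # [1, 3]
```

Because installers A and B overlap at one ZIP code, their service areas merge into a single market, while installer C's isolated territory remains its own market; county lines play no role in the grouping.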

  13. Globus Online: Climate Data Management for Small Teams

    NASA Astrophysics Data System (ADS)

    Ananthakrishnan, R.; Foster, I.

    2013-12-01

    Large and highly distributed climate data demands new approaches to data organization and lifecycle management. We need, in particular, catalogs that can allow researchers to track the location and properties of large numbers of data files, and management tools that can allow researchers to update data properties and organization during their research, move data among different locations, and invoke analysis computations on data--all as easily as if they were working with small numbers of files on their desktop computer. Both catalogs and management tools often need to be able to scale to extremely large quantities of data. When developing solutions to these problems, it is important to distinguish between the needs of (a) large communities, for whom the ability to organize published data is crucial (e.g., by implementing formal data publication processes, assigning DOIs, recording definitive metadata, providing for versioning), and (b) individual researchers and small teams, who are more frequently concerned with tracking the diverse data and computations involved in their highly dynamic and iterative research processes. Key requirements in the latter case include automated data registration and metadata extraction, ease of update, close-to-zero management overheads (e.g., no local software installation), and flexible, user-managed sharing support, allowing read and write privileges within small groups.
We describe here how new capabilities provided by the Globus Online system address the needs of the latter group of climate scientists, providing for the rapid creation and establishment of lightweight individual- or team-specific catalogs; the definition of logical groupings of data elements, called datasets; the evolution of catalogs, dataset definitions, and associated metadata over time, to track changes in data properties and organization as a result of research processes; and the manipulation of data referenced by catalog entries (e.g., replication of a dataset to a remote location for analysis, sharing of a dataset). Its software-as-a-service ('SaaS') architecture means that these capabilities are provided to users over the network, without a need for local software installation. In addition, Globus Online provides well-defined APIs, thus providing a platform that can be leveraged to integrate these capabilities with other portals and applications. We describe early applications of these new Globus Online capabilities to climate science. We focus in particular on applications that demonstrate how Globus Online capabilities complement those of the Earth System Grid Federation (ESGF), the premier system for publication and discovery of large community datasets. ESGF already uses Globus Online mechanisms for data download. We demonstrate methods by which the two systems can be further integrated and harmonized, so that, for example, data collections produced within a small team can be easily published from Globus Online to ESGF for archival storage and broader access--and a Globus Online catalog can be used to organize an individual view of a subset of data held in ESGF.

  14. High-end Home Firewalls CIAC-2326

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orvis, W

    Networking in most large organizations is protected with corporate firewalls and managed by seasoned security professionals. Attempts to break into systems at these organizations are extremely difficult to impossible for an external intruder. With the growth in networking and the options that it makes possible, new avenues of intrusion are opening up. Corporate machines exist that are completely unprotected against intrusions, that are not managed by a security professional, and that are regularly connected to the company network. People have the option of and are encouraged to work at home using a home computer linked to the company network. Managers have home computers linked to internal machines so they can keep an eye on internal processes while not physically at work. Researchers do research or writing at home and connect to the company network to download information and upload results. In most cases, these home computers are completely unprotected, except for any protection that the home user might have installed. Unfortunately, most home users are not security professionals and home computers are often used by other family members, such as children downloading music, who are completely unconcerned about security precautions. When these computers are connected to the company network, they can easily introduce viruses, worms, and other malicious code or open a channel behind the company firewall for an external intruder.

  15. The Impact of a Library Flood on Computer Operations.

    ERIC Educational Resources Information Center

    Myles, Barbara

    2000-01-01

    Describes the efforts at Boston Public Library to recover from serious flooding that damaged computer equipment. Discusses vendor help in assessing the damage; the loss of installation disks; hiring consultants to help with financial matters; effects on staff; repairing and replacing damaged equipment; insurance issues; and disaster recovery…

  16. Burbank works on the EPIC in the Node 2

    NASA Image and Video Library

    2012-02-28

    ISS030-E-114433 (29 Feb. 2012) --- In the International Space Station's Destiny laboratory, NASA astronaut Dan Burbank, Expedition 30 commander, upgrades Multiplexer/Demultiplexer (MDM) computers and Portable Computer System (PCS) laptops and installs the Enhanced Processor & Integrated Communications (EPIC) hardware in the Payload 1 (PL-1) MDM.

  17. When the Chips Are Down.

    ERIC Educational Resources Information Center

    Ashton, Ray

    1995-01-01

    Strips away advertising hyperbole to explain multimedia CD-ROM technology and its place in today's classrooms. Only the newest computers are adequate for multimedia CD-ROM; only 10% of all computers in schools have CD-ROM drives attached. CD-ROM drives' performance varies, installation hassles abound, and the "edutainment" market directs…

  18. 32 CFR Appendix A to Part 310 - Safeguarding Personally Identifiable Information (PII)

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... all computer products containing classified data in accordance with the requirements of DoD 5200.1-R... computer environments outside the data processing installation (such as, remote job entry stations... process classified material have adequate procedures and security for the purposes of this Regulation...

  19. 32 CFR Appendix A to Part 310 - Safeguarding Personally Identifiable Information (PII)

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... all computer products containing classified data in accordance with the requirements of DoD 5200.1-R... computer environments outside the data processing installation (such as, remote job entry stations... process classified material have adequate procedures and security for the purposes of this Regulation...

  20. 32 CFR Appendix A to Part 310 - Safeguarding Personally Identifiable Information (PII)

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... all computer products containing classified data in accordance with the requirements of DoD 5200.1-R... computer environments outside the data processing installation (such as, remote job entry stations... process classified material have adequate procedures and security for the purposes of this Regulation...

  1. 32 CFR Appendix A to Part 310 - Safeguarding Personally Identifiable Information (PII)

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... all computer products containing classified data in accordance with the requirements of DoD 5200.1-R... computer environments outside the data processing installation (such as, remote job entry stations... process classified material have adequate procedures and security for the purposes of this Regulation...

  2. 32 CFR Appendix A to Part 310 - Safeguarding Personally Identifiable Information (PII)

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... all computer products containing classified data in accordance with the requirements of DoD 5200.1-R... computer environments outside the data processing installation (such as, remote job entry stations... process classified material have adequate procedures and security for the purposes of this Regulation...

  3. Apples for Teachers Pay Off.

    ERIC Educational Resources Information Center

    Geller, Irving, Ed.

    1983-01-01

    Reviews current trends in the educational market for microcomputers and software. As of June 1982, about 214,000 microcomputers were installed in schools, with Apple Computer (followed by Radio Shack and others) leading the field. A new federal program virtually eliminating restrictions on how schools use funds may benefit computer assisted instruction. (JN)

  4. Computer-Communications Networks and Teletraffic.

    ERIC Educational Resources Information Center

    Switzer, I.

    Bi-directional cable TV (CATV) systems that are being installed today may not be well suited for computer communications. Older CATV systems are being modified to bi-directional transmission and most new systems are being built with bi-directional capability included. The extreme bandwidth requirement for carrying 20 or more TV channels on a…

  5. Music Learning in Your School Computer Lab.

    ERIC Educational Resources Information Center

    Reese, Sam

    1998-01-01

    States that a growing number of schools are installing general computer labs equipped to use notation, accompaniment, and sequencing software independent of MIDI keyboards. Discusses (1) how to configure the software without MIDI keyboards or external sound modules, (2) using the actual MIDI software, (3) inexpensive enhancements, and (4) the…

  6. EPIC Computer Cards

    NASA Image and Video Library

    2011-12-29

    ISS030-E-017776 (29 Dec. 2011) --- Working in chorus with the International Space Station team in Houston's Mission Control Center, this astronaut and his Expedition 30 crewmates on the station install a set of Enhanced Processor and Integrated Communications (EPIC) computer cards in one of seven primary computers onboard. The upgrade will allow more experiments to operate simultaneously, and prepare for the arrival of commercial cargo ships later this year.

  7. Development of Sensors for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro

    2005-01-01

    Advances in technology have led to the availability of smaller and more accurate sensors. Computer power to process large amounts of data is no longer the prevailing issue; thus multiple and redundant sensors can be used to obtain more accurate and comprehensive measurements in a space vehicle. The successful integration and commercialization of micro- and nanotechnology for aerospace applications require that a close and interactive relationship be developed between the technology provider and the end user early in the project. Close coordination between the developers and the end users is critical since qualification for flight is time-consuming and expensive. The successful integration of micro- and nanotechnology into space vehicles requires a coordinated effort throughout the design, development, installation, and integration processes.

  8. Framework for Service Composition in G-Lite

    NASA Astrophysics Data System (ADS)

    Goranova, R.

    2011-11-01

G-Lite is a Grid middleware, currently the main middleware installed on all clusters in Bulgaria. The middleware is used by scientists for solving problems which require a large amount of storage and computational resources. On the other hand, the scientists work with complex processes, where job execution in Grid is just one step of the process. That is why it is strategically important for g-Lite to provide a mechanism for service composition and business process management. Such a mechanism has not yet been specified. In this article we propose a framework for service composition in g-Lite. We discuss business process modeling, deployment and execution in this Grid environment. The examples used to demonstrate the concept are based on some IBM products.

  9. A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies.

    PubMed

    Thakur, Shalabh; Guttman, David S

    2016-06-30

Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed.
DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .
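The linear-versus-quadratic scaling argument above can be illustrated with a toy sketch. This is not DeNoGAP code: it substitutes simple k-mer count profiles for DeNoGAP's iteratively refined hidden Markov models, but it shows the same structural idea, namely that each new sequence is scored against the existing cluster profiles rather than against every previously seen sequence, so work grows with the number of clusters instead of the square of the number of sequences.

```python
# Toy sketch (not DeNoGAP code): iterative cluster assignment scales
# linearly in the number of sequences, unlike all-vs-all comparison.
def kmer_profile(seq, k=3):
    """Count k-mers of a sequence; stands in for an HMM profile."""
    prof = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        prof[kmer] = prof.get(kmer, 0) + 1
    return prof

def similarity(prof, seq, k=3):
    """Score a sequence against a cluster profile."""
    return sum(prof.get(seq[i:i + k], 0) for i in range(len(seq) - k + 1))

def iterative_cluster(seqs, threshold=2):
    clusters = []  # list of (profile, members)
    for s in seqs:
        best, best_score = None, threshold
        for c in clusters:                 # one pass over clusters,
            score = similarity(c[0], s)    # not over all sequences
            if score >= best_score:
                best, best_score = c, score
        if best is None:
            clusters.append((kmer_profile(s), [s]))
        else:
            # refine the winning profile with the new member, as
            # DeNoGAP iteratively refines its HMMs
            for kmer, n in kmer_profile(s).items():
                best[0][kmer] = best[0].get(kmer, 0) + n
            best[1].append(s)
    return clusters
```

Because each sequence touches only the current cluster profiles, total work is roughly proportional to (sequences × clusters) rather than (sequences squared).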

  10. Self and world: large scale installations at science museums.

    PubMed

    Shimojo, Shinsuke

    2008-01-01

This paper describes three examples of illusion installations in a science museum environment, from the author's collaborations with an artist and an architect. The installations amplify illusory effects, such as vection (visually induced sensation of self-motion) and motion-induced blindness, to emphasize that perception is not just a matter of obtaining the structure and features of objects, but rather of grasping the dynamic relationship between the self and the world. Scaling up the size and utilizing the live human body turned out to be key for installations with higher emotional impact.

  11. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    PubMed

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources, the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that the design, acquisition, installation, and management of the Little Iron and Skinny Guys don't receive the same level of treatment as those of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  12. Accelerating the Original Profile Kernel.

    PubMed

    Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard

    2013-01-01

One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel a strong contender when trading off speed against performance. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.
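As a rough illustration of the kind of computation being accelerated, here is a sketch of a simplified spectrum-style string kernel. The published profile kernel is considerably more involved (it scores k-mer neighborhoods under PSSM profiles), and none of the paper's specific speed-up tricks are reproduced here beyond the most basic one: precomputing per-sequence k-mer counts once instead of re-extracting them for every pair.

```python
# Simplified sketch of a spectrum-style string kernel. The published
# profile kernel additionally weights k-mers by PSSM profile scores;
# this toy version only counts exact shared k-mers.
from collections import Counter

def kmer_counts(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kernel_matrix(seqs, k=3):
    # Precompute per-sequence k-mer counts once, rather than
    # re-extracting them for every pair -- the simplest way to
    # accelerate kernel-matrix computation.
    counts = [kmer_counts(s, k) for s in seqs]
    n = len(seqs)
    K = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):  # exploit symmetry: K[i][j] == K[j][i]
            K[i][j] = K[j][i] = sum(
                counts[i][m] * counts[j][m] for m in counts[i])
    return K
```

The symmetric fill and the precomputed counts already halve and then substantially cut the work; the paper's 14-fold speed-up comes from further, more specialized optimizations.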

  13. Microbial community analysis using MEGAN.

    PubMed

    Huson, Daniel H; Weber, Nico

    2013-01-01

    Metagenomics, the study of microbes in the environment using DNA sequencing, depends upon dedicated software tools for processing and analyzing very large sequencing datasets. One such tool is MEGAN (MEtaGenome ANalyzer), which can be used to interactively analyze and compare metagenomic and metatranscriptomic data, both taxonomically and functionally. To perform a taxonomic analysis, the program places the reads onto the NCBI taxonomy, while functional analysis is performed by mapping reads to the SEED, COG, and KEGG classifications. Samples can be compared taxonomically and functionally, using a wide range of different charting and visualization techniques. PCoA analysis and clustering methods allow high-level comparison of large numbers of samples. Different attributes of the samples can be captured and used within analysis. The program supports various input formats for loading data and can export analysis results in different text-based and graphical formats. The program is designed to work with very large samples containing many millions of reads. It is written in Java and installers for the three major computer operating systems are available from http://www-ab.informatik.uni-tuebingen.de. © 2013 Elsevier Inc. All rights reserved.
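MEGAN's taxonomic placement rests on the naive lowest-common-ancestor (LCA) idea: a read whose database hits span several taxa is assigned to their lowest shared ancestor. A minimal sketch of that idea, using a toy child-to-parent map rather than the real NCBI taxonomy:

```python
# Sketch of the LCA placement idea behind MEGAN's taxonomic analysis.
# The taxonomy here is a toy child -> parent dict, not NCBI's.
def path_to_root(taxon, parent):
    """Return the taxon and all its ancestors, lowest first."""
    path = [taxon]
    while taxon in parent:
        taxon = parent[taxon]
        path.append(taxon)
    return path

def lca(taxa, parent):
    """Place a read with hits to `taxa` on their lowest common ancestor."""
    common = set(path_to_root(taxa[0], parent))
    for t in taxa[1:]:
        common &= set(path_to_root(t, parent))
    # walk up from the first hit; the first ancestor shared by all
    # hits is the lowest common ancestor
    for node in path_to_root(taxa[0], parent):
        if node in common:
            return node
```

A read hitting only one species stays on that species; a read hitting two sister species rises to their shared parent, which is why conserved reads end up on higher taxonomic ranks.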

  14. Simulation model for port shunting yards

    NASA Astrophysics Data System (ADS)

    Rusca, A.; Popa, M.; Rosca, E.; Rosca, M.; Dragu, V.; Rusca, F.

    2016-08-01

Sea ports are important nodes in the supply chain, joining two high-capacity transport modes: rail and maritime transport. The huge cargo flows transiting a port require high-capacity constructions and installations such as berths, large-capacity cranes, and shunting yards. However, the specificity of port shunting yards raises several problems, such as: limited access, since these are terminus stations of the rail network; the input-output of large transit flows of cargo relative to the scarcity of ship departures/arrivals; and limited land availability for implementing solutions to serve these flows. It is necessary to identify technological solutions that address these problems. The paper proposes a simulation model, developed with the ARENA computer simulation software, suitable for shunting yards which serve sea ports with access to the rail network. It investigates the principal aspects of shunting yards and adequate measures to increase their transit capacity. The operating capacity of the shunting-yard sub-system is assessed taking into consideration the required operating standards, and measures of performance of the railway station (e.g., waiting time for freight wagons, number of railway lines in the station, storage area) are computed. The conclusions and results drawn from the simulation help transport and logistics specialists to test proposals for improving port management.
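The kind of performance measure such a simulation computes can be illustrated with a deliberately tiny deterministic sketch: mean wagon waiting time at a single shunting line with fixed interarrival and service times. The ARENA model in the paper is far richer (multiple lines, stochastic ship arrivals, storage areas); this shows only the underlying queueing idea.

```python
# Toy deterministic sketch of one shunting-yard performance measure:
# mean waiting time of wagons arriving at a single shunting line.
# Interarrival and service times are fixed here; the real model is
# stochastic and multi-resource.
def mean_wait(interarrival, service, n_wagons):
    free_at, total_wait = 0.0, 0.0
    for i in range(n_wagons):
        arrival = i * interarrival
        start = max(arrival, free_at)   # wait if the line is busy
        total_wait += start - arrival
        free_at = start + service       # line occupied until done
    return total_wait / n_wagons
```

When service time exceeds the interarrival time, waits grow linearly with each arriving wagon, which is exactly the capacity problem the shunting-yard study quantifies.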

  15. GPU-powered model analysis with PySB/cupSODA.

    PubMed

    Harris, Leonard A; Nobile, Marco S; Pino, James C; Lubbock, Alexander L R; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo; Lopez, Carlos F

    2017-11-01

    A major barrier to the practical utilization of large, complex models of biochemical systems is the lack of open-source computational tools to evaluate model behaviors over high-dimensional parameter spaces. This is due to the high computational expense of performing thousands to millions of model simulations required for statistical analysis. To address this need, we have implemented a user-friendly interface between cupSODA, a GPU-powered kinetic simulator, and PySB, a Python-based modeling and simulation framework. For three example models of varying size, we show that for large numbers of simulations PySB/cupSODA achieves order-of-magnitude speedups relative to a CPU-based ordinary differential equation integrator. The PySB/cupSODA interface has been integrated into the PySB modeling framework (version 1.4.0), which can be installed from the Python Package Index (PyPI) using a Python package manager such as pip. cupSODA source code and precompiled binaries (Linux, Mac OS/X, Windows) are available at github.com/aresio/cupSODA (requires an Nvidia GPU; developer.nvidia.com/cuda-gpus). Additional information about PySB is available at pysb.org. paolo.cazzaniga@unibg.it or c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
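The computational bottleneck the paper addresses, thousands of independent ODE integrations with one per parameter set, can be sketched with a toy forward-Euler integrator of dy/dt = -k*y. The point is that each integration is independent of the others, which is what cupSODA exploits by running them in parallel on the GPU (the real tool uses a far more capable stiff/non-stiff integrator, not Euler, and real PySB models are networks of reactions, not a single decay equation).

```python
# Toy sketch of why parameter scans are expensive: one model, many
# parameter sets, one full numerical integration each. cupSODA's gain
# comes from running these independent integrations in parallel on a
# GPU; this sequential loop is the CPU baseline being compared against.
def simulate_decay(k, y0=1.0, dt=0.01, steps=1000):
    # forward-Euler integration of dy/dt = -k * y
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)
    return y

def parameter_scan(ks):
    # each call is independent: embarrassingly parallel work
    return [simulate_decay(k) for k in ks]
```

Multiplying this loop by a realistic model size and tens of thousands of parameter sets gives the "thousands to millions of simulations" workload the abstract describes.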

  16. BMPOS: a Flexible and User-Friendly Tool Sets for Microbiome Studies.

    PubMed

    Pylro, Victor S; Morais, Daniel K; de Oliveira, Francislon S; Dos Santos, Fausto G; Lemos, Leandro N; Oliveira, Guilherme; Roesch, Luiz F W

    2016-08-01

Recent advances in science and technology are leading to a revision and re-orientation of methodologies, addressing old and current issues from a new perspective. Advances in next-generation sequencing (NGS) are allowing comparative analysis of the abundance and diversity of whole microbial communities, generating a large amount of data and findings at a systems level. The current limitation for biologists has been the increasing demand for computational power and training required for processing NGS data. Here, we describe the deployment of the Brazilian Microbiome Project Operating System (BMPOS), a flexible and user-friendly Linux distribution dedicated to microbiome studies. The Brazilian Microbiome Project (BMP) has developed data analysis pipelines for metagenomic studies (phylogenetic marker genes), conducted using the two main high-throughput sequencing platforms (Ion Torrent and Illumina MiSeq). The BMPOS is freely available and includes all the bioinformatics packages and databases required to run all the pipelines suggested by the BMP team. The BMPOS may be used as a bootable live USB stick or installed on any computer with at least a 1 GHz CPU and 512 MB RAM, independent of the operating system previously installed. The BMPOS has proved to be effective for sequence processing, sequence clustering, alignment, taxonomic annotation, statistical analysis, and plotting of metagenomic data. The BMPOS has been used during several metagenomic analysis courses, proving valuable as a training tool and an excellent starting point for anyone interested in performing metagenomic studies. The BMPOS and its documentation are available at http://www.brmicrobiome.org .

  17. BIRCH: a user-oriented, locally-customizable, bioinformatics system.

    PubMed

    Fristensky, Brian

    2007-02-09

    Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.

  18. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    PubMed Central

    Fristensky, Brian

    2007-01-01

    Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351

  19. M/A-COM linkabit eastern operations

    NASA Astrophysics Data System (ADS)

    Mills, D. L.; Avramovic, Z.

    1983-03-01

    This first Quarterly Project Report on LINKABIT's contribution to the Defense Advanced Research Projects Agency (DARPA) Internet Program covers the period from 22 December 1982 through 21 March 1983. LINKABIT's support of the Internet Program is concentrated in the areas of protocol design, implementation, testing, and evaluation. In addition, LINKABIT staff are providing integration and support services for certain computer systems to be installed at DARPA sites in Washington, D.C., and Stuttgart, West Germany. During the period covered by this report, LINKABIT organized the project activities and established staff responsibilities. Several computers and peripheral devices were made available from Government sources for use in protocol development and network testing. Considerable time was devoted to installing this equipment, integrating the software, and testing it with the Internet system.

  20. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  1. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    PubMed

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
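JobCenter's client-driven design can be sketched with an in-process toy: workers pull jobs from a shared queue rather than having a server push work to them, so an idle worker simply asks for more and load balancing falls out naturally. This sketch uses Python's standard queue and threading modules, not JobCenter's actual Java client/server protocol.

```python
# In-process sketch of a client-driven (pull-based) job queue, the
# design JobCenter uses so workers can run behind firewalls: workers
# request jobs; nothing is pushed to them. Not JobCenter's protocol.
import queue
import threading

def worker(jobs, results):
    while True:
        try:
            job = jobs.get_nowait()   # client-driven: the worker asks
        except queue.Empty:
            return                    # no work left; worker exits
        results.append(job())         # run the job, record its result
        jobs.task_done()

def run_all(funcs, n_workers=4):
    jobs, results = queue.Queue(), []
    for f in funcs:
        jobs.put(f)
    threads = [threading.Thread(target=worker, args=(jobs, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because a fast worker simply pulls the next job sooner, no central scheduler has to know how fast each node is; that is the "inherent load balancing" the abstract describes.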

  2. Experiences running NASTRAN on the Microvax 2 computer

    NASA Technical Reports Server (NTRS)

    Butler, Thomas G.; Mitchell, Reginald S.

    1987-01-01

    The MicroVAX operates NASTRAN so well that the only detectable difference in its operation compared to an 11/780 VAX is in the execution time. On the modest installation described here, the engineer has all of the tools he needs to do an excellent job of analysis. System configuration decisions, system sizing, preparation of the system disk, definition of user quotas, installation, monitoring of system errors, and operation policies are discussed.

  3. Climate@Home: Crowdsourcing Climate Change Research

    NASA Astrophysics Data System (ADS)

    Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.

    2011-12-01

Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building super-computers that are capable of running advanced climate models, which help scientists understand climate change mechanisms and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication mediums for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connecting the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only super-computers could handle such a large processing load. By orchestrating massive amounts of personal computers to perform atomized data processing tasks, investments in new super-computers, energy consumed by super-computers, and carbon release from super-computers are reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the Climate@Home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include the citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined at the server side.
Climate scientists configure computer model parameters through the portal user interface. After model configuration, scientists then launch the computing task. Next, data is atomized and distributed to computing engines running on citizen participants' computers. Scientists receive notifications on the completion of computing tasks and examine modeling results via the portal's visualization modules. Computing tasks, computing resources, and participants are managed by project managers via portal tools. A portal prototype has been built for proof of concept. Three forums have been set up for different groups of users to share information on the science, technology, and educational outreach aspects. A Facebook account has been set up to distribute messages via the most popular social networking platform. New threads are synchronized from the forums to Facebook. A mapping tool displays the geographic locations of the participants and the status of tasks on each client node. A group of users has been invited to test functions such as forums, blogs, and computing resource monitoring.

  4. REMOVAL OF TANK AND SEWER SEDIMENT BY GATE FLUSHING: COMPUTATIONAL FLUID DYNAMICS MODEL STUDIES

    EPA Science Inventory

    This presentation will discuss the application of a computational fluid dynamics 3D flow model to simulate gate flushing for removing tank/sewer sediments. The physical model of the flushing device was a tank fabricated and installed at the head-end of a hydraulic flume. The fl...

  5. Simulating forest pictures by impact printers

    Treesearch

    Elliot L. Amidon; E. Joyce Dye

    1978-01-01

    Two mechanical devices that are mainly used to print computer output in text form can simulate pictures of terrain and forests. The line printer, which is available for batch processing at many computer installations, can approximate halftones by using overstruck characters to produce successively larger "dots." The printer/plotter, which is normally used as...

  6. Explorations in Statistics: The Analysis of Ratios and Normalized Data

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2013-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of "Explorations in Statistics" explores the analysis of ratios and normalized--or standardized--data. As researchers, we compute a ratio--a numerator divided by a denominator--to compute a…

  7. 48 CFR 9905.506-60 - Illustrations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Illustrations. 9905.506-60...

  8. Law School Experience in Pervasive Electronic Communications.

    ERIC Educational Resources Information Center

    Shiels, Rosemary

    1994-01-01

    Installation of a schoolwide local area computer network at Chicago-Kent College of Law (Illinois) is described. Uses of electronic mail within a course on computer law are described. Additional social, administrative, and research uses of electronic mail are noted as are positive effects and emerging problems (e.g., burdens on recipients and…

  9. International Futures (IFs): A Global Issues Simulation for Teaching and Research.

    ERIC Educational Resources Information Center

    Hughes, Barry B.

    This paper describes the International Futures (IFs) computer assisted simulation game for use with undergraduates. Written in Standard Fortran IV, the model currently runs on mainframe or mini computers, but has not been adapted for micros. It has been successfully installed on Harris, Burroughs, Telefunken, CDC, Univac, IBM, and Prime machines.…

  10. Flexstab on the IBM 360

    NASA Technical Reports Server (NTRS)

    Pyle, R. S.; Sykora, R. G.; Denman, S. C.

    1976-01-01

    FLEXSTAB, an array of computer programs developed on CDC equipment, has been converted to operate on the IBM 360 computation system. Instructions for installing, validating, and operating FLEXSTAB on the IBM 360 are included. Hardware requirements are itemized and supplemental materials describe JCL sequences, the CDC to IBM conversion, the input output subprograms, and the interprogram data flow.

  11. Critical Success Factors for E-Learning and Institutional Change--Some Organisational Perspectives on Campus-Wide E-Learning

    ERIC Educational Resources Information Center

    White, Su

    2007-01-01

    Computer technology has been harnessed for education in UK universities ever since the first computers for research were installed at 10 selected sites in 1957. Subsequently, real costs have fallen dramatically. Processing power has increased; network and communications infrastructure has proliferated, and information has become unimaginably…

  12. Installation and management of the SPS and LEP control system computers

    NASA Astrophysics Data System (ADS)

    Bland, Alastair

    1994-12-01

    Control of the CERN SPS and LEP accelerators and service equipment on the two CERN main sites is performed via workstations, file servers, Process Control Assemblies (PCAs) and Device Stub Controllers (DSCs). This paper describes the methods and tools that have been developed to manage the file servers, PCAs and DSCs since the LEP startup in 1989. There are five operational DECstation 5000s used as file servers and boot servers for the PCAs and DSCs. The PCAs consist of 90 SCO Xenix 386 PCs, 40 LynxOS 486 PCs and more than 40 older NORD 100s. The DSCs consist of 90 OS-968030 VME crates and 10 LynxOS 68030 VME crates. In addition there are over 100 development systems. The controls group is responsible for installing the computers, starting all the user processes and ensuring that the computers and the processes run correctly. The operators in the SPS/LEP control room and the Services control room have a Motif-based X window program which gives them, in real time, the state of all the computers and allows them to solve problems or reboot them.

  13. The road to business process improvement--can you get there from here?

    PubMed

    Gilberto, P A

    1995-11-01

    Historically, "improvements" within the organization have been frequently attained through automation by building and installing computer systems. Material requirements planning (MRP), manufacturing resource planning II (MRP II), just-in-time (JIT), computer aided design (CAD), computer aided manufacturing (CAM), electronic data interchange (EDI), and various other TLAs (three-letter acronyms) have been used as the methods to attain business objectives. But most companies have found that installing computer software, cleaning up their data, and providing every employee with training on how to best use the systems have not resulted in the level of business improvements needed. The software systems have simply made management around the problems easier but did little to solve the basic problems. The missing element in the efforts to improve the performance of the organization has been a shift in focus from individual department improvements to cross-organizational business process improvements. This article describes how the Electric Boat Division of General Dynamics Corporation, in conjunction with the Data Systems Division, moved its focus from one of vertical organizational processes to horizontal business processes. In other words, how we got rid of the dinosaurs.

  14. Feasibility study for the implementation of NASTRAN on the ILLIAC 4 parallel processor

    NASA Technical Reports Server (NTRS)

    Field, E. I.

    1975-01-01

    The ILLIAC IV, a fourth generation multiprocessor using parallel processing hardware concepts, is operational at Moffett Field, California. Its capability for matrix manipulation makes the ILLIAC well suited to performing structural analyses using the finite element displacement method. The feasibility of modifying the NASTRAN (NASA structural analysis) computer program to make effective use of the ILLIAC IV was investigated. The characteristics of the ILLIAC and of the ARPANET, a telecommunications network which spans the continent and makes the ILLIAC accessible to nearly all major industrial centers in the United States, are summarized. Two distinct approaches are studied: retaining NASTRAN as it now operates on many of the host computers of the ARPANET to process the input and output while using the ILLIAC only for the major computational tasks, and installing NASTRAN to operate entirely in the ILLIAC environment. Though both alternatives offer similar and significant increases in computational speed over modern third generation processors, the full installation of NASTRAN on the ILLIAC is recommended. Specifications are presented for performing that task, with corresponding manpower estimates and schedules.
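
    The suitability claim above, that the displacement method's matrix work parallelizes well, can be illustrated by row-partitioning a matrix-vector product across processing elements (a plain-Python sketch, not ILLIAC code):

```python
# Row-partitioned matrix-vector multiply: each of n_pe "processing elements"
# handles a contiguous block of rows independently, which is why matrix-heavy
# finite element solvers map well onto a parallel machine like the ILLIAC IV.

def matvec_partitioned(A, x, n_pe=4):
    n = len(A)
    y = [0] * n
    chunk = (n + n_pe - 1) // n_pe          # rows per processing element
    for pe in range(n_pe):                  # each iteration could run on one PE
        for i in range(pe * chunk, min((pe + 1) * chunk, n)):
            y[i] = sum(a * b for a, b in zip(A[i], x))
    return y
```

    Because the row blocks share no writes, the outer loop needs no synchronization beyond a final join.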

  15. 46 CFR 111.15-25 - Overload and reverse current protection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation... battery conductor, except conductors of engine cranking batteries and batteries with a nominal potential of 6 volts or less. For large storage battery installations, the overcurrent protective devices must...
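
    Read as a decision rule, the requirement quoted above can be sketched as a small predicate (an illustration of one reading of the rule, not regulatory guidance):

```python
# One reading of 46 CFR 111.15-25: every ungrounded battery conductor needs
# an overcurrent protective device, except conductors of engine cranking
# batteries and batteries with a nominal potential of 6 volts or less.

def needs_overcurrent_protection(nominal_volts, is_engine_cranking):
    return (not is_engine_cranking) and nominal_volts > 6
```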

  16. 46 CFR 111.15-25 - Overload and reverse current protection.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation... battery conductor, except conductors of engine cranking batteries and batteries with a nominal potential of 6 volts or less. For large storage battery installations, the overcurrent protective devices must...

  17. 46 CFR 111.15-25 - Overload and reverse current protection.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation... battery conductor, except conductors of engine cranking batteries and batteries with a nominal potential of 6 volts or less. For large storage battery installations, the overcurrent protective devices must...

  18. 46 CFR 111.15-25 - Overload and reverse current protection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation... battery conductor, except conductors of engine cranking batteries and batteries with a nominal potential of 6 volts or less. For large storage battery installations, the overcurrent protective devices must...

  19. 46 CFR 111.15-25 - Overload and reverse current protection.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Storage Batteries and Battery Chargers: Construction and Installation... battery conductor, except conductors of engine cranking batteries and batteries with a nominal potential of 6 volts or less. For large storage battery installations, the overcurrent protective devices must...

  20. Structured Problem Solving and the Basic Graphic Methods within a Total Quality Leadership Setting: Case Study

    DTIC Science & Technology

    1992-02-01

    develops and maintains computer programs for the Department of the Navy. It provides life cycle support for over 50 computer programs installed at over...the computer programs. Table 4 presents a list of possible product or output measures of functionality for ACDS Block 0 programs. Examples of output...were identified as important "causes" of process performance. Functionality of the computer programs was the result or "effect" of the combination of

  1. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  2. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  3. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  4. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  5. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  6. Potential climatic impacts and reliability of large-scale offshore wind farms

    NASA Astrophysics Data System (ADS)

    Wang, Chien; Prinn, Ronald G.

    2011-04-01

    The vast availability of wind power has fueled substantial interest in this renewable energy source as a potential near-zero greenhouse gas emission technology for meeting future world energy needs while addressing the climate change issue. However, in order to provide even a fraction of the estimated future energy needs, a large-scale deployment of wind turbines (several million) is required. The consequent environmental impacts, and the inherent reliability of such a large-scale usage of intermittent wind power, would have to be carefully assessed, in addition to the need to lower the currently high unit wind power costs. Our previous study (Wang and Prinn 2010 Atmos. Chem. Phys. 10 2053) using a three-dimensional climate model suggested that a large deployment of wind turbines over land to meet about 10% of predicted world energy needs in 2100 could lead to a significant temperature increase in the lower atmosphere over the installed regions. A global-scale perturbation to the general circulation patterns as well as to the cloud and precipitation distribution was also predicted. In the study reported here, we conducted a set of six additional model simulations using an improved climate model to further address the potential environmental and intermittency issues of large-scale deployment of offshore wind turbines for differing installation areas and spatial densities. In contrast to the previous land installation results, the offshore wind turbine installations are found to cause a surface cooling over the installed offshore regions. This cooling is due principally to the enhanced latent heat flux from the sea surface to the lower atmosphere, driven by an increase in turbulent mixing caused by the wind turbines which was not entirely offset by the concurrent reduction of mean wind kinetic energy.
We found that the perturbation of the large-scale deployment of offshore wind turbines to the global climate is relatively small compared to the case of land-based installations. However, the intermittency caused by the significant seasonal wind variations over several major offshore sites is substantial, and demands further options to ensure the reliability of large-scale offshore wind power. The method that we used to simulate the offshore wind turbine effect on the lower atmosphere involved simply increasing the ocean surface drag coefficient. While this method is consistent with several detailed fine-scale simulations of wind turbines, it still needs further study to ensure its validity. New field observations of actual wind turbine arrays are definitely required to provide ultimate validation of the model predictions presented here.
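
    The mechanism described, stronger turbulent exchange at the sea surface raising the latent heat flux, can be sketched with a standard bulk formula in which the exchange coefficient is simply increased to mimic the turbines. All numeric values below are illustrative, not values from the cited model:

```python
# Bulk latent heat flux LH = rho_air * L_v * C_E * U * (q_sea - q_air).
# Representing offshore turbines by a larger exchange coefficient C_E
# increases LH, cooling the surface, consistent with the result above.

def latent_heat_flux(c_e, wind_speed=8.0, q_sea=0.018, q_air=0.012,
                     rho_air=1.2, l_v=2.5e6):
    """Flux in W m^-2; all inputs are illustrative values."""
    return rho_air * l_v * c_e * wind_speed * (q_sea - q_air)

baseline = latent_heat_flux(c_e=1.3e-3)       # typical open-ocean coefficient
with_turbines = latent_heat_flux(c_e=2.6e-3)  # coefficient doubled
```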

  7. Design and Development of a Deep Acoustic Lining for the 40-by 80-Foot Wind Tunnel Test Section

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Schmitz, Fredric H.; Allen, Christopher S.; Jaeger, Stephen M.; Sacco, Joe N.; Mosher, Marianne; Hayes, Julie A.

    2002-01-01

    The work described in this report has made effective use of design teams to build a state-of-the-art anechoic wind-tunnel facility. Many potential design solutions were evaluated using engineering analysis and computational tools. Design alternatives were then evaluated using specially developed testing techniques. Large-scale coupon testing was then performed to develop confidence that the preferred design would meet the acoustic, aerodynamic, and structural objectives of the project. Finally, designs were frozen and the final product was installed in the wind tunnel. The result of this technically ambitious project has been the creation of a unique acoustic wind tunnel. Its large test section (39 ft x 79 ft x 80 ft), potentially near-anechoic environment, and medium subsonic speed capability (M = 0.45) will support a full range of aeroacoustic testing, from rotorcraft and other vertical takeoff and landing aircraft to the take-off/landing configurations of both subsonic and supersonic transports.

  8. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel.

    PubMed

    Yuan, Liming; Smith, Alex C

    2015-05-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect.

  9. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel

    PubMed Central

    Yuan, Liming; Smith, Alex C.

    2015-01-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect. PMID:26190905

  10. Smart home in a box: usability study for a large scale self-installation of smart home technologies.

    PubMed

    Hu, Yang; Tilke, Dominique; Adams, Taylor; Crandall, Aaron S; Cook, Diane J; Schmitter-Edgecombe, Maureen

    2016-07-01

    This study evaluates the ability of users to self-install a smart home in a box (SHiB) intended for use by a senior population. SHiB is a ubiquitous system developed by the Washington State University Center for Advanced Studies in Adaptive Systems (CASAS). The 13 participants in this study are from the greater Palouse region of Washington State and have an average age of 69.23 years. The SHiB package, which included several different types of components to collect and transmit sensor data, was given to participants to self-install. After installation of the SHiB, the participants were visited by researchers for a check of the installation. The researchers evaluated how well the sensors were installed and asked the resident questions about the installation process to help improve the SHiB design. The results indicate strengths and weaknesses of the SHiB design: indoor motion tracking sensors were installed with a high success rate, while success rates were low for door sensors and for setting up the Internet server.

  11. Smart home in a box: usability study for a large scale self-installation of smart home technologies

    PubMed Central

    Hu, Yang; Tilke, Dominique; Adams, Taylor; Crandall, Aaron S.; Schmitter-Edgecombe, Maureen

    2017-01-01

    This study evaluates the ability of users to self-install a smart home in a box (SHiB) intended for use by a senior population. SHiB is a ubiquitous system developed by the Washington State University Center for Advanced Studies in Adaptive Systems (CASAS). The 13 participants in this study are from the greater Palouse region of Washington State and have an average age of 69.23 years. The SHiB package, which included several different types of components to collect and transmit sensor data, was given to participants to self-install. After installation of the SHiB, the participants were visited by researchers for a check of the installation. The researchers evaluated how well the sensors were installed and asked the resident questions about the installation process to help improve the SHiB design. The results indicate strengths and weaknesses of the SHiB design: indoor motion tracking sensors were installed with a high success rate, while success rates were low for door sensors and for setting up the Internet server. PMID:28936390

  12. CernVM WebAPI - Controlling Virtual Machines from the Web

    NASA Astrophysics Data System (ADS)

    Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.

    2015-12-01

    Lately, there is a trend in scientific projects to look for computing resources in the volunteer community. In addition, to reduce the development effort required to port the scientific software stack to all known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately, their use further complicates software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up new application opportunities. It offers a very simple API for setting up, controlling and interfacing with a VM instance on the user's computer, while at the same time relieving the user of the burden of downloading, installing and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prevented by a per-domain PKI validation mechanism. In this contribution we give an overview of this new technology, discuss its security features and examine some test cases where it is already in use.

  13. UKIRT fast guide system improvements

    NASA Astrophysics Data System (ADS)

    Balius, Al; Rees, Nicholas P.

    1997-09-01

    The United Kingdom Infra-Red Telescope (UKIRT) has recently undergone the first major upgrade program since its construction. One part of the upgrade program was an adaptive tip-tilt secondary mirror whose control loop is closed with a CCD system, collectively called the fast guide system. The installation of the new secondary and associated systems was carried out in the first half of 1996. Initial testing of the fast guide system has shown great improvement in guide accuracy. The initial installation included a fixed integration time CCD. In the first part of 1997 an integration time controller based on computed guide star luminosity was implemented in the fast guide system. Also, a Kalman-type estimator was installed in the image tracking loop, based on a dynamic model and knowledge of the statistical properties of the guide star position error measurement as a function of computed guide star magnitude and CCD integration time. The new configuration was tested in terms of improved guide performance and graceful degradation when tracking faint guide stars. This paper describes the modified fast guide system configuration and reports the results of performance tests.
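
    A scalar Kalman update of the general kind described, in which the measurement noise would grow for fainter guide stars, can be sketched as follows (a toy model, not the UKIRT implementation):

```python
# One Kalman step for a random-walk model of guide-star image position:
# predict, compute the gain, then blend the prediction with the CCD centroid
# measurement z. A fainter star means larger measurement variance r, so the
# filter trusts the measurement less -- the "graceful degradation" above.

def kalman_step(x, p, z, q=0.01, r=0.25):
    p_pred = p + q                  # process noise q inflates the variance
    k = p_pred / (p_pred + r)       # Kalman gain in (0, 1)
    x_new = x + k * (z - x)         # corrected position estimate
    p_new = (1 - k) * p_pred        # reduced posterior variance
    return x_new, p_new

x, p = 0.0, 1.0
for z in (0.4, 0.5, 0.45, 0.55):    # noisy centroid measurements
    x, p = kalman_step(x, p, z)
```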

  14. Near real time water resources data for river basin management

    NASA Technical Reports Server (NTRS)

    Paulson, R. W. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Twenty Data Collection Platforms (DCP) are being field installed on USGS water resources stations in the Delaware River Basin. DCP's have been successfully installed and are operating well on five stream gaging stations, three observation wells, and one water quality monitor in the basin. DCP's have been installed at nine additional water quality monitors, and work is progressing on interfacing the platforms to the monitors. ERTS-related water resources data from the platforms are being provided in near real time, by the Goddard Space Flight Center to the Pennsylvania district, Water Resources Division, U.S. Geological Survey. On a daily basis, the data are computer processed by the Survey and provided to the Delaware River Basin Commission. Each daily summary contains data that were relayed during 4 or 5 of the 15 orbits made by ERTS-1 during the previous day. Water resources parameters relayed by the platforms include dissolved oxygen concentrations, temperature, pH, specific conductance, well level, and stream gage height, which is used to compute stream flow for the daily summary.
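
    The final step mentioned, computing stream flow from gage height, is conventionally done with a power-law rating curve; a sketch with illustrative coefficients (not those of any Delaware River Basin station):

```python
# Stage-discharge rating curve Q = a * (h - h0)^b: gage height h above the
# zero-flow stage h0 is converted to discharge. Coefficients a, h0, b are
# fitted per station; the values below are made up for illustration.

def discharge_cfs(gage_height_ft, a=120.0, h0=1.0, b=1.8):
    return a * max(gage_height_ft - h0, 0.0) ** b
```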

  15. TU-AB-BRC-12: Optimized Parallel MonteCarlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, will allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8 to 10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10 to 15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.
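
    The parallel-job arithmetic behind the run times quoted above can be sketched as splitting the total particle histories over the cluster jobs, each with its own random seed (a generic sketch, not BEAMnrc's actual job manager):

```python
# Divide n_histories over n_jobs as evenly as possible and pair each share
# with a distinct seed, so parallel Monte Carlo runs are independent and the
# per-job counts recombine to the requested total.

def split_histories(n_histories, n_jobs, base_seed=1000):
    per_job, extra = divmod(n_histories, n_jobs)
    return [(per_job + (1 if j < extra else 0), base_seed + j)
            for j in range(n_jobs)]

jobs = split_histories(10**9, 100)   # e.g. 10^9 histories over 100 jobs
```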

  16. C-mii: a tool for plant miRNA and target identification.

    PubMed

    Numnark, Somrak; Mhuantong, Wuttichai; Ingsriswang, Supawadee; Wichadakul, Duangdao

    2012-01-01

    MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and the understanding of input and output of the connected programs to reproduce. To overcome these limitations and programming complexities, we proposed C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. 
C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets. With the provided functionalities, it can help accelerate the study of plant miRNAs and targets, especially for small and medium plant molecular labs without bioinformaticians. C-mii is freely available at http://www.biotec.or.th/isl/c-mii for both Windows and Ubuntu Linux platforms.
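
    At the heart of any miRNA target identification step is a complementarity test; the following is a deliberately simplified sketch of seed-region matching (not C-mii's actual criteria, which add alignment scoring, folding, and GO annotation):

```python
# Toy seed-match test: the miRNA "seed" (bases 2-8) must pair, antiparallel
# and Watson-Crick, with the 3' end of a candidate mRNA site. Real tools use
# scored alignments and thermodynamic folding on top of this idea.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_matches(mirna, site):
    seed = mirna[1:8]            # positions 2-8, 0-indexed slice
    paired = site[::-1][:7]      # target read antiparallel to the miRNA
    return all(COMPLEMENT[m] == t for m, t in zip(seed, paired))
```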

  17. C-mii: a tool for plant miRNA and target identification

    PubMed Central

    2012-01-01

    Background MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and the understanding of input and output of the connected programs to reproduce. Results To overcome these limitations and programming complexities, we proposed C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. 
Conclusions C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets. With the provided functionalities, it can help accelerate the study of plant miRNAs and targets, especially for small and medium plant molecular labs without bioinformaticians. C-mii is freely available at http://www.biotec.or.th/isl/c-mii for both Windows and Ubuntu Linux platforms. PMID:23281648

  18. Disclaimers

    MedlinePlus

    ... Web sites you visited or by third party software installed on your computer. The National Library of Medicine does not endorse or recommend products or services for which you may view a pop-up ...

  19. Computational Analysis of Static and Dynamic Behaviour of Magnetic Suspensions and Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P. (Editor); Groom, Nelson J.

    1996-01-01

    Static modelling of magnetic bearings is often carried out using magnetic circuit theory. This theory cannot easily include nonlinear effects such as magnetic saturation or the fringing of flux in air-gaps. Modern computational tools are able to accurately model complex magnetic bearing geometries, provided some care is exercised. In magnetic suspension applications, the magnetic fields are highly three-dimensional and require computational tools for the solution of most problems of interest. The dynamics of a magnetic bearing or magnetic suspension system can be strongly affected by eddy currents. Eddy currents are present whenever a time-varying magnetic flux penetrates a conducting medium. The direction of flow of the eddy current is such as to reduce the rate-of-change of flux. Analytic solutions for eddy currents are available for some simplified geometries, but complex geometries must be solved by computation. It is only in recent years that such computations have been considered truly practical. At NASA Langley Research Center, state-of-the-art finite-element computer codes, 'OPERA', 'TOSCA' and 'ELEKTRA', have recently been installed and applied to magnetostatic and eddy current problems. This paper reviews results of theoretical analyses which suggest general forms of mathematical models for eddy currents, together with computational results. A simplified circuit-based eddy current model that has been proposed appears to predict the observed trends in the case of large eddy current circuits in conducting non-magnetic material. A much more difficult case is seen to be that of eddy currents in magnetic material, or in non-magnetic material at higher frequencies, due to the lower skin depths. Even here, the dissipative behavior has been shown to yield at least somewhat to linear modelling. 
Magnetostatic and eddy current computations have been carried out relating to the Annular Suspension and Pointing System, a prototype for a space payload pointing and vibration isolation system, where the magnetic actuator geometry resembles a conventional magnetic bearing. Magnetostatic computations provide estimates of flux density within airgaps and the iron core material, fringing at the pole faces and the net force generated. Eddy current computations provide coil inductance, power dissipation and the phase lag in the magnetic field, all as functions of excitation frequency. Here, the dynamics of the magnetic bearings, notably the rise time of forces with changing currents, are found to be very strongly affected by eddy currents, even at quite low frequencies. Results are also compared to experimental measurements of the performance of a large-gap magnetic suspension system, the Large Angle Magnetic Suspension Test Fixture (LAMSTF). Eddy current effects are again shown to significantly affect the dynamics of the system. Some consideration is given to the ease and accuracy of computation, specifically relating to OPERA/TOSCA/ELEKTRA.
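
    The skin-depth remark above ("lower skin depths" at higher frequency or in magnetic material) follows from the standard relation delta = sqrt(2 / (mu * sigma * omega)); a worked sketch for copper:

```python
import math

MU0 = 4e-7 * math.pi                  # vacuum permeability, H/m

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """delta = sqrt(2 / (mu0 * mu_r * sigma * omega)), in metres."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (MU0 * mu_r * sigma * omega))

d_50hz = skin_depth(50.0, 5.8e7)      # copper at 50 Hz: roughly 9 mm
d_5khz = skin_depth(5000.0, 5.8e7)    # 100x the frequency: 10x smaller
```

    Raising mu_r (magnetic material) shrinks the depth the same way raising the frequency does, which is why eddy current effects in iron are the harder modelling case.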

  20. Federal Aviation Administration Aviation System Capital Investment Plan 1993

    DTIC Science & Technology

    1993-12-01

    Facilitates full use of terminal airspace capacity. Increases safety and efficiency. ... Optimizes sequencing and ... installation of tower control computer complexes (TCCCs) in selected airport traffic control towers. ... AAS software for terminal and en route ATC ... provides economical radar service at airports with air traffic densities high enough to ... (install 40 ASR-9s at ASR-4/5/6 sites).

  1. Aerodynamic stability analysis of NASA J85-13/planar pressure pulse generator installation

    NASA Technical Reports Server (NTRS)

    Chung, K.; Hosny, W. M.; Steenken, W. G.

    1980-01-01

    A digital computer simulation model for the J85-13/Planar Pressure Pulse Generator (P3G) test installation was developed by modifying an existing General Electric compression system model. This modification included the incorporation of a novel method for describing the unsteady blade lift force. This approach significantly enhanced the capability of the model to handle unsteady flows. In addition, the frequency response characteristics of the J85-13/P3G test installation were analyzed in support of selecting instrumentation locations to avoid standing wave nodes within the test apparatus and, thus, low signal levels. The feasibility of employing an explicit analytical expression for surge prediction was also studied.

  2. Residential solar-heating system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Complete residential solar-heating and hot-water system, when installed in highly-insulated energy-saver home, can supply large percentage of total energy demand for space heating and domestic hot water. System which uses water-heating energy storage can be scaled to meet requirements of building in which it is installed.

  3. Survey of Computer Facilities in Minnesota and North Dakota.

    ERIC Educational Resources Information Center

    MacGregor, Donald

    In order to attain a better understanding of the data processing manpower needs of business and industry, a survey instrument was designed and mailed to 570 known and possible computer installations in the Minnesota/North Dakota area. The survey was conducted during the spring of 1975, and concentrated on the kinds of equipment and computer…

  4. Flexible and Secure Computer-Based Assessment Using a Single Zip Disk

    ERIC Educational Resources Information Center

    Ko, C. C.; Cheng, C. D.

    2008-01-01

    Electronic examination systems, which include Internet-based system, require extremely complicated installation, configuration and maintenance of software as well as hardware. In this paper, we present the design and development of a flexible, easy-to-use and secure examination system (e-Test), in which any commonly used computer can be used as a…

  5. Tangential scanning of hardwood logs: developing an industrial computer tomography scanner

    Treesearch

    Nand K. Gupta; Daniel L. Schmoldt; Bruce Isaacson

    1999-01-01

    It is generally believed that noninvasive scanning of hardwood logs such as computer tomography (CT) scanning prior to initial breakdown will greatly improve the processing of logs into lumber. This belief, however, has not translated into rapid development and widespread installation of industrial CT scanners for log processing. The roadblock has been more operational...

  6. Documentation of the ISA Micro Computed Tomography System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, William D.; Smith, Jerel A.

    2013-12-18

    This document is intended to provide information on the ISA Micro Computed Tomography (MicroCT) system that will be installed in Yavne, Israel. X-ray source, detector, and motion control hardware are specified, as well as specimen platforms, containers, and reference material types. Most of the details on the system are derived from References 1 and 2.

  7. Space lab system analysis

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Rives, T. B.

    1987-01-01

    An analysis of the HOSC Generic Peripheral processing system was conducted. The results are summarized, and they indicate that the maximum delay in performing screen change requests should be less than 2.5 sec, occurring for a slow VAX host-to-video-screen I/O rate of 50 KBps. This delay is due to the average I/O rate from the video terminals to their host computer. The software structure of the main computers and the host computers will have a greater impact on screen change or refresh response times. The HOSC data system model was updated by a newly coded PASCAL-based simulation program, which was installed on the HOSC VAX system. This model is described and documented. Suggestions are offered to fine-tune the performance of the Ethernet interconnection network. Suggestions for using the Nutcracker by Excelan to trace itinerant packets, which appear on the network from time to time, were offered in discussions with the HOSC personnel. Several visits were made to the HOSC facility to install and demonstrate the simulation model.
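The quoted worst-case delay follows from simple throughput arithmetic: at an I/O rate of 50 KB/s, a full-screen update of roughly 125 KB takes 2.5 s. A minimal sketch of that estimate (the 125 KB screen size is an assumed illustrative value, not stated in the abstract):

```python
def screen_change_delay(screen_bytes: int, io_rate_bytes_per_s: float) -> float:
    """Worst-case time to push one full screen update over a serial I/O link."""
    return screen_bytes / io_rate_bytes_per_s

# Assumed ~125 KB screen update at the slow 50 KB/s host-to-terminal rate cited above.
delay = screen_change_delay(125_000, 50_000)  # 2.5 seconds
```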

  8. Research on starlight hardware-in-the-loop simulator

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Gao, Yang; Qu, Huiyang; Liu, Dongfang; Du, Huijie; Lei, Jie

    2016-10-01

    Starlight navigation is considered to be one of the most important methods for spacecraft navigation. The starlight simulation system is a high-precision system with a large field of view, designed to test starlight navigation sensor performance on the ground. A complete hardware-in-the-loop simulation of the system has been built. The starlight simulator is made up of a light source, a light source controller, a light filter, an LCD, a collimator, and a control computer. The LCD is the key display component of the system and is installed at the focal point of the collimator. Because the LCD cannot emit light itself, a light source and light source power controller are specially designed to provide the brightness demanded by the LCD. A light filter provides the dark background that is also needed in the simulation.

  9. Clearing your Desk! Software and Data Services for Collaborative Web Based GIS Analysis

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Gichamo, T.; Yildirim, A. A.; Liu, Y.

    2015-12-01

    Can your desktop computer crunch the large GIS datasets that are becoming increasingly common across the geosciences? Do you have access to or the know-how to take advantage of advanced high performance computing (HPC) capability? Web based cyberinfrastructure takes work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This talk will describe the HydroShare collaborative environment and web based services being developed to support the sharing and processing of hydrologic data and models. HydroShare supports the upload, storage, and sharing of a broad class of hydrologic data including time series, geographic features and raster datasets, multidimensional space-time data, and other structured collections of data. Web service tools and a Python client library provide researchers with access to HPC resources without requiring them to become HPC experts. This reduces the time and effort spent in finding and organizing the data required to prepare the inputs for hydrologic models and facilitates the management of online data and execution of models on HPC systems. This presentation will illustrate the use of web based data and computation services from both the browser and desktop client software. These web-based services implement the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation, generation of hydrology-based terrain information, and preparation of hydrologic model inputs. They allow users to develop scripts on their desktop computer that call analytical functions that are executed completely in the cloud, on HPC resources using input datasets stored in the cloud, without installing specialized software, learning how to use HPC, or transferring large datasets back to the user's desktop. These cases serve as examples for how this approach can be extended to other models to enhance the use of web and data services in the geosciences.
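The pattern described above is a desktop script that packages a job for a cloud service rather than running the analysis locally. A minimal sketch of building such a request payload; the tool name, field names, and resource identifier are illustrative assumptions, not the actual HydroShare/TauDEM web-service API:

```python
import json

def build_delineation_request(dem_resource_id: str, outlet_lat: float, outlet_lon: float) -> str:
    """Package a watershed-delineation job as a JSON payload for a web service.

    All endpoint and field names here are hypothetical; consult the actual
    HydroShare / TauDEM web-service documentation for the real API.
    """
    job = {
        "tool": "taudem.watershed_delineation",
        "inputs": {
            "dem": dem_resource_id,  # DEM already stored in the cloud, referenced by id
            "outlet": {"lat": outlet_lat, "lon": outlet_lon},
        },
        "output": "stream_network_and_watershed",
    }
    return json.dumps(job)

# The client would POST this payload and poll for results; no large dataset
# ever transfers back to the desktop except the final products.
payload = build_delineation_request("abc123", 41.74, -111.83)
```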

  10. High performance computing enabling exhaustive analysis of higher order single nucleotide polymorphism interaction in Genome Wide Association Studies.

    PubMed

    Goudey, Benjamin; Abedini, Mani; Hopper, John L; Inouye, Michael; Makalic, Enes; Schmidt, Daniel F; Wagner, John; Zhou, Zeyu; Zobel, Justin; Reumann, Matthias

    2015-01-01

    Genome-wide association studies (GWAS) are a common approach for systematic discovery of single nucleotide polymorphisms (SNPs) that are associated with a given disease. The univariate analysis approaches commonly employed may miss important SNP associations that only appear through multivariate analysis in complex diseases. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers. Based on our results, we estimate that a three-way interaction analysis of 1.1 million SNP GWAS data would require over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU-based methods and four times faster than runtimes estimated for GPU methods, indicating how the improvement in the level of hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer, "Sequoia", at the Lawrence Livermore National Laboratory, assuming linear scaling is maintained, as our results suggest. Given that the implementation used in this study can be further optimised, this runtime means it is becoming feasible to carry out exhaustive analysis of higher-order interaction studies on large modern GWAS.
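The scale driving that 5.8-year estimate is combinatorial: an exhaustive three-way analysis of 1.1 million SNPs must evaluate C(1,100,000, 3) ≈ 2.2 × 10^17 triples. A short sketch of the arithmetic (the implied throughput figure is our back-of-envelope derivation, not a number from the paper):

```python
import math

n_snps = 1_100_000
triples = math.comb(n_snps, 3)        # exhaustive three-way combinations, ~2.2e17

years = 5.8
seconds = years * 365.25 * 24 * 3600  # ~1.8e8 s of wall-clock time
throughput = triples / seconds        # implied triples evaluated per second, ~1.2e9
```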

  11. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    PubMed Central

    2012-01-01

    Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
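The client-driven design described above means workers pull jobs from the server instead of the server pushing work to them, which is why nodes can sit behind firewalls and load-balance naturally. A minimal in-process sketch of that pull model; the class and method names are invented for illustration and are not JobCenter's actual Java API:

```python
import queue
import threading

class JobServer:
    """Toy central queue: workers pull jobs and push results back."""
    def __init__(self):
        self._jobs = queue.Queue()
        self._lock = threading.Lock()
        self.results = []

    def submit(self, job):
        self._jobs.put(job)

    def next_job(self):
        try:
            return self._jobs.get_nowait()  # worker pulls; server never pushes
        except queue.Empty:
            return None

    def report(self, result):
        with self._lock:
            self.results.append(result)

def worker_loop(server):
    # Each worker keeps pulling until the queue is drained, then exits.
    while (job := server.next_job()) is not None:
        server.report(job())

server = JobServer()
for i in range(5):
    server.submit(lambda i=i: i * i)
workers = [threading.Thread(target=worker_loop, args=(server,)) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

Because workers only ever initiate outbound requests, adding capacity is just starting another `worker_loop` anywhere that can reach the server.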

  12. Architectural requirements for the Red Storm computing system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camp, William J.; Tomkins, James Lee

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development, and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system, and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laughlin, Gary L.

    The International, Homeland, and Nuclear Security (IHNS) Program Management Unit (PMU) oversees a broad portfolio of Sandia’s programs in areas ranging from global nuclear security to critical asset protection. We use science and technology, innovative research, and global engagement to counter threats, reduce dangers, and respond to disasters. The PMU draws on the skills of scientists and engineers from across Sandia. Our programs focus on protecting US government installations, safeguarding nuclear weapons and materials, facilitating nonproliferation activities, securing infrastructures, countering chemical and biological dangers, and reducing the risk of terrorist threats. We conduct research in risk and threat analysis, monitoring and detection, decontamination and recovery, and situational awareness. We develop technologies for verifying arms control agreements, neutralizing dangerous materials, detecting intruders, and strengthening resiliency. Our programs use Sandia’s High-Performance Computing resources for predictive modeling and simulation of interdependent systems, for modeling dynamic threats and forecasting adaptive behavior, and for enabling decision support and processing large cyber data streams. In this report, we highlight four advanced computation projects that illustrate the breadth of the IHNS mission space.

  14. Sonic Onyx: Case Study of an Interactive Artwork

    NASA Astrophysics Data System (ADS)

    Ahmed, Salah Uddin; Jaccheri, Letizia; M'kadmi, Samir

    Software-supported art projects have increased in number in recent years as artists explore how computing can be used to create new forms of live art. Interactive sound installation is one kind of art in this genre. In this article we present the development process and functional description of Sonic Onyx, an interactive sound installation. The objective is to show, through the life cycle of Sonic Onyx, how a software-dependent interactive artwork involves its users and raises issues related to its interaction and functionalities.

  15. Building Columbia from the SysAdmin View

    NASA Technical Reports Server (NTRS)

    Chan, David

    2005-01-01

    Project Columbia was built at NASA Ames Research Center in partnership with SGI and Intel. Columbia consists of 20 512-processor Altix machines with 440 TB of storage and achieved 51.87 TeraFlops to be ranked the second fastest on the Top 500 at SuperComputing 2004. Columbia was delivered, installed, and put into production in 3 months. On average, a new Columbia node was brought into production in less than a week. Columbia's configuration, installation, and future plans will be discussed.

  16. Simulations and experiments on RITA-2 at PSI

    NASA Astrophysics Data System (ADS)

    Klausen, S. N.; Lefmann, K.; McMorrow, D. F.; Altorfer, F.; Janssen, S.; Lüthy, M.

    The cold-neutron triple-axis spectrometer RITA-2, designed and built at Risø National Laboratory, was installed at the neutron source SINQ at the Paul Scherrer Institute in April/May 2001. In connection with the installation of RITA-2, computer simulations were performed using the neutron ray-tracing package McStas. The simulation results are compared to real experimental results obtained with a powder sample. In particular, the flux at the sample position and the resolution function of the spectrometer are investigated.

  17. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone search radii, computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g., level 4, cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (14 arcmin) and higher is masked to some extent by the timing scatter caused by the range of query sizes. At very high levels (level 20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade performance. Thus, for the use patterns studied here, the database performance is not critically dependent on the exact choice of index or level.
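The cell sizes quoted at different index levels follow from the geometric scaling of quadrature schemes: each additional level halves the cell edge. A short sketch anchored at the quoted level-4 size of 3.7 degrees (the assumption that sizes scale exactly by a factor of 2 per level is ours), which reproduces the ~14 arcmin figure at level 8:

```python
def cell_size_deg(level: int, ref_level: int = 4, ref_deg: float = 3.7) -> float:
    """Cell edge at a given index level, halving per level from a reference size."""
    return ref_deg * 2.0 ** (ref_level - level)

level8_arcmin = cell_size_deg(8) * 60  # 3.7 deg / 16 = ~13.9 arcmin at level 8
```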

  18. 77 FR 22835 - Notice of Passenger Facility Charge (PFC) Approvals and Disapprovals

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-17

    ... Description of Projects Approved for Collection and Use: Install primary crash network. Security enhancements--access control 1. Acquire computer based interactive training system. Security enhancements--access...

  19. Investigation of a computer virus outbreak in the pharmacy of a tertiary care teaching hospital.

    PubMed

    Bailey, T C; Reichley, R M

    1992-10-01

    A computer virus outbreak was recognized, verified, defined, investigated, and controlled using an infection control approach. The pathogenesis and epidemiology of computer virus infection are reviewed. Case-control study. Pharmacy of a tertiary care teaching institution. On October 28, 1991, 2 personal computers in the drug information center manifested symptoms consistent with the "Jerusalem" virus infection. The same day, a departmental personal computer began playing "Yankee Doodle," a sign of "Doodle" virus infection. An investigation of all departmental personal computers identified the "Stoned" virus in an additional personal computer. Controls were functioning virus-free personal computers within the department. Cases were associated with users who brought diskettes from outside the department (5/5 cases versus 5/13 controls, p = .04) and with College of Pharmacy student users (3/5 cases versus 0/13 controls, p = .012). The detection of a virus-infected diskette or personal computer was associated with the number of 5 1/4-inch diskettes in the files of personal computers, a surrogate for rate of media exchange (mean = 17.4 versus 152.5, p = .018, Wilcoxon rank sum test). After education of departmental personal computer users regarding appropriate computer hygiene and installation of virus protection software, no further spread of personal computer viruses occurred, although 2 additional Stoned-infected and 1 Jerusalem-infected diskettes were detected. We recommend that virus detection software be installed on personal computers where the interchange of diskettes among computers is necessary, that write-protect tabs be placed on all program master diskettes and data diskettes where data are being read and not written, that in the event of a computer virus outbreak, all available diskettes be quarantined and scanned by virus detection software, and to facilitate quarantine and scanning in an outbreak, that diskettes be stored in organized files.
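The outside-diskette comparison above (5/5 cases vs. 5/13 controls exposed) forms a 2x2 table, and a two-sided Fisher's exact test on it reproduces the reported p ≈ .04. Whether the authors used exactly this procedure is an assumption; the sketch below implements the standard test from first principles with the stdlib:

```python
from math import comb

def fisher_exact_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables (with the same
    margins) no more likely than the observed one.
    """
    n, row1, col1 = a + b + c + d, a + b, a + c

    def point_prob(k: int) -> float:  # probability that the top-left cell equals k
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = point_prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p for p in map(point_prob, range(lo, hi + 1)) if p <= p_obs + 1e-12)

# Cases: 5 exposed, 0 unexposed; controls: 5 exposed, 8 unexposed.
p_value = fisher_exact_two_sided(5, 0, 5, 8)  # ~0.036, consistent with p = .04
```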

  20. A Distributive, Non-Destructive, Real-Time Approach to Snowpack Monitoring

    NASA Technical Reports Server (NTRS)

    Frolik, Jeff; Skalka, Christian

    2012-01-01

    This invention is designed to ascertain the snow water equivalence (SWE) of snowpacks with better spatial and temporal resolutions than present techniques. The approach is ground-based, as opposed to some techniques that are air-based. In addition, the approach is compact, non-destructive, and can be communicated with remotely, and thus can be deployed in areas not possible with current methods. Presently there are two principal ground-based techniques for obtaining SWE measurements. The first is manual snow core measurement of the snowpack. This approach is labor-intensive, destructive, and has poor temporal resolution. The second approach is to deploy a large (e.g., 3x3 m) snow pillow, which requires significant infrastructure, is potentially hazardous [it uses an approximately 200-gallon (760-L) antifreeze-filled bladder], and requires deployment in a large, flat area. High deployment costs necessitate few installations, thus yielding poor spatial resolution of data. Both approaches have limited usefulness in complex and/or avalanche-prone terrains. This approach is compact, non-destructive to the snowpack, provides high-temporal-resolution data, and, due to potential low cost, can be deployed with high spatial resolution. The invention consists of three primary components: a robust wireless network and computing platform designed for harsh climates, new SWE sensing strategies, and algorithms for smart sampling, data logging, and SWE computation.

  1. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  2. Linux Makes the Grade: An Open Source Solution That's Time Has Come

    ERIC Educational Resources Information Center

    Houston, Melissa

    2007-01-01

    In 2001, Indiana officials at the Department of Education were taking stock. The schools had an excellent network infrastructure and had installed significant numbers of computers for 1 million public school enrollees. Yet students were spending less than an hour a week on the computer. It was then that state officials knew each student needed a…

  3. An Alternative to QUERY: Batch-Searching of the ERIC Information Collections.

    ERIC Educational Resources Information Center

    Krahmer, Edward; Horne, Kent

    A manual describing the RIC computer search program for retrieval of information from ERIC, CIJE, and other collections is presented. It is pointed out that two versions of this program have been developed. The first is for an IBM 360/370 computer. This version has been operational on a production basis for nearly a year. Four installations of…

  4. FY94 CAG trip reports, CAG memos and other products: Volume 2. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-15

    The Yucca Mountain Site Characterization Project (YMP) of the US DOE is tasked with designing, constructing, and operating an Exploratory Studies Facility (ESF) at Yucca Mountain, Nevada. The purpose of the YMP is to provide detailed characterization of the Yucca Mountain site for the potential mined geologic repository for permanent disposal of high-level radioactive waste. Detailed characterization of properties of the site is to be conducted through a wide variety of short-term and long-term in-situ tests. Testing methods require the installation of a large number of test instruments and sensors with a variety of functions. These instruments produce analog and digital data that must be collected, processed, stored, and evaluated in an attempt to predict performance of the repository. The Integrated Data and Control System (IDCS) is envisioned as a distributed data acquisition system that electronically acquires and stores data from these test instruments. IDCS designers are responsible for designing and overseeing the procurement of the system, IDCS Operation and Maintenance operates and maintains the installed system, and the IDCS Data Manager is responsible for distribution of IDCS data to participants. This report is a compilation of trip reports, interoffice memos, and other memos relevant to Computer Applications Group, Inc., work on this project.

  5. Crowd-Sourcing Seismic Data for Education and Research Opportunities with the Quake-Catcher Network

    NASA Astrophysics Data System (ADS)

    Sumy, D. F.; DeGroot, R. M.; Benthien, M. L.; Cochran, E. S.; Taber, J. J.

    2016-12-01

    The Quake Catcher Network (QCN; quakecatcher.net) uses low cost micro-electro-mechanical system (MEMS) sensors hosted by volunteers to collect seismic data. Volunteers use accelerometers internal to laptop computers, phones, tablets or small (the size of a matchbox) MEMS sensors plugged into desktop computers using a USB connector to collect scientifically useful data. Data are collected and sent to a central server using the Berkeley Open Infrastructure for Network Computing (BOINC) distributed computing software. Since 2008, sensors installed in museums, schools, offices, and residences have collected thousands of earthquake records, including the 2010 M8.8 Maule, Chile, the 2010 M7.1 Darfield, New Zealand, and 2015 M7.8 Gorkha, Nepal earthquakes. In 2016, the QCN in the United States transitioned to the Incorporated Research Institutions for Seismology (IRIS) Consortium and the Southern California Earthquake Center (SCEC), which are facilities funded through the National Science Foundation and the United States Geological Survey, respectively. The transition has allowed for an influx of new ideas and new education related efforts, which include focused installations in several school districts in southern California, on Native American reservations in North Dakota, and in the most seismically active state in the contiguous U.S. - Oklahoma. We present and describe these recent educational opportunities, and highlight how QCN has engaged a wide sector of the public in scientific data collection, particularly through the QCN-EPIcenter Network and NASA Mars InSight teacher programs. QCN provides the public with information and insight into how seismic data are collected, and how researchers use these data to better understand and characterize seismic activity. Lastly, we describe how students use data recorded by QCN sensors installed in their classrooms to explore and investigate felt earthquakes, and look towards the bright future of the network.

  6. Electricity Submetering on the Cheap: Stick-on Electricity Meters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lanzisera, Steven; Lorek, Michael; Pister, Kristofer

    2014-08-17

    We demonstrate a low-cost, 21 x 12 mm prototype Stick-on Electricity Meter (SEM) to replace traditional in-circuit-breaker-panel current and voltage sensors for building submetering. A SEM sensor is installed on the external face of a circuit breaker to generate voltage and current signals. This allows for the computation of real and apparent power as well as capturing harmonics created by non-linear loads. The prototype sensor is built using commercially available components, resulting in a production cost of under $10 per SEM. With no high-voltage installation work requiring an electrician, home owners or other individuals can install the system in a few minutes with no safety implications. This leads to an installed system cost that is much lower than that of traditional submetering technology. Measurement results from lab characterization as well as a real-world residential dwelling installation are presented, verifying the operation of our proposed SEM sensor. The SEM sensor can resolve breaker power levels below 10 W, and it can be used to provide data for non-intrusive load monitoring systems at full sample rate.
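The real- and apparent-power computation mentioned above is standard: real power is the mean of the instantaneous product v(t)·i(t), apparent power is the product of the RMS values, and their ratio is the power factor. A sketch of that computation on a synthetic waveform (the sampled signals below are illustrative, not SEM data):

```python
import math

def power_metrics(v_samples, i_samples):
    """Real power P, apparent power S, and power factor from aligned samples."""
    n = len(v_samples)
    p_real = sum(v * i for v, i in zip(v_samples, i_samples)) / n
    v_rms = math.sqrt(sum(v * v for v in v_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in i_samples) / n)
    s_apparent = v_rms * i_rms
    return p_real, s_apparent, p_real / s_apparent

# Synthetic 60 Hz waveforms sampled at 1 kHz for 1 s; current lags voltage by 30 deg.
ts = [k / 1000 for k in range(1000)]
v = [170 * math.sin(2 * math.pi * 60 * t) for t in ts]
i = [2 * math.sin(2 * math.pi * 60 * t - math.pi / 6) for t in ts]
p, s, pf = power_metrics(v, i)  # power factor recovers cos(30 deg)
```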

  7. Flow Simulation of Modified Duct System Wind Turbines Installed on Vehicle

    NASA Astrophysics Data System (ADS)

    Rosly, N.; Mohd, S.; Zulkafli, M. F.; Ghafir, M. F. Abdul; Shamsudin, S. S.; Muhammad, W. N. A. Wan

    2017-10-01

    This study investigates the characteristics of airflow with a flow guide installed and the output power generated by a wind turbine system installed on a pickup truck. The wind turbine models were modelled using SolidWorks 2015 software. In order to investigate the characteristics of the air flow inside the wind turbine system, a computer simulation (using ANSYS Fluent software) was performed. A few models were designed and simulated: one without the rotor installed and another two with the rotor installed in the wind turbine system. Three velocities were used for the simulation: 16.7 m/s (60 km/h), 25 m/s (90 km/h), and 33.33 m/s (120 km/h). The study showed that the flow guide did have an impact on the output power produced by the wind turbine system. The predicted result from this study is that the velocity of the air inside the ducting system of the present model is better than that of the reference model. In addition, the flow guide implemented in the ducting system has a significant impact on the characteristics of the air flow.
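The three simulation speeds are simply the road speeds converted from km/h to m/s (divide by 3.6), which is worth making explicit since both units appear in the abstract:

```python
def kmh_to_ms(v_kmh: float) -> float:
    """Convert a speed from kilometres per hour to metres per second."""
    return v_kmh / 3.6

speeds = [kmh_to_ms(v) for v in (60, 90, 120)]  # ~16.67, 25.0, 33.33 m/s
```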

  8. Performance enhancement of a web-based picture archiving and communication system using commercial off-the-shelf server clusters.

    PubMed

    Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.

  9. Performance Enhancement of a Web-Based Picture Archiving and Communication System Using Commercial Off-the-Shelf Server Clusters

    PubMed Central

    Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment. PMID:24701580

  10. Current Capabilities at SNL for the Integration of Small Modular Reactors onto Smart Microgrids Using Sandia's Smart Microgrid Technology, High Performance Computing, and Advanced Manufacturing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Salvador B.

    Smart grids are a crucial component for enabling the nation’s future energy needs, as part of a modernization effort led by the Department of Energy. Smart grids and smart microgrids are being considered in niche applications, and as part of a comprehensive energy strategy to help manage the nation’s growing energy demands, for critical infrastructures, military installations, small rural communities, and large populations with limited water supplies. As part of a far-reaching strategic initiative, Sandia National Laboratories (SNL) presents herein a unique, three-pronged approach to integrate small modular reactors (SMRs) into microgrids, with the goal of providing economically competitive, reliable, and secure energy to meet the nation’s needs. SNL’s triad methodology involves an innovative blend of smart microgrid technology, high performance computing (HPC), and advanced manufacturing (AM). In this report, Sandia’s current capabilities in those areas are summarized, as well as paths forward that will enable DOE to achieve its energy goals. In the area of smart grid/microgrid technology, Sandia’s current computational capabilities can model the entire grid, including temporal aspects and cyber security issues. Our tools include system development, integration, testing and evaluation, monitoring, and sustainment.

  11. Intelligent Systems for Assessing Aging Changes: Home-Based, Unobtrusive, and Continuous Assessment of Aging

    PubMed Central

    Maxwell, Shoshana A.; Mattek, Nora; Hayes, Tamara L.; Dodge, Hiroko; Pavel, Misha; Jimison, Holly B.; Wild, Katherine; Boise, Linda; Zitzelberger, Tracy A.

    2011-01-01

    Objectives. To describe a longitudinal community cohort study, Intelligent Systems for Assessing Aging Changes, that has deployed an unobtrusive home-based assessment platform in many seniors' homes in the community. Methods. Several types of sensors have been installed in the homes of 265 elderly persons for an average of 33 months. Metrics assessed by the sensors include total daily activity, time out of home, and walking speed. Participants were given a computer as well as training, and computer usage was monitored. Participants are assessed annually with health and function questionnaires, physical examinations, and neuropsychological testing. Results. Mean age was 83.3 years, mean education was 15.5 years, and 73% of the cohort were women. During a 4-week snapshot, participants left their home twice a day on average for a total of 208 min per day. Mean in-home walking speed was 61.0 cm/s. Participants spent 43% of days on the computer, averaging 76 min per day. Discussion. These results demonstrate for the first time the feasibility of engaging seniors in a large-scale deployment of in-home activity assessment technology and the successful collection of these activity metrics. We plan to use this platform to determine whether continuous unobtrusive monitoring can detect incident cognitive decline. PMID:21743050

  12. Efforts to reduce mortality to hydroelectric turbine-passed fish: locating and quantifying damaging shear stresses.

    PubMed

    Cada, Glenn; Loar, James; Garrison, Laura; Fisher, Richard; Neitzel, Duane

    2006-06-01

    Severe fluid forces are believed to be a source of injury and mortality to fish that pass through hydroelectric turbines. A process is described by which laboratory bioassays, computational fluid dynamics models, and field studies can be integrated to evaluate the significance of fluid shear stresses that occur in a turbine. Areas containing potentially lethal shear stresses were identified near the stay vanes and wicket gates, runner, and in the draft tube of a large Kaplan turbine. However, under typical operating conditions, computational models estimated that these dangerous areas comprise less than 2% of the flow path through the modeled turbine. The predicted volumes of the damaging shear stress zones did not correlate well with observed fish mortality at a field installation of this turbine, which ranged from less than 1% to nearly 12%. Possible reasons for the poor correlation are discussed. Computational modeling is necessary to develop an understanding of the role of particular fish injury mechanisms, to compare their effects with those of other sources of injury, and to minimize the trial and error previously needed to mitigate those effects. The process we describe is being used to modify the design of hydroelectric turbines to improve fish passage survival.
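The "less than 2% of the flow path" figure above is a volume fraction over CFD cells. A minimal sketch of that bookkeeping, with hypothetical cell volumes and shear stresses (the study's actual mesh data and threshold are not reproduced here):

```python
def lethal_volume_fraction(cell_volumes, cell_shear, threshold):
    """Fraction of the modeled flow-path volume whose computed shear
    stress exceeds a damaging threshold. All inputs are hypothetical
    stand-ins for CFD output."""
    total = sum(cell_volumes)
    lethal = sum(v for v, s in zip(cell_volumes, cell_shear) if s > threshold)
    return lethal / total

# Hypothetical cells: volumes (m^3) and shear stresses (N/m^2); only the
# 1 m^3 cell at 1200 N/m^2 exceeds the assumed 1000 N/m^2 threshold.
frac = lethal_volume_fraction([2.0, 1.0, 1.0], [500.0, 1200.0, 300.0], 1000.0)
```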

  13. [Problems encountered during the installation of an automated anesthesia documentation system (AIMS)].

    PubMed

    Müller, H; Naujoks, F; Dietz, S

    2002-08-01

    Problems encountered during the installation and introduction of an automated anaesthesia documentation system are discussed. Difficulties have to be expected in the area of staff training because of heterogeneous experience in computer usage and in the field of online documentation of vital signs. Moreover the areas of net administration and hardware configuration as well as general administrative issues also represent possible sources of drawbacks. System administration and reliable support provided by personnel of the department of anaesthesiology assuring staff motivation and reducing time of system failures require adequately staffed departments. Based on our own experiences, we recommend that anaesthesiology departments considering the future installation and use of an automated anaesthesia documentation system should verify sufficient personnel capacities prior to their decision.

  14. A Survey of the 1986 Canadian Library Systems Marketplace.

    ERIC Educational Resources Information Center

    Merilees, Bobbie

    1987-01-01

    This analysis of trends in the Canadian library systems marketplace in 1986 compares installations of large integrated systems and microcomputer-based systems by relative market share, and number of installations by type of library. Canadian vendors' sales in international markets are also analyzed, and a directory of vendors is provided. (Author/CLB)

  15. 1.5 MW turbine installation at NREL's NWTC on Aug. 21

    ScienceCinema

    None

    2017-12-27

    Generating 20 percent of the nation's electricity from clean wind resources will require more and bigger wind turbines. NREL is installing two large wind turbines at the National Wind Technology Center to examine some of the industry's largest machines and address issues to expand wind energy on a commercial scale.

  16. Invasive Species Guidebook for Department of Defense Installations in the Chesapeake Bay Watershed: Identification, Control, and Restoration

    DTIC Science & Technology

    2007-11-01

    Identification and control methods for invasive plants on DoD installations in the Chesapeake Bay watershed, including Cogongrass (Imperata cylindrica), Crown vetch (Coronilla varia; MD, VA), Leafy spurge (Euphorbia esula; VA), and Ground ivy (Glechoma hederacea; DC, MD, PA, VA, WV).

  17. Project SAGE: solar assisted gas energy. Final report and executive summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Phase III basic objective was establishment of a technical and economic baseline for proper assessment of the practical potential of solar water heating for apartments. Plans can then be formulated to improve SAGE technical design and performance; reduce SAGE costs; refine SAGE market assessment; and identify policies to encourage the use of SAGE. Two SAGE water heating systems were installed and tested. One system was retrofit onto an existing apartment building; the other was installed in a new apartment building. Each installation required approximately 1000 square feet of collector area tilted at an angle of 37° from the horizontal, and each was designed to supply about 70 percent of the energy for heating water for approximately 32 to 40 units of a typical two-story apartment complex in Southern California. Actual construction costs were carefully compiled, and both installations were equipped with performance monitoring equipment. In addition, the operating and maintenance requirements of each installation were evaluated by gas company maintenance engineers. Upon completion of the installation analysis, the SAGE installation cost was further refined by obtaining firm SAGE construction bids from two plumbing contractors in Southern California. Market penetration was assessed by developing a computer simulation program using the technical and economic analysis from the installation experience. Also, the project examined the public policies required to encourage SAGE and other solar energy options. Results are presented and discussed. (WHK)
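The "about 70 percent" design target above is a solar fraction: solar-delivered energy divided by the total water-heating load. A minimal sketch with hypothetical monthly figures (the report's measured values are not reproduced here):

```python
def solar_fraction(solar_kwh: float, total_kwh: float) -> float:
    """Share of the water-heating load met by the solar system."""
    return solar_kwh / total_kwh

# Hypothetical month: collectors deliver 7000 kWh of a 10000 kWh
# water-heating load; the gas backup supplies the remainder.
f = solar_fraction(7000.0, 10000.0)   # 0.70, the ~70% design target
aux_kwh = 10000.0 - 7000.0            # auxiliary gas energy required
```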

  18. Perspective of an Artist Inspired by Physics

    NASA Astrophysics Data System (ADS)

    Sanborn, Jim

    2010-02-01

    Using digital images and video I will be presenting thirty years of my science-based artwork. Beginning in the late 1970's my gallery and museum installations used lodestones and suspended compasses to reveal the earth's magnetic field. Through the 1980's my work included these compass installations and geologically inspired tableaux that had one thing in common: they were designed to expose the invisible forces of nature. Tectonics, the Coriolis force, and magnetism were among the subjects of study. In 1988, on the basis of my work with invisible forces, I was selected for a commission from the General Services Administration for the new Central Intelligence Agency headquarters in Langley, Virginia. This work, titled Kryptos, included a large cryptographic component that remains undeciphered twenty years after its installation. In the 1990's Kryptos inspired several of my museum and gallery installations using cryptography and secrecy as their main themes. From 1995-1998 I completed a series of large format projections on the landscape in the western US and Ireland. These projections and the resulting series of photographs emulated the 19th century cartographers hired by the United States Government to map the western landscape. In 1998 I began my project titled Atomic Time. This installation, shown for the first time in 2004 at the Corcoran Gallery in Washington DC and then again in the Gwangju Biennale in South Korea, was a recreation of the 1944 Manhattan Project laboratory that built the first atomic bomb. This installation used original equipment and prototypes from the Los Alamos Lab and was an extremely accurate representation of the laboratory and the first nuclear bomb, called the ``Trinity Device.'' I began my current project, Terrestrial Physics, in 2005.
This installation, to be shown in June 2010 at the Museum of Contemporary Art in Denver, is a recreation of the large particle accelerator and the experiment that fissioned uranium in 1939 at the Carnegie Institution in Washington DC. This was the first time uranium had been fissioned using a particle accelerator, and it was demonstrated for an audience including Enrico Fermi, Niels Bohr and Merle Tuve.

  19. The X6XS.0 cross section library for MCNP-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruvost, N.L.; Seamon, R.E.; Rombaugh, C.T.

    1991-06-01

    This report documents the work done by X-6, HSE-6, and CTR Technical Services to produce a comprehensive working cross-section library for MCNP-4 suitable for SUN workstations and similar environments. The resulting library consists of a total of 436 files (one file for each ZAID). The library is 152 Megabytes in Type 1 format and 32 Megabytes in Type 2 format. Type 2 can be used when porting the library from one computer to another of the same make. Otherwise, Type 1 must be used to ensure portability between different computer systems. Instructions for installing the library and adding ZAIDs to it are included here. Also included is a description of the steps necessary to install and test version 4 of MCNP. To improve readability of this report, certain commands and filenames are given in uppercase letters. The actual command or filename on the SUN workstation, however, must be specified in lowercase letters. Any questions regarding the data contained in the library should be directed to X-6 and any questions regarding the installation of the library and the testing that was performed should be directed to HSE-6. 9 refs., 7 tabs.

  20. 46 CFR 162.161-5 - Instruction manual for design, installation, operation, and maintenance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (CONTINUED) EQUIPMENT, CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Fixed... for halocarbon systems and UL 2127 for inert gas systems; (3) Identification of the computer program...

  1. 46 CFR 162.161-5 - Instruction manual for design, installation, operation, and maintenance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (CONTINUED) EQUIPMENT, CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL ENGINEERING EQUIPMENT Fixed... for halocarbon systems and UL 2127 for inert gas systems; (3) Identification of the computer program...

  2. 46 CFR 169.682 - Distribution and circuit loads.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the rating of the overcurrent protective device, computed using the greater of— (1) The lamp sizes to be installed; or (2) 50 watts per outlet. (b) Circuits supplying electrical discharge lamps must be...
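The circuit-load rule quoted above (count each lighting outlet at the greater of the installed lamp size or 50 watts) is simple arithmetic. A minimal sketch; treating the comparison as per-outlet is this editor's reading of the regulation fragment:

```python
def lighting_circuit_load(lamp_watts):
    """Connected lighting load in watts: each outlet counted at the
    greater of its installed lamp size or 50 W (per-outlet reading of
    46 CFR 169.682(a) assumed here)."""
    return sum(max(w, 50) for w in lamp_watts)

# Three outlets with 40 W, 60 W and 75 W lamps:
# max(40,50) + max(60,50) + max(75,50) = 50 + 60 + 75 = 185 W
load = lighting_circuit_load([40, 60, 75])
```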

  3. LANDSAT-D band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A filter, fabricated to match the spectral response of the LANDSAT band 6 sensors, was received and the combined system response function computed. The half power points for the aircraft system are 10.5 micrometer and 11.55 micrometer compared to the 10.4 and 11.6 micrometer values for the satellite. These discrepancies are considered acceptable; their effect on the apparent temperature observed at the satellite is being evaluated. The filter was installed in the infrared line scanner and the line scanner was installed in the aircraft and field checked. A daytime underflight of the satellite is scheduled for the next clear overpass and the feasibility of a nighttime overpass is being discussed with NASA. The LOWTRAN 5 computer code was obtained from the Air Force Geophysical Laboratory and is being implemented for use on this effort.

  4. IPv6 testing and deployment at Prague Tier 2

    NASA Astrophysics Data System (ADS)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek; Fiala, Lukáš

    2012-12-01

    Computing Center of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, Atlas, Alice, Auger) and currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2PB of disk space. Our site is limited to one class C block of IPv4 addresses, and hence we had to move most of our worker nodes behind NAT. However, this solution demands a more complex routing setup. We see IPv6 deployment as a solution that requires less routing and more switching, and therefore promises higher network throughput. The administrators of the Computing Center strive to configure and install all provided services automatically. For installation tasks we use PXE and kickstart, for network configuration we use DHCP, and for software configuration we use CFEngine. Many hardware boxes are configured via specific web pages or the telnet/ssh protocol provided by the box itself. All our services are monitored with several tools, e.g. Nagios, Munin and Ganglia. We rely heavily on the SNMP protocol for hardware health monitoring. All these installation, configuration and monitoring tools must be tested before we can switch completely to the IPv6 network stack. In this contribution we present the tests we have made, limitations we have faced, and configuration decisions that we have made during IPv6 testing. We also present the testbed built on virtual machines that was used for all the testing and evaluation.
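The address-exhaustion motivation above is easy to check with the standard library: a single class C (/24) block holds far fewer usable hosts than the site's 300+ worker nodes, which is what forced NAT in the first place. A minimal sketch using documentation prefixes as stand-ins for the site's real ranges:

```python
import ipaddress

def needs_nat(addr: str) -> bool:
    """True when an address is a private IPv4 address (i.e. one that
    would sit behind NAT); IPv6 global addresses never need it."""
    ip = ipaddress.ip_address(addr)
    return ip.version == 4 and ip.is_private

# A /24 (class C) IPv4 block: 256 addresses minus network and
# broadcast leaves 254 usable hosts -- fewer than 300 worker nodes.
v4_block = ipaddress.ip_network("192.0.2.0/24")   # documentation prefix
v6_block = ipaddress.ip_network("2001:db8::/64")  # documentation prefix
v4_hosts = v4_block.num_addresses - 2
```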

  5. Application of Computational Fluid Dynamics to the Study of Vortex Flow Control for the Management of Inlet Distortion

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Gibb, James

    1992-01-01

    The present study demonstrates that the Reduced Navier-Stokes code RNS3D can be used very effectively to develop a vortex generator installation for the purpose of minimizing engine face circumferential distortion by controlling the development of secondary flow. The computing times required are small enough that studies such as this are feasible within an analysis-design environment with all its constraints of time and cost. This research study also established the nature of the performance improvements that can be realized with vortex flow control, and suggests a set of aerodynamic properties (called observations) that can be used to arrive at a successful vortex generator installation design. The ultimate aim of this research is to manage inlet distortion by controlling secondary flow through arrangements of vortex generator configurations tailored to the specific aerodynamic characteristics of the inlet duct. This study also indicated that scaling between flight and typical wind tunnel test conditions is possible only within a very narrow range of generator configurations close to an optimum installation. This paper also suggests a possible law that can be used to scale generator blade height for experimental testing, but further research in this area is needed before it can be effectively applied to practical problems. Lastly, this study indicated that vortex generator installation design for inlet ducts is more complex than simply satisfying the requirement of attached flow; it must also satisfy the requirement of minimum engine face distortion.

  6. Using Docker Containers to Extend Reproducibility Architecture for the NASA Earth Exchange (NEX)

    NASA Technical Reports Server (NTRS)

    Votava, Petr; Michaelis, Andrew; Spaulding, Ryan; Becker, Jeffrey C.

    2016-01-01

    NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. As NEX has been growing into a petabyte-size platform for analysis, experiments and data production, it has been increasingly important to enable users to easily retrace their steps, identify what datasets were produced by which process chains, and give them the ability to readily reproduce their results. This can be a tedious and difficult task even for a small project, but is almost impossible on large processing pipelines. We have developed an initial reproducibility and knowledge capture solution for the NEX; however, if users want to move the code to another system, whether it is their home institution cluster, a laptop or the cloud, they have to find, build and install all the required dependencies that would run their code. This can be a very tedious and tricky process and is a big impediment to moving code to data and reproducibility outside the original system. The NEX team has tried to assist users who wanted to move their code into OpenNEX on the Amazon cloud by creating custom virtual machines with all the software and dependencies installed, but this, while solving some of the issues, creates a new bottleneck that requires the NEX team to be involved with any new request, updates to virtual machines and general maintenance support. In this presentation, we will describe a solution that integrates NEX and Docker to bridge the gap in code-to-data migration. The core of the solution is semi-automatic conversion of science codes, tools and services that are already tracked and described in the NEX provenance system to Docker, an open-source Linux container platform. Docker is available on most computer platforms, easy to install and capable of seamlessly creating and/or executing any application packaged in the appropriate format.
We believe this is an important step towards seamless process deployment in heterogeneous environments that will enhance community access to NASA data and tools in a scalable way, promote software reuse, and improve reproducibility of scientific results.
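The conversion described above amounts to rendering container build instructions from recorded provenance. A minimal sketch of that idea; the provenance fields (base image, package lists, entry point) are hypothetical stand-ins, not the actual NEX schema:

```python
def dockerfile_from_provenance(base_image, apt_packages, pip_packages, entry):
    """Render a minimal Dockerfile for a tracked tool from hypothetical
    provenance fields: OS base image, system and Python dependencies,
    and the tool's entry point."""
    lines = [f"FROM {base_image}"]
    if apt_packages:
        lines.append("RUN apt-get update && apt-get install -y "
                     + " ".join(apt_packages))
    if pip_packages:
        lines.append("RUN pip install " + " ".join(pip_packages))
    lines.append(f'ENTRYPOINT ["{entry}"]')
    return "\n".join(lines)

# Hypothetical tool record: a Python scene-processing script that
# needs GDAL binaries and NumPy.
dockerfile = dockerfile_from_provenance(
    "python:3.11-slim", ["gdal-bin"], ["numpy"], "process_scene.py")
```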

  7. Using Docker Containers to Extend Reproducibility Architecture for the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Spaulding, R.; Becker, J. C.

    2016-12-01

    NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. As NEX has been growing into a petabyte-size platform for analysis, experiments and data production, it has been increasingly important to enable users to easily retrace their steps, identify what datasets were produced by which process chains, and give them the ability to readily reproduce their results. This can be a tedious and difficult task even for a small project, but is almost impossible on large processing pipelines. We have developed an initial reproducibility and knowledge capture solution for the NEX; however, if users want to move the code to another system, whether it is their home institution cluster, a laptop or the cloud, they have to find, build and install all the required dependencies that would run their code. This can be a very tedious and tricky process and is a big impediment to moving code to data and reproducibility outside the original system. The NEX team has tried to assist users who wanted to move their code into OpenNEX on the Amazon cloud by creating custom virtual machines with all the software and dependencies installed, but this, while solving some of the issues, creates a new bottleneck that requires the NEX team to be involved with any new request, updates to virtual machines and general maintenance support. In this presentation, we will describe a solution that integrates NEX and Docker to bridge the gap in code-to-data migration. The core of the solution is semi-automatic conversion of science codes, tools and services that are already tracked and described in the NEX provenance system to Docker, an open-source Linux container platform. Docker is available on most computer platforms, easy to install and capable of seamlessly creating and/or executing any application packaged in the appropriate format.
We believe this is an important step towards seamless process deployment in heterogeneous environments that will enhance community access to NASA data and tools in a scalable way, promote software reuse, and improve reproducibility of scientific results.

  8. Consequence analysis in LPG installation using an integrated computer package.

    PubMed

    Ditali, S; Colombi, M; Moreschini, G; Senni, S

    2000-01-07

    This paper presents the prototype of the computer code Atlantide, developed to assess the consequences associated with accidental events that can occur in an LPG storage plant. Atlantide is designed to be simple while remaining adequate for the consequence analysis required by Italian legislation implementing the Seveso Directive. The application of Atlantide is appropriate for LPG storage/transferring installations. The models and correlations implemented in the code are relevant to flashing liquid releases, heavy gas dispersion and other typical phenomena such as BLEVE/fireball. The computer code allows, on the basis of the operating/design characteristics, the study of the relevant accidental events from the evaluation of the release rate (liquid, gaseous and two-phase) in the unit involved, to the analysis of the subsequent evaporation and dispersion, up to the assessment of the final phenomena of fire and explosion. This is done taking as reference simplified event trees which describe the evolution of accidental scenarios, taking into account the most likely meteorological conditions, the different release situations and other features typical of an LPG installation. The limited input data required and the automatic linking between the single models, which are activated in a defined sequence depending on the accidental event selected, minimize both the time required for the risk analysis and the possibility of errors. Models and equations implemented in Atlantide have been selected from public literature or in-house developed software and tailored with the aim to be easy to use and fast to run but, nevertheless, able to provide realistic simulation of the accidental event as well as reliable results, in terms of physical effects and hazardous areas.
The results have been compared with those of other internationally recognized codes and with the criteria adopted by Italian authorities to verify the Safety Reports for LPG installations. A brief of the theoretical basis of each model implemented in Atlantide and an example of application are included in the paper.
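The event-tree structure described above multiplies an initiating release frequency down conditional branch probabilities to obtain outcome frequencies. A minimal sketch of that bookkeeping; the branch structure and all numbers are illustrative, not taken from Atlantide:

```python
def event_tree_outcomes(release_freq, branches):
    """Multiply the initiating release frequency (per year) down each
    branch's conditional probabilities to get outcome frequencies.
    Structure and numbers are hypothetical."""
    return {name: release_freq * p1 * p2 for name, p1, p2 in branches}

# Hypothetical LPG release at 1e-4 /yr: immediate ignition leads to a
# fireball; otherwise the dispersed cloud may find a delayed ignition
# source (flash fire) or disperse safely.
outcomes = event_tree_outcomes(1e-4, [
    ("fireball",        0.1, 1.0),  # P(immediate ignition)
    ("flash fire",      0.9, 0.2),  # P(no imm. ign.) * P(delayed ign.)
    ("safe dispersion", 0.9, 0.8),  # P(no imm. ign.) * P(no ignition)
])
```

Because each level's conditional probabilities sum to one, the outcome frequencies sum back to the initiating frequency, a useful consistency check on any event tree.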

  9. EPIC

    NASA Image and Video Library

    2011-12-29

    ISS030-E-017789 (29 Dec. 2011) --- Working in chorus with the International Space Station team in Houston's Mission Control Center, this astronaut and his Expedition 30 crewmates on the station install a set of Enhanced Processor and Integrated Communications (EPIC) computer cards in one of seven primary computers onboard. The upgrade will allow more experiments to operate simultaneously, and prepare for the arrival of commercial cargo ships later this year.

  10. COED Transactions, Vol. IX, No. 3, March 1977. Evaluation of a Complex Variable Using Analog/Hybrid Computation Techniques.

    ERIC Educational Resources Information Center

    Marcovitz, Alan B., Ed.

    Described is the use of an analog/hybrid computer installation to study those physical phenomena that can be described through the evaluation of an algebraic function of a complex variable. This is an alternative way to study such phenomena on an interactive graphics terminal. The typical problem used, involving complex variables, is that of…

  11. Community College Uses a Video-Game Lab to Lure Students to Computer Courses

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2007-01-01

    A computer lab has become one of the most popular hangouts at Northern Virginia Community College after officials decided to load its PCs with popular video games, install a PlayStation and an Xbox, and declare it "for gamers only." The goal of this lab is to entice students to take game-design and other IT courses. John Min, dean of…

  12. Lockheed L-1011 Test Station on-board in support of the Adaptive Performance Optimization flight res

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This console and its complement of computers, monitors and communications equipment make up the Research Engineering Test Station, the nerve center for a new aerodynamics experiment being conducted by NASA's Dryden Flight Research Center, Edwards, California. The equipment is installed on a modified Lockheed L-1011 Tristar jetliner operated by Orbital Sciences Corp., of Dulles, Va., for Dryden's Adaptive Performance Optimization project. The experiment seeks to improve the efficiency of long-range jetliners by using small movements of the ailerons to improve the aerodynamics of the wing at cruise conditions. About a dozen research flights in the Adaptive Performance Optimization project are planned over the next two to three years. Improving the aerodynamic efficiency should result in equivalent reductions in fuel usage and costs for airlines operating large, wide-bodied jetliners.

  13. Political science. Reverse-engineering censorship in China: randomized experimentation and participant observation.

    PubMed

    King, Gary; Pan, Jennifer; Roberts, Margaret E

    2014-08-22

    Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and--with their software, documentation, and even customer support--reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored. Copyright © 2014, American Association for the Advancement of Science.

  14. An automated environment for multiple spacecraft engineering subsystem mission operations

    NASA Technical Reports Server (NTRS)

    Bahrami, K. A.; Hioe, K.; Lai, J.; Imlay, E.; Schwuttke, U.; Hsu, E.; Mikes, S.

    1990-01-01

    Flight operations at the Jet Propulsion Laboratory (JPL) are now performed by teams of specialists, each team dedicated to a particular spacecraft. Certain members of each team are responsible for monitoring the performances of their respective spacecraft subsystems. Ground operations, which are very complex, are manual, labor-intensive, slow, and tedious, and therefore costly and inefficient. The challenge of the new decade is to operate a large number of spacecraft simultaneously while sharing limited human and computer resources, without compromising overall reliability. The Engineering Analysis Subsystem Environment (EASE) is an architecture that enables fewer controllers to monitor and control spacecraft engineering subsystems. A prototype of EASE has been installed in the JPL Space Flight Operations Facility for on-line testing. This article describes the underlying concept, development, testing, and benefits of the EASE prototype.

  15. Cables.

    PubMed

    Cushing, M

    1994-01-01

    If you want to control your own computer installation, get the satisfaction of doing your own maintenance, and compensate for an inept or uninformed vendor, the information in this article will help you achieve these ends. Good luck and good cabling!

  16. Boutiques: a flexible framework to integrate command-line applications in computing platforms.

    PubMed

    Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C

    2018-05-01

    We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science.
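A Boutiques descriptor is a JSON document whose command-line template is filled in from typed inputs via value keys. A minimal sketch of assembling one; the field names follow the published Boutiques schema, but the example tool and its parameters are made up:

```python
import json

def minimal_descriptor(name, version, cmdline, inputs):
    """Assemble a Boutiques-style descriptor dict. Field names
    (name, tool-version, command-line, value-key) follow the Boutiques
    schema; values here are hypothetical."""
    return {
        "name": name,
        "tool-version": version,
        "schema-version": "0.5",
        "command-line": cmdline,
        "inputs": inputs,
    }

# Hypothetical neuroimaging tool with a file input and an optional
# numeric parameter, each bound to a placeholder in the command line.
desc = minimal_descriptor(
    "hypothetical_smoother", "1.0.0",
    "smooth [INFILE] [FWHM]",
    [{"id": "infile", "name": "Input image", "type": "File",
      "value-key": "[INFILE]"},
     {"id": "fwhm", "name": "Kernel FWHM (mm)", "type": "Number",
      "value-key": "[FWHM]", "optional": True}],
)
blob = json.dumps(desc, indent=2)
```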

  17. Action Cam Footage from U.S. Spacewalk 41

    NASA Image and Video Library

    2017-05-09

    This footage was taken by NASA astronaut Peggy Whitson during a spacewalk on the International Space Station on Thursday, March 30. She was joined on the spacewalk by NASA astronaut Shane Kimbrough. The two spacewalkers reconnected cables and electrical connections on PMA-3 at its new home on top of the Harmony module. They also installed the second of the two upgraded computer relay boxes on the station’s truss and installed shields and covers on PMA-3 and the now-vacant common berthing mechanism port on Tranquility.

  18. A novel smart lighting clinical testbed.

    PubMed

    Gleason, Joseph D; Oishi, Meeko; Simkulet, Michelle; Tuzikas, Arunas; Brown, Lee K; Brueck, S R J; Karlicek, Robert F

    2017-07-01

    A real-time, feedback-capable, variable spectrum lighting system was recently installed at the University of New Mexico Hospital to facilitate biomedical research on the health impacts of lighting. The system consists of variable spectrum troffers, color sensors, occupancy sensors, and computing and communication infrastructure, and is the only such clinical facility in the US. The clinical environment posed special challenges for installation as well as for ongoing maintenance and operations. Pilot studies are currently underway to evaluate the effectiveness of the system to regulate circadian phase in subjects with delayed sleep-wake phase disorder.

  19. Helping safeguard Veterans Affairs' hospital buildings by advanced earthquake monitoring

    USGS Publications Warehouse

    Kalkan, Erol; Banga, Krishna; Ulusoy, Hasan S.; Fletcher, Jon Peter B.; Leith, William S.; Blair, James L.

    2012-01-01

    In collaboration with the U.S. Department of Veterans Affairs (VA), the National Strong Motion Project of the U.S. Geological Survey has recently installed sophisticated seismic systems that will monitor the structural integrity of hospital buildings during earthquake shaking. The new systems have been installed at more than 20 VA medical campuses across the country. These monitoring systems, which combine sensitive accelerometers and real-time computer calculations, are capable of determining the structural health of each building rapidly after an event, helping to ensure the safety of patients and staff.
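As an illustration of the kind of real-time calculation such systems perform, the sketch below double-integrates a recorded acceleration trace to displacement and forms a peak drift ratio. It is a simplified stand-in, not the USGS processing chain; the story height and the synthetic signal are assumed, and real processing would also require baseline correction and filtering.

```python
import math

def integrate(series, dt):
    """Trapezoidal cumulative integral, starting from zero."""
    out = [0.0]
    for i in range(1, len(series)):
        out.append(out[-1] + 0.5 * (series[i] + series[i - 1]) * dt)
    return out

def peak_drift_ratio(accel, dt, story_height_m):
    """Peak |displacement| / story height from an acceleration record (m/s^2).

    Simplified sketch: no baseline correction or band-pass filtering,
    which a production strong-motion pipeline would apply first.
    """
    vel = integrate(accel, dt)    # m/s
    disp = integrate(vel, dt)     # m
    return max(abs(d) for d in disp) / story_height_m

# Synthetic one-cycle sine pulse of acceleration, 100 Hz sampling,
# with an assumed 3.5 m story height.
dt = 0.01
accel = [0.5 * math.sin(2 * math.pi * t * dt) for t in range(100)]
print(round(peak_drift_ratio(accel, dt, 3.5), 4))
```

Comparing such a ratio against code-based thresholds is one simple way a monitoring system can flag, within seconds of shaking, whether a building likely needs inspection.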

  20. Surface-Wave Data Acquisition and Dissemination by VHF Packet Radio and Computer Networking

    DTIC Science & Technology

    1988-04-01

    4.3 Installation and Checkout: The van was installed at the site by using a 3/4-ton pickup with a hydraulic tail gate; the van was simply lowered... [Listing of packet-radio TNC parameter settings (ESCAPE, FLOW, FRACK, FULLDUP, DIGIPEAT, RESPTIME, etc.) omitted.]

  1. Tradeoffs and synergies between biofuel production and large-scale solar infrastructure in deserts

    NASA Astrophysics Data System (ADS)

    Ravi, S.; Lobell, D. B.; Field, C. B.

    2012-12-01

    Solar energy installations in deserts are on the rise, fueled by technological advances and policy changes. Deserts, with a combination of high solar radiation and large areas unusable for crop production, are ideal locations for large-scale solar installations. For efficient power generation, solar infrastructures require large amounts of water for operation (mostly for cleaning panels and dust suppression), leading to significant moisture additions to desert soil. A pertinent question is how to use these moisture inputs for sustainable agriculture/biofuel production. We investigated the water requirements for large solar infrastructures in North American deserts and explored the possibilities for integrating biofuel production with solar infrastructure. In co-located systems, the possible decline in yields due to shading by solar panels may be offset by the benefits of periodic water addition to biofuel crops, simpler dust management and more efficient power generation in solar installations, and decreased impacts on natural habitats and scarce resources in deserts. In particular, we evaluated the potential to integrate solar infrastructure with biomass feedstocks that grow in arid and semi-arid lands (Agave spp.), which are found to produce high yields with minimal water inputs. To this end, we conducted a detailed life cycle analysis for these coupled agave biofuel and solar energy systems to explore the tradeoffs and synergies in the context of energy input-output, water use, and carbon emissions.

  2. Dendroscope: An interactive viewer for large phylogenetic trees

    PubMed Central

    Huson, Daniel H; Richter, Daniel C; Rausch, Christian; Dezulian, Tobias; Franz, Markus; Rupp, Regula

    2007-01-01

    Background Research in evolution requires software for visualizing and editing phylogenetic trees, for increasingly large datasets, such as arise in expression analysis or metagenomics, for example. It would be desirable to have a program that provides these services in an efficient and user-friendly way, and that can be easily installed and run on all major operating systems. Although a large number of tree visualization tools are freely available, some as part of more comprehensive analysis packages, all have drawbacks in one or more domains. They either lack some of the standard tree visualization techniques or basic graphics and editing features, or they are restricted to small trees containing only tens of thousands of taxa. Moreover, many programs are difficult to install or are not available for all common operating systems. Results We have developed a new program, Dendroscope, for the interactive visualization and navigation of phylogenetic trees. The program provides all standard tree visualizations and is optimized to run interactively on trees containing hundreds of thousands of taxa. The program provides tree editing and graphics export capabilities. To support the inspection of large trees, Dendroscope offers a magnification tool. The software is written in Java 1.4 and installers are provided for Linux/Unix, MacOS X and Windows XP. Conclusion Dendroscope is a user-friendly program for visualizing and navigating phylogenetic trees, for both small and large datasets. PMID:18034891

  3. Large Synoptic Survey Telescope mount final design

    NASA Astrophysics Data System (ADS)

    Callahan, Shawn; Gressler, William; Thomas, Sandrine J.; Gessner, Chuck; Warner, Mike; Barr, Jeff; Lotz, Paul J.; Schumacher, German; Wiecha, Oliver; Angeli, George; Andrew, John; Claver, Chuck; Schoening, Bill; Sebag, Jacques; Krabbendam, Victor; Neill, Doug; Hileman, Ed; Muller, Gary; Araujo, Constanza; Orden Martinez, Alfredo; Perezagua Aguado, Manuel; García-Marchena, Luis; Ruiz de Argandoña, Ismael; Romero, Francisco M.; Rodríguez, Ricardo; Carlos González, José; Venturini, Marco

    2016-08-01

    This paper describes the status and details of the Large Synoptic Survey Telescope (LSST) mount assembly (TMA). On June 9, 2014, the contract for the design and build of the TMA was awarded to GHESA Ingeniería y Tecnología, S.A. and Asturfeito, S.A. The design successfully passed the preliminary design review on October 2, 2015 and the final design review on January 29, 2016. This paper describes the detailed design by subsystem, analytical model results, preparations being taken to complete the fabrication, and the transportation and installation plans to install the mount on Cerro Pachón in Chile. This large project is the culmination of work by many people, and the authors would like to thank everyone who has contributed to its success.

  4. Ultra Efficient Engine Technology Systems Integration and Environmental Assessment

    NASA Technical Reports Server (NTRS)

    Daggett, David L.; Geiselhart, Karl A. (Technical Monitor)

    2002-01-01

    This study documents the design and analysis of four types of advanced technology commercial transport airplane configurations (small, medium, large, and very large) with an assumed technology readiness date of 2010. These airplane configurations were used as a platform to evaluate the design concept and installed performance of advanced technology engines being developed under the NASA Ultra Efficient Engine Technology (UEET) program. Upon installation of the UEET engines onto the UEET advanced technology airframes, the small and medium airplanes both achieved an additional 16% increase in fuel efficiency when using GE advanced turbofan engines. The large airplane achieved an 18% increase in fuel efficiency when using the P&W geared fan engine. The very large airplane (i.e., the BWB), also using P&W geared fan engines, achieved only an additional 16%, which was attributed to a non-optimized airplane/engine combination.

  5. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure, so that stakeholders can control their own resources and fair use of those resources is ensured. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP, which included tools for run preparation, submission, monitoring and management. This approach saves user sites the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.

  6. Implementing Solar Technologies at Airports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandt, A.; Romero, R.

    2014-07-01

    Federal agencies, such as the Department of Defense and Department of Homeland Security, as well as numerous private entities are actively pursuing the installation of solar technologies to help reduce fossil fuel energy use and associated emissions, meet sustainability goals, and create more robust or reliable operations. One potential approach identified for siting solar technologies is the installation of solar energy technologies at airports and airfields, which present a significant opportunity for hosting solar technologies due to large amounts of open land. This report focuses largely on the Federal Aviation Administration's (FAA's) policies toward siting solar technologies at airports.

  7. Shielding and Radiation Protection in Ion Beam Therapy Facilities

    NASA Astrophysics Data System (ADS)

    Wroe, Andrew J.; Rightnar, Steven

    Radiation protection is a key aspect of any radiotherapy (RT) department and is made even more complex in ion beam therapy (IBT) by the large facility size, secondary particle spectra and intricate installation of these centers. In IBT, large and complex radiation producing devices are used and made available to the public for treatment. It is thus the responsibility of the facility to put in place measures to protect not only the patient but also the general public, occupationally and nonoccupationally exposed personnel working within the facility, and electronics installed within the department to ensure maximum safety while delivering maximum up-time.

  8. Nacelle Aerodynamic and Inertial Loads (NAIL) project

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A flight test survey of pressures measured on wing, pylon, and nacelle surfaces and of the operating loads on Boeing 747/Pratt & Whitney JT9D-7A nacelles was made to provide information on airflow patterns surrounding the propulsion system installations and to clarify processes responsible for inservice deterioration of fuel economy. Airloads at takeoff rotation were found to be larger than at any other normal service condition because of the combined effects of high angle of attack and high engine airflow. Inertial loads were smaller than previous estimates indicated. A procedure is given for estimating inlet airloads at low speeds and high angles of attack for any underwing high bypass ratio turbofan installation approximately resembling the one tested. Flight procedure modifications are suggested that may result in better fuel economy retention in service. Pressures were recorded on the core cowls and pylons of both engine installations and on adjacent wing surfaces for use in development of computer codes for analysis of installed propulsion system aerodynamic drag interference effects.

  9. Study of the damping characteristics of general aviation aircraft panels and development of computer programs to calculate the effectiveness of interior noise control treatment, part 1

    NASA Technical Reports Server (NTRS)

    Navaneethan, R.; Hunt, J.; Quayle, B.

    1982-01-01

    Tests were carried out on 20 inch x 20 inch panels at different test conditions using free-free panels, clamped panels, and panels as installed in the KU-FRL acoustic test facility. Tests with free-free panels verified the basic equipment set-up and test procedure. They also provided a basis for comparison. The results indicate that the effect of installed panels is to increase the damping ratio at the same frequency. However, a direct comparison is not possible, as the fundamental frequency of a free-free panel differs from the resonance frequency of the panel when installed. The damping values of panels installed in the test facility are closer to the damping values obtained with fixed-fixed panels. Effects of damping tape, stiffeners, and bonded and riveted edged conditions were also investigated. Progress in the development of a simple interior noise level control program is reported.

  10. Evaluation of analysis techniques for low frequency interior noise and vibration of commercial aircraft

    NASA Technical Reports Server (NTRS)

    Landmann, A. E.; Tillema, H. F.; Marshall, S. E.

    1989-01-01

    The application of selected analysis techniques to low frequency cabin noise associated with advanced propeller engine installations is evaluated. Three design analysis techniques were chosen for evaluation: finite element analysis, statistical energy analysis (SEA), and a power flow method using elements of SEA (the computer program Propeller Aircraft Interior Noise). An overview of the three procedures is provided. Data from tests of a 727 airplane (modified to accept a propeller engine) were used to compare with predictions. Comparisons of predicted and measured levels at the end of the first year's effort showed reasonable agreement, leading to the conclusion that each technique had value for propeller engine noise predictions on large commercial transports. However, variations in agreement were large enough to remain cautious and to lead to recommendations for further work with each technique. Assessment of the second year's results leads to the conclusion that the selected techniques can accurately predict trends and can be useful to a designer, but that absolute level predictions remain unreliable due to the complexity of the aircraft structure and low modal densities.

  11. Large field distributed aperture laser semiactive angle measurement system design with imaging fiber bundles.

    PubMed

    Xu, Chunyun; Cheng, Haobo; Feng, Yunpeng; Jing, Xiaoli

    2016-09-01

    A type of laser semiactive angle measurement system is designed for target detecting and tracking. Only one detector is used to detect target location from four distributed aperture optical systems through a 4×1 imaging fiber bundle. A telecentric optical system in image space is designed to increase the efficiency of imaging fiber bundles. According to the working principle of a four-quadrant (4Q) detector, fiber diamond alignment is adopted between an optical system and a 4Q detector. The structure of the laser semiactive angle measurement system is, we believe, novel. Tolerance analysis is carried out to determine tolerance limits of manufacture and installation errors of the optical system. The performance of the proposed method is identified by computer simulations and experiments. It is demonstrated that the linear region of the system is ±12°, with measurement error of better than 0.2°. In general, this new system can be used with large field of view and high accuracy, providing an efficient, stable, and fast method for angle measurement in practical situations.
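The working principle of the four-quadrant (4Q) detector mentioned above can be sketched as follows: the spot position is inferred from the imbalance of the four quadrant photocurrents. The quadrant labeling and normalization below are the standard textbook convention, not values taken from the paper, and the relation is linear only near the detector center.

```python
# Sketch of 4Q-detector spot-position estimation (standard normalized
# difference formulas; illustrative only, not the paper's calibration).
def spot_offset(a, b, c, d):
    """Normalized spot position from quadrant photocurrents.

    Quadrants: a = upper-right, b = upper-left,
               c = lower-left,  d = lower-right.
    Returns (x, y) in [-1, 1]; approximately linear near the center.
    """
    total = a + b + c + d
    x = ((a + d) - (b + c)) / total  # right half minus left half
    y = ((a + b) - (c + d)) / total  # upper half minus lower half
    return x, y

# A spot centered on the detector gives zero offset.
print(spot_offset(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0)
```

Mapping this normalized offset to an angle then requires a calibration against the optical system's focal geometry, which is where the tolerance analysis of manufacture and installation errors enters.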

  12. ICPSU Install onto Mobile Launcher

    NASA Image and Video Library

    2018-03-16

    A heavy-lift crane slowly lifts the Interim Cryogenic Propulsion Stage Umbilical (ICPSU) high up for installation on the tower of the mobile launcher (ML) at NASA's Kennedy Space Center in Florida. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical is located at about the 240-foot-level of the mobile launcher and will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  13. ICPSU Install onto Mobile Launcher

    NASA Image and Video Library

    2018-03-16

    A crane and rigging lines are used to install the Interim Cryogenic Propulsion Stage Umbilical (ICPSU) high up on the mobile launcher (ML) at NASA's Kennedy Space Center in Florida. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical is located at about the 240-foot-level of the mobile launcher and will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  14. ICPSU Install onto Mobile Launcher - Preps for Lift

    NASA Image and Video Library

    2018-03-15

    Construction workers with JP Donovan assist with preparations to lift and install the Interim Cryogenic Propulsion Stage Umbilical on the tower of the mobile launcher at NASA's Kennedy Space Center in Florida. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical is located at about the 240-foot-level of the mobile launcher and will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  15. ICPSU Install onto Mobile Launcher

    NASA Image and Video Library

    2018-03-16

    Construction workers with JP Donovan install the Interim Cryogenic Propulsion Stage Umbilical (ICPSU) at about the 240-foot-level of the mobile launcher (ML) tower at NASA's Kennedy Space Center in Florida. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  16. ICPSU Install onto Mobile Launcher

    NASA Image and Video Library

    2018-03-16

    A heavy-lift crane slowly lifts the Interim Cryogenic Propulsion Stage Umbilical (ICPSU) up for installation on the tower of the mobile launcher (ML) at NASA's Kennedy Space Center in Florida. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical is located at about the 240-foot-level of the mobile launcher and will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  17. ICPSU Install onto Mobile Launcher - Preps for Lift

    NASA Image and Video Library

    2018-03-15

    The mobile launcher (ML) tower is lit up before early morning sunrise at NASA's Kennedy Space Center in Florida. Preparations are underway to lift and install the Interim Cryogenic Propulsion Stage Umbilical (ICPSU) at about the 240-foot-level on the tower. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  18. The Logistics Of Installing Pacs In An Existing Medical Center

    NASA Astrophysics Data System (ADS)

    Saarinen, Allan O.; Goodsitt, Mitchell M.; Loop, John W.

    1989-05-01

    A largely overlooked issue in the Picture Archiving and Communication Systems (PACS) area is the tremendous amount of site planning activity required to install such a system in an existing medical center. Present PACS equipment requires significant hospital real estate, specialized electrical power, cabling, and environmental controls to operate properly. Marshaling the hospital resources necessary to install PACS equipment requires many different players. The site preparation costs are nontrivial and usually include a number of hidden expenses. This paper summarizes the experience of the University of Washington Department of Radiology in installing an extensive digital imaging network (DIN) and PACS throughout the Department and several clinics in the hospital. The major logistical problems encountered at the University are discussed, a few recommendations are made, and the installation costs are documented. Overall, the University's site preparation costs equalled about seven percent (7%) of the total PACS equipment expenditure at the site.

  19. System Administrator for LCS Development Sets

    NASA Technical Reports Server (NTRS)

    Garcia, Aaron

    2013-01-01

    The Spaceport Command and Control System Project is creating a Checkout and Control System that will eventually launch the next generation of vehicles from Kennedy Space Center. KSC has a large set of Development and Operational equipment already deployed in several facilities, including the Launch Control Center, which requires support. The position of System Administrator will complete tasks across multiple platforms (Linux/Windows), many of them virtual. The Hardware Branch of the Control and Data Systems Division at the Kennedy Space Center uses system administrators for a variety of tasks. The position of system administrator comes with many responsibilities: maintaining computer systems, repairing or setting up hardware, installing software, and creating backups and recovering drive images are a sample of the jobs one must complete. Other duties may include working with clients in person or over the phone and resolving their computer system needs. Training is a major part of learning how an organization functions and operates, and NASA is no exception. Training on how to better protect the NASA computer infrastructure will be a topic to learn, followed by NASA work policies. Attending meetings and discussing progress will be expected. A system administrator will have an account with root access. Root access gives a user full access to a computer system and/or network. System admins can remove critical system files and recover files using a tape backup. Problem solving will be an important skill to develop in order to complete the many tasks.

  20. Rapid Energy Modeling Workflow Demonstration Project

    DTIC Science & Technology

    2014-01-01

    Abbreviations from the report include BIM (Building Information Model), BLCC (building life cycle costs), BPA (Building Performance Analysis), and CAD (computer assisted...). Participants were invited to enroll in the Autodesk Building Performance Analysis (BPA) Certificate Program under a group specifically for DoD installations.

  1. 39 CFR 775.6 - Categorical exclusions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... relate to routine activities such as personnel, organizational changes or similar administrative... quality. (12) Procurement or disposal of mail handling or transport equipment. (13) Acquisition, installation, operation, removal or disposal of communication systems, computers and data processing equipment...

  2. CADDIS Volume 4. Data Analysis: Download Software

    EPA Pesticide Factsheets

    Overview of the data analysis tools available for download on CADDIS. Provides instructions for downloading and installing CADStat, access to a Microsoft Excel macro for computing SSDs, and a brief overview of command-line use of R, a statistical software package.

  3. 47 CFR 54.639 - Ineligible expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., including the following: i. Computers, including servers, and related hardware (e.g., printers, scanners, laptops), unless used exclusively for network management, maintenance, or other network operations; ii... installation/construction; marketing studies, marketing activities, or outreach to potential network members...

  4. 47 CFR 54.639 - Ineligible expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., including the following: i. Computers, including servers, and related hardware (e.g., printers, scanners, laptops), unless used exclusively for network management, maintenance, or other network operations; ii... installation/construction; marketing studies, marketing activities, or outreach to potential network members...

  5. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    NASA Astrophysics Data System (ADS)

    Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.

    2011-12-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.

  6. Increasing thermal efficiency of solar flat plate collectors

    NASA Astrophysics Data System (ADS)

    Pona, J.

    A study of methods to increase the efficiency of heat transfer in flat plate solar collectors is presented. In order to increase the heat transfer from the absorber plate to the working fluid inside the tubes, turbulent flow was induced by installing baffles within the tubes. The installation of the baffles resulted in a 7 to 12% increase in collector efficiency. Experiments were run on both 1 sq ft and 2 sq ft collectors each fitted with either slotted baffles or tubular baffles. A computer program was run comparing the baffled collector to the standard collector. The results obtained from the computer show that the baffled collectors have a 2.7% increase in life cycle cost (LCC) savings and a 3.6% increase in net cash flow for use in domestic hot water systems, and even greater increases when used in solar heating systems.
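The life-cycle-cost comparison described above can be sketched in miniature: discount the value of the energy delivered over the system life and net out the capital cost. All prices, efficiencies, lifetimes, and the discount rate below are assumed for illustration; they are not the values used in the study's program.

```python
# Minimal life-cycle-cost (LCC) sketch comparing a baffled collector to a
# standard one. All inputs are illustrative assumptions, not study data.
def lcc_savings(annual_insolation_kwh, efficiency, energy_price,
                capital_cost, years=20, discount_rate=0.05):
    """Present value of delivered-energy savings, net of capital cost."""
    delivered = annual_insolation_kwh * efficiency  # kWh/year to the load
    pv = sum(delivered * energy_price / (1 + discount_rate) ** y
             for y in range(1, years + 1))
    return pv - capital_cost

# Assumed: 4000 kWh/yr incident, $0.12/kWh, baffles add ~10% relative
# efficiency (within the 7-12% range reported) at modest extra cost.
standard = lcc_savings(4000, 0.50, 0.12, capital_cost=1500)
baffled = lcc_savings(4000, 0.55, 0.12, capital_cost=1600)
print(baffled > standard)  # True: the efficiency gain outweighs the cost
```

Under these assumptions the efficiency gain outweighs the added hardware cost, which is the same direction of result the study reports for its LCC and net-cash-flow comparisons.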

  7. Design of an HF-Band RFID System with Multiple Readers and Passive Tags for Indoor Mobile Robot Self-Localization

    PubMed Central

    Mi, Jian; Takahashi, Yasutake

    2016-01-01

    Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for the accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in the RFID system configuration requiring much fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performances of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor based on MCL in simulated and real environments. The results of simulations and real environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot. PMID:27483279
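A minimal sketch of MCL over a grid of floor tags, assuming a regular tag spacing, a circular antenna read range, and simple hit/miss likelihood values; none of these numbers are taken from the paper.

```python
import random

# Monte-Carlo localization sketch: predict, weight by a tag-detection
# likelihood, resample. Spacing, radius, and noise are assumptions.
random.seed(0)

TAG_SPACING = 0.3     # m between floor tags (assumed)
DETECT_RADIUS = 0.2   # antenna read range in m (assumed)

def nearest_tag(x, y):
    """Center of the closest tag in a regular square grid."""
    return (round(x / TAG_SPACING) * TAG_SPACING,
            round(y / TAG_SPACING) * TAG_SPACING)

def likelihood(px, py, detected):
    """Hit/miss model: does the hypothesized pose explain the reading?"""
    tx, ty = nearest_tag(px, py)
    in_range = (px - tx) ** 2 + (py - ty) ** 2 <= DETECT_RADIUS ** 2
    return 0.9 if in_range == detected else 0.1

def mcl_step(particles, dx, dy, detected):
    """One predict-weight-resample cycle of the particle filter."""
    moved = [(px + dx + random.gauss(0, 0.02),
              py + dy + random.gauss(0, 0.02)) for px, py in particles]
    weights = [likelihood(px, py, detected) for px, py in moved]
    return random.choices(moved, weights=weights, k=len(moved))

# 200 particles spread over a 1 m x 1 m area; robot moves 10 cm in x
# and its reader reports a tag detection.
particles = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
particles = mcl_step(particles, 0.1, 0.0, detected=True)
print(len(particles))  # 200
```

The paper's contribution lies in shaping the likelihood model to the antenna size and communication range and in arranging readers and tags so that far fewer of each are needed; the filter skeleton itself is the standard MCL cycle shown here.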

  8. Design of an HF-Band RFID System with Multiple Readers and Passive Tags for Indoor Mobile Robot Self-Localization.

    PubMed

    Mi, Jian; Takahashi, Yasutake

    2016-07-29

Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in an RFID system configuration requiring far fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performance of MCL-based self-localization using the high-frequency (HF)-band RFID system, with eight RFID readers and a lower density of RFID tags installed on the floor, in simulated and real environments. The results of simulations and real-environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot.
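As an illustration of the MCL approach the abstract describes, one update step can be sketched as follows. This is a minimal sketch with invented noise levels, reader range, and likelihood values, not the authors' actual likelihood model for HF-band tag detection:

```python
import math
import random

# Minimal sketch of one Monte-Carlo localization (MCL) step for a robot that
# detects floor-mounted tags. Reader range, odometry noise and likelihood
# values are placeholders for illustration only.

def mcl_step(particles, control, detected_tags, reader_range=0.3):
    # 1. Motion update: propagate each particle with noisy odometry.
    moved = [(x + control[0] + random.gauss(0, 0.01),
              y + control[1] + random.gauss(0, 0.01)) for x, y in particles]
    # 2. Measurement update: weight particles by tag-detection likelihood.
    weights = []
    for x, y in moved:
        w = 1.0
        for tx, ty in detected_tags:
            d = math.hypot(tx - x, ty - y)
            # Detection is likely only when the tag lies within reader range.
            w *= 0.9 if d <= reader_range else 0.05
        weights.append(w)
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resampling: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

The pose estimate is then the (weighted) mean of the resampled particles; particles inconsistent with the detected tags die out over successive steps.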

  9. Connecting an Ocean-Bottom Broadband Seismometer to a Seafloor Cabled Observatory: A Prototype System in Monterey Bay

    NASA Astrophysics Data System (ADS)

    McGill, P.; Neuhauser, D.; Romanowicz, B.

    2008-12-01

    The Monterey Ocean-Bottom Broadband (MOBB) seismic station was installed in April 2003, 40 km offshore from the central coast of California at a seafloor depth of 1000 m. It comprises a three-component broadband seismometer system (Guralp CMG-1T), installed in a hollow PVC caisson and buried under the seafloor; a current meter; and a differential pressure gauge. The station has been operating continuously since installation with no connection to the shore. Three times each year, the station is serviced with the aid of a Remotely Operated Vehicle (ROV) to change the batteries and retrieve the seismic data. In February 2009, the MOBB system will be connected to the Monterey Accelerated Research System (MARS) seafloor cabled observatory. The NSF-funded MARS observatory comprises a 52 km electro-optical cable that extends from a shore facility in Moss Landing out to a seafloor node in Monterey Bay. Once installation is completed in November 2008, the node will provide power and data to as many as eight science experiments through underwater electrical connectors. The MOBB system is located 3 km from the MARS node, and the two will be connected with an extension cable installed by an ROV with the aid of a cable-laying toolsled. The electronics module in the MOBB system is being refurbished to support the connection to the MARS observatory. The low-power autonomous data logger has been replaced with a PC/104 computer stack running embedded Linux. This new computer will run an Object Ring Buffer (ORB), which will collect data from the various MOBB sensors and forward it to another ORB running on a computer at the MARS shore station. There, the data will be archived and then forwarded to a third ORB running at the UC Berkeley Seismological Laboratory. Timing will be synchronized among MOBB's multiple acquisition systems using NTP, GPS clock emulation, and a precise timing signal from the MARS cable. 
The connection to the MARS observatory will provide real-time access to the MOBB data and eliminate the need for frequent servicing visits. The new system uses off-the-shelf hardware and open-source software, and will serve as a prototype for future instruments connected to seafloor cabled observatories.

  10. Temporary large guide signs.

    DOT National Transportation Integrated Search

    2014-05-01

    A common issue during phased highway construction projects is the need to temporarily relocate : large guide signs on the roadside or install new guide signs for temporary use. The conventional concrete : foundations used for these signs are costly a...

  11. Networking via wireless bridge produces greater speed and flexibility, lowers cost.

    PubMed

    1998-10-01

Wireless computer networking. Computer connectivity is essential in today's high-tech health care industry. But telephone lines aren't fast enough, and high-speed connections like T-1 lines are costly. Read about an Ohio community hospital that installed a wireless network "bridge" to connect buildings that are miles apart, creating a reliable high-speed link that costs one-tenth as much as a T-1 line.

  12. The Brookline LOGO Project. Final Report. Part III: Profiles of Individual Students' Work. A.I. Memo No. 546. LOGO Memo No. 54.

    ERIC Educational Resources Information Center

    Watt, Daniel

    During the school year 1977/78 four computers equipped with LOGO and Turtle Graphics were installed in an elementary school in Brookline, Massachusetts. All sixth grade students in the school had between 20 and 40 hours of hands-on experience with the computers, and the work of 16 students ranging from intellectually gifted and average to learning…

  13. The Brookline LOGO Project. Final Report. Part II: Project Summary and Data Analysis. A.I. Memo No. 545. LOGO Memo No. 53.

    ERIC Educational Resources Information Center

    Papert, Seymour; And Others

    During the school year 1977/78 four computers equipped with LOGO and Turtle Graphics were installed in an elementary school in Brookline, Massachusetts. All sixth grade students in the school had between 20 and 40 hours of hands-on experience with the computers, and the work of 16 students ranging from intellectually gifted and average to learning…

  14. Environmental Analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Burns & McDonnell Engineering's environmental control study is assisted by NASA's Computer Software Management and Information Center's programs in environmental analyses. Company is engaged primarily in design of such facilities as electrical utilities, industrial plants, wastewater treatment systems, dams and reservoirs and aviation installations. Company also conducts environmental engineering analyses and advises clients as to the environmental considerations of a particular construction project. Company makes use of many COSMIC computer programs which have allowed substantial savings.

  15. 98. View of IBM digital computer model 7090 magnet core ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    98. View of IBM digital computer model 7090 magnet core installation. ITT Artic Services, Inc., Official photograph BMEWS Site II, Clear, AK, by unknown photographer, 17 September 1965. BMEWS, clear as negative no. A-6606. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  16. A Generalized-Compliant-Motion Primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1993-01-01

    Computer program bridges gap between planning and execution of compliant robotic motions developed and installed in control system of telerobot. Called "generalized-compliant-motion primitive," one of several task-execution-primitive computer programs, which receives commands from higher-level task-planning programs and executes commands by generating required trajectories and applying appropriate control laws. Program comprises four parts corresponding to nominal motion, compliant motion, ending motion, and monitoring. Written in C language.

  17. Ray Modeling Methods for Range Dependent Ocean Environments

    DTIC Science & Technology

    1983-12-01

    the eikonal equation, gives rise to equations for ray paths which are perpendicular to the wave fronts. Equation II.4, the transport equation, leads... databases for use by MEDUSA. The author has assisted in the installation of MEDUSA at computer facilities which possess databases containing archives of...sound velocity profiles, bathymetry, and bottom loss data. At each computer site, programs convert the archival data retrieved by the database system

  18. OASIS: a data and software distribution service for Open Science Grid

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; De Stefano, J.; Hover, J.; Quick, R.; Teige, S.

    2014-06-01

The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for distributing data and software is desirable. The architecture for the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.

  19. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    PubMed

    Trudgian, David C; Mirzaei, Hamid

    2012-12-07

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.
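The modular local/remote scheduling described above can be caricatured as interchangeable submission backends for the same job description. This is a hypothetical sketch; the class and method names are invented for illustration and are not CPFP's actual interface:

```python
# Hypothetical sketch of modular job scheduling: the same job can be sent to
# a local machine, a local cluster, or the cloud through interchangeable
# backends. Names are invented, not CPFP's real API.

class LocalBackend:
    def submit(self, job):
        return f"local:{job}"      # run directly on this PC or server

class ClusterBackend:
    def submit(self, job):
        return f"cluster:{job}"    # e.g. hand off to a batch scheduler

class CloudBackend:
    def submit(self, job):
        return f"aws:{job}"        # e.g. launch on a cloud instance

def dispatch(job, backends):
    # Submit to the first configured backend; a real pipeline would also
    # weigh cost, queue depth and data locality before choosing.
    for backend in backends:
        return backend.submit(job)
    raise RuntimeError("no backend configured")
```

The design choice this sketches is that the pipeline code never needs to know where a job runs; adding a new resource means adding one backend class.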

  20. Performance history and upgrades for the DIII-D gyrotron complex

    DOE PAGES

    Lohr, J.; Anderson, J. P.; Cengher, M.; ...

    2015-03-12

    The gyrotron installation on the DIII-D tokamak has been in operation at the second harmonic of the electron cyclotron resonance since the mid-1990s. Prior to that a large installation of ten 60 GHz tubes was operated at the fundamental resonance. The system has been upgraded regularly and is an everyday tool for experiments on DIII-D.

  1. cisPath: an R/Bioconductor package for cloud users for visualization and management of functional protein interaction networks.

    PubMed

    Wang, Likun; Yang, Luhe; Peng, Zuohan; Lu, Dan; Jin, Yan; McNutt, Michael; Yin, Yuxin

    2015-01-01

With the burgeoning development of cloud technology and services, there is an increasing number of users who prefer the cloud to run their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with the use of this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, this package is particularly useful for cloud users. Its easy installation and operation are attractive qualities for R beginners and users with no previous experience with cloud services.

  2. cisPath: an R/Bioconductor package for cloud users for visualization and management of functional protein interaction networks

    PubMed Central

    2015-01-01

Background With the burgeoning development of cloud technology and services, there is an increasing number of users who prefer the cloud to run their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. Results With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with the use of this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. Conclusions This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, this package is particularly useful for cloud users. Its easy installation and operation are attractive qualities for R beginners and users with no previous experience with cloud services. PMID:25708840

  3. Reconfigurable wireless monitoring systems for bridges: validation on the Yeondae Bridge

    NASA Astrophysics Data System (ADS)

    Kim, Junhee; Lynch, Jerome P.; Zonta, Daniele; Lee, Jong-Jae; Yun, Chung-Bang

    2009-03-01

The installation of a structural monitoring system on a medium- to large-span bridge can be a challenging undertaking due to high system costs and time-consuming installations. However, these historical challenges can be eliminated by using wireless sensors as the primary building block of a structural monitoring system. Wireless sensors are low-cost data acquisition nodes that utilize wireless communication to transfer data from the sensor to the data repository. Another advantageous characteristic of wireless sensors is their ability to be easily removed and reinstalled in another sensor location on the same structure; this installation modularity is highlighted in this study. Wireless sensor nodes designed for structural monitoring applications are installed on the 180 m long Yeondae Bridge (Korea) to measure the dynamic response of the bridge to controlled truck loading. To attain a high nodal density with a small number (20) of wireless sensors, the wireless sensor network is installed three times with each installation concentrating sensors in one portion of the bridge. Using forced and free vibration response data from the three installations, the modal properties of the bridge are accurately identified. Intentional nodal overlapping of the three different sensor installations allows mode shapes from each installation to be stitched together into global mode shapes. Specifically, modal properties of the Yeondae Bridge are derived off-line using frequency domain decomposition (FDD) modal analysis methods.
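The frequency domain decomposition (FDD) method mentioned above has a compact core computation: form the cross-spectral density matrix of all sensor channels and take its singular value decomposition at each frequency line; peaks in the first singular value mark natural frequencies, and the corresponding singular vector approximates the mode shape. A minimal sketch, assuming numpy and scipy are available (the function name `fdd` is ours, not from the paper):

```python
import numpy as np
from scipy.signal import csd

# Minimal FDD sketch (illustrative, not the study's implementation): build the
# cross-spectral density (CSD) matrix of the measured responses, then SVD it
# at every frequency line.

def fdd(signals, fs, nperseg=256):
    n_ch = signals.shape[0]
    freqs, _ = csd(signals[0], signals[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            # CSD between channel i and channel j.
            _, G[:, i, j] = csd(signals[i], signals[j], fs=fs, nperseg=nperseg)
    s1 = np.empty(len(freqs))
    modes = np.empty((len(freqs), n_ch), dtype=complex)
    for k in range(len(freqs)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k] = S[0]        # peaks here indicate natural frequencies
        modes[k] = U[:, 0]  # approximate mode shape at this frequency
    return freqs, s1, modes
```

Stitching mode shapes from the three overlapping installations then amounts to scaling each installation's shape so that the overlapped sensors agree.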

  4. The geography of solar energy in the United States: Market definition, industry structure, and choice in solar PV adoption

    DOE PAGES

    O’Shaughnessy, Eric; Nemet, Gregory F.; Darghouth, Naïm

    2018-01-30

The solar photovoltaic (PV) installation industry comprises thousands of firms around the world who collectively installed nearly 200 million panels in 2015. Spatial analysis of the emerging industry has received considerable attention from the literature, especially on the demand side concerning peer effects and adopter clustering. However, this research area does not include similarly sophisticated spatial analysis on the supply side of the installation industry. The lack of understanding of the spatial structure of the PV installation industry leaves PV market research to rely on jurisdictional lines, such as counties, to define geographic PV markets. We develop an approach that uses the spatial distribution of installers' activity to define geographic boundaries for PV markets. Our method is useful for PV market research and applicable in the contexts of other industries. We use our approach to demonstrate that the PV industry in the United States is spatially heterogeneous. Despite the emergence of some national-scale PV installers, installers are largely local and installer communities are unique from one region to the next. The social implications of the spatial heterogeneity of the emerging PV industry involve improving understanding of issues such as market power, industry consolidation, and how much choice potential adopters have.

  5. The geography of solar energy in the United States: Market definition, industry structure, and choice in solar PV adoption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Shaughnessy, Eric; Nemet, Gregory F.; Darghouth, Naïm

The solar photovoltaic (PV) installation industry comprises thousands of firms around the world who collectively installed nearly 200 million panels in 2015. Spatial analysis of the emerging industry has received considerable attention from the literature, especially on the demand side concerning peer effects and adopter clustering. However, this research area does not include similarly sophisticated spatial analysis on the supply side of the installation industry. The lack of understanding of the spatial structure of the PV installation industry leaves PV market research to rely on jurisdictional lines, such as counties, to define geographic PV markets. We develop an approach that uses the spatial distribution of installers' activity to define geographic boundaries for PV markets. Our method is useful for PV market research and applicable in the contexts of other industries. We use our approach to demonstrate that the PV industry in the United States is spatially heterogeneous. Despite the emergence of some national-scale PV installers, installers are largely local and installer communities are unique from one region to the next. The social implications of the spatial heterogeneity of the emerging PV industry involve improving understanding of issues such as market power, industry consolidation, and how much choice potential adopters have.

  6. The capitalized value of rainwater tanks in the property market of Perth, Australia

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Polyakov, Maksym; Fogarty, James; Pannell, David J.

    2015-03-01

In response to frequent water shortages, governments in Australia have encouraged home owners to install rainwater tanks, often by provision of partial funding for their installation. A simple investment analysis suggests that the net private benefits of rainwater tanks are negative, potentially providing justification for funding support for tank installation if it results in sufficiently large public benefits. However, using a hedonic price analysis we estimate that there is a premium of up to AU$18,000 built into the sale prices of houses with tanks installed. The premium is likely to be greater than the costs of installation, even allowing for the cost of time that home owners must devote to research, purchase and installation. The premium is likely to reflect non-financial as well as financial benefits from installation. The robustness of our estimated premium is investigated using both bounded regression analysis and simulation methods and the result is found to be highly robust. The policy implication is that governments should not rely on payments to encourage installation of rainwater tanks, but instead should use information provision as their main mechanism for promoting uptake. Several explanations for the observation that many home owners are apparently leaving benefits on the table are canvassed, but no fully satisfactory explanation is identified.
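A hedonic price analysis of the kind described above regresses (log) sale price on house attributes plus a dummy for the feature of interest; the dummy's coefficient translates into a percentage premium. A minimal sketch with invented variable names, not the authors' actual specification:

```python
import numpy as np

# Illustrative hedonic regression (a sketch, not the paper's model):
# log(price) = b0 + b1 * has_tank + b2 * attributes + error, so that
# exp(b1) - 1 approximates the fractional price premium of a tank.

def tank_premium(log_prices, has_tank, attributes):
    # Design matrix: intercept, tank dummy, other house attributes.
    X = np.column_stack([np.ones(len(log_prices)), has_tank, attributes])
    beta, *_ = np.linalg.lstsq(X, log_prices, rcond=None)
    return np.exp(beta[1]) - 1.0  # fractional premium for a tank
```

On real data, `attributes` would hold controls such as floor area, land size and location, and the premium in dollars is the fraction times the mean house price.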

  7. Hazardous Materials Pharmacies - A Vital Component of a Robust P2 Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarter, S.

    2006-07-01

Integrating pollution prevention (P2) into the Department of Energy Integrated Safety Management (ISM) - Environmental Management System (EMS) approach, required by DOE Order 450.1, leads to an enhanced ISM program at large and complex installations and facilities. One of the building blocks to integrating P2 into a comprehensive environmental and safety program is the control and tracking of the amounts, types, and flow of hazardous materials used on a facility. Hazardous materials pharmacies (typically called HazMarts) provide a solid approach to resolving this issue through business practice changes that reduce use, avoid excess, and redistribute surplus. If understood from concept to implementation, the HazMart is a powerful tool for reducing pollution at the source, tracking inventory storage, controlling usage and flow, and summarizing data for reporting requirements. Pharmacy options can range from a strict, single control point for all hazardous materials to a virtual system, where the inventory is user controlled and reported over a common system. Designing and implementing HazMarts on large, diverse installations or facilities present a unique set of issues. This is especially true of research and development (R and D) facilities where the chemical use requirements are extensive and often classified. There are often multiple sources of supply; a wide variety of chemical requirements; a mix of containers ranging from small ampoules to large bulk storage tanks; and a wide range of tools used to track hazardous materials, ranging from simple purchase inventories to sophisticated tracking software. Computer systems are often not uniform in capacity, capability, or operating systems, making it difficult to use server-based unified tracking software. Each of these issues has a solution or set of solutions tied to fundamental business practices. 
Each requires an understanding of the problem at hand, which, in turn, requires good communication among all potential users. A key attribute of a successful HazMart is that everybody must use the same program. That requirement often runs directly into the biggest issue of all... institutional resistance to change. To be successful, the program has to be both a top-down and bottom-up driven process. The installation or facility must set the policy and the requirement, but all of the players have to buy in and participate in building and implementing the program. Dynamac's years of experience assessing hazardous materials programs, providing business case analyses, and recommending and implementing pharmacy approaches for federal agencies have provided us with key insights into the issues, problems, and the array of solutions available. This paper presents the key steps required to implement a HazMart, explores the advantages and pitfalls associated with a HazMart, and presents some options for implementing a pharmacy or HazMart on complex installations and R and D facilities. (authors)

  8. Boutiques: a flexible framework to integrate command-line applications in computing platforms

    PubMed Central

    Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C

    2018-01-01

    Abstract We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science. PMID:29718199

  9. Teaching infant car seat installation via interactive visual presence: An experimental trial.

    PubMed

    Schwebel, David C; Johnston, Anna; Rouse, Jenni

    2017-02-17

    A large portion of child restraint systems (car seats) are installed incorrectly, especially when first-time parents install infant car seats. Expert instruction greatly improves the accuracy of car seat installation but is labor intensive and difficult to obtain for many parents. This study was designed to evaluate the efficacy of 3 ways of communicating instructions for proper car seat installation: phone conversation; HelpLightning, a mobile application (app) that offers virtual interactive presence permitting both verbal and interactive (telestration) visual communication; and the manufacturer's user manual. A sample of 39 young adults of child-bearing age who had no previous experience installing car seats were recruited and randomly assigned to install an infant car seat using guidance from one of those 3 communication sources. Both the phone and interactive app were more effective means to facilitate accurate car seat installation compared to the user manual. There was a trend for the app to offer superior communication compared to the phone, but that difference was not significant in most assessments. The phone and app groups also installed the car seat more efficiently and perceived the communication to be more effective and their installation to be more accurate than those in the user manual group. Interactive communication may help parents install car seats more accurately than using the manufacturer's manual alone. This was an initial study with a modestly sized sample; if results are replicated in future research, there may be reason to consider centralized "call centers" that provide verbal and/or interactive visual instruction from remote locations to parents installing car seats, paralleling the model of centralized Poison Control centers in the United States.

  10. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    NASA Astrophysics Data System (ADS)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

The realization of astrophysical research requires the development of highly sensitive centimeter-band parabolic space radio telescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of this size do not provide the reflecting surface accuracy required for centimeter-band observations. Such a telescope, with a 10 m diameter mirror, is currently being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of existing thermal-vacuum chambers used to verify the reflecting surface accuracy of an SRT under the action of space environment factors. Numerical simulation is therefore the basis on which the chosen design must be validated. Such modeling should rest on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting surface deviations of a large deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that allow computational modeling of the reflecting surface deviations caused by all of these factors, and that account for deviation correction by the spacecraft orientation system. Modeling results for two modes of SRT operation (orientation relative to the Sun) are presented.

  11. A design strategy for the use of vortex generators to manage inlet-engine distortion using computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Levy, Ralph

    1991-01-01

    A reduced Navier-Stokes solution technique was successfully used to design vortex generator installations for the purpose of minimizing engine face distortion by restructuring the development of secondary flow that is induced in typical 3-D curved inlet ducts. The results indicate that there exists an optimum axial location for this installation of corotating vortex generators, and within this configuration, there exists a maximum spacing between generator blades above which the engine face distortion increases rapidly. Installed vortex generator performance, as measured by engine face circumferential distortion descriptors, is sensitive to Reynolds number and thereby the generator scale, i.e., the ratio of generator blade height to local boundary layer thickness. Installations of corotating vortex generators work well in terms of minimizing engine face distortion within a limited range of generator scales. Hence, the design of vortex generator installations is a point design, and all other conditions are off design. In general, the loss levels associated with a properly designed vortex generator installation are very small; thus, they represent a very good method to manage engine face distortion. This study also showed that the vortex strength, generator scale, and secondary flow field structure have a complicated and interrelated influence over engine face distortion, over and above the influence of the initial arrangement of generators.

  12. The Effect of Mounting Vortex Generators on the DTU 10MW Reference Wind Turbine Blade

    NASA Astrophysics Data System (ADS)

    Skrzypiński, Witold; Gaunaa, Mac; Bak, Christian

    2014-06-01

    The aim of the current work is to analyze possible advantages of mounting Vortex Generators (VG's) on a wind turbine blade. Specifically, the project aims at investigating at which radial sections of the DTU 10 MW Reference Wind Turbine blade it is most beneficial to mount the VG's in order to increase the Annual Energy Production (AEP) under realistic conditions. The present analysis was carried out in several steps: (1) The clean two-dimensional airfoil characteristics were first modified to emulate the effect of all possible combinations of VG's (1% high at suction side x/c=0.2-0.25) and two Leading Edge Roughness (LER) values along the whole blade span. (2) The combinations from Step 1, including the clean case, were subsequently modified to take into account three-dimensional effects. (3) BEM computations were carried out to determine the aerodynamic rotor performance using each of the datasets from Step 2 along the whole blade span for all wind speeds in the turbine control scheme. (4) Employing the assumption of radial independence between sections of the blades, and using the results of the BEM computations described in Step 3, it is possible to determine for each radial position independently whether it is beneficial to install VG's in the smooth and LER cases, respectively. The results indicated that surface roughness that corresponds to degradation of the power curve may to some extent be mitigated by installation of VG's. The present results also indicated that the optimal VG configuration in terms of maximizing AEP depends on the degree of severity of the LER. This is because, depending on the condition of the blade surface, installation of VG's on an incorrect blade span or too far out on the blade may cause a loss in AEP. The results also indicated that the worse the condition of the blade surface, the more gain may be obtained from the installation of VG's.
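    Step 4's radial-independence assumption means the install-or-not decision decouples per blade section: each section's AEP contribution with and without VG's can be compared in isolation. A toy sketch of that decision rule follows; all power and wind-hour numbers are invented for illustration and are not from the study.

    ```python
    # Per-section VG decision under the radial-independence assumption.
    # All numbers below are hypothetical, for illustration only.

    def section_aep(power_per_windspeed, hours_per_windspeed):
        """AEP contribution of one blade section (kWh)."""
        return sum(p * h for p, h in zip(power_per_windspeed, hours_per_windspeed))

    # Annual hours spent in three wind-speed bins (hypothetical distribution).
    hours = [2000.0, 1500.0, 500.0]

    # Per-section power contributions (kW) for clean vs. VG-equipped airfoils
    # (hypothetical: VG's help inboard where flow separates, hurt outboard
    # where the added drag dominates).
    sections = {
        "inboard":  {"clean": [40.0, 90.0, 120.0], "vg": [44.0, 97.0, 126.0]},
        "outboard": {"clean": [60.0, 140.0, 180.0], "vg": [58.0, 137.0, 178.0]},
    }

    decisions = {}
    for name, cfg in sections.items():
        gain = section_aep(cfg["vg"], hours) - section_aep(cfg["clean"], hours)
        decisions[name] = ("install VGs" if gain > 0 else "leave clean", gain)

    for name, (choice, gain) in decisions.items():
        print(f"{name}: {choice} (AEP change {gain:+.0f} kWh)")
    ```

    With LER added, the same comparison would be rerun on the roughness-degraded polars, which is how an incorrect span choice can flip from gain to loss.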

  13. Platform C Installation

    NASA Image and Video Library

    2016-10-19

    A heavy-lift crane lifts the first half of the C-level work platforms, C south, for NASA’s Space Launch System (SLS) rocket, up from the transfer aisle floor of the Vehicle Assembly Building (VAB) at NASA’s Kennedy Space Center in Florida. Large Tandemloc bars have been attached to the platform to keep it level during lifting and installation. The C platform will be installed on the south side of High Bay 3. The C platforms are the eighth of 10 levels of work platforms that will surround and provide access to the SLS rocket and Orion spacecraft for Exploration Mission 1. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to VAB High Bay 3, including installation of the new work platforms, to prepare for NASA’s Journey to Mars.

  14. ICPSU Install onto Mobile Launcher

    NASA Image and Video Library

    2018-03-16

    The mobile launcher (ML) is reflected in the sunglasses of a construction worker with JP Donovan at NASA's Kennedy Space Center in Florida. A crane is lifting the Interim Cryogenic Propulsion Stage Umbilical (ICPSU) up for installation on the tower of the ML. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical is located at about the 240-foot-level of the mobile launcher and will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  15. ICPSU Install onto Mobile Launcher - Preps for Lift

    NASA Image and Video Library

    2018-03-15

    A construction worker with JP Donovan helps prepare the Interim Cryogenic Propulsion Stage Umbilical (ICPSU) for installation high up on the tower of the mobile launcher (ML) at NASA's Kennedy Space Center in Florida. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical will be located at about the 240-foot-level of the mobile launcher and will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  16. ICPSU Install onto Mobile Launcher - Preps for Lift

    NASA Image and Video Library

    2018-03-15

    Construction workers with JP Donovan attach a heavy-lift crane to the Interim Cryogenic Propulsion Stage Umbilical (ICPSU) to prepare for lifting and installation on the mobile launcher (ML) tower at NASA's Kennedy Space Center in Florida. The last of the large umbilicals to be installed, the ICPSU will provide super-cooled hydrogen and liquid oxygen to the Space Launch System (SLS) rocket's interim cryogenic propulsion stage, or upper stage, at T-0 for Exploration Mission-1. The umbilical will be located at about the 240-foot-level of the ML and will supply fuel, oxidizer, gaseous helium, hazardous gas leak detection, electrical commodities and environment control systems to the upper stage of the SLS rocket during launch. Exploration Ground Systems is overseeing installation of the umbilicals on the ML.

  17. Dynamic Simulation on the Installation Process of HGIS in Transformer Substation

    NASA Astrophysics Data System (ADS)

    Lin, Tao; Li, Shaohua; Wang, Hu; Che, Deyong; Qi, Guangcai; Yao, Jianfeng; Zhang, Qingzhe

    The technological requirements for Hybrid Gas Insulated Switchgear (HGIS) installation in a transformer substation are high, and the number of quality control points is large. Most engineers and technicians in construction enterprises are not familiar with HGIS equipment. To solve these problems, the HGIS equipment was modeled on the computer with SolidWorks software. The installation process for the civil foundation and the closed-type equipment was dynamically optimized with virtual assembly technology, and notes and application guidance were composited into an animation file. The modeling and simulation techniques were also organized and classified. The resulting visual dynamic simulation can guide the actual HGIS construction process to a certain degree and can promote reasonable construction planning and management. It can also improve the methods and quality of staff training for electric power construction enterprises.

  18. Octopus-toolkit: a workflow to automate mining of public epigenomic and transcriptomic next-generation sequencing data

    PubMed Central

    Kim, Taemook; Seo, Hogyu David; Hennighausen, Lothar; Lee, Daeyoup

    2018-01-01

    Octopus-toolkit is a stand-alone application for retrieving and processing large sets of next-generation sequencing (NGS) data in a single step. Octopus-toolkit is an automated set-up-and-analysis pipeline utilizing the Aspera, SRA Toolkit, FastQC, Trimmomatic, HISAT2, STAR, Samtools, and HOMER applications. All the applications are installed on the user's computer when the program starts. Upon installation, it can automatically retrieve original files of various epigenomic and transcriptomic data sets, including ChIP-seq, ATAC-seq, DNase-seq, MeDIP-seq, MNase-seq and RNA-seq, from the Gene Expression Omnibus data repository. The downloaded files can then be sequentially processed to generate BAM and BigWig files, which are used for advanced analyses and visualization. Currently, it can process NGS data from popular model genomes such as human (Homo sapiens), mouse (Mus musculus), dog (Canis lupus familiaris), plant (Arabidopsis thaliana), zebrafish (Danio rerio), fruit fly (Drosophila melanogaster), worm (Caenorhabditis elegans), and budding yeast (Saccharomyces cerevisiae). With the processed files from Octopus-toolkit, the meta-analysis of various data sets, motif searches for DNA-binding proteins, and the identification of differentially expressed genes and/or protein-binding sites can easily be conducted with a few commands. Overall, Octopus-toolkit facilitates the systematic and integrative analysis of available epigenomic and transcriptomic NGS big data. PMID:29420797
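    The download-to-BAM sequence the abstract describes can be sketched as a command builder. The tool names (SRA Toolkit, FastQC, Trimmomatic, HISAT2, Samtools) come from the abstract, but the exact flags, the accession "SRR000001", and the index path are illustrative placeholders, not Octopus-toolkit's actual invocations.

    ```python
    # Sketch of the sequential processing order for one RNA-seq accession.
    # Commands are built as strings, not executed; flags are placeholders.
    import shlex

    def build_pipeline(acc, genome_index, threads=4):
        """Return shell commands for download -> QC -> trim -> align -> sorted BAM."""
        return [
            f"prefetch {acc}",                    # SRA Toolkit: download the run
            f"fasterq-dump {acc}",                # SRA Toolkit: extract FASTQ
            f"fastqc {acc}.fastq",                # FastQC: quality report
            f"trimmomatic SE {acc}.fastq {acc}.trim.fastq TRAILING:20",  # trim
            f"hisat2 -p {threads} -x {genome_index} -U {acc}.trim.fastq -S {acc}.sam",
            f"samtools sort -@ {threads} -o {acc}.bam {acc}.sam",  # SAM -> BAM
            f"samtools index {acc}.bam",
        ]

    pipeline = build_pipeline("SRR000001", "grch38/genome")
    for cmd in pipeline:
        print(shlex.split(cmd)[0])  # tool invoked at each stage
    ```

    In the real toolkit these stages run automatically for every accession in a data set, producing the BAM and BigWig files used downstream.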

  19. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale

    2010-12-01

    Supercomputing Centers (SCs) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys do not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" and "What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  20. Development of a Computing Cluster At the University of Richmond

    NASA Astrophysics Data System (ADS)

    Carbonneau, J.; Gilfoyle, G. P.; Bunn, E. F.

    2010-11-01

    The University of Richmond has developed a computing cluster to support the massive simulation and data analysis requirements for programs in intermediate-energy nuclear physics and cosmology. It is a 20-node, 240-core system running Red Hat Enterprise Linux 5. We have built and installed the physics software packages (Geant4, gemc, MADmap...) and developed shell and Perl scripts for running those programs on the remote nodes. The system has a theoretical processing peak of about 2500 GFLOPS. Testing with the High Performance Linpack (HPL) benchmarking program (one of the standard benchmarks used by the TOP500 list of fastest supercomputers) resulted in speeds of over 900 GFLOPS. The difference between the maximum and measured speeds is due to limitations in the communication speed among the nodes, which creates a bottleneck for large-memory problems: as HPL sends data between nodes, the gigabit Ethernet connection cannot keep up with the processing power. We will show how both the theoretical and actual performance of the cluster compare with other current and past clusters, as well as the cost per GFLOP. We will also examine the scaling of the performance when distributed to increasing numbers of nodes.
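    The quoted figures imply an HPL efficiency of roughly 36% of theoretical peak. A quick back-of-envelope check, taking the node and core counts from the abstract (the 12 cores/node split is inferred from "20-node, 240-core"):

    ```python
    # Back-of-envelope check of the cluster figures quoted above.
    nodes, cores_per_node = 20, 12            # 240 cores total, per the abstract
    peak_gflops = 2500.0                      # theoretical peak from the abstract
    measured_gflops = 900.0                   # HPL result from the abstract

    total_cores = nodes * cores_per_node
    peak_per_core = peak_gflops / total_cores     # per-core theoretical peak
    efficiency = measured_gflops / peak_gflops    # fraction of peak HPL achieves

    print(f"{total_cores} cores, {peak_per_core:.1f} GFLOPS/core peak")
    print(f"HPL efficiency: {efficiency:.0%}")    # interconnect-limited
    ```

    Efficiencies well below 100% on gigabit Ethernet are consistent with the communication bottleneck the abstract identifies; HPL's block exchanges stall while waiting on the network.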

  1. CFD Lagrangian Modeling of Water Droplet Transport for ISS Hygiene Activity Application

    NASA Technical Reports Server (NTRS)

    Son, Chang H.

    2013-01-01

    The goal of this study was to assess the impacts of free water propagation in the Waste and Hygiene Compartment (WHC) installed in Node 3. Free water can be generated inside the WHC in small quantities due to crew hygiene activity. To mitigate the potential impact of free water in the Node 3 cabin, the WHC doorway is enclosed by a waterproof bump-out, Kabin, with openings at the top and bottom. At the overhead side of the rack, there is a screen that prevents large drops of water from exiting. However, as the avionics fan in the WHC causes airflow toward the deck side of the rack, small quantities of free water may exit at the bottom of the Kabin. A Computational Fluid Dynamics (CFD) analysis of Node 3 cabin airflow enables identification of the paths of water transport. To simulate the droplet transport, the Lagrangian discrete phase approach was used. Various initial droplet distributions were considered in the study. The droplet diameter was varied in the range of 5-20 mm. The results of the computations showed that most of the drops fall to the rack surface not far from the WHC curtain.

  2. ScipionCloud: An integrative and interactive gateway for large scale cryo electron microscopy image processing on commercial and academic clouds.

    PubMed

    Cuenca-Alba, Jesús; Del Cano, Laura; Gómez Blanco, Josué; de la Rosa Trevín, José Miguel; Conesa Mingo, Pablo; Marabini, Roberto; S Sorzano, Carlos Oscar; Carazo, Jose María

    2017-10-01

    New instrumentation for cryo electron microscopy (cryoEM) has significantly increased data collection rates as well as data quality, creating bottlenecks at the image processing level. The current image processing model of moving the acquired images from the data source (electron microscope) to desktops or local clusters for processing is encountering many practical limitations. However, computing may also take place in distributed and decentralized environments; in this way, the cloud is a new form of accessing computing and storage resources on demand. Here, we evaluate how this new computational paradigm can be effectively used by extending our current integrative framework for image processing, creating ScipionCloud. This new development has resulted in a full installation of Scipion in both public and private clouds, accessible as public "images" with all the required cryoEM software preinstalled, requiring just a Web browser to access all graphical user interfaces. We have profiled the performance of different configurations on Amazon Web Services and the European Federated Cloud, always on architectures incorporating GPUs, and compared them with a local facility. We have also analyzed the economic convenience of different scenarios, so cryoEM scientists have a clearer picture of the setup that is best suited for their needs and budgets. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Automation of a Large Analytical Chemistry Laboratory

    DTIC Science & Technology

    1990-12-01

    Division Brooks Air Force Base , Texas 78235-5501 NOTICES When Government drawings, specifications, or other data are used for any purpose other than a...been reviewed and is approved for publication. Air Force installations may direct requests for copies of this report to: Air Force Occupational and...remaining for the analyses. Our laboratory serves worldwide Air Force installations and therefore comes up against these sample holding time requirements

  4. 16. THE INSTALLATION OF CONVEYORS AND OVERHEAD RAILS ELIMINATED THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. THE INSTALLATION OF CONVEYORS AND OVERHEAD RAILS ELIMINATED THE NEED TO LAY MOLDS OUT ON FLOORS AND HAND-POUR THEM. INSTEAD, WORKERS PULLED LARGE LADLES ALONG OVERHEAD RAILS AND FILLED CONVEYOR-DRIVEN MOLDS WHILE THEY STOOD ON A MOVING PLATFORM THAT TRAVELED AT THE SAME SPEED AS THE MOLD CONVEYOR, CA. 1950. - Stockham Pipe & Fittings Company, 4000 Tenth Avenue North, Birmingham, Jefferson County, AL

  5. Turbine Blade Illusion

    PubMed Central

    Lee, Rob

    2017-01-01

    In January 2017, a large wind turbine blade was installed temporarily in a city square as a public artwork. At first sight, media photographs of the installation appeared to be fakes – the blade looks like it could not really be part of the scene. Close inspection of the object shows that its paradoxical visual appearance can be attributed to unconscious assumptions about object shape and light source direction. PMID:28596821

  6. AGOR 28

    DTIC Science & Technology

    2014-08-28

    release; distribution unlimited. Report No. A002.062 1. Meetings: i. Participated in weekly conference calls. ii. Design Review 16 2...outfitting lists for Sally Ride. iv. Working on NS5 Hierarchy 4. Sally Ride Progress: • HVAC – Ducting installation is moving forward with...large sections of ductwork being installed on the main deck port and starboard. HVAC crew is laying out runs on the foc’sle and 01 decks. • Pilot

  7. Outcomes from the First Wingman Software in the Loop Integration Event: January 2017

    DTIC Science & Technology

    2017-06-28

    for public release; distribution is unlimited. NOTICES Disclaimers The findings in this report are not to be construed as an official...and enhance communication among manned‐unmanned team members, which are critical to achieve Training and Doctrine Command 6+1 required capabilities...Computers to Run the SIL 10 4.1.2 Problem 2: Computer Networking 10 4.1.3 Problem 3: Installation of ARES 11 4.2 Developing Matching Virtual

  8. 97. View of International Business Machine (IBM) digital computer model ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, international telephone and telegraph (ITT) Artic Services Inc., Official photograph BMEWS site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, clear as negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  9. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems.

    PubMed

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system. Real-time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for these purposes. Our experiments show that PLAT is significantly fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.
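    A hash-table nominal model of the kind the abstract describes can be sketched in a few lines: learn the set of signal-vector transitions seen in nominal log data, then flag any runtime transition absent from the table. The data structures and signal vectors below are our illustration, not PLAT's actual implementation.

    ```python
    # Minimal sketch: hash-table lookup of nominal PLC signal transitions.

    def learn_nominal(log):
        """Map each signal vector to the set of successors seen in nominal runs."""
        table = {}
        for prev, nxt in zip(log, log[1:]):
            table.setdefault(prev, set()).add(nxt)
        return table

    def find_anomalies(table, run):
        """Return indices of transitions never seen in the nominal model."""
        return [i for i, (prev, nxt) in enumerate(zip(run, run[1:]))
                if nxt not in table.get(prev, set())]

    # Signal vectors as tuples of PLC booleans (e.g., clamp, drill, conveyor).
    nominal_log = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 0, 0), (0, 0, 1), (0, 0, 0)]
    table = learn_nominal(nominal_log)

    run = [(0, 0, 0), (1, 0, 0), (1, 1, 1), (0, 0, 0)]  # (1,1,1) is off-nominal
    print(find_anomalies(table, run))  # → [1, 2]
    ```

    Because each check is a constant-time hash lookup, this style of model scales to large signal logs with a small memory footprint, consistent with the performance the abstract claims.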

  10. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems

    PubMed Central

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system. Real-time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for these purposes. Our experiments show that PLAT is significantly fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively. PMID:27974882

  11. Internally insulated thermal storage system development program

    NASA Technical Reports Server (NTRS)

    Scott, O. L.

    1980-01-01

    A cost effective thermal storage system for a solar central receiver power system using molten salt stored in internally insulated carbon steel tanks is described. Factors discussed include: testing of internal insulation materials in molten salt; preliminary design of storage tanks, including insulation and liner installation; optimization of the storage configuration; and definition of a subsystem research experiment to demonstrate the system. A thermal analytical model and analysis of a thermocline tank was performed. Data from a present thermocline test tank was compared to gain confidence in the analytical approach. A computer analysis of the various storage system parameters (insulation thickness, number of tanks, tank geometry, etc.,) showed that (1) the most cost-effective configuration was a small number of large cylindrical tanks, and (2) the optimum is set by the mechanical constraints of the system, such as soil bearing strength and tank hoop stress, not by the economics.
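    Finding (1) — that a small number of large cylindrical tanks is most cost-effective — follows from surface-area scaling: for a fixed total volume, the insulated wall area (and hence steel, insulation, and liner cost) grows as the tank count rises. A toy calculation makes the point; the geometry (equal cylinders with height equal to diameter) and the storage volume are our illustrative assumptions, not the study's numbers.

    ```python
    # Toy check: total wetted surface area vs. number of tanks, fixed volume.
    import math

    def total_surface(volume_total, n_tanks):
        """Total wall+end area of n equal cylinders (h = 2r) holding volume_total."""
        v = volume_total / n_tanks
        r = (v / (2 * math.pi)) ** (1 / 3)          # from v = pi * r^2 * (2r)
        area_each = 2 * math.pi * r ** 2 + 2 * math.pi * r * (2 * r)  # ends + wall
        return n_tanks * area_each

    v_total = 10_000.0  # m^3 of salt storage, illustrative only
    for n in (1, 2, 4, 8):
        print(f"{n} tanks: {total_surface(v_total, n):,.0f} m^2")
    ```

    Total area scales as the cube root of the tank count (8 tanks need twice the area of 1), which is why the optimum is pushed toward few large tanks until mechanical constraints such as soil bearing strength and hoop stress take over.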

  12. Internally insulated thermal storage system development program

    NASA Astrophysics Data System (ADS)

    Scott, O. L.

    1980-03-01

    A cost effective thermal storage system for a solar central receiver power system using molten salt stored in internally insulated carbon steel tanks is described. Factors discussed include: testing of internal insulation materials in molten salt; preliminary design of storage tanks, including insulation and liner installation; optimization of the storage configuration; and definition of a subsystem research experiment to demonstrate the system. A thermal analytical model and analysis of a thermocline tank was performed. Data from a present thermocline test tank was compared to gain confidence in the analytical approach. A computer analysis of the various storage system parameters (insulation thickness, number of tanks, tank geometry, etc.,) showed that (1) the most cost-effective configuration was a small number of large cylindrical tanks, and (2) the optimum is set by the mechanical constraints of the system, such as soil bearing strength and tank hoop stress, not by the economics.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potts, C.; Faber, M.; Gunderson, G.

    The as-built lattice of the Rapid-Cycling Synchrotron (RCS) had two sets of correction sextupoles and two sets of quadrupoles energized by dc power supplies to control the tune and the tune tilt. With this method of powering these magnets, adjustment of tune conditions during the accelerating cycle as needed was not possible. A set of dynamically programmable power supplies has been built and operated to provide the required chromaticity adjustment. The short accelerating time (16.7 ms) of the RCS and the inductance of the magnets dictated large transistor amplifier power supplies. The required time resolution and waveform flexibility indicated the desirability of computer control. Both the amplifiers and controls are described, along with resulting improvements in the beam performance. A set of octupole magnets and programmable power supplies with similar dynamic qualities have been constructed and installed to control the anticipated high-intensity transverse instability. This system will be operational in the spring of 1981.

  14. The Network for Astronomy in Education in Southwest New Mexico

    NASA Astrophysics Data System (ADS)

    Neely, B.

    1998-12-01

    The Network for Astronomy in Education was organized to use astronomy as a motivational tool to teach science methods and principles in the public schools. NFO is a small private research observatory, associated with the local University, Western New Mexico. We started our program in 1996 with an IDEA grant by introducing local teachers to the Internet, funding a portable planetarium (Starlab) for the students, and upgrading our local radio-linked computer network. Grant County is a rural mining and ranching county in Southwest New Mexico. It is ethnically diverse, and a large portion of the population is below the poverty line. Its dryness and 6,000-foot elevation, along with dark skies, suit it to the appreciation of astronomy. We now have 8 local schools involved in astronomy at some level. Our main programs are the Starlab and Project Astro, and we will soon install a Sidewalk Solar System in the center of Silver City.

  15. xQTL workbench: a scalable web environment for multi-level QTL analysis.

    PubMed

    Arends, Danny; van der Velde, K Joeri; Prins, Pjotr; Broman, Karl W; Möller, Steffen; Jansen, Ritsert C; Swertz, Morris A

    2012-04-01

    xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human populations are accessible via the web user interface. Large calculations scale easily onto multi-core computers, clusters and the Cloud. All data involved can be uploaded and queried online: markers, genotypes, microarrays, NGS, LC-MS, GC-MS, NMR, etc. When new data types become available, xQTL workbench is quickly customized using the Molgenis software generator. xQTL workbench runs on all common platforms, including Linux, Mac OS X and Windows. An online demo system, installation guide, tutorials, software and source code are available under the LGPL3 license from http://www.xqtl.org. m.a.swertz@rug.nl.
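    The single-marker scan at the heart of such QTL mapping can be illustrated in a few lines: regress the trait on the genotype at each marker and score the fit, with the strongest association marking the candidate locus. This is a simplified sketch of the general idea, not xQTL workbench's interface, and the genotype and trait values are invented.

    ```python
    # Minimal single-marker QTL scan: per-marker R^2 of trait ~ genotype.

    def marker_scores(genotypes, trait):
        """Return the regression R^2 of the trait on each marker's genotypes."""
        n = len(trait)
        mean_y = sum(trait) / n
        ss_tot = sum((y - mean_y) ** 2 for y in trait)
        scores = []
        for marker in genotypes:            # marker: 0/1/2 allele counts per sample
            mean_x = sum(marker) / n
            sxx = sum((x - mean_x) ** 2 for x in marker)
            sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(marker, trait))
            ss_reg = (sxy ** 2 / sxx) if sxx else 0.0
            scores.append(ss_reg / ss_tot)
        return scores

    # Hypothetical data: marker 1 tracks the trait, marker 0 does not.
    genos = [[0, 1, 2, 0, 1, 2], [0, 0, 1, 1, 2, 2]]
    trait = [1.0, 1.1, 0.9, 2.0, 3.1, 3.9]
    scores = marker_scores(genos, trait)
    print(max(range(len(scores)), key=scores.__getitem__))  # best-scoring marker
    ```

    Real eQTL/pQTL/mQTL scans repeat this across thousands of traits and markers with permutation-based significance thresholds, which is why the platform emphasizes scaling onto clusters and the Cloud.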

  16. xQTL workbench: a scalable web environment for multi-level QTL analysis

    PubMed Central

    Arends, Danny; van der Velde, K. Joeri; Prins, Pjotr; Broman, Karl W.; Möller, Steffen; Jansen, Ritsert C.; Swertz, Morris A.

    2012-01-01

    Summary: xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human populations are accessible via the web user interface. Large calculations scale easily onto multi-core computers, clusters and the Cloud. All data involved can be uploaded and queried online: markers, genotypes, microarrays, NGS, LC-MS, GC-MS, NMR, etc. When new data types become available, xQTL workbench is quickly customized using the Molgenis software generator. Availability: xQTL workbench runs on all common platforms, including Linux, Mac OS X and Windows. An online demo system, installation guide, tutorials, software and source code are available under the LGPL3 license from http://www.xqtl.org. Contact: m.a.swertz@rug.nl PMID:22308096

  17. Segment Alignment Maintenance System for the Hobby-Eberly Telescope

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Burdine, Robert (Technical Monitor)

    2001-01-01

    NASA's Marshall Space Flight Center, in collaboration with Blue Line Engineering of Colorado Springs, Colorado, is developing a Segment Alignment Maintenance System (SAMS) for McDonald Observatory's Hobby-Eberly Telescope (HET). The SAMS shall sense motions of the 91 primary mirror segments and send corrections to HET's primary mirror controller as the mirror segments misalign due to thermoelastic deformations of the mirror support structure. The SAMS consists of inductive edge sensors. All measurements are sent to the SAMS computer, where mirror motion corrections are calculated. In October 2000, a prototype SAMS was installed on a seven-segment cluster of the HET. Subsequent testing has shown that the SAMS concept and architecture are a viable, practical approach to maintaining HET's primary mirror figure, or the figure of any large segmented telescope. This paper gives a functional description of the SAMS subarray components and presents test data to characterize the performance of the subarray SAMS.

  18. Feasibility of Acoustic Doppler Velocity Meters for the Production of Discharge Records from U.S. Geological Survey Streamflow-Gaging Stations

    USGS Publications Warehouse

    Morlock, Scott E.; Nguyen, Hieu T.; Ross, Jerry H.

    2002-01-01

    It is feasible to use acoustic Doppler velocity meters (ADVM's) installed at U.S. Geological Survey (USGS) streamflow-gaging stations to compute records of river discharge. ADVM's are small acoustic current meters that use the Doppler principle to measure water velocities in a two-dimensional plane. Records of river discharge can be computed from stage and ADVM velocity data using the 'index velocity' method. The ADVM-measured velocities are used as an estimator or 'index' of the mean velocity in the channel. In evaluations of ADVM's for the computation of records of river discharge, the USGS installed ADVM's at three streamflow-gaging stations in Indiana: Kankakee River at Davis, Fall Creek at Millersville, and Iroquois River near Foresman. The ADVM evaluation study period was from June 1999 to February 2001. Discharge records were computed using ADVM data from each station. Discharge records also were computed using conventional stage-discharge methods of the USGS. The records produced from ADVM and conventional methods were compared with discharge record hydrographs and statistics. Overall, the records compared closely for the Kankakee River and Fall Creek stations. For the Iroquois River station, variable backwater was present and affected the comparison; because the ADVM record compensates for backwater, the ADVM record may be superior to the conventional record. For the three stations, the ADVM records were judged to be of a quality acceptable to USGS standards for publication, and near real-time ADVM-computed discharges are served on USGS real-time data World Wide Web pages.
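    The index-velocity method described above reduces to two fitted ratings: one mapping the ADVM's index velocity to channel-mean velocity, and one mapping stage to cross-sectional area; discharge is their product. A minimal sketch follows; the linear rating coefficients and the rectangular-channel area relation are hypothetical, not the ratings developed for the Indiana stations.

    ```python
    # Sketch of the 'index velocity' discharge computation.

    def mean_velocity(v_index, a=0.05, b=1.10):
        """Index-velocity rating: channel-mean velocity from ADVM velocity (m/s)."""
        return a + b * v_index

    def channel_area(stage, width=30.0):
        """Stage-area rating: rectangular-channel approximation (m^2)."""
        return width * stage

    def discharge(stage, v_index):
        """Discharge Q = A(stage) * Vmean(v_index), in m^3/s."""
        return channel_area(stage) * mean_velocity(v_index)

    print(f"{discharge(stage=2.0, v_index=0.8):.1f} m^3/s")
    ```

    Because the velocity rating is fitted directly to measured mean velocities rather than inferred from stage alone, the computed record remains valid under variable backwater, which is why the ADVM record at the Iroquois River station could outperform the conventional stage-discharge record.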

  19. HEP Computing

    Science.gov Websites


  20. The Vendors' Corner: Biblio-Techniques' Library and Information System (BLIS).

    ERIC Educational Resources Information Center

    Library Software Review, 1984

    1984-01-01

    Describes online catalog and integrated library computer system designed to enhance Washington Library Network's software. Highlights include system components; implementation options; system features (integrated library functions, database design, system management facilities); support services (installation and training, software maintenance and…
