Science.gov

Sample records for fermilab central computing

  1. The Fermilab Central Computing Facility architectural model

    SciTech Connect

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.

  2. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues pertaining to building envelope and orientation, as well as electrical systems design, are discussed.

  3. Computer networking at FERMILAB

    SciTech Connect

    Chartrand, G.

    1986-05-01

    Management aspects of data communications facilities at Fermilab are described. Local area networks include Ferminet, a broadband CATV system which serves as a backbone-type carrier for high-speed data traffic between major network nodes; the Micom network, in which four Micom Micro-600/2A port selectors connect users via private twisted-pair cables, dedicated telephone circuits, or Micom 800/2 statistical multiplexors; and DECnet/Ethernet, several small local area networks which provide host-to-host communications for about 35 VAX computer systems. Wide-area (off-site) computer networking includes an off-site Micom network which provides access to all of Fermilab's computer systems for 10 universities via leased lines or modem; Tymnet, used by many European and Japanese collaborations; Physnet, used for shared data-processing task communications by large collaborations of universities; Bitnet, used for file transfer, electronic mail, and communications with CERN; and Mfenet, for access to supercomputers. Plans to participate in Hepnet are also addressed. 3 figs. (DWL)

  4. Future computing needs for Fermilab

    SciTech Connect

    Not Available

    1983-12-01

    The following recommendations are made: (1) Significant additional computing capacity and capability beyond the present procurement should be provided by 1986. A working group with representation from the principal computer user community should be formed to begin immediately to develop the technical specifications. High priority should be assigned to providing a large user memory, software portability and a productive computing environment. (2) A networked system of VAX-equivalent super-minicomputers should be established, with at least one such computer dedicated to each reasonably large experiment for both online and offline analysis. The laboratory staff responsible for minicomputers should be augmented in order to handle the additional work of establishing, maintaining and coordinating this system. (3) The laboratory should move decisively to a more fully interactive environment. (4) A plan for networking both inside and outside the laboratory should be developed over the next year. (5) The laboratory resources devoted to computing, including manpower, should be increased over the next two to five years. A reasonable increase would be 50% over the next two years, increasing thereafter to a level of about twice the present one. (6) A standing computer coordinating group, with membership of experts from all the principal computer user constituents of the laboratory, should

  5. Fermilab computing at the Intensity Frontier

    DOE PAGES (Beta)

    Group, Craig; Fuess, S.; Gutsche, O.; Kirby, M.; Kutschke, R.; Lyon, A.; Norman, A.; Perdue, G.; Sexton-Kennedy, E.

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  6. Fermilab computing at the Intensity Frontier

    SciTech Connect

    Group, Craig; Fuess, S.; Gutsche, O.; Kirby, M.; Kutschke, R.; Lyon, A.; Norman, A.; Perdue, G.; Sexton-Kennedy, E.

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  7. Fermilab Computing at the Intensity Frontier

    NASA Astrophysics Data System (ADS)

    Fuess, S.; Gutsche, O.; Kirby, M.; Kutschke, R.; Lyon, A.; Norman, A.; Perdue, G.; Sexton-Kennedy, E.

    2015-12-01

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  8. The Fermilab computing farms in 1997

    SciTech Connect

    Wolbers, S.

    1998-02-15

    The farms in 1997 went through a variety of changes. First, the farms expansion, begun in 1996, was completed. This boosted the computing capacity to something like 20,000 MIPS (where a MIP is a unit defined by running a program, TINY, on the machine and comparing the machine performance to a VAX 11/780). In SpecInt92, it would probably rate close to 40,000. The use of the farms was not all that large. The fixed target experiments were not generally in full production in 1997, but spent time tuning up code. Other users processed on the farms, but tended to come and go and not saturate the resource. Some of the old farms were retired, saving the lab money on maintenance and saving the farms support staff effort.

  9. The Fermilab experience: Integration of UNIX systems in a HEP computing environment

    SciTech Connect

    Pabrai, U.

    1991-03-01

    There is an increased emphasis within organizations to migrate to a distributed computing environment. Among the factors responsible for this migration are: (1) a proliferation of high performance systems based on processors such as the Intel 80x86, Motorola 680x0, and RISC-architecture CPUs such as the MIPS Rx000, Sun SPARC, Motorola 88000 and Intel 860 series; (2) a significant reduction in hardware costs; (3) configuration based on existing local area network technology; and (4) the same (to a large extent) operating system on all platforms. A characteristic of distributed computing is that communication takes the form of request-reply pairs. This is also referred to as the client-server model. The client-server model is rapidly growing in popularity and in many scientific and engineering environments is replacing transaction-based and mainframe systems. Over the last few years, Fermilab has been in the process of migrating to a client-server model of computing.
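
    As a generic illustration of the request-reply (client-server) pattern described in this abstract, the following minimal sketch pairs one server process with one client; the host, port, and trivial uppercase "service" are hypothetical and are not taken from any Fermilab system.

      # Minimal request-reply (client-server) sketch; endpoint and "service" are illustrative.
      import socket
      import threading

      HOST, PORT = "127.0.0.1", 5050      # hypothetical service endpoint
      ready = threading.Event()

      def server():
          # Serve exactly one request-reply pair, then exit.
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
              srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
              srv.bind((HOST, PORT))
              srv.listen()
              ready.set()                            # tell the client we are listening
              conn, _ = srv.accept()
              with conn:
                  request = conn.recv(1024)          # the request ...
                  conn.sendall(request.upper())      # ... and its reply

      def client():
          ready.wait()                               # don't connect before the server is up
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
              cli.connect((HOST, PORT))
              cli.sendall(b"status?")                # request
              print(cli.recv(1024).decode())         # reply: STATUS?

      t = threading.Thread(target=server)
      t.start()
      client()
      t.join()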

  10. Linux support at Fermilab

    SciTech Connect

    D.R. Yocum, C. Sieh, D. Skow, S. Kovich, D. Holmgren and R. Kennedy

    1998-12-01

    In January of 1998 Fermilab issued an official statement of support of the Linux operating system. This was the result of a ground swell of interest in the possibilities of a cheap, easily used platform for computation and analysis, culminating with the successful demonstration of a small computation farm as reported at CHEP97. This paper will describe the current status of Linux support and deployment at Fermilab. The collaborative development process for Linux creates some problems with traditional support models. A primary example of this is that there is no definitive OS distribution, a la a CD distribution from a traditional Unix vendor. Fermilab has had to make a more definite statement about what is meant by Linux for this reason. Linux support at Fermilab is restricted to the Intel processor platform. A central distribution system has been created to mitigate problems with multiple distribution and configuration options. This system is based on the Red Hat distribution with the Fermi Unix Environment (FUE) layered above it. Deployment of Linux at the lab has been growing rapidly, and by CHEP there are expected to be hundreds of machines running Linux. These include computational farms, trigger processing farms, and desktop workstations. The former groups are described in other talks and consist of clusters of many tens of very similar machines devoted to a few tasks. The latter group is more diverse and challenging. The user community has been very supportive and active in defining needs for Linux features and solving various compatibility issues. We will discuss the support arrangements currently in place.

  11. A Computer Program to Measure the Energy Spread of Multi-turn Beam in the Fermilab Booster at Injection

    NASA Astrophysics Data System (ADS)

    Nelson, Jovan; Bhat, Chandrashekhara; Hendricks, Brian

    2016-03-01

    We have developed a computer program interfaced with the ACNET environment for Fermilab accelerators in order to measure the energy spread of the injected proton beam from the LINAC, at the energy of 400 MeV. This program allows the user to configure a digitizing oscilloscope and timing devices to optimize data acquisition from a resistive wall current monitor. When the program is launched, it secures control of the oscilloscope and then generates a "one-shot" timeline which initiates injection into the Booster. Once this is complete, a kicker is set to create a notch in the beam and the line charge distribution data is collected by the oscilloscope. The program then analyzes this data in order to obtain the notch width, beam revolution period, and beam energy spread. This makes the program a potentially useful diagnostic tool for the beginning of the acceleration cycle for the proton beam. Thank you to the SIST program at Fermilab.
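
    The analysis chain sketched in this abstract (locate the notch, extract the revolution period, and infer the energy spread from the evolution of the notch) might look roughly like the following. This is an illustrative sketch only, not the ACNET-interfaced program described above; the waveform array, sampling interval, slip-factor value, and the assumption that the notch width grows by about 2|η|(Δp/p)T_rev per turn are all placeholders introduced here.

      # Rough sketch of notch-width / revolution-period / energy-spread extraction
      # from a digitized wall-current-monitor trace (illustrative assumptions only).
      import numpy as np

      def analyze(trace, dt, eta=0.45):
          """trace: 1-D array of line-charge samples; dt: sample spacing in seconds;
          eta: assumed phase-slip factor (placeholder value)."""
          # Revolution period from the beam's own periodicity (autocorrelation peak).
          x = trace - trace.mean()
          ac = np.correlate(x, x, mode="full")[x.size - 1:]
          t_rev = (np.argmax(ac[1:]) + 1) * dt

          samples_per_turn = int(round(t_rev / dt))
          n_turns = trace.size // samples_per_turn
          turns = trace[:n_turns * samples_per_turn].reshape(n_turns, samples_per_turn)

          # Notch width per turn: span where the signal dips below half of its depth.
          widths = []
          for turn in turns:
              depth = turn.max() - turn.min()
              below = np.where(turn < turn.max() - 0.5 * depth)[0]
              widths.append((below[-1] - below[0]) * dt if below.size else 0.0)

          # Assume the notch broadens roughly linearly with turn number; the slope
          # then gives dp/p via width growth per turn ~ 2 * |eta| * (dp/p) * t_rev.
          slope = np.polyfit(np.arange(n_turns), widths, 1)[0]
          dp_over_p = slope / (2.0 * abs(eta) * t_rev)
          return t_rev, widths[0], dp_over_p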

  12. Real-time data reorganizer for the D0 central fiber tracker trigger system at Fermilab

    SciTech Connect

    Stefano Marco Rapisarda, Jamieson T Olsen and Neal George Wilcer

    2002-12-13

    A custom digital data Mixer system has been designed to reorganize, in real time, the data produced by the Fermilab D0 Scintillating Fiber Detector. The data is used for the Level 1 and Level 2 trigger generation. The Mixer System receives the data from the front-end digitization electronics over 320 Low Voltage Differential Signaling (LVDS) links running at 371 MHz. The input data is de-serialized down to 53 MHz by the LVDS receivers, clock/frame re-synchronized and multiplexed in Field Programmable Gate Arrays (FPGAs). The data is then reserialized at 371 MHz by LVDS transmitters over 320 LVDS output links and sent to the electronics responsible for Level 1 and Level 2 trigger decisions. The Mixer System processes 311 Gigabits per second of data with an input to output delay of 200 nanoseconds.

  13. Fermilab E791

    NASA Astrophysics Data System (ADS)

    Cremaldi, L. M.; Aitala, E. M.; Almeida, F. M. L.; Amato, S.; Anjos, J. C.; Appel, J. A.; Ashery, D.; Astorga, J.; Banerjee, S.; Beck, S.; Bediaga, I.; Blaylock, G.; Bracker, S. B.; Burchat, P. R.; Burnstein, R.; Carter, T.; Costa, I.; Denisenko, K.; Darling, C.; Gagnon, P.; Gerzon, S.; Gounder, K.; Granite, D.; Halling, M.; James, C.; Kasper, P. A.; Kwan, S.; Lichtenstadt, J.; Lundberg, B.; de Mello Neto, J. R. T.; Milburn, R.; de Miranda, J. M.; Napier, A.; Nguyen, A.; d'Oliveira, A. B.; Peng, K. C.; Purohit, M. V.; Quinn, B.; Radeztsky, S.; Rafatian, A.; Ramalho, A. J.; Reay, N. W.; Reibel, K.; Reidy, J. J.; Rubin, H.; Santha, A.; Santoro, A. F. S.; Schwartz, A.; Sheaff, M.; Sidwell, R. A.; Carvalho, H. da Silva; Slaughter, J.; Sokoloff, M. D.; Souza, M.; Stanton, N.; Sugano, K.; Summers, D. J.; Takach, S.; Thorne, K.; Tripathi, A.; Trumer, D.; Watanabe, S.; Wiener, J.; Witchey, N.; Wolin, E.; Yi, D.

    1992-02-01

    Fermilab E791, a very high statistics charm particle experiment, recently completed its data taking at Fermilab's Tagged Photon Laboratory. Over 20 billion events were recorded through a loose transverse energy trigger and written to 8mm tape in the 1991-92 fixed target run at Fermilab. This unprecedented data sample containing charm is being analyzed on many-thousand MIP RISC computing farms set up at sites in the collaboration. A glimpse of the data taking and analysis effort is presented. We also show some preliminary results for common charm decay modes. Our present analysis indicates a very rich yield of over 200K reconstructed charm decays.

  14. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  15. Fermilab's DART DA system

    SciTech Connect

    Pordes, R.; Anderson, J.; Berg, D.; Black, D.; Forster, R.; Franzen, J.; Kent, S.; Kwarciany, R.; Meadows, J.; Moore, C.

    1994-04-01

    DART is the new data acquisition system designed and implemented for six Fermilab experiments by the Fermilab Computing Division and the experiments themselves. The complexity of the experiments varies greatly. Their data taking throughput and event filtering requirements range from a few (2-5) to tens (80) of CAMAC, FASTBUS and home-built front end crates; from a few hundred KBytes/sec to 160 MBytes/sec front end data collection rates; and from 0-3000 MIPS of level 3 processing. The authors report on the architecture and implementation of DART to this date, and the hardware and software components that are being developed and supported.

  16. Integrating data acquisition and offline processing systems for small experiments at Fermilab

    SciTech Connect

    Streets, J.; Corbin, B.; Taylor, C.

    1995-10-01

    Two small experiments at Fermilab are using the large UNIX central computing facility at Fermilab (FNALU) to analyze data. The data acquisition systems are based on "off the shelf" software packages utilizing VAX/VMS computers and CAMAC readout. As the disk space available on FNALU approaches the size of the raw data sets taken by the experiments (50 GBytes), we have used the Andrew File System (AFS) to serve the data to experimenters for analysis.

  17. Computed tomography of the central nervous system

    SciTech Connect

    Bentson, J.R.

    1982-01-01

    The objective of this chapter is to review the most pertinent articles published during the past year on the subject of computed tomography of the central nervous system. The chapter contains sections on pediatric computed tomography, and on the diagnostic use of CT in white matter disease, in infectious disease, for intracranial aneurysms, trauma, and intracranial tumors. Metrizamide flow studies and contrast enhancement are also examined. (KRM)

  18. Central control element expands computer capability

    NASA Technical Reports Server (NTRS)

    Easton, R. A.

    1975-01-01

    Redundant processing and multiprocessing modes can be obtained from one computer by using logic configuration. Configuration serves as central control element which can automatically alternate between high-capacity multiprocessing mode and high-reliability redundant mode using dynamic mode switching in real time.

  19. Fermilab and Latin America

    SciTech Connect

    Lederman, Leon M.

    2006-09-25

    As Director of Fermilab, starting in 1979, I began a series of meetings with scientists in Latin America. The motivation was to stir collaboration in the field of high energy particle physics, the central focus of Fermilab. In the next 13 years, these Pan American Symposia stirred much discussion of the use of modern physics, created several groups to do collaborative research at Fermilab, and often centralized facilities and, today, still provide the possibility for much more productive North-South collaboration in research and education. In 1992, I handed these activities over to the AAAS, as President. This would, I hoped, broaden areas of collaboration. Such collaboration is unfortunately very sensitive to political events. In a rational world, it would be the rewards, cultural and economic, of collaboration that would modulate political relations. We are not there yet.

  20. Fermilab and Latin America

    NASA Astrophysics Data System (ADS)

    Lederman, Leon M.

    2006-09-01

    As Director of Fermilab, starting in 1979, I began a series of meetings with scientists in Latin America. The motivation was to stir collaboration in the field of high energy particle physics, the central focus of Fermilab. In the next 13 years, these Pan American Symposia stirred much discussion of the use of modern physics, created several groups to do collaborative research at Fermilab, and often centralized facilities and, today, still provide the possibility for much more productive North-South collaboration in research and education. In 1992, I handed these activities over to the AAAS, as President. This would, I hoped, broaden areas of collaboration. Such collaboration is unfortunately very sensitive to political events. In a rational world, it would be the rewards, cultural and economic, of collaboration that would modulate political relations. We are not there yet.

  1. The CDF Central Analysis Farm

    SciTech Connect

    Kim, T.H.; Neubauer, M.; Sfiligoi, I.; Weems, L.; Wurthwein, F.; /UC, San Diego

    2004-01-01

    With Run II of the Fermilab Tevatron well underway, many computing challenges inherent to analyzing large volumes of data produced in particle physics research need to be met. We present the computing model within CDF designed to address the physics needs of the collaboration. Particular emphasis is placed on current development of a large O(1000) processor PC cluster at Fermilab serving as the Central Analysis Farm for CDF. Future plans leading toward distributed computing and GRID within CDF are also discussed.

  2. Fermilab Software Tools Program: Fermitools

    SciTech Connect

    Pordes, R.

    1995-10-01

    The Fermilab Software Tools Program (Fermitools) was established in 1994 as an initiative under which Fermilab provides software it has developed to outside collaborators. During the year and a half since its start, ten software products have been packaged and made available on the official Fermilab anonymous ftp site, and backup support and information services have been made available for them. During the past decade, institutions outside the Fermilab physics experiment user community have in general only been able to obtain and use Fermilab-developed software on an ad hoc or informal basis. With the Fermitools program the Fermilab Computing Division has instituted an umbrella under which software that is regarded by its internal user community as useful and of high quality can be provided to users outside of High Energy Physics experiments. The main thrust of the Fermitools program is stimulating collaborative use and further development of the software. Having established a minimal umbrella bureaucracy makes collaborative development and support easier. The published caveat given to people who take the software includes the statement "Provision of the software implies no commitment of support by Fermilab. The Fermilab Computing Division is open to discussing other levels of support for use of the software with responsible and committed users and collaborators." There have been no negative comments in response to this, and the policy has not given rise to any questions or complaints. In this paper we present the goals and strategy of the program and introduce some of the software made available through it. We discuss our experiences to date and mention the perceived benefits of the Program.

  3. Fermilab Steering Group Report

    SciTech Connect

    Beier, Eugene; Butler, Joel; Dawson, Sally; Edwards, Helen; Himel, Thomas; Holmes, Stephen; Kim, Young-Kee; Lankford, Andrew; McGinnis, David; Nagaitsev, Sergei; Raubenheimer, Tor; /SLAC /Fermilab

    2007-01-01

    The Fermilab Steering Group has developed a plan to keep U.S. accelerator-based particle physics on the pathway to discovery, both at the Terascale with the LHC and the ILC and in the domain of neutrinos and precision physics with a high-intensity accelerator. The plan puts discovering Terascale physics with the LHC and the ILC as Fermilab's highest priority. While supporting ILC development, the plan creates opportunities for exciting science at the intensity frontier. If the ILC remains near the Global Design Effort's technically driven timeline, Fermilab would continue neutrino science with the NOvA experiment, using the NuMI (Neutrinos at the Main Injector) proton plan, scheduled to begin operating in 2011. If ILC construction must wait somewhat longer, Fermilab's plan proposes SNuMI, an upgrade of NuMI to create a more powerful neutrino beam. If the ILC start is postponed significantly, a central feature of the proposed Fermilab plan calls for building an intense proton facility, Project X, consisting of a linear accelerator with the currently planned characteristics of the ILC combined with Fermilab's existing Recycler Ring and the Main Injector accelerator. The major component of Project X is the linac. Cryomodules, radio-frequency distribution, cryogenics and instrumentation for the linac are the same as or similar to those used in the ILC at a scale of about one percent of a full ILC linac. Project X's intense proton beams would open a path to discovery in neutrino science and in precision physics with charged leptons and quarks. World-leading experiments would allow physicists to address key questions of the Quantum Universe: How did the universe come to be? Are there undiscovered principles of nature: new symmetries, new physical laws? Do all the particles and forces become one? What happened to the antimatter? Building Project X's ILC-like linac would offer substantial support for ILC development by accelerating the industrialization of ILC components

  4. Fermilab Steering Group Report

    SciTech Connect

    Steering Group, Fermilab; /Fermilab

    2007-12-01

    The Fermilab Steering Group has developed a plan to keep U.S. accelerator-based particle physics on the pathway to discovery, both at the Terascale with the LHC and the ILC and in the domain of neutrinos and precision physics with a high-intensity accelerator. The plan puts discovering Terascale physics with the LHC and the ILC as Fermilab's highest priority. While supporting ILC development, the plan creates opportunities for exciting science at the intensity frontier. If the ILC remains near the Global Design Effort's technically driven timeline, Fermilab would continue neutrino science with the NOvA experiment, using the NuMI (Neutrinos at the Main Injector) proton plan, scheduled to begin operating in 2011. If ILC construction must wait somewhat longer, Fermilab's plan proposes SNuMI, an upgrade of NuMI to create a more powerful neutrino beam. If the ILC start is postponed significantly, a central feature of the proposed Fermilab plan calls for building an intense proton facility, Project X, consisting of a linear accelerator with the currently planned characteristics of the ILC combined with Fermilab's existing Recycler Ring and the Main Injector accelerator. The major component of Project X is the linac. Cryomodules, radio-frequency distribution, cryogenics and instrumentation for the linac are the same as or similar to those used in the ILC at a scale of about one percent of a full ILC linac. Project X's intense proton beams would open a path to discovery in neutrino science and in precision physics with charged leptons and quarks. World-leading experiments would allow physicists to address key questions of the Quantum Universe: How did the universe come to be? Are there undiscovered principles of nature: new symmetries, new physical laws? Do all the particles and forces become one? What happened to the antimatter? Building Project X's ILC-like linac would offer substantial support for ILC development by accelerating the industrialization of ILC components

  5. Introduction to the LaRC central scientific computing complex

    NASA Technical Reports Server (NTRS)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex, and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation), are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to stay apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  6. Data preservation at the Fermilab Tevatron

    NASA Astrophysics Data System (ADS)

    Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.

    2015-12-01

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  7. Data preservation at the Fermilab Tevatron

    DOE PAGES (Beta)

    Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.

    2015-12-23

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  8. Data preservation at the Fermilab Tevatron

    SciTech Connect

    Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.

    2015-12-23

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  9. Economics of Computing: The Case of Centralized Network File Servers.

    ERIC Educational Resources Information Center

    Solomon, Martin B.

    1994-01-01

    Discusses computer networking and the cost effectiveness of decentralization, including local area networks. A planned experiment with a centralized approach to the operation and management of file servers at the University of South Carolina is described that hopes to realize cost savings and the avoidance of staffing problems. (Contains four…

  10. Empirical Foundation of Central Concepts for Computer Science Education

    ERIC Educational Resources Information Center

    Zendler, Andreas; Spannagel, Christian

    2008-01-01

    The design of computer science curricula should rely on central concepts of the discipline rather than on technical short-term developments. Several authors have proposed lists of basic concepts or fundamental ideas in the past. However, these catalogs were based on subjective decisions without any empirical support. This article describes the…

  11. Fermilab Program and Plans

    SciTech Connect

    Denisov, Dmitri

    2014-01-01

    This article is a short summary of the talk presented at the 2014 Instrumentation Conference in Novosibirsk about Fermilab's experimental program and future plans. It includes a brief description of the P5 long-term planning process progressing in the US, as well as a discussion of the future accelerators considered at Fermilab.

  12. The Fermilab recycler ring

    SciTech Connect

    Martin Hu

    2001-07-24

    The Fermilab Recycler is a permanent magnet storage ring for the accumulation of antiprotons from the Antiproton Source, and the recovery and cooling of the antiprotons remaining at the end of a Tevatron store. It is an integral part of the Fermilab III luminosity upgrade. The following paper describes the design features, operational and commissioning status of the Recycler Ring.

  13. Injury reduction at Fermilab

    SciTech Connect

    Griffing, Bill; /Fermilab

    2005-06-01

    In a recent DOE Program Review, Fermilab's director presented results of the laboratory's effort to reduce the injury rate over the last decade. The results, shown in the figure below, reveal a consistent and dramatic downward trend in OSHA recordable injuries at Fermilab. The High Energy Physics Program Office has asked Fermilab to report in detail on how the laboratory has achieved the reduction. In fact, the reduction in the injury rate reflects a change in safety culture at Fermilab, which has evolved slowly over this period, due to a series of events, both planned and unplanned. This paper attempts to describe those significant events and analyze how each of them has shaped the safety culture that, in turn, has reduced the rate of injury at Fermilab to its current value.

  14. Simulation Needs and Priorities of the Fermilab Intensity Frontier

    SciTech Connect

    Elvira, V. D.; Genser, K. L.; Hatcher, R.; Perdue, G.; Wenzel, H. J.; Yarba, J.

    2015-06-11

    Over a two-year period, the Physics and Detector Simulations (PDS) group of the Fermilab Scientific Computing Division (SCD) collected information from Fermilab Intensity Frontier experiments on their simulation needs and concerns. The process and results of these activities are documented here.

  15. Photoproduction of charm particles at Fermilab

    SciTech Connect

    Cumalat, John P.

    1997-03-15

    A brief description of the Fermilab Photoproduction Experiment E831 or FOCUS is presented. The experiment concentrates on the reconstruction of charm particles. The FOCUS collaboration has participants from several Central American and Latin American institutions; CINVESTAV and Universidad Autonoma de Puebla from Mexico, University of Puerto Rico from the United States, and Centro Brasileiro de Pesquisas Fisicas in Rio de Janeiro from Brasil.

  16. CPS and the Fermilab farms

    SciTech Connect

    Fausey, M.R.

    1992-06-01

    Cooperative Processes Software (CPS) is a parallel programming toolkit developed at the Fermi National Accelerator Laboratory. It is the most recent product in an evolution of systems aimed at finding a cost-effective solution to the enormous computing requirements in experimental high energy physics. Parallel programs written with CPS are large-grained, which means that the parallelism occurs at the subroutine level, rather than at the traditional single-line-of-code level. This fits the requirements of high energy physics applications, such as event reconstruction or detector simulation, quite well. It also satisfies the requirements of applications in many other fields. One example is in the pharmaceutical industry: in the field of computational chemistry, the process of drug design may be accelerated with this approach. CPS programs run as a collection of processes distributed over many computers. CPS currently supports a mixture of heterogeneous UNIX-based workstations which communicate over networks with TCP/IP. CPS is most suited for jobs with relatively low I/O requirements compared to CPU. The CPS toolkit supports message passing, remote subroutine calls, process synchronization, bulk data transfers, and a mechanism called process queues, by which one process can find another which has reached a particular state. The CPS software supports both batch processing and computer center operations. The system is currently running in production mode on two farms of processors at Fermilab. One farm consists of approximately 90 IBM RS/6000 model 320 workstations, and the other has 85 Silicon Graphics 4D/35 workstations. This paper first briefly describes the history of parallel processing at Fermilab which led to the development of CPS. Then the CPS software and the CPS Batch queueing system are described. Finally, the experiences of using CPS in production on the Fermilab processor farms are described.
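
    The large-grained, process-queue style of parallelism described here can be illustrated with a short sketch using Python's standard multiprocessing module; this is not the CPS API, and the event-reconstruction work function is a stand-in.

      # Illustration of large-grained (subroutine-level) parallelism with a work queue,
      # in the spirit of the farm processing described above.  This is NOT the CPS
      # toolkit; the event-reconstruction step is a stub.
      import multiprocessing as mp

      def reconstruct(event):
          # Stand-in for an entire reconstruction subroutine run on one event.
          return {"id": event["id"], "ntracks": sum(event["hits"]) % 7}

      def worker(in_queue, out_queue):
          # Each worker pulls whole events (coarse grains) until it sees the sentinel.
          for event in iter(in_queue.get, None):
              out_queue.put(reconstruct(event))

      if __name__ == "__main__":
          events = [{"id": i, "hits": list(range(i, i + 5))} for i in range(100)]
          in_q, out_q = mp.Queue(), mp.Queue()
          procs = [mp.Process(target=worker, args=(in_q, out_q)) for _ in range(4)]
          for p in procs:
              p.start()
          for ev in events:
              in_q.put(ev)
          for _ in procs:                      # one sentinel per worker
              in_q.put(None)
          results = [out_q.get() for _ in events]
          for p in procs:
              p.join()
          print(len(results), "events reconstructed")

    The point of the coarse granularity is that each queue item carries an entire event through a whole subroutine, so communication overhead stays small relative to computation.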

  17. Highlights from Fermilab

    SciTech Connect

    Parke, Stephen J.; /Fermilab

    2009-12-01

    In these two lectures I will choose some highlights from the Tevatron experiments (CDF/D0) and the neutrino experiments and then discuss the future direction of physics at Fermilab after the Tevatron collider era.

  18. Highlights from Fermilab

    NASA Astrophysics Data System (ADS)

    Oddone, P. J.

    2010-12-01

    Discussion chaired by P. J. Oddone; scientific secretaries: W. Fisher, A. Holzner. Note from publisher: the slides of the lecture "Highlights from Fermilab" can be found at http://www.ccsem.infn.it/issp2007/

  19. Breakthrough: Fermilab Accelerator Technology

    ScienceCinema

    None

    2014-08-12

    There are more than 30,000 particle accelerators in operation around the world. At Fermilab, scientists are collaborating with other laboratories and industry to optimize the manufacturing processes for a new type of powerful accelerator that uses superconducting niobium cavities. Experimenting with unique polishing materials, a Fermilab team has now developed an efficient and environmentally friendly way of creating cavities that can propel particles with more than 30 million volts per meter.

  20. Fermilab: Science at Work

    ScienceCinema

    Brendan Casey; Herman White; Craig Hogan; Denton Morris; Mary Convery; Bonnie Fleming; Deborah Harris; Dave Schmitz; Brenna Flaugher; Aron Soha

    2013-02-14

    Six days. Three frontiers. One amazing lab. From 2010 to 2012, a film crew followed a group of scientists at the Department of Energy's Fermilab and filmed them at work and at home. This 40-minute documentary shows the diversity of the people, research and work at Fermilab. Viewers catch a true behind-the-scenes look of the United States' premier particle physics laboratory while scientists explain why their research is important to them and the world.

  1. Breakthrough: Fermilab Accelerator Technology

    SciTech Connect

    2012-04-23

    There are more than 30,000 particle accelerators in operation around the world. At Fermilab, scientists are collaborating with other laboratories and industry to optimize the manufacturing processes for a new type of powerful accelerator that uses superconducting niobium cavities. Experimenting with unique polishing materials, a Fermilab team has now developed an efficient and environmentally friendly way of creating cavities that can propel particles with more than 30 million volts per meter.

  2. Fermilab: Science at Work

    SciTech Connect

    Brendan Casey; Herman White; Craig Hogan; Denton Morris; Mary Convery; Bonnie Fleming; Deborah Harris; Dave Schmitz; Brenna Flaugher; Aron Soha

    2013-02-01

    Six days. Three frontiers. One amazing lab. From 2010 to 2012, a film crew followed a group of scientists at the Department of Energy's Fermilab and filmed them at work and at home. This 40-minute documentary shows the diversity of the people, research and work at Fermilab. Viewers catch a true behind-the-scenes look of the United States' premier particle physics laboratory while scientists explain why their research is important to them and the world.

  3. Lattice QCD clusters at Fermilab

    SciTech Connect

    Holmgren, D.; Mackenzie, Paul B.; Singh, Anitoj; Simone, Jim; /Fermilab

    2004-12-01

    As part of the DOE SciDAC "National Infrastructure for Lattice Gauge Computing" project, Fermilab builds and operates production clusters for lattice QCD simulations. This paper will describe these clusters. The design of lattice QCD clusters requires careful attention to balancing memory bandwidth, floating point throughput, and network performance. We will discuss our investigations of various commodity processors, including Pentium 4E, Xeon, Opteron, and PPC970. We will also discuss our early experiences with the emerging Infiniband and PCI Express architectures. Finally, we will present our predictions and plans for future clusters.
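
    The balance the authors describe can be summarized by a rough min-of-bounds model (an illustrative sketch, not a formula from the paper), in which a node's sustained throughput is limited by whichever of peak floating-point rate, memory bandwidth, or network bandwidth saturates first:

      % Illustrative balance model; r_mem and r_net are the bytes of memory traffic and
      % network traffic the target application needs per floating-point operation.
      F_{\mathrm{sustained}} \;\lesssim\; \min\!\left( F_{\mathrm{peak}},\;
          \frac{B_{\mathrm{mem}}}{r_{\mathrm{mem}}},\;
          \frac{B_{\mathrm{net}}}{r_{\mathrm{net}}} \right)

    In this picture a cluster is well balanced when none of the three terms is far smaller than the others for the application of interest.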

  4. Central nervous system leukemia and lymphoma: computed tomographic manifestations

    SciTech Connect

    Pagani, J.J.; Libshitz, H.I.; Wallace, S.; Hayman, L.A.

    1981-12-01

    Computed tomographic (CT) abnormalities in the brain were identified in 31 of 405 patients with leukemia or lymphoma. Abnormalities included neoplastic masses (15), hemorrhage (nine), abscess (two), other brain tumors (four), and methotrexate leukoencephalopathy (one). CT was normal in 374 patients including 148 with meningeal disease diagnosed by cerebrospinal fluid cytologic examination. Prior to treatment, malignant masses were isodense or of greater density with varying amounts of edema. Increase in size or number of the masses indicated worsening. Response to radiation and chemotherapy was manifested by development of a central low density region with an enhancing rim. CT findings correlated with clinical and cerebrospinal fluid findings. The differential diagnosis of the various abnormalities is considered.

  5. Grids, virtualization, and clouds at Fermilab

    SciTech Connect

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud and GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  6. Grids, virtualization, and clouds at Fermilab

    DOE PAGES (Beta)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud and GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  7. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud and GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  8. Scintillator manufacture at Fermilab

    SciTech Connect

    Mellott, K.; Bross, A.; Pla-Dalmau, A.

    1998-08-01

    A decade of research into plastic scintillation materials at Fermilab is reviewed. Early work with plastic optical fiber fabrication is revisited and recent experiments with large-scale commercial methods for production of bulk scintillator are discussed. Costs for various forms of scintillator are examined and new development goals including cost reduction methods and quality improvement techniques are suggested.

  9. Mathematical modeling of a Fermilab helium liquefier coldbox

    SciTech Connect

    Geynisman, M.G.; Walker, R.J.

    1995-12-01

    The Fermilab Central Helium Liquefier (CHL) facility is operated 24 hours a day to supply 4.6 K helium for the Fermilab Tevatron superconducting proton-antiproton collider ring and to recover warm return gases. The centerpieces of the CHL are two independent cold boxes rated at 4000 and 5400 liters/hour with LN2 precool. These coldboxes are Claude cycle and have identical heat exchanger trains, but different turbo-expanders. The Tevatron cryogenics demand for higher helium supply from CHL was the driving force to investigate an installation of an expansion engine in place of the Joule-Thomson valve. A mathematical model was developed to describe the thermo- and gas-dynamic processes for the equipment included in the helium coldbox. The model is based on a finite element approach, as opposed to a global variables approach, thus providing higher accuracy and convergence stability. Though the coefficients used in the thermo- and gas-dynamic equations are unique for a given coldbox, the general approach, the equations, the methods of computation, and most of the subroutines written in FORTRAN can be readily applied to different coldboxes. The simulation results are compared against actual operating data to demonstrate the applicability of the model.
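
    As a sketch of the kind of relation such a finite-element model discretizes for each heat-exchanger segment (an illustrative form only; the paper's actual equations and FORTRAN implementation may differ), a steady-state counterflow element balances the enthalpy change of the two streams against the heat transferred across the element:

      % Per-element steady-state energy balance for a counterflow heat-exchanger segment
      % (illustrative form; not quoted from the paper).
      \dot{m}_h \left( h_{h,\mathrm{in}} - h_{h,\mathrm{out}} \right)
        = \dot{m}_c \left( h_{c,\mathrm{out}} - h_{c,\mathrm{in}} \right)
        = U A \, \Delta T_{\mathrm{lm}},
      \qquad
      \Delta T_{\mathrm{lm}} = \frac{\Delta T_1 - \Delta T_2}{\ln\left( \Delta T_1 / \Delta T_2 \right)}

    Chaining many such elements, each with local helium property data, is what distinguishes a finite-element treatment from a single global balance over the whole coldbox.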

  10. Evaluating Computer Technology Integration in a Centralized School System

    ERIC Educational Resources Information Center

    Eteokleous, N.

    2008-01-01

    The study evaluated the current situation in Cyprus elementary classrooms regarding computer technology integration in an attempt to identify ways of expanding teachers' and students' experiences with computer technology. It examined how Cypriot elementary teachers use computers, and the factors that influence computer integration in their…

  11. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  12. The Fermilab physics class library

    SciTech Connect

    Fischler, M.; Brown, W.; Gaines, I.; Kennedy, R.D.; Marraffino, J.; Michelotti, L.; Sexton-Kennedy, E.; Yoh, J.; Adams, D.; Paterno, M.

    1997-02-01

    The Fermilab Physics Class Library Task Force has been formed to supply classes and utilities, primarily in support of efforts by CDF and D0 toward using C++. A collection of libraries and tools will be assembled via development by the task force, collaboration with other HEP developers, and acquisition of existing modules. The main emphasis is on a kit of resources which physics coders can incorporate into their programs, with confidence in robustness and correct behavior. The task force is drawn from CDF, D0 and the FNAL Computing and Beams Divisions. Modules (containers, linear algebra, histograms, etc.) have been assigned priority based on immediate Run II coding activity, and will be available at times ranging from now to late May.

  13. Fermilab Library projects

    SciTech Connect

    Garrett, P.; Ritchie, D.

    1990-05-03

    Preprint database management as done at various centers, the subject of this workshop, is hard to separate from the overall activities of the particular center. We therefore present the wider context at the Fermilab Library into which preprint database management fits. The day-to-day activities of the Library aside, the dominant activity at present is that of the ongoing Fermilab Library Automation. A less dominant but relatively time-consuming activity is that of doing more online searches in commercial databases on behalf of laboratory staff and visitors. A related activity is that of exploring the benefits of end-user searching of similar sources as opposed to library staff searching of the same. The Library Automation Project, which began about two years ago, is about to go fully online. The rationale behind this project is described in the documents developed during the December 1988 to February 1989 planning phase.

  14. Scintillator manufacture at Fermilab

    SciTech Connect

    Mellott, K.; Bross, A.; Pla-Dalmau, A.

    1998-11-01

    A decade of research into plastic scintillation materials at Fermilab is reviewed. Early work with plastic optical fiber fabrication is revisited and recent experiments with large-scale commercial methods for production of bulk scintillator are discussed. Costs for various forms of scintillator are examined and new development goals including cost reduction methods and quality improvement techniques are suggested. © 1998 American Institute of Physics.

  15. Fixed target experiments at the Fermilab Tevatron

    SciTech Connect

    Gutierrez, Gaston; Reyes, Marco A.

    2014-11-10

    This paper presents a review of the study of Exclusive Central Production at a Center of Mass energy of √s = 40 GeV at the Fermilab Fixed Target program. In all reactions reviewed in this paper, protons with an energy of 800 GeV were extracted from the Tevatron accelerator at Fermilab and directed to a Liquid Hydrogen target. The states reviewed include π⁺π⁻, K⁰s K⁰s, K⁰s K±π∓, φφ and D*±. Partial Wave Analysis results will be presented on the light states but only the cross-section will be reviewed in the diffractive production of D*±.

  16. Fixed target experiments at the Fermilab Tevatron

    DOE PAGESBeta

    Gutierrez, Gaston; Reyes, Marco A.

    2014-11-10

    This paper presents a review of the study of Exclusive Central Production at a Center of Mass energy of √s = 40 GeV at the Fermilab Fixed Target program. In all reactions reviewed in this paper, protons with an energy of 800 GeV were extracted from the Tevatron accelerator at Fermilab and directed to a Liquid Hydrogen target. The states reviewed include π⁺π⁻, K⁰s K⁰s, K⁰s K±π∓, φφ and D*±. Partial Wave Analysis results will be presented on the light states but only the cross-section will be reviewed in the diffractive production of D*±.

  17. The Fermilab ISDN Pilot Project: Experiences and future plans

    SciTech Connect

    Martin, D.E.; Lego, A.J.; Clifford, A.E.

    1995-12-31

    Fully operational in June of 1994, the Fermilab ISDN Pilot Project was started to gain insight into the costs and benefits of providing ISDN service to the homes of Fermilab researchers. Fourteen users were chosen from throughout Fermilab, but the number of Fermilab-employed spouses pushed the total user count to 20. Each home was equipped with a basic rate ISDN (BRI) line, a BRI Ethernet half-bridge, and an NT-1. An inter-departmental team coordinated the project. Usage at each home was tracked and frequent surveys were attempted. Lessons learned include: working with Ameritech can be difficult; careful monitoring is essential; and configuration of home computing equipment is very time consuming. Plans include moving entirely to primary rate ISDN hubs, support for different home ISDN equipment and better usage and performance tracking.

  18. Process as Content in Computer Science Education: Empirical Determination of Central Processes

    ERIC Educational Resources Information Center

    Zendler, A.; Spannagel, C.; Klaudt, D.

    2008-01-01

    Computer science education should not be based on short-term developments but on content that is observable in multiple domains of computer science, may be taught at every intellectual level, will be relevant in the longer term, and is related to everyday language and/or thinking. Recently, a catalogue of "central concepts" for computer science…

  19. Progress on the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha

    2015-12-01

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  20. Status of the Fermilab Recycler

    SciTech Connect

    Derwent, P.F.; /Fermilab

    2007-09-01

    The author presents the current operational status of the Fermilab Recycler Ring. Using a mix of stochastic and electron cooling, we prepare antiproton beams for the Fermilab Tevatron Collider program. Included are discussions of stashing and cooling performance, operational scenarios, and collider performance.

  1. Fermilab DART run control

    SciTech Connect

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-05-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of data acquisition systems. We discuss the unique and interesting concepts of the run control and some of our experiences in developing it. We also give a brief update and status of the whole DART system.

  2. Flying wires at Fermilab

    SciTech Connect

    Gannon, J.; Crawford, C.; Finley, D.; Flora, R.; Groves, T.; MacPherson, M.

    1989-03-01

    Transverse beam profile measurement systems called "Flying Wires" have been installed and made operational in the Fermilab Main Ring and Tevatron accelerators. These devices are used routinely to measure the emittance of both protons and antiprotons throughout the fill process, and for emittance growth measurements during stores. In the Tevatron, the individual transverse profiles of six proton and six antiproton bunches are obtained simultaneously, with a single pass of the wire through the beam. Essential features of the hardware, software, and system operation are explained in the rest of the paper. 3 refs., 4 figs.

  3. Neutrino Physics at Fermilab

    ScienceCinema

    Saoulidou, Niki

    2010-01-08

    Neutrino oscillations provide the first evidence for physics beyond the Standard Model. I will briefly overview the neutrino "hi-story", describing key discoveries over the past decades that shaped our understanding of neutrinos and their behavior. Fermilab was, is, and hopefully will be at the forefront of accelerator neutrino experiments. NuMI, the most powerful accelerator neutrino beam in the world, has ushered us into the era of precise measurements. Its further upgrades may give a chance to tackle the remaining mysteries of the neutrino mass hierarchy and possible CP violation.

  4. The Fermilab lattice information repository

    SciTech Connect

    Ostiguy, J.-F.; Michelotti, L.; McCusker-Whiting, M.; Kriss, M.; /Fermilab

    2005-05-01

    Over the years, it has become increasingly obvious that a centralized lattice and machine information repository with the capability of keeping track of revision information could be of great value. This is especially true in the context of a large accelerator laboratory like Fermilab with six rings and sixteen beamlines operating in various modes and configurations, constantly subject to modifications, improvements and even major redesign. While there exist a handful of potentially suitable revision systems--both freely available and commercial--our experience has shown that expecting beam physicists to become fully conversant with complex revision system software used on an occasional basis is neither realistic nor practical. In this paper, we discuss technical aspects of the FNAL lattice repository, whose fully web-based interface hides the complexity of Subversion, a comprehensive open source revision system. The FNAL repository has been operational since September 2004; the unique architecture of Subversion has been a key ingredient of the technical success of its implementation.

  5. The Fermilab Particle Astrophysics Center

    SciTech Connect

    Not Available

    2004-11-01

    The Particle Astrophysics Center was established in fall of 2004. Fermilab director Michael S. Witherell has named Fermilab cosmologist Edward "Rocky" Kolb as its first director. The Center will function as an intellectual focus for particle astrophysics at Fermilab, bringing together the Theoretical and Experimental Astrophysics Groups. It also encompasses existing astrophysics projects, including the Sloan Digital Sky Survey, the Cryogenic Dark Matter Search, and the Pierre Auger Cosmic Ray Observatory, as well as proposed projects, including the SuperNova Acceleration Probe to study dark energy as part of the Joint Dark Energy Mission, and the ground-based Dark Energy Survey aimed at measuring the dark energy equation of state.

  6. Research Activities at Fermilab for Big Data Movement

    SciTech Connect

    Mhashilkar, Parag; Wu, Wenji; Kim, Hyun W; Garzoglio, Gabriele; Dykstra, Dave; Slyz, Marko; DeMar, Phil

    2013-01-01

    Adoption of 100GE networking infrastructure is the next step towards management of big data. Being the US Tier-1 Center for the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment and the central data center for several other large-scale research collaborations, Fermilab has to constantly deal with the scaling and wide-area distribution challenges of big data. In this paper, we describe some of the challenges involved in the movement of big data over 100GE infrastructure and the research activities at Fermilab to address these challenges.

  7. The Fermilab neutrino beam program

    SciTech Connect

    Rameika, Regina A.; /Fermilab

    2007-01-01

    This talk presents an overview of the Fermilab Neutrino Beam Program. Results from completed experiments as well as the status and outlook for current experiments are given. Emphasis is placed on current activities towards planning for a future program.

  8. Vertically Integrated Circuits at Fermilab

    SciTech Connect

    Deptuch, Grzegorz; Demarteau, Marcel; Hoff, James; Lipton, Ronald; Shenai, Alpana; Trimpl, Marcel; Yarema, Raymond; Zimmerman, Tom; /Fermilab

    2009-01-01

    The exploration of vertically integrated circuits, also commonly known as 3D-IC technology, for applications in radiation detection started at Fermilab in 2006. This paper examines the opportunities that vertical integration offers by looking at various 3D designs that have been completed by Fermilab. The emphasis is on opportunities presented by through silicon vias (TSV), wafer and circuit thinning, and finally fusion bonding techniques to replace conventional bump bonding. Early work by Fermilab has led to an international consortium for the development of 3D-IC circuits for High Energy Physics. The consortium has submitted over 25 different designs for the first multi-project wafer (MPW) run organized by Fermilab.

  9. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    ERIC Educational Resources Information Center

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  10. Stochastic cooling technology at Fermilab

    NASA Astrophysics Data System (ADS)

    Pasquinelli, Ralph J.

    2004-10-01

    The first antiproton cooling systems were installed and commissioned at Fermilab in 1984-1985. In the interim period, there have been several major upgrades, system improvements, and complete reincarnation of cooling systems. This paper will present some of the technology that was pioneered at Fermilab to implement stochastic cooling systems in both the Antiproton Source and Recycler accelerators. Current performance data will also be presented.

  11. Bunch coalescing in the Fermilab Main Ring

    SciTech Connect

    Wildman, D.; Martin, P.; Meisner, K.; Miller, H.W.

    1987-03-01

    A new rf system has been installed in the Fermilab Main Ring to coalesce up to 13 individual bunches of protons or antiprotons into a single high-intensity bunch. The coalescing process consists of adiabatically reducing the h = 1113 Main Ring rf voltage from 1 MV to less than 1 kV, capturing the debunched beam in a linearized h = 53 and h = 106 bucket, rotating for a quarter of a synchrotron oscillation period, and then recapturing the beam in a single h = 1113 bucket. The new system will be described and the results of recent coalescing experiments will be compared with computer-generated particle tracking simulations.

  12. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Hoerger, J.

    1984-01-01

    Users of ADABAS, a relational-like database management system with its database programming language (NATURAL), are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" database on "their own" microcomputer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these microcomputers must be integrated with the centralized DBMS. An easy to use and flexible means for transferring logical database files between the central database machine and microcomputers must be provided. Some of the problems encountered in an effort to accomplish this integration and possible solutions are discussed.

  13. Computation of Baarda's lower bound of the non-centrality parameter

    NASA Astrophysics Data System (ADS)

    Aydin, C.; Demirel, H.

    2005-03-01

    Baarda's reliability measures for outliers, as well as sensitivity and separability measures for deformations, are functions of the lower bound of the non-centrality parameter (LBNP). This parameter, which is taken from Baarda's well-known nomograms, is actually a non-centrality parameter of the cumulative distribution function (CDF) of the non-central χ²-distribution yielding a complementary probability of the desired power of the test, i.e. the probability of a Type II error. It is investigated how the LBNP can be computed for desired probabilities (power of the test and significance level) and known degrees of freedom. Two recursive algorithms, namely bisection and the Newton algorithm, were applied to compute the LBNP after the definition of a stable and accurate algorithm for the computation of the corresponding CDF. Although both recursive algorithms reach the desired accuracy, it is shown numerically that the Newton algorithm converges to the solution faster than the bisection algorithm.
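    As a concrete illustration of this computation, the following minimal Python sketch solves for the LBNP given a significance level, a desired power, and the degrees of freedom. It assumes SciPy's non-central χ² implementation and uses Brent's bracketing method in place of the paper's bisection and Newton recursions; the variable names and tolerances are illustrative, not taken from the paper.

        # Minimal sketch (not the authors' code): solve for the lower bound of the
        # non-centrality parameter lambda_0 given significance alpha_0, power gamma_0,
        # and degrees of freedom df, using SciPy's non-central chi-square CDF.
        from scipy.stats import chi2, ncx2
        from scipy.optimize import brentq

        def lbnp(alpha0, gamma0, df):
            # Critical value of the central chi-square test at significance alpha_0.
            k = chi2.ppf(1.0 - alpha0, df)
            # Find lambda such that the non-central CDF at k equals 1 - gamma_0,
            # i.e. the power of the test equals gamma_0.
            f = lambda lam: ncx2.cdf(k, df, lam) - (1.0 - gamma0)
            return brentq(f, 1e-9, 1e4)

        # alpha_0 = 0.001, power = 0.80, one degree of freedom reproduces the
        # familiar lambda_0 of about 17 used in reliability analysis.
        print(round(lbnp(0.001, 0.80, 1), 2))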

  14. Resistive-wall instability at Fermilab recycler ring

    SciTech Connect

    Ng, King-Yuen B.; /Fermilab

    2004-11-01

    Sporadic transverse instabilities have been observed at the Fermilab Recycler Ring, leading to increases in transverse emittances and beam loss. The driving source of these instabilities has been attributed to the resistive-wall impedance, with space charge playing an important role in suppressing Landau damping. Growth rates of the instabilities have been computed. Remaining problems are discussed.

  15. Review of programmable systems associated with Fermilab experiments

    SciTech Connect

    Nash, T.

    1981-05-01

    The design and application of programmable systems for Fermilab experiments are reviewed. The high luminosity fixed target environment at Fermilab has been a very fertile ground for the development of sophisticated, powerful triggering systems. A few of these are integrated systems designed to be flexible and to have broad application. Many are dedicated triggers taking advantage of large scale integrated circuits to focus on the specific needs of one experiment. In addition, the data acquisition requirements of large detectors, existing and planned, are being met with programmable systems to process the data. Offline reconstruction of data places a very heavy load on large general purpose computers. This offers a potentially very fruitful area for new developments involving programmable dedicated systems. Some of the present thinking at Fermilab regarding offline reconstruction processors will be described.

  16. Using the central VAX 8700 computer at ANL (Argonne National Laboratory)

    SciTech Connect

    Lark, D.T.; Caruthers, C.M.; Bragg, R.W.

    1988-09-01

    This paper is a manual for using the VAX 8700 computer at ANL. The chapters include: The central VAX cluster: What it is and how it works; Training and other available assistance; Getting started with the VAX 8700 computer and VAX/VMS; Using the VAX/VMS file system; Developing programs in VMS; Using batch; Using available software; and Using graphics in VAX/VMS. (LSP)

  17. Theoretical Astrophysics at Fermilab

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Theoretical Astrophysics Group works on a broad range of topics ranging from string theory to data analysis in the Sloan Digital Sky Survey. The group is motivated by the belief that a deep understanding of fundamental physics is necessary to explain a wide variety of phenomena in the universe. During the three years 2001-2003 of our previous NASA grant, over 120 papers were written; ten of our postdocs went on to faculty positions; and we hosted or organized many workshops and conferences. Kolb and collaborators focused on the early universe, in particular models and ramifications of the theory of inflation. They also studied models with extra dimensions, new types of dark matter, and the second order effects of super-horizon perturbations. Stebbins, Frieman, Hui, and Dodelson worked on phenomenological cosmology, extracting cosmological constraints from surveys such as the Sloan Digital Sky Survey. They also worked on theoretical topics such as weak lensing, reionization, and dark energy. This work has proved important to a number of experimental groups [including those at Fermilab] planning future observations. In general, the work of the Theoretical Astrophysics Group has served as a catalyst for experimental projects at Fermilab. An example of this is the Joint Dark Energy Mission. Fermilab is now a member of SNAP, and much of the work done here is by people formerly working on the accelerator. We have created an environment where many of these people made the transition from physics to astronomy. We also worked on many other topics related to NASA's focus: cosmic rays, dark matter, the Sunyaev-Zel'dovich effect, the galaxy distribution in the universe, and the Lyman alpha forest. The group organized and hosted a number of conferences and workshops over the years covered by the grant. Among them were:

  18. The Organization and Evaluation of a Computer-Assisted, Centralized Immunization Registry.

    ERIC Educational Resources Information Center

    Loeser, Helen; And Others

    1983-01-01

    Evaluation of a computer-assisted, centralized immunization registry after one year shows that 93 percent of eligible health practitioners initially agreed to provide data and that 73 percent continue to do so. Immunization rates in audited groups have improved significantly. (GC)

  19. A Computer Program for Training Eccentric Reading in Persons with Central Scotoma

    ERIC Educational Resources Information Center

    Kasten, Erich; Haschke, Peggy; Meinhold, Ulrike; Oertel-Verweyen, Petra

    2010-01-01

    This article explores the effectiveness of a computer program--Xcentric viewing--for training eccentric reading in persons with central scotoma. The authors conducted a small study to investigate whether this program increases the reading capacities of individuals with age-related macular degeneration (AMD). Instead of a control group, they…

  20. 51. VIEW OF LORAL ADS 100A COMPUTERS LOCATED CENTRALLY ON ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    51. VIEW OF LORAL ADS 100A COMPUTERS LOCATED CENTRALLY ON NORTH WALL OF TELEMETRY ROOM (ROOM 106). SLC-3W CONTROL ROOM IS VISIBLE IN BACKGROUND THROUGH WINDOW IN NORTH WALL. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  1. Using the central VAX (Virtual Address Extension) 8700 computer at ANL

    SciTech Connect

    Bennington, T.A.; Savage, M.A.; Lifka, D.A.

    1990-05-01

    This report discusses: the central VAX cluster: what it is and how it works; getting started with the VAX8700 computer and VAX/VMS; training and other available assistance; using the VAX/VMS file system; tape management; printing VMS files; developing programs in VMS; using VMS batch, command procedures, and subprocesses; using available software; and using graphics in VAX/VMS.

  2. Eddy current scanning at Fermilab

    SciTech Connect

    Boffo, C.; Bauer, P.; Foley, M.; Brinkmann, A.; Ozelis, J.; /Jefferson Lab

    2005-07-01

    In the framework of SRF cavity development, Fermilab is creating the infrastructure needed for the characterization of the material used in cavity fabrication. An important step in the characterization of "as received" niobium sheets is eddy current scanning. Eddy current scanning is a non-destructive technique first adopted and further developed by DESY with the purpose of checking the cavity material for sub-surface defects and inclusions. Fermilab has received and further upgraded a commercial eddy current scanner previously used for the SNS project. The upgrading process included developing new filtering software. This scanner is now used daily to scan the niobium sheets for the Fermilab third harmonic and transverse deflecting cavities. This paper gives a status report on the scanning results obtained so far, including a discussion of the typology of signals being detected. We also report on the efforts to calibrate this scanner, a work conducted in collaboration with DESY.

  3. Future hadron physics facilities at Fermilab

    SciTech Connect

    Appel, Jeffrey A.; /Fermilab

    2004-12-01

    Fermilab's hadron physics research continues in all its accelerator-based programs. These efforts will be identified, and the optimization of the Fermilab schedules for physics will be described. In addition to the immediate plans, the Fermilab Long Range Plan will be cited, and the status and potential role of a new proton source, the Proton Driver, are described.

  4. The Fermilab data storage infrastructure

    SciTech Connect

    Jon A Bakken et al.

    2003-02-06

    Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP for primarily external data transfers. This infrastructure provides a data throughput sufficient for transferring data from experiments' data acquisition systems. It also allows access to data in the Grid framework.

  5. Beam Trail Tracking at Fermilab

    SciTech Connect

    Nicklaus, Dennis J.; Carmichael, Linden Ralph; Neswold, Richard; Yuan, Zongwei

    2015-01-01

    We present a system for acquiring and sorting data from select devices depending on the destination of each particular beam pulse in the Fermilab accelerator chain. The 15 Hz beam that begins in the Fermilab ion source can be directed to a variety of additional accelerators, beam lines, beam dumps, and experiments. We have implemented a data acquisition system that senses the destination of each pulse and reads the appropriate beam intensity devices so that profiles of the beam can be stored and analysed for each type of beam trail. We envision utilizing this data long term to identify trends in the performance of the accelerators.
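    A minimal sketch of the destination-keyed bookkeeping idea described above, in Python. The destination labels, device names, and values are invented for illustration; they are not actual Fermilab device names, and the real system reads hardware rather than hard-coded numbers.

        # Hypothetical destination-keyed pulse sorting: each 15 Hz pulse carries a
        # destination tag, and the readings taken on that pulse are filed under it.
        from collections import defaultdict

        profiles_by_destination = defaultdict(list)

        def record_pulse(destination, intensity_readings):
            """Store one pulse's intensity readings under its destination."""
            profiles_by_destination[destination].append(intensity_readings)

        # Example pulses (made-up destinations, devices, and values).
        record_pulse("BOOSTER_DUMP", {"toroid_a": 3.1e12, "toroid_b": 3.0e12})
        record_pulse("MAIN_INJECTOR", {"toroid_a": 4.4e12, "toroid_b": 4.3e12})
        record_pulse("MAIN_INJECTOR", {"toroid_a": 4.5e12, "toroid_b": 4.4e12})

        # Long-term trend for one destination: mean of one device over its pulses.
        mi = profiles_by_destination["MAIN_INJECTOR"]
        print(sum(p["toroid_a"] for p in mi) / len(mi))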

  6. Fermilab's Satellite Refrigerator Expansion Engines

    SciTech Connect

    Peterson, Thomas J.

    1983-01-01

    Each of Fermilab's 24 satellite refrigerators includes two reciprocating expanders, a "wet" engine and a "dry" engine. The wet engines and all but eleven of the dry engines were manufactured by Koch Process Systems (Westboro, Massachusetts). These are basically Koch Model 1400 expanders installed in cryostats designed by Fermilab. The other eleven dry engines are an in-house design referred to as "Gardner-Fermi" engines since they evolved from the GX3-2500 engines purchased from Gardner Cryogenics. Table I summarizes the features of our three types of expanders....

  7. Accelerator neutrino program at Fermilab

    SciTech Connect

    Parke, Stephen J.; /Fermilab

    2010-05-01

    The accelerator neutrino programme in the USA consists primarily of the Fermilab neutrino programme. Currently, Fermilab operates two neutrino beamlines, the Booster neutrino beamline and the NuMI neutrino beamline, and is in the planning stages for a third neutrino beam to send neutrinos to DUSEL. The experiments in the Booster neutrino beamline are MiniBooNE, SciBooNE and, in the future, MicroBooNE, whereas in the NuMI beamline we have MINOS, ArgoNeuT, MINERvA and, coming soon, NOvA. The major experiment in the beamline to DUSEL will be LBNE.

  8. Future hadron physics at Fermilab

    SciTech Connect

    Appel, Jeffrey A.; /Fermilab

    2005-09-01

    Today, hadron physics research occurs at Fermilab as parts of broader experimental programs. This is very likely to be the case in the future. Thus, much of this presentation focuses on our vision of that future--a future aimed at making Fermilab the host laboratory for the International Linear Collider (ILC). Given the uncertainties associated with the ILC--the level of needed R&D, the ILC costs, and the timing--Fermilab is also preparing for other program choices. I will describe these latter efforts, efforts focused on a Proton Driver to increase the numbers of protons available for experiments. As examples of the hadron physics which will be coming from Fermilab, I summarize three experiments: MIPP/E907 which is running currently, and MINERvA and Drell-Yan/E906 which are scheduled for future running periods. Hadron physics coming from the Tevatron Collider program will be summarized by Arthur Maciel in another talk at Hadron05.

  9. The Holometer: A Fermilab Experiment

    SciTech Connect

    Chou, Aaron

    2015-12-16

    Do we live in a two-dimensional hologram? A group of Fermilab scientists has designed an experiment to find out. It’s called the Holometer, and this video gives you a behind-the-scenes look at the device that could change the way we see the universe.

  10. Development of Cogging at the Fermilab Booster

    SciTech Connect

    Seiya, K.; Chaurize, S.; Drennan, C.; Pellico, W.; Triplett, A. K.; Waller, A.

    2015-01-30

    The development of magnetic cogging is part of the Fermilab Booster upgrade within the Proton Improvement Plan (PIP). The Booster is going to send 2.25E17 protons/hour, almost double the present flux of 1.4E17 protons/hour, to the Main Injector (MI) and Recycler (RR). The extraction kicker gap has to synchronize to the MI and RR injection bucket in order to avoid beam loss at the rising edge of the extraction and injection kickers. Magnetic cogging is able to control the revolution frequency and the position of the gap using the magnetic field from dipole correctors, while radial position feedback keeps the beam at the central orbit. The new cogging is expected to reduce beam loss due to the orbit changes and reduce beam energy loss when the gap is created. The progress of the magnetic cogging system development is discussed in this paper.

  11. Computer and photogrammetric general land use study of central north Alabama

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Larsen, P. A.; Campbell, C. W.

    1974-01-01

    The object of this report is to acquaint potential users with two computer programs, developed at NASA, Marshall Space Flight Center. They were used in producing a land use survey and maps of central north Alabama from Earth Resources Technology Satellite (ERTS) digital data. The report describes in detail the thought processes and analysis procedures used from the initiation of the land use study to its completion, as well as a photogrammetric study that was used in conjunction with the computer analysis to produce similar land use maps. The results of the land use demonstration indicate that, with respect to computer time and cost, such a study may be economically and realistically feasible on a statewide basis.

  12. Neuromotor recovery from stroke: computational models at central, functional, and muscle synergy level

    PubMed Central

    Casadio, Maura; Tamagnone, Irene; Summa, Susanna; Sanguineti, Vittorio

    2013-01-01

    Computational models of neuromotor recovery after a stroke might help to unveil the underlying physiological mechanisms and might suggest how to make recovery faster and more effective. At least in principle, these models could serve: (i) To provide testable hypotheses on the nature of recovery; (ii) To predict the recovery of individual patients; (iii) To design patient-specific “optimal” therapy, by setting the treatment variables for maximizing the amount of recovery or for achieving a better generalization of the learned abilities across different tasks. Here we review the state of the art of computational models for neuromotor recovery through exercise, and their implications for treatment. We show that to properly account for the computational mechanisms of neuromotor recovery, multiple levels of description need to be taken into account. The review specifically covers models of recovery at central, functional and muscle synergy level. PMID:23986688

  13. Neutrino Project X at Fermilab

    SciTech Connect

    Parke, Stephen J.; /Fermilab

    2008-07-01

    In this talk I will give a brief description of Project X and an outline of the Neutrino Physics possibilities it provides at Fermilab. Project X is the generic name given to a new intense proton source at Fermilab. This source would produce more than 2 MW of proton power at 50 to 120 GeV, using the Main Injector, which could be used for a variety of long baseline neutrino experiments. A new 8 GeV linac would be required, with many components aligned with a possible future ILC. In addition to the beam power from the Main Injector, there is an additional 200 kW of 8 GeV protons that could be used for kaon and muon experiments.

  14. Beam intensity upgrade at Fermilab

    SciTech Connect

    Marchionni, A.; /Fermilab

    2006-07-01

    The performance of the Fermilab proton accelerator complex is reviewed. The coming into operation of the NuMI neutrino line and the implementation of slip-stacking to increase the antiproton production rate have pushed the total beam intensity in the Main Injector up to ~3 × 10¹³ protons/pulse. A maximum beam power of 270 kW has been delivered on the NuMI target during the first year of operation. A plan is in place to increase it to 350 kW, in parallel with the operation of the Collider program. As more machines of the Fermilab complex become available with the termination of the Collider operation, a set of upgrades are being planned to reach first 700 kW and then 1.2 MW by reducing the Main Injector cycle time and by implementing proton stacking.
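    As a rough cross-check of the numbers quoted above, the beam power follows from the intensity, the beam energy, and the cycle time. The ~2 s Main Injector cycle time below is an assumption for illustration; it is not stated in the abstract.

        # Back-of-the-envelope beam power estimate (cycle time is assumed).
        E_PROTON_EV = 120e9          # proton energy on target, eV
        PROTONS_PER_PULSE = 3e13     # quoted Main Injector intensity
        CYCLE_TIME_S = 2.0           # assumed cycle time
        EV_TO_J = 1.602e-19

        energy_per_pulse_j = PROTONS_PER_PULSE * E_PROTON_EV * EV_TO_J
        beam_power_kw = energy_per_pulse_j / CYCLE_TIME_S / 1e3
        print(f"{beam_power_kw:.0f} kW")   # ~290 kW, in line with the 270-350 kW figures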

  15. Fermilab recycler stochastic cooling commissioning and performance

    SciTech Connect

    D. Broemmelsiek; Ralph Pasquinelli

    2003-06-04

    The Fermilab Recycler is a fixed 8 GeV kinetic energy storage ring located in the Fermilab Main Injector tunnel near the ceiling. The Recycler has two roles in Run II. First, to store antiprotons from the Fermilab Antiproton Accumulator so that the antiproton production rate is no longer compromised by large numbers of antiprotons stored in the Accumulator. Second, to receive antiprotons from the Fermilab Tevatron at the end of luminosity periods. To perform each of these roles, stochastic cooling in the Recycler is needed to preserve and cool antiprotons in preparation for transfer to the Tevatron. The commissioning and performance of the Recycler stochastic cooling systems will be reviewed.

  16. Speeding Up Network Layout and Centrality Measures for Social Computing Goals

    NASA Astrophysics Data System (ADS)

    Sharma, Puneet; Khurana, Udayan; Shneiderman, Ben; Scharrenbroich, Max; Locke, John

    This paper presents strategies for speeding up calculation of graph metrics and layout by exploiting the parallel architecture of modern day Graphics Processing Units (GPU), specifically Compute Unified Device Architecture (CUDA) by Nvidia. Graph centrality metrics like Eigenvector, Betweenness, Page Rank and layout algorithms like Fruchterman-Rheingold are essential components of Social Network Analysis (SNA). With the growth in adoption of SNA in different domains and increasing availability of huge networked datasets for analysis, social network analysts require faster tools that are also scalable. Our results, using NodeXL, show up to 802 times speedup for a Fruchterman-Rheingold graph layout and up to 17,972 times speedup for Eigenvector centrality metric calculations on a 240 core CUDA-capable GPU.
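    For reference, here is a plain CPU sketch of one of the metrics named above, eigenvector centrality by power iteration, written with NumPy only. This is the kind of baseline computation the paper accelerates on a CUDA-capable GPU; it is not the authors' NodeXL/CUDA code, and the small test graph is invented for illustration.

        # Eigenvector centrality by power iteration (CPU reference, NumPy only).
        # Iterating on (A + I) keeps the dominant eigenvalue unique and positive
        # even for bipartite graphs, without changing the eigenvectors.
        import numpy as np

        def eigenvector_centrality(adj, iters=500, tol=1e-12):
            """adj: symmetric adjacency matrix (n x n); returns unit-norm scores."""
            n = adj.shape[0]
            x = np.ones(n) / np.sqrt(n)
            for _ in range(iters):
                x_new = adj @ x + x            # (A + I) x
                x_new /= np.linalg.norm(x_new)
                if np.linalg.norm(x_new - x) < tol:
                    break
                x = x_new
            return x_new

        # Path graph 0-1-2: the middle node gets the largest score (about 0.71).
        A = np.array([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]])
        print(eigenvector_centrality(A))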

  17. Progress on the FabrIc for Frontier Experiments project at Fermilab

    DOE PAGESBeta

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha

    2015-01-01

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  18. Progress on the FabrIc for Frontier Experiments project at Fermilab

    SciTech Connect

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha

    2015-01-01

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  19. Search for top quark at Fermilab Collider

    SciTech Connect

    Sliwa, K.; The CDF Collaboration

    1991-10-01

    The status of a search for the top quark with Collider Detector at Fermilab (CDF), based on a data sample recorded during the 1988--1989 run is presented. The plans for the next Fermilab Collider run in 1992--1993 and the prospects of discovering the top quark are discussed. 19 refs., 4 figs., 2 tabs.

  20. Physics at an upgraded Fermilab proton driver

    SciTech Connect

    Geer, S.; /Fermilab

    2005-07-01

    In 2004 the Fermilab Long Range Planning Committee identified a new high intensity Proton Driver as an attractive option for the future, primarily motivated by the recent exciting developments in neutrino physics. Over the last few months a physics study has developed the physics case for the Fermilab Proton Driver. The potential physics opportunities are discussed.

  1. Neutrino SuperBeams at Fermilab

    SciTech Connect

    Parke, Stephen J.; /Fermilab

    2011-08-23

    In this talk I will give a brief description of long baseline neutrino physics, the LBNE experiment and Project X at Fermilab, together with a brief outline of the physics of long baseline neutrino experiments.

  2. Computer model of Raritan River Basin water-supply system in central New Jersey

    USGS Publications Warehouse

    Dunne, Paul; Tasker, Gary D.

    1996-01-01

    This report describes a computer model of the Raritan River Basin water-supply system in central New Jersey. The computer model provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system during extended periods of below-average precipitation. The computer model is a continuity-accounting model consisting of a series of interconnected nodes. At each node, the inflow volume, outflow volume, and change in storage are determined and recorded for each month. The model runs with a given set of operating rules and water-use requirements including releases, pumpages, and diversions. The model can be used to assess the hypothetical performance of the Raritan River Basin water-supply system in past years under alternative sets of operating rules. It also can be used to forecast the likelihood of specified outcomes, such as the depletion of reservoir contents below a specified threshold or of streamflows below statutory minimum passing flows, for a period of up to 12 months. The model was constructed on the basis of current reservoir capacities and the natural, unregulated monthly runoff values recorded at U.S. Geological Survey streamflow-gaging stations in the basin.
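    A minimal sketch of the monthly continuity-accounting step described above, for a single reservoir node in Python. The variable names, the spill convention, and the example numbers are assumptions for illustration, not taken from the USGS model.

        # Hypothetical single-node monthly continuity accounting (illustrative only;
        # not the USGS Raritan model). Volumes are in consistent units, e.g. million gallons.
        def step_node(storage, inflow, release, diversion, capacity):
            """One month of continuity accounting at a reservoir node.

            Returns (new_storage, spill, shortfall): storage is clipped to [0, capacity],
            water above capacity spills downstream, and demand that cannot be met is
            reported as a shortfall.
            """
            balance = storage + inflow - release - diversion
            shortfall = max(0.0, -balance)          # demand not met this month
            balance = max(0.0, balance)
            spill = max(0.0, balance - capacity)    # excess passes downstream
            return min(balance, capacity), spill, shortfall

        # Example: 12 months of constant demand against a varying inflow trace.
        inflows = [900, 700, 500, 300, 200, 150, 150, 200, 400, 600, 800, 900]
        storage, capacity = 2000.0, 3000.0
        for month, q in enumerate(inflows, start=1):
            storage, spill, short = step_node(storage, q, release=250.0,
                                              diversion=300.0, capacity=capacity)
            print(f"month {month:2d}: storage={storage:7.1f} spill={spill:6.1f} shortfall={short:5.1f}")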

  3. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors in, and troubleshoot such a large system. Although monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.
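    A rough local sketch of the kind of per-node query such a WMI client performs, using the third-party Python wmi package on a Windows machine. The queried classes and the choice of free memory and disk space are generic WMI usage for illustration; they are not the CERN tool's actual interface, which the abstract does not describe.

        # Illustrative local WMI query (requires Windows and the third-party "wmi"
        # Python package); NOT the CERN monitoring client, only a sketch of the kind
        # of per-node health data such a client could collect.
        import wmi

        def snapshot():
            c = wmi.WMI()  # connect to the local WMI service
            for os_info in c.Win32_OperatingSystem():
                print("Free physical memory (kB):", os_info.FreePhysicalMemory)
            for disk in c.Win32_LogicalDisk(DriveType=3):  # fixed disks only
                print(disk.DeviceID, "free bytes:", disk.FreeSpace)

        if __name__ == "__main__":
            snapshot()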

  4. Barrier RF stacking at Fermilab

    SciTech Connect

    Weiren Chou et al.

    2003-06-04

    A key issue to upgrade the luminosity of the Tevatron Run2 program and to meet the neutrino requirement of the NuMI experiment at Fermilab is to increase the proton intensity on the target. This paper introduces a new scheme to double the number of protons from the Main Injector (MI) to the pbar production target (Run2) and to the pion production target (NuMI). It is based on the fact that the MI momentum acceptance is about a factor of four larger than the momentum spread of the Booster beam. Two RF barriers--one fixed, another moving--are employed to confine the proton beam. The Booster beams are injected off-momentum into the MI and are continuously reflected and compressed by the two barriers. Calculations and simulations show that this scheme could work provided that the Booster beam momentum spread can be kept under control. Compared with slip stacking, a main advantage of this new method is small beam loading effect thanks to the low peak beam current. The RF barriers can be generated by an inductive device, which uses nanocrystal magnet alloy (Finemet) cores and fast high voltage MOSFET switches. This device has been designed and fabricated by a Fermilab-KEK-Caltech team. The first bench test was successful. Beam experiments are being planned.

  5. Extruding plastic scintillator at Fermilab

    SciTech Connect

    Anna Pla-Dalmau; Alan D. Bross; Victor V. Rykalin

    2003-10-31

    An understanding of the costs involved in the production of plastic scintillators and the development of a less expensive material have become necessary with the prospects of building very large plastic scintillation detectors. Several factors contribute to the high cost of plastic scintillating sheets, but the principal reason is the labor-intensive nature of the manufacturing process. In order to significantly lower the costs, the current casting procedures had to be abandoned. Since polystyrene is widely used in the consumer industry, the logical path was to investigate the extrusion of commercial-grade polystyrene pellets with dopants to yield high quality plastic scintillator. This concept was tested and high quality extruded plastic scintillator was produced. The D0 and MINOS experiments are already using extruded scintillator strips in their detectors. An extrusion line has recently been installed at Fermilab in collaboration with NICADD (Northern Illinois Center for Accelerator and Detector Development). This new facility will serve to further develop and improve extruded plastic scintillator. This paper will discuss the characteristics of extruded plastic scintillator and its raw materials, the different manufacturing techniques and the current R&D program at Fermilab.

  6. The Fabric for Frontier Experiments Project at Fermilab

    SciTech Connect

    Kirby, Michael

    2014-01-01

    The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere, 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data, 3) custom and generic database applications for calibrations, beam information, and other purposes, 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.

  7. The Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Kirby, Michael

    2014-06-01

    The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.

  8. Consequences of stochastic release of neurotransmitters for network computation in the central nervous system.

    PubMed Central

    Burnod, Y; Korn, H

    1989-01-01

    Neuronal membrane potentials vary continuously due largely to background synaptic noise produced by ongoing discharges in their presynaptic afferents and shaped by probabilistic factors of transmitter release. We investigated how the random activity of an identified population of interneurons with known release properties influences the performance of central cells. In stochastic models such as thermodynamic ones, the probabilistic input-output function of a formal neuron is sigmoid, having its maximal slope inversely related to a variable called "temperature." Our results indicate that, for a biological neuron, the probability that given excitatory input signals reach threshold is also sigmoid, allowing definition of a temperature that is proportional to the mean number of quanta comprising noise and can be modified by activity in the presynaptic network, a notion which could be included in neural models. By introducing uncertainty to the input-output relation of central neurons, synaptic noise could be a critical determinant of neuronal computational systems, allowing assemblies of cells to undergo continuous transitions between states. PMID:2563165
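    A minimal numerical sketch of the sigmoid input-output relation described above, with the noise "temperature" setting the slope. The logistic form and the parameter values are illustrative; they are not fitted to the paper's data.

        # Sketch of a sigmoid firing-probability curve with a noise "temperature" T:
        # the probability that a given excitatory input reaches threshold. A larger T
        # (more background synaptic noise) flattens the curve.
        import math

        def firing_probability(excitation, threshold, temperature):
            """Logistic probability that the input crosses threshold."""
            return 1.0 / (1.0 + math.exp(-(excitation - threshold) / temperature))

        for T in (0.5, 1.0, 2.0):
            probs = [round(firing_probability(x, threshold=5.0, temperature=T), 3)
                     for x in range(0, 11, 2)]
            print(f"T={T}: {probs}")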

  9. Double diffraction dissociation at the Fermilab Tevatron collider.

    PubMed

    Affolder, T; Akimoto, H; Akopian, A; Albrow, M G; Amaral, P; Amidei, D; Anikeev, K; Antos, J; Apollinari, G; Arisawa, T; Asakawa, T; Ashmanskas, W; Azfar, F; Azzi-Bacchetta, P; Bacchetta, N; Bailey, M W; Bailey, S; de Barbaro, P; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Barone, M; Bauer, G; Bedeschi, F; Belforte, S; Bell, W H; Bellettini, G; Bellinger, J; Benjamin, D; Bensinger, J; Beretvas, A; Berge, J P; Berryhill, J; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Blusk, S R; Bocci, A; Bodek, A; Bokhari, W; Bolla, G; Bonushkin, Y; Borras, K; Bortoletto, D; Boudreau, J; Brandl, A; van Den Brink, S; Bromberg, C; Brozovic, M; Bruner, N; Buckley-Geer, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Byon-Wagner, A; Byrum, K L; Cabrera, S; Calafiura, P; Campbell, M; Carithers, W; Carlson, J; Carlsmith, D; Caskey, W; Castro, A; Cauz, D; Cerri, A; Chan, A W; Chang, P S; Chang, P T; Chapman, J; Chen, C; Chen, Y C; Cheng, M T; Chertok, M; Chiarelli, G; Chirikov-Zorin, I; Chlachidze, G; Chlebana, F; Christofek, L; Chu, M L; Chung, Y S; Ciobanu, C I; Clark, A G; Connolly, A; Convery, M; Conway, J; Cordelli, M; Cranshaw, J; Cropp, R; Culbertson, R; Dagenhart, D; D'Auria, S; DeJongh, F; Dell'Agnello, S; Dell'Orso, M; Demortier, L; Deninno, M; Derwent, P F; Devlin, T; Dittmann, J R; Dominguez, A; Donati, S; Done, J; D'Onofrio, M; Dorigo, T; Eddy, N; Einsweiler, K; Elias, J E; Engels, E; Erbacher, R; Errede, D; Errede, S; Fan, Q; Feild, R G; Fernandez, J P; Ferretti, C; Field, R D; Fiori, I; Flaugher, B; Foster, G W; Franklin, M; Freeman, J; Friedman, J; Fukui, Y; Furic, I; Galeotti, S; Gallas, A; Gallinaro, M; Gao, T; Garcia-Sciveres, M; Garfinkel, A F; Gatti, P; Gay, C; Gerdes, D W; Giannetti, P; Glagolev, V; Glenzinski, D; Gold, M; Goldstein, J; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Green, C; Grim, G; Gris, P; Groer, L; Grosso-Pilcher, C; Guenther, M; Guillian, G; Guimaraes Da Costa, J; Haas, R M; Haber, C; Hahn, S R; Hall, C; Handa, T; Handler, R; Hao, W; Happacher, F; Hara, K; Hardman, A D; Harris, R M; Hartmann, F; Hatakeyama, K; Hauser, J; Heinrich, J; Heiss, A; Herndon, M; Hill, C; Hoffman, K D; Holck, C; Hollebeek, R; Holloway, L; Hughes, R; Huston, J; Huth, J; Ikeda, H; Incandela, J; Introzzi, G; Iwai, J; Iwata, Y; James, E; Jones, M; Joshi, U; Kambara, H; Kamon, T; Kaneko, T; Karr, K; Kasha, H; Kato, Y; Keaffaber, T A; Kelley, K; Kelly, M; Kennedy, R D; Kephart, R; Khazins, D; Kikuchi, T; Kilminster, B; Kim, B J; Kim, D H; Kim, H S; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kirby, M; Kirk, M; Kirsch, L; Klimenko, S; Koehn, P; Kondo, K; Konigsberg, J; Korn, A; Korytov, A; Kovacs, E; Kroll, J; Kruse, M; Kuhlmann, S E; Kurino, K; Kuwabara, T; Laasanen, A T; Lai, N; Lami, S; Lammel, S; Lancaster, J; Lancaster, M; Lander, R; Latino, G; LeCompte, T; Lee, A M; Lee, K; Leone, S; Lewis, J D; Lindgren, M; Liss, T M; Liu, J B; Liu, Y C; Litvintsev, D O; Lobban, O; Lockyer, N; Loken, J; Loreti, M; Lucchesi, D; Lukens, P; Lusin, S; Lyons, L; Lys, J; Madrak, R; Maeshima, K; Maksimovic, P; Malferrari, L; Mangano, M; Mariotti, M; Martignon, G; Martin, A; Matthews, J A; Mayer, J; Mazzanti, P; McFarland, K S; McIntyre, P; McKigney, E; Menguzzato, M; Menzione, A; Mesropian, C; Meyer, A; Miao, T; Miller, R; Miller, J S; Minato, H; Miscetti, S; Mishina, M; Mitselmakher, G; Moggi, N; Moore, E; Moore, R; Morita, Y; Moulik, T; Mulhearn, M; Mukherjee, A; Muller, T; Munar, A; Murat, P; Murgia, S; Nachtman, J; Nagaslaev, V; Nahn, S; 
Nakada, H; Nakano, I; Nelson, C; Nelson, T; Neu, C; Neuberger, D; Newman-Holmes, C; Ngan, C Y; Niu, H; Nodulman, L; Nomerotski, A; Oh, S H; Oh, Y D; Ohmoto, T; Ohsugi, T; Oishi, R; Okusawa, T; Olsen, J; Orejudos, W; Pagliarone, C; Palmonari, F; Paoletti, R; Papadimitriou, V; Partos, D; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pescara, L; Phillips, T J; Piacentino, G; Pitts, K T; Pompos, A; Pondrom, L; Pope, G; Popovic, M; Prokoshin, F; Proudfoot, J; Ptohos, F; Pukhov, O; Punzi, G; Rakitine, A; Reher, D; Reichold, A; Ribon, A; Riegler, W; Rimondi, F; Ristori, L; Riveline, M; Robertson, W J; Robinson, A; Rodrigo, T; Rolli, S; Rosenson, L; Roser, R; Rossin, R; Roy, A; Ruiz, A; Safonov, A; St Denis, R; Sakumoto, W K; Saltzberg, D; Sanchez, C; Sansoni, A; Santi, L; Sato, H; Savard, P; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Scodellaro, L; Scott, A; Scribano, A; Segler, S; Seidel, S; Seiya, Y; Semenov, A; Semeria, F; Shah, T; Shapiro, M D; Shepard, P F; Shibayama, T; Shimojima, M; Shochet, M; Sidoti, A; Siegrist, J; Sill, A; Sinervo, P; Singh, P; Slaughter, A J; Sliwa, K; Smith, C; Snider, F D; Solodsky, A; Spalding, J; Speer, T; Sphicas, P; Spinella, F; Spiropulu, M; Spiegel, L; Steele, J; Stefanini, A; Strologas, J; Strumia, F; Stuart, D; Sumorok, K; Suzuki, T; Takano, T; Takashima, R; Takikawa, K; Tamburello, P; Tanaka, M

    2001-10-01

    We present results from a measurement of double diffraction dissociation in p̄p collisions at the Fermilab Tevatron collider. The production cross section for events with a central pseudorapidity gap of width Δη⁰ > 3 (overlapping η = 0) is found to be 4.43 ± 0.02 (stat) ± 1.18 (syst) mb [3.42 ± 0.01 (stat) ± 1.09 (syst) mb] at √s = 1800 [630] GeV. Our results are compared with previous measurements and with predictions based on Regge theory and factorization. PMID:11580642

  10. Increasing the energy of the Fermilab Tevatron accelerator

    SciTech Connect

    Fuerst, J.D.; Theilacker, J.C.

    1994-07-01

    The superconducting Tevatron accelerator at Fermilab has reached its eleventh year of operation since being commissioned in 1983. Last summer, four significant upgrades to the cryogenic system became operational which allow Tevatron operation at higher energy. This came after many years of R&D, power testing in sectors (one sixth) of the Tevatron, and final system installation. The improvements include the addition of cold helium vapor compressors, supporting hardware for subatmospheric operation, a new satellite refrigerator control system, and a higher capacity central helium liquefier. A description of each cryogenic upgrade, commissioning experience, and attempts to increase the energy of the Tevatron are presented.

  11. Supporting multiple control systems at Fermilab

    SciTech Connect

    Nicklaus, Dennis J.; /Fermilab

    2009-10-01

    The Fermilab control system, ACNET, is used for controlling the Tevatron and all of its pre-accelerators. However, other smaller experiments at Fermilab have been using different controls systems, in particular DOOCS and EPICS. This paper reports some of the steps taken at Fermilab to integrate support for these outside systems. We will describe specific tools that we have built or adapted to facilitate interaction between the architectures. We also examine some of the difficulties that arise from managing this heterogeneous environment. Incompatibilities as well as common elements will be described.

  12. Central mechanisms for force and motion--towards computational synthesis of human movement.

    PubMed

    Hemami, Hooshang; Dariush, Behzad

    2012-12-01

    Anatomical, physiological and experimental research on the human body can be supplemented by computational synthesis of the human body for all movement: routine daily activities, sports, dancing, and artistic and exploratory involvements. The synthesis requires thorough knowledge about all subsystems of the human body and their interactions, and allows for integration of known knowledge in working modules. It also affords confirmation and/or verification of scientific hypotheses about workings of the central nervous system (CNS). A simple step in this direction is explored here for controlling the forces of constraint. It requires co-activation of agonist-antagonist musculature. The desired trajectories of motion and the force of contact have to be provided by the CNS. The spinal control involves projection onto a muscular subset that induces the force of contact. The projection of force in the sensory motor cortex is implemented via a well-defined neural population unit, and is executed in the spinal cord by a standard integral controller requiring input from tendon organs. The sensory motor cortex structure is extended to the case for directing motion via two neural population units with vision input and spindle efferents. Digital computer simulations show the feasibility of the system. The formulation is modular and can be extended to multi-link limbs, robot and humanoid systems with many pairs of actuators or muscles. It can be expanded to include reticular activating structures and learning. PMID:23142849
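    A minimal discrete-time sketch of the "standard integral controller" mentioned above, regulating a contact force from a force error. The gain, the first-order stand-in for the limb and contact dynamics, and the time step are invented for illustration and are not the paper's model of the spinal circuit.

        # Illustrative discrete-time integral controller regulating a contact force.
        # The first-order "plant" standing in for the limb/contact dynamics, the gain,
        # and the time step are assumptions for illustration only.
        def simulate(force_ref=10.0, ki=2.0, dt=0.01, steps=600):
            force = 0.0          # measured contact force (tendon-organ signal)
            command = 0.0        # integrated motor command
            tau = 0.2            # time constant of the stand-in plant
            for _ in range(steps):
                error = force_ref - force
                command += ki * error * dt                 # integral control law
                force += dt / tau * (command - force)      # first-order plant response
            return force

        print(round(simulate(), 3))   # settles near the 10.0 reference force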

  13. Central Weighted Non-Oscillatory (CWENO) and Operator Splitting Schemes in Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Ivanovski, Stavro

    2011-05-01

    High-resolution shock-capturing schemes (HRSC) are known to be the most adequate and advanced technique used for numerical approximation to the solution of hyperbolic systems of conservation laws. Since most astrophysical phenomena can be described by means of systems of (M)HD conservation equations, finding the most accurate, computationally inexpensive and robust numerical approaches for their solution is a task of great importance for numerical astrophysics. Based on the Central Weighted Non-Oscillatory (CWENO) reconstruction approach, which relies on the adaptive choice of the smoothest stencil for resolving strong shocks and discontinuities in a central framework on a staggered grid, we present a new algorithm for systems of conservation laws using the key idea of evolving the intermediate stages of the Runge-Kutta time discretization in primitive variables. In this thesis, we introduce a new so-called conservative-primitive variables strategy (CPVS) by integrating the latter into the earlier proposed Central Runge-Kutta schemes (Pareschi et al., 2005). The advantages of the new shock-capturing algorithm with respect to the state-of-the-art HRSC schemes used in astrophysics, like upwind Godunov-type schemes, can be summarized as follows: (i) a Riemann-solver-free central approach; (ii) favoring dissipation (especially needed for multidimensional applications in astrophysics) owing to the diffusivity coming from the design of the scheme; (iii) high accuracy and speed of the method. The latter stems from the fact that the advancing in time in the predictor step does not need inversion between the primitive and conservative variables, and is essential in applications where the conservative variables are neither trivial to compute nor to invert into the set of primitive ones, as is the case in relativistic hydrodynamics. The main objective of the research adopted in the thesis is to outline the promising application of the CWENO (with CPVS) in the problems of the
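    To make the "Riemann-solver-free central approach" concrete, the sketch below applies a deliberately simplified first-order central (local Lax-Friedrichs) update to Burgers' equation in Python. It illustrates a central numerical flux that needs no Riemann solver, but it is not the CWENO reconstruction or the conservative-primitive variables strategy developed in the thesis.

        # First-order central (local Lax-Friedrichs / Rusanov) scheme for Burgers'
        # equation u_t + (u^2/2)_x = 0 with periodic boundaries. A simplified stand-in
        # for the central, Riemann-solver-free framework; NOT the CWENO/CPVS scheme.
        import numpy as np

        def burgers_central(u0, dx, t_end, cfl=0.45):
            u = u0.copy()
            flux = lambda v: 0.5 * v * v
            t = 0.0
            while t < t_end:
                a = np.max(np.abs(u)) + 1e-12           # max local wave speed
                dt = min(cfl * dx / a, t_end - t)
                up = np.roll(u, -1)                      # right neighbour (periodic)
                # Rusanov flux at the i+1/2 interfaces; shift to get the i-1/2 ones.
                f_half = 0.5 * (flux(u) + flux(up)) - 0.5 * a * (up - u)
                u = u - dt / dx * (f_half - np.roll(f_half, 1))
                t += dt
            return u

        # Example: a sine wave steepening into a shock on [0, 1).
        n = 200
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        u_final = burgers_central(np.sin(2 * np.pi * x), dx=1.0 / n, t_end=0.3)
        print(float(u_final.min()), float(u_final.max()))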

  14. Operation and maintenance of Fermilab's satellite refrigerator expansion engines

    SciTech Connect

    Soyars, W.M.

    1996-09-01

    Fermilab's superconducting Tevatron accelerator is cooled to liquid helium temperatures by 24 satellite refrigerators, each of which uses for normal operations a reciprocating "wet" expansion engine. These expanders are basically Process System (formerly Koch) Model 1400 expanders installed in standalone cryostats designed by Fermilab. This paper will summarize recent experience with operations and maintenance of these expansion engines. Some of the statistics presented will include total engine hours, mean time between major and minor maintenance, and frequent causes of major maintenance.

  15. Fermilab Recycler damper requirements and design

    SciTech Connect

    Crisp, J.; Hu, M.; Tupikov, V.; /Fermilab

    2005-05-01

    The design of transverse dampers for the Fermilab Recycler storage ring is described. An observed instability and analysis of subsequent measurements were used to identify the requirements. The digital approach being implemented is presented.

  16. Physics History Books in the Fermilab Library

    SciTech Connect

    Sara Tompson

    1999-09-17

    Fermilab is a basic research high-energy physics laboratory operated by Universities Research Association, Inc. under contract to the U.S. Department of Energy. Fermilab researchers utilize the Tevatron particle accelerator (currently the world's most powerful accelerator) to better understand subatomic particles as they exist now and as they existed near the birth of the universe. A collection review of the Fermilab Library monographs was conducted during the summers of 1998 and 1999. While some items were identified for deselection, the review proved most fruitful in highlighting some of the strengths of the Fermilab monograph collection. One of these strengths is history of physics, including biographies and astrophysics. A bibliography of the physics history books in the collection as of Summer, 1999 follows, arranged by author. Note that the call numbers are Library of Congress classification.

  17. Physics History Books in the Fermilab Library

    SciTech Connect

    Sara Tompson.

    1999-09-17

    Fermilab is a basic research high-energy physics laboratory operated by Universities Research Association, Inc. under contract to the U.S. Department of Energy. Fermilab researchers utilize the Tevatron particle accelerator (currently the world's most powerful accelerator) to better understand subatomic particles as they exist now and as they existed near the birth of the universe. A collection review of the Fermilab Library monographs was conducted during the summers of 1998 and 1999. While some items were identified for deselection, the review proved most fruitful in highlighting some of the strengths of the Fermilab monograph collection. One of these strengths is history of physics, including biographies and astrophysics. A bibliography of the physics history books in the collection as of Summer, 1999 follows, arranged by author. Note that the call numbers are Library of Congress classification.

  18. The Fermilab long-baseline neutrino program

    SciTech Connect

    Goodman, M.; MINOS Collaboration

    1997-10-01

    Fermilab is embarking upon a neutrino oscillation program which includes a long-baseline neutrino experiment, MINOS. MINOS will be a 10 kiloton detector located 730 km northwest of Fermilab in the Soudan underground laboratory. It will be sensitive to neutrino oscillations with parameters above Δm² ~ 3 × 10⁻³ eV² and sin²(2θ) ~ 0.02.
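
    For orientation, the quoted sensitivity can be read through the standard two-flavor vacuum oscillation formula; the sketch below (illustrative Python, not MINOS analysis code) evaluates the muon-neutrino survival probability at the 730 km baseline.

      import numpy as np

      def p_mumu_survival(E_GeV, dm2_eV2, sin2_2theta, L_km=730.0):
          """P(nu_mu -> nu_mu) = 1 - sin^2(2*theta) * sin^2(1.267 * dm^2 * L / E)."""
          return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

      # example: maximal mixing at dm^2 = 3e-3 eV^2 for a 3 GeV neutrino
      print(p_mumu_survival(E_GeV=3.0, dm2_eV2=3e-3, sin2_2theta=1.0))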

  19. Gun and optics calculations for the Fermilab recirculation experiment

    SciTech Connect

    Kroc, T.

    1997-10-01

    Fermilab is investigating electron cooling to recycle 8 GeV antiprotons recovered from the Tevatron. To do so, it is developing an experiment to recirculate 2 MeV electrons generated by a Pelletron at National Electrostatics Corporation. This paper reports on the optics calculations done in support of that work. We have used the computer codes EGN2 and MacTrace to represent the gun area and acceleration columns, respectively. In addition to the results of our simulations, we discuss some of the problems encountered in interfacing the two codes.

  20. Computed Tomography-Guided Central Venous Catheter Placement in a Patient with Superior Vena Cava and Inferior Vena Cava Occlusion

    SciTech Connect

    Rivero, Maria A.; Shaw, Dennis W.W.; Schaller, Robert T. Jr.

    1999-01-15

    An 18-year-old man with a gastrointestinal hypomotility syndrome required lifelong parenteral nutrition. Both the superior and inferior vena cava were occluded. Computed tomography guidance was used to place a long-term central venous catheter via a large tributary to the azygos vein.

  1. Comparison of measured and computed plasma loading resistance in the tandem mirror experiment-upgrade (TMX-U) central cell

    SciTech Connect

    Mett, R.R.

    1984-08-01

    The plasma loading resistance versus density plots computed with McVey's code XANTENA1 agree well with experimental measurements in the TMX-U central cell. The agreement is much better for frequencies where ω/ω_ci < 1 than for ω/ω_ci ≥ 1.

  2. Feasibility Study for a Remote Terminal Central Computing Facility Serving School and College Institutions. Volume II, Preliminary Specifications.

    ERIC Educational Resources Information Center

    International Business Machines Corp., White Plains, NY.

    Preliminary specifications of major equipment and programing systems characteristics for a remote terminal central computing facility serving 25-75 secondary schools are presented. Estimation techniques developed in a previous feasibility study were used to delineate workload demands for four model regions with different numbers of institutions…

  3. Identification of Misconceptions in the Central Limit Theorem and Related Concepts and Evaluation of Computer Media as a Remedial Tool.

    ERIC Educational Resources Information Center

    Yu, Chong Ho; And Others

    The central limit theorem (CLT) is considered an important topic in statistics because it serves as the basis for subsequent learning of other crucial concepts such as hypothesis testing and power analysis. Dynamic computer software is increasingly popular for illustrating the CLT. Graphical displays do not necessarily clear up…
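
    A minimal example of the kind of dynamic illustration discussed (illustrative Python, not the software evaluated in the study): means of samples drawn from a strongly skewed population become increasingly normal as the sample size grows.

      import numpy as np

      rng = np.random.default_rng(0)
      population = rng.exponential(scale=1.0, size=100_000)   # skewed, not normal

      for n in (2, 10, 50):
          means = rng.choice(population, size=(5_000, n)).mean(axis=1)
          skew = ((means - means.mean()) ** 3).mean() / means.std() ** 3
          print(f"n={n:3d}  skewness of sample means = {skew:.2f}")   # -> toward 0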

  4. Functional Analysis and Preliminary Specifications for a Single Integrated Central Computer System for Secondary Schools and Junior Colleges. Interim Report.

    ERIC Educational Resources Information Center

    1968

    The present report proposes a central computing facility and presents the preliminary specifications for such a system. It is based, in part, on the results of earlier studies by two previous contractors on behalf of the U.S. Office of Education. The recommendations are based upon the present contractor's considered evaluation of the earlier…

  5. U.S. EPA computational toxicology programs: Central role of chemical-annotation efforts and molecular databases

    EPA Science Inventory

    EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...

  6. Bunch coalescing and bunch rotation in the Fermilab Main Ring: Operational experience and comparison with simulations

    SciTech Connect

    Martin, P.S.; Wildman, D.W.

    1988-07-01

    The Fermilab Tevatron I proton-antiproton collider project requires that the Fermilab Main Ring produce intense bunches of protons and antiprotons for injection into the Tevatron. The process of coalescing a small number of harmonic number h=1113 bunches into a single bunch by bunch rotation in a lower-harmonic rf system is described. The Main Ring is also required to extract onto the antiproton production target bunches with as narrow a time spread as possible. This operation is also discussed. The operation of bunch coalescing and bunch rotation is compared with simulations using the computer program ESME. 2 refs., 8 figs.
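
    The essence of bunch rotation can be shown with a toy linear model (illustrative Python, not the ESME simulation; all numbers are made up): a quarter-turn rotation in longitudinal phase space exchanges time spread for momentum spread, shortening the bunch.

      import numpy as np

      rng = np.random.default_rng(0)
      t  = rng.normal(0.0, 4e-9, 10_000)    # long bunch: 4 ns rms
      dp = rng.normal(0.0, 1e-4, 10_000)    # small momentum spread

      a = 1e-5                               # made-up scale, seconds per unit dp/p
      def rotate(t, dp, angle):
          x, y = t / a, dp                   # normalized phase-space coordinates
          return ((x * np.cos(angle) + y * np.sin(angle)) * a,
                  -x * np.sin(angle) + y * np.cos(angle))

      t2, dp2 = rotate(t, dp, np.pi / 2)     # quarter synchrotron period
      print(f"{t.std():.1e} s -> {t2.std():.1e} s  (dp/p spread grows correspondingly)")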

  7. UNIX trademark in high energy physics: What we can learn from the initial experiences at Fermilab

    SciTech Connect

    Butler, J.N.

    1991-03-01

    The reasons why Fermilab decided to support the UNIX operating system are reviewed and placed in the context of an overall model for high energy physics data analysis. The strengths and deficiencies of the UNIX environment for high energy physics are discussed. Fermilab's early experience in dealing with an 'open' multivendor environment, both for computers and for peripherals, is described. The human resources required to fully exploit the opportunities are clearly growing. The possibility of keeping the development and support efforts within reasonable bounds may depend on our ability to collaborate, or at least to share information, even more effectively than we have in the past. 7 refs., 4 figs., 5 tabs.

  8. CP violation experiment at Fermilab

    SciTech Connect

    Hsiung, Yee B.

    1990-07-01

    The E731 experiment at Fermilab has searched for 'direct' CP violation in K⁰ → ππ, which is parametrized by ε′/ε. For the first time, in 20% of the data set, all four modes of K_L,S → π⁺π⁻ (π⁰π⁰) were collected simultaneously, providing a great check on the systematic uncertainty. The result is Re(ε′/ε) = −0.0004 ± 0.0014 (stat) ± 0.0006 (syst), which provides no evidence for 'direct' CP violation. The CPT symmetry has also been tested by measuring the phase difference Δφ = φ₀₀ − φ₊₋ between the two CP-violating parameters η₀₀ and η₊₋. We find Δφ = −0.3° ± 2.4° (stat) ± 1.2° (syst). Using this together with the world average of φ₊₋, we find that the phase of the K⁰-K̄⁰ mixing parameter ε is 44.5° ± 1.5°. Both of these results agree well with the predictions of CPT symmetry. 17 refs., 10 figs.

  9. Big Data over a 100G network at Fermilab

    SciTech Connect

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; Dykstra, Dave; Slyz, Marko

    2014-01-01

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, a pioneer in Big Data, has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the wide area network (WAN) in and out of the laboratory of about 30 Gbit/s and on the local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. Furthermore, this work presents the new R&D facility and the continuation of the evaluation program.

  10. Big Data Over a 100G Network at Fermilab

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; Dykstra, Dave; Slyz, Marko

    2014-06-01

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, a pioneer in Big Data, has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the wide area network (WAN) in and out of the laboratory of about 30 Gbit/s and on the local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. This work presents the new R&D facility and the continuation of the evaluation program.

  11. Big Data over a 100G network at Fermilab

    DOE PAGES Beta

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; Dykstra, Dave; Slyz, Marko

    2014-01-01

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, a pioneer in Big Data, has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the wide area network (WAN) in and out of the laboratory of about 30 Gbit/s and on the local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. Furthermore, this work presents the new R&D facility and the continuation of the evaluation program.

  12. The Science Training Program for Young Italian Physicists and Engineers at Fermilab

    SciTech Connect

    Barzi, Emanuela; Bellettini, Giorgio; Donati, Simone

    2015-03-12

    Since 1984 Fermilab has been hosting a two-month summer training program for selected undergraduate and graduate Italian students in physics and engineering. Building on the traditional close collaboration between the Italian National Institute of Nuclear Physics (INFN) and Fermilab, the program is supported by INFN, by the DOE and by the Scuola Superiore di Sant'Anna of Pisa (SSSA), and is run by the Cultural Association of Italians at Fermilab (CAIF). This year the University of Pisa has qualified it as a “University of Pisa Summer School” and will grant successful students European Supplementary Credits. Physics students join the Fermilab HEP research groups, while engineers join the Particle Physics, Accelerator, Technical, and Computing Divisions. Some students have also been sent to other U.S. laboratories and universities for special training. The programs cover topics of great interest for science and for social applications in general, such as advanced computing, distributed data analysis, nanoelectronics, particle detectors for earth and space experiments, high-precision mechanics, and applied superconductivity. Over the years, more than 350 students have been trained and are now employed in the most diverse fields in Italy, Europe, and the U.S. In addition, the existing Laurea Program in the Fermilab Technical Division was extended to the whole laboratory, with presently two students in Master’s thesis programs on neutrino physics and detectors in the Neutrino Division. Finally, a joint venture with the Italian Scientists and Scholars North-America Foundation (ISSNAF) provided Fermilab with four professional engineers free of charge this year. More details on all of the above can be found below.

  13. Charm and beauty measurements at Fermilab fixed target

    SciTech Connect

    Mishra, C.S.

    1993-10-01

    Eighteen months after a successful run of the Fermilab fixed target program, interesting results from several experiments are available. This is the first time that more than one Fermilab fixed target experiment has reported the observation of beauty mesons. In this paper we review recent results from charm and beauty fixed target experiments at Fermilab.

  14. Shielding design at Fermilab: Calculations and measurements

    SciTech Connect

    Cossairt, J.D.

    1986-11-01

    The development of the Fermilab accelerator complex during the past two decades, from its conception as the '200 BeV accelerator' to the present Tevatron designed to operate at energies as high as 1 TeV, has required a corresponding refinement and development of methods of shielding design. In this paper I describe these methods as used by the radiation protection staff of Fermilab. This description will review experimental measurements which substantiate these techniques in realistic situations. Along the way, observations will be stated which are likely applicable to other proton accelerators in the multi-hundred GeV energy region, including larger ones yet to be constructed.

  15. The 1994 Fermilab Fixed Target Program

    SciTech Connect

    Conrad, J.

    1994-11-01

    This paper highlights the results of the Fermilab Fixed Target Program that were announced between October, 1993 and October, 1994. These results are drawn from 18 experiments that took data in the 1985, 1987 and 1990/91 fixed target running periods. For this discussion, the Fermilab Fixed Target Program is divided into 5 major topics: hadron structure, precision electroweak measurements, heavy quark production, polarization and magnetic moments, and searches for new phenomena. However, it should be noted that most experiments span several subtopics. Also, measurements within each subtopic often affect the results in other subtopics. For example, parton distributions from hadron structure measurements are used in the studies of heavy quark production.

  16. Alveolar bone thickness around maxillary central incisors of different inclination assessed with cone-beam computed tomography

    PubMed Central

    Liu, Fang; Sun, Hong-jing; Lv, Pin; Cao, Yu-ming; Yu, Mo; Yue, Yang

    2015-01-01

    Objective To assess the labial and lingual alveolar bone thickness in adults with maxillary central incisors of different inclination by cone-beam computed tomography (CBCT). Methods Ninety maxillary central incisors from 45 patients were divided into three groups based on the maxillary central incisors to palatal plane angle; lingual-inclined, normal, and labial-inclined. Reformatted CBCT images were used to measure the labial and lingual alveolar bone thickness (ABT) at intervals corresponding to every 1/10 of the root length. The sum of labial ABT and lingual ABT at the level of the root apex was used to calculate the total ABT (TABT). The number of teeth exhibiting alveolar fenestration and dehiscence in each group was also tallied. One-way analysis of variance and Tukey's honestly significant difference test were applied for statistical analysis. Results The labial ABT and TABT values at the root apex in the lingual-inclined group were significantly lower than in the other groups (p < 0.05). Lingual and labial ABT values were very low at the cervical level in the lingual-inclined and normal groups. There was a higher prevalence of alveolar fenestration in the lingual-inclined group. Conclusions Lingual-inclined maxillary central incisors have less bone support at the level of the root apex and a greater frequency of alveolar bone defects than normal maxillary central incisors. The bone plate at the marginal level is also very thin. PMID:26445719
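
    The statistical step named above can be reproduced in outline (illustrative Python with synthetic numbers, not the study's data): a one-way ANOVA across the three inclination groups, shown here only as the omnibus F-test.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      lingual = rng.normal(6.0, 1.0, 30)   # hypothetical thickness values, mm
      normal  = rng.normal(8.0, 1.0, 30)
      labial  = rng.normal(8.5, 1.0, 30)

      f_stat, p_value = stats.f_oneway(lingual, normal, labial)
      print(f"F = {f_stat:.1f}, p = {p_value:.3g}")   # p < 0.05 -> group means differ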

  17. Design of the 2 Tesla superconducting solenoid for the Fermilab D0 detector upgrade

    SciTech Connect

    Squires, B.; Brzezniak, J.; Fast, R.W.; Krempetz, K.; Kristalinski, A.; Lee, A.; Markley, D.; Mesin, A.; Orr, S.; Rucinski, R.

    1994-12-31

    A thin superconducting solenoid has been designed for an upgrade to the Fermilab D0 detector, one of two major hadron collider detectors at Fermilab. The original design of the D0 detector did not incorporate a central magnetic field which necessitates a retrofit within the parameters of the existing tracking volume of the detector. The two layer solenoid coil is indirectly cooled and provides a 2 T magnetic field for a central tracking system. To minimize end effects in this no iron configuration, the conductor width is varied thereby increasing current density at the ends and improving field uniformity. This paper summarizes the results of the conceptual design study for the D0 superconducting solenoid.

  18. Correction magnets for the Fermilab Recycler Ring

    SciTech Connect

    James T Volk et al.

    2003-05-27

    In the commissioning of the Fermilab Recycler ring the need for higher order corrector magnets in the regions near beam transfers was discovered. Three types of permanent magnet skew quadrupoles, and two types of permanent magnet sextupoles were designed and built. This paper describes the need for these magnets, the design, assembly, and magnetic measurements.

  19. W+ jets production at the Fermilab Tevatron

    SciTech Connect

    Dittmann, J.R.; CDF Collaboration; D0 Collaboration

    1997-05-01

    The production properties of jets in W events have been measured using √s = 1.8 TeV pp̄ collisions at the Fermilab Tevatron Collider. Experimental results from several CDF and D0 analyses are compared to leading-order and next-to-leading-order QCD predictions.

  20. Exabyte helical scan devices at Fermilab

    SciTech Connect

    Constanta-Fanourakis, P.; Kaczar, K.; Oleynik, G.; Petravick, D.; Votava, M.; White, V.; Hockney, G.; Bracker, S.; de Miranda, J.M.

    1989-05-01

    Exabyte 8mm helical scan storage devices are in use at Fermilab in a number of applications. These devices have the functionality of magnetic tape, but use media that are much more economical and much denser than conventional 9-track tape. 6 refs., 3 figs.

  1. Slow extraction from the Fermilab Main Injector

    SciTech Connect

    Craig D. Moore et al.

    2001-07-20

    Slow resonant extraction from the Fermilab Main Injector through the extraction channel was achieved in February, 2000, with a spill length of 0.3 sec. Beam losses were small. Excellent wire chamber profiles were obtained and analyzed. The duty factor was not very good and needs to be improved.

  2. Recent results from Fermilab E791

    NASA Astrophysics Data System (ADS)

    Nguyen, A.; Aitala, E. M.; Amato, S.; Anjos, J. C.; Appel, J. A.; Aryal, M.; Ashery, D.; Banerjee, S.; Bediaga, I.; Blaylock, G.; Bracker, S. B.; Burchat, P. R.; Burnstein, R. A.; Carter, T.; Carvalho, H. S.; Costa, I.; Cremaldi, L. M.; Darling, C.; Denisenko, K.; Dubbs, T.; Fernandez, A.; Gagnon, P.; Gerson, S.; Gounder, K.; Granite, D.; Halling, M.; Herrera, G.; Hurwitz, G.; James, C.; Kasper, P. A.; Kwan, S.; Langs, D. C.; Leslie, J.; Lichtenstadt, J.; Lundberg, B.; MayTal-Beck, S.; Meadows, B.; de Mello Neto, J. R. T.; Milburn, R. H.; de Miranda, J. M.; Napier, A.; d'Oliveira, A. B.; Peng, K. C.; Perera, L. P.; Purohit, M. V.; Quinn, B.; Radeztsky, S.; Rafatian, A.; Reay, N. W.; Reidy, J. J.; dos Reis, A. C.; Rubin, H. A.; Santha, A. K. S.; Santoro, A. F. S.; Schwartz, A.; Sheaff, M.; O'Shaughnessy, K.; Sidwell, R. A.; Slaughter, A. J.; Smith, J. G.; Sokoloff, M. D.; Stanton, N.; Sugano, K.; Summers, D. J.; Takach, S.; Thorne, K.; Tripathi, A. K.; Watanabe, S.; Weiss, R.; Wiener, J.; Witchey, N.; Wolin, E.; Yi, D.; Zaliznyak, R.; Zhang, C.

    1995-07-01

    Fermilab E791 is a high statistics charm experiment using a 500 GeV/c π- beam incident on a segmented target. We present results based on one third of the 1991-1992 data, with particular emphasis on a search for the flavor changing neutral current decay D+→π+μ+μ-.

  3. Cloud Services for the Fermilab Scientific Stakeholders

    SciTech Connect

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; Boyd, J.; Bernabeu, G.; Sharma, N.; Peregonow, N.; Kim, H.; Noh, S.; Palur, S.; Raicu, I.

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality.

  4. Electron cloud in the Fermilab Booster

    SciTech Connect

    Ng, K.Y.; /Fermilab

    2007-06-01

    Simulations of the Fermilab Booster reveal a substantial electron-cloud buildup both inside the unshielded combined-function magnets and in the beam pipes joining the magnets when the secondary-emission yield (SEY) is larger than ~1.6. The implication of the electron-cloud effects on space charge and collective instabilities of the beam is discussed.
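
    A very crude illustration of the threshold behaviour referred to above (illustrative Python, not the simulation code; the growth rule is a toy model, not a physical multipacting calculation): an electron population multiplied by an effective yield per bunch passage and reseeded by a constant source only grows without bound once the effective yield exceeds unity.

      def cloud_buildup(effective_yield, n_passages=50, seed=1.0):
          """Toy recursion: n <- n * yield + seed, one step per bunch passage."""
          n = 0.0
          for _ in range(n_passages):
              n = n * effective_yield + seed
          return n

      for y in (0.8, 1.0, 1.2):
          print(y, f"{cloud_buildup(y):.3g}")   # saturates, grows linearly, blows up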

  5. Cloud services for the Fermilab scientific stakeholders

    NASA Astrophysics Data System (ADS)

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; Boyd, J.; Bernabeu, G.; Sharma, N.; Peregonow, N.; Kim, H.; Noh, S.; Palur, S.; Raicu, I.

    2015-12-01

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality.

  6. Cloud services for the Fermilab scientific stakeholders

    SciTech Connect

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; Boyd, J.; Bernabeu, G.; Sharma, N.; Peregonow, N.; Kim, H.; Noh, S.; Palur, S.; Raicu, I.

    2015-01-01

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  7. Cloud services for the Fermilab scientific stakeholders

    DOE PAGES Beta

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; Boyd, J.; Bernabeu, G.; Sharma, N.; Peregonow, N.; Kim, H.; Noh, S.; Palur, S.; et al

    2015-01-01

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  8. Charm and beauty physics at Fermilab

    SciTech Connect

    Lipton, R.

    1992-01-01

    The status of charm and beauty physics studies at Fermilab is reviewed. Data from fixed target experiments on charm production, semi-leptonic decay, and Cabibbo suppressed decays as well as charmonium studies in antiproton annihilation are described. In addition beauty results from CDF and E653 are reviewed and prospects for studies of B physics at collider detectors are discussed.

  9. Fermilab Recycler Stochastic Cooling for Luminosity Production

    SciTech Connect

    Broemmelsiek, D.; Gattuso, C.

    2006-03-20

    The Fermilab Recycler began regularly delivering antiprotons for Tevatron luminosity operations in 2005. Methods for tuning the Recycler stochastic cooling system are presented. The unique conditions and resulting procedures for minimizing the longitudinal phase space density of the Recycler antiproton beam are outlined.

  10. The "last mile" of data handling: Fermilab's IFDH tools

    NASA Astrophysics Data System (ADS)

    Lyon, Adam L.; Mengel, Marc W.

    2014-06-01

    IFDH (Intensity Frontier Data Handling) is a suite of tools for data movement tasks for Fermilab experiments and is an important part of the FIFE[2] (Fabric for Intensity Frontier [1] Experiments) initiative described at this conference. IFDH encompasses moving input data from caches or storage elements to compute nodes (the "last mile" of data movement) and moving output data potentially to those caches as part of the journey back to the user. IFDH also involves throttling and locking to ensure that large numbers of jobs do not cause data movement bottlenecks. IFDH is realized as an easy-to-use layer that users call in their job scripts (e.g. "ifdh cp"), hiding the low-level data movement tools. One advantage of this layer is that the underlying low-level tools can be selected or changed without the need for the user to alter their scripts. Logging and performance monitoring can also be added easily. This system will be presented in detail as well as its impact on the ease of data handling at Fermilab experiments.
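
    The layering idea can be sketched as follows (illustrative Python, not Fermilab's actual ifdh implementation; the function and backend names are placeholders): user scripts call one stable verb while the underlying transfer tool stays configurable, and logging is added in one place.

      import logging, subprocess, time

      log = logging.getLogger("ifdh-sketch")

      def copy(src, dst, backend_cmd=("cp",)):
          """Copy src -> dst through whichever low-level tool is configured."""
          t0 = time.time()
          subprocess.run([*backend_cmd, src, dst], check=True)
          log.info("copied %s -> %s in %.1f s via %s",
                   src, dst, time.time() - t0, backend_cmd[0])

      # a job script would only ever call copy(input_url, local_path); swapping
      # the backend (e.g. to a grid transfer client) needs no script changes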

  11. Radiation shielding for the Fermilab Vertical Cavity Test Facility

    SciTech Connect

    Ginsburg, Camille; Rakhno, Igor; /Fermilab

    2010-03-01

    The results of radiation shielding studies for the vertical test cryostat VTS1 at Fermilab performed with the codes FISHPACT and MARS15 are presented and discussed. The analysis is focused on operations with two RF cavities in the cryostat. The vertical cavity test facility (VCTF) for superconducting RF cavities in Industrial Building 1 at Fermilab has been in operation since 2007. The facility currently consists of a single vertical test cryostat VTS1. Radiation shielding for VTS1 was designed for operations with single 9-cell 1.3 GHz cavities, and the shielding calculations were performed using a simplified model of field emission as the radiation source. The operations are proposed to be extended in such a way that two RF cavities will be in VTS1 at a time, one above the other, with tests for each cavity performed sequentially. In such a case the radiation emitted during the tests from the lower cavity can, in part, bypass the initially designed shielding which can lead to a higher dose in the building. Space for additional shielding, either internal or external to VTS1, is limited. Therefore, a re-evaluation of the radiation shielding was performed. An essential part of the present analysis is in using realistic models for cavity geometry and spatial, angular and energy distributions of field-emitted electrons inside the cavities. The calculations were performed with the computer codes FISHPACT and MARS15.

  12. The 'last mile' of data handling: Fermilab's IFDH tools

    SciTech Connect

    Lyon, Adam L.; Mengel, Marc W.

    2014-01-01

    IFDH (Intensity Frontier Data Handling) is a suite of tools for data movement tasks for Fermilab experiments and is an important part of the FIFE[2] (Fabric for Intensity Frontier [1] Experiments) initiative described at this conference. IFDH encompasses moving input data from caches or storage elements to compute nodes (the 'last mile' of data movement) and moving output data potentially to those caches as part of the journey back to the user. IFDH also involves throttling and locking to ensure that large numbers of jobs do not cause data movement bottlenecks. IFDH is realized as an easy-to-use layer that users call in their job scripts (e.g. 'ifdh cp'), hiding the low-level data movement tools. One advantage of this layer is that the underlying low-level tools can be selected or changed without the need for the user to alter their scripts. Logging and performance monitoring can also be added easily. This system will be presented in detail as well as its impact on the ease of data handling at Fermilab experiments.

  13. Wide area network monitoring system for HEP experiments at Fermilab

    SciTech Connect

    Grigoriev, Maxim; Cottrell, Les; Logg, Connie; /SLAC

    2004-12-01

    Large, distributed High Energy Physics (HEP) collaborations, such as D0, CDF and US-CMS, depend on stable and robust network paths between major world research centers. The evolving emphasis on data and compute Grids increases the reliance on network performance. Fermilab's experimental groups and network support personnel identified a critical need for WAN monitoring to ensure the quality and efficient utilization of such network paths. This has led to the development of the Network Monitoring system we will present in this paper. The system evolved from the IEPM-BW project, started at SLAC three years ago. At Fermilab this system has developed into a fully functional infrastructure with bi-directional active network probes and path characterizations. It is based on the Iperf achievable throughput tool, Ping and Synack to test ICMP/TCP connectivity. It uses Pipechar and Traceroute to test, compare and report hop-by-hop network path characterization. It also measures real file transfer performance by BBFTP and GridFTP. The Monitoring system has an extensive web-interface and all the data is available through standalone SOAP web services or by a MonaLISA client. Also in this paper we will present a case study of network path asymmetry and abnormal performance between FNAL and SDSC, which was discovered and resolved by utilizing the Network Monitoring system.
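
    A minimal sketch of an active probe in the spirit of the system described above (illustrative Python, not the IEPM-BW code; the target host is just an example): periodically issue a ping, parse the round-trip time, and append it to a time series for later path characterization.

      import re, subprocess, time

      def ping_rtt_ms(host):
          """Return the round-trip time in ms of one ICMP echo, or None on failure."""
          out = subprocess.run(["ping", "-c", "1", host],
                               capture_output=True, text=True)
          m = re.search(r"time=([\d.]+) ms", out.stdout)
          return float(m.group(1)) if m else None

      history = []
      for _ in range(3):                     # a real monitor would loop indefinitely
          history.append((time.time(), ping_rtt_ms("example.org")))
          time.sleep(1)
      print(history)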

  14. Wide Area Network Monitoring System for HEP Experiments at Fermilab

    SciTech Connect

    Grigoriev, M.

    2004-11-23

    Large, distributed High Energy Physics (HEP) collaborations, such as D0, CDF and US-CMS, depend on stable and robust network paths between major world research centres. The evolving emphasis on data and compute Grids increases the reliance on network performance. Fermilab's experimental groups and network support personnel identified a critical need for WAN monitoring to ensure the quality and efficient utilization of such network paths. This has led to the development of the Network Monitoring system we will present in this paper. The system evolved from the IEPM-BW project, started at SLAC three years ago. At Fermilab this system has developed into a fully functional infrastructure with bi-directional active network probes and path characterizations. It is based on the Iperf achievable throughput tool, Ping and Synack to test ICMP/TCP connectivity. It uses Pipechar and Traceroute to test, compare and report hop-by-hop network path characterization. It also measures real file transfer performance by BBFTP and GridFTP. The Monitoring system has an extensive web-interface and all the data is available through standalone SOAP web services or by a MonaLISA client. Also in this paper we will present a case study of network path asymmetry and abnormal performance between FNAL and SDSC, which was discovered and resolved by utilizing the Network Monitoring system.

  15. Central Issues in the Use of Computer-Based Materials for High Volume Entrepreneurship Education

    ERIC Educational Resources Information Center

    Cooper, Billy

    2007-01-01

    This article discusses issues relating to the use of computer-based learning (CBL) materials for entrepreneurship education at university level. It considers CBL as a means of addressing the increased volume and range of provision required in the current context. The issues raised in this article have importance for all forms of computer-based…

  16. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  17. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  18. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  19. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  20. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  1. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  2. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  3. 41 CFR 105-56.017 - Centralized salary offset computer match.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... offset computer match. 105-56.017 Section 105-56.017 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  4. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  5. 41 CFR 105-56.027 - Centralized salary offset computer match.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... offset computer match. 105-56.027 Section 105-56.027 Public Contracts and Property Management Federal... computer match. (a) Delinquent debt records will be compared with Federal employee records maintained by... a delegation of authority from the Secretary, has waived certain requirements of the...

  6. Serving Six Institutions: A History of Administrative Computing at the Associated Colleges of Central Kansas.

    ERIC Educational Resources Information Center

    Brown, Ray; Doughty, Gavin; McDowell, Valerie

    This paper offers a brief history and description of a voluntary consortium of six church-related private colleges, the Associated Colleges of Central Kansas (ACCK), that is governed by a board of directors made up of the presidents of the member colleges. The schools, all within close geographical proximity, have a combined enrollment of over…

  7. Managing drought risk with a computer model of the Raritan River Basin water-supply system in central New Jersey

    USGS Publications Warehouse

    Dunne, Paul; Tasker, Gary

    1996-01-01

    The reservoirs and pumping stations that comprise the Raritan River Basin water-supply system and its interconnections to the Delaware-Raritan Canal water-supply system, operated by the New Jersey Water Supply Authority (NJWSA), provide potable water to central New Jersey communities. The water reserve of this combined system can easily be depleted by an extended period of below-normal precipitation. Efficient operation of the combined system is vital to meeting the water-supply needs of central New Jersey. In an effort to improve the efficiency of the system operation, the U.S. Geological Survey (USGS), in cooperation with the NJWSA, has developed a computer model that provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system. This fact sheet describes the model, its technical basis, and its operation.

  8. Run control techniques for the Fermilab DART data acquisition system

    SciTech Connect

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-10-01

    DART is the high-speed, Unix-based data acquisition system being developed by the Fermilab Computing Division in collaboration with eight High Energy Physics experiments. This paper describes DART run-control, which implements flexible, distributed, extensible and portable paradigms for the control and monitoring of data acquisition systems. We discuss the unique and interesting aspects of the run-control: why we chose the concepts we did, the benefits we have seen from the choices we made, as well as our experiences in deploying and supporting it for experiments during their commissioning and sub-system testing phases. We emphasize the software and techniques we believe are extensible to future use, and potential future modifications and extensions for those we feel are not.

  9. Fermilab Booster Transition Crossing Simulations and Beam Studies

    SciTech Connect

    Bhat, C. M.; Tan, C. Y.

    2016-01-01

    The Fermilab Booster accelerates beam from 400 MeV to 8 GeV at 15 Hz. In the PIP (Proton Improvement Plan) era, the Booster is required to deliver 4.2 × 10¹² protons per pulse to extraction. One of the obstacles to providing quality beam to the users is the longitudinal quadrupole oscillation that the beam suffers from right after transition. Although this oscillation is well taken care of with quadrupole dampers, it is important to understand the source of these oscillations in light of the PIP II requirement of 6.5 × 10¹² protons per pulse at extraction. This paper explores the results from machine studies and computer simulations, and solutions to prevent the quadrupole oscillations after transition.

  10. FNAL central email systems

    SciTech Connect

    Schmidt, Jack; Lilianstrom, Al; Pasetes, Ray; Hill, Kevin; /Fermilab

    2004-10-01

    The FNAL Email System is the primary point of entry for email destined for an employee or user at Fermilab. This centrally supported system is designed for reliability and availability. It uses multiple layers of protection to help ensure that: (1) SPAM messages are tagged properly; (2) All mail is inspected for viruses; and (3) Valid mail gets delivered. This system employs numerous redundant subsystems to accomplish these tasks.

  11. Seismic studies for Fermilab future collider projects

    SciTech Connect

    Lauh, J.; Shiltsev, V.

    1997-11-01

    Ground motion can cause significant beam emittance growth and orbit oscillations in large hadron colliders due to vibration of the numerous focusing magnets. A larger accelerator ring circumference leads to a smaller revolution frequency, and, e.g., for the Fermilab Very Large Hadron Collider (VLHC), 50-150 Hz vibrations are of particular interest as they are resonant with the beam betatron frequency. Seismic measurements at an existing large accelerator under operation can help to estimate the vibrations generated by the technical systems in future machines. Comparison of noisy and quiet microseismic conditions might be useful for a proper choice of technical solutions for future colliders. This article presents results of wide-band seismic measurements at the Fermilab site, namely, in the tunnel of the Tevatron and on the surface nearby, and in two deep tunnels in the Illinois dolomite, which is thought to be a possible geological environment for the future accelerators.

  12. Hydrostatic water level systems at Fermilab

    SciTech Connect

    Volk, J.T.; Guerra, J.A.; Hansen, S.U.; Kiper, T.E.; Jostlein, H.; Shiltsev, V.; Chupyra, A.; Kondaurov, M.; Singatulin, S.

    2006-09-01

    Several hydrostatic water leveling systems (HLS) are in use at Fermilab. Three systems are used to monitor quadrupoles in the Tevatron and two systems are used to monitor ground motion at potential sites for the International Linear Collider (ILC). All systems use capacitive sensors to determine the level of water in a pool. These pools are connected with tubing so that relative vertical shifts between sensors can be determined. There are low-beta quadrupoles at the B0 and D0 interaction regions of the Tevatron accelerator. These quadrupoles use BINP-designed and -built sensors and have a resolution of 1 micron. All regular lattice superconducting quadrupoles (a total of 204) in the Tevatron use a Fermilab-designed system and have a resolution of 6 microns. Data on quadrupole motion due to quenches and changes in temperature will be presented. In addition, ground-motion data for ILC studies, covering both natural and cultural factors, will be presented.

  13. The Fermilab main injector neutrino program

    SciTech Connect

    Morfin, Jorge G.; /Fermilab

    2007-01-01

    The NuMI Facility at Fermilab provides an extremely intense beam of neutrinos, making it an ideal place for the study of neutrino oscillations as well as high-statistics (anti)neutrino-nucleon/nucleus scattering experiments. The MINOS neutrino oscillation νμ-disappearance experiment is currently taking data and has published first results. The NOνA νe-appearance experiment is planning to begin taking data at the start of the next decade. For the study of neutrino scattering, the MINERνA experiment at Fermilab is a collaboration of elementary-particle and nuclear physicists planning to use a fully active fine-grained solid scintillator detector. The overall goals of the experiment are to measure absolute exclusive cross sections, nuclear effects in ν-A interactions, a systematic study of the resonance-DIS transition region, and the high-x_Bj, low-Q² DIS region.

  14. Physics at a new Fermilab proton driver

    SciTech Connect

    Geer, Steve; /Fermilab

    2006-04-01

    In 2004, motivated by the recent exciting developments in neutrino physics, the Fermilab Long Range Planning Committee identified a new high-intensity Proton Driver as an attractive option for the future. At the end of 2004 the APS "Study on the Physics of Neutrinos" concluded that the future US neutrino program should have, as one of its components, "A proton driver in the megawatt class or above and neutrino superbeam with an appropriate very large detector capable of observing CP violation and measuring the neutrino mass-squared differences and mixing parameters with high precision". The presently proposed Fermilab Proton Driver is designed to accomplish these goals, and is based on, and would help develop, Linear Collider technology. In this paper the Proton Driver parameters are summarized, and the potential physics program is described.

  15. A Superconducting Linac Proton Driver at Fermilab

    NASA Astrophysics Data System (ADS)

    Foster, G. William

    2004-05-01

    A proton driver has emerged as the leading candidate for Fermilab's next near-term accelerator project. The preferred technical solution is an 8 GeV superconducting linac based on technology developed for TESLA and the Spallation Neutron Source (SNS). Its primary mission is to serve as a single-stage H- injector to prepare 2 MW "Super-Beams" for Neutrino experiments using the Fermilab Main Injector. The linac can also accelerate electrons, protons, and relativistic muons, permitting future applications such as a driver for an FEL, a long-pulse spallation source, the driver for an intense 8 GeV neutrino or kaon program, and potential applications to a neutrino factory or muon collider. The technical design of the 8 GeV linac, as well as the design of an alternative synchrotron based proton driver, will be described along with plans for project proposal and construction.

  16. Collider Detector (CDF) at FERMILAB: an overview

    SciTech Connect

    Theriot, D.

    1984-07-01

    CDF, the Collider Detector at Fermilab, is a collaboration of almost 150 physicists from ten US universities (University of Chicago, Brandeis University, Harvard University, University of Illinois, University of Pennsylvania, Purdue University, Rockefeller University, Rutgers University, Texas A and M University, and University of Wisconsin), three US DOE supported national laboratories (Fermilab, Argonne National Laboratory, and Lawrence Berkeley Laboratory), Italy (Frascati Laboratory and University of Pisa), and Japan (KEK National Laboratory and University of Tsukuba). The primary physics goal for CDF is to study the general features of proton-antiproton collisions at 2 TeV center-of-mass energy. On general grounds, we expect that parton subenergies in the range 50 to 500 GeV will provide the most interesting physics at this energy. Work at the present CERN Collider has already demonstrated the richness of the 100 GeV scale in parton subenergies.

  17. Fixed-target physics at Fermilab

    SciTech Connect

    Bjorken, J.D.

    1985-03-01

    The Fermilab Energy Saver is now successfully commissioned and fixed-target experimentation at high energy (800 GeV) has begun. In addition, a number of new experiments designed to exploit the unique features of the Tevatron are yet to come on-line. In this talk, we will review recent accomplishments in the fixed-target program and describe experiments in progress and others yet to come.

  18. Upgrade of Fermilab/NICADD photoinjector laboratory

    SciTech Connect

    Piot, P.; Edwards, H.; Huning, M.; Li, J.; Tikhoplav, R.; Koeth, T.; /Rutgers U., Piscataway

    2005-05-01

    The Fermilab/NICADD photoinjector laboratory (FNPL) is a 16 MeV electron accelerator dedicated to beam dynamics and advanced accelerator physics studies. FNPL will soon be capable of operating at ~40 MeV, after the installation of a high-gradient TESLA cavity. In this paper we present the foreseen design for the upgraded facility along with its performance. We discuss the possibility of using FNPL as an injector for the superconducting module and test facility (SM&TF).

  19. The evolution of cryogenic safety at Fermilab

    SciTech Connect

    Stanek, R.; Kilmer, J.

    1992-12-01

    Over the past twenty-five years, Fermilab has been involved in cryogenic technology as it relates to pursuing experimentation in high energy physics. The Laboratory has instituted a strong cryogenic safety program and has maintained a very positive safety record. The solid commitment of management and the cryogenic community to incorporating safety into the system life cycle has led to policies that set requirements and help establish consistency for the purchase and installation of equipment and the safety analysis and documentation.

  20. Future possibilities with Fermilab neutrino beams

    SciTech Connect

    Saoulidou, Niki

    2008-01-01

    We will start with a brief overview of neutrino oscillation physics, with emphasis on the remaining unanswered questions. Next, after mentioning near-future reactor and accelerator experiments searching for a nonzero θ₁₃, we will introduce the plans for the next generation of long-baseline accelerator neutrino oscillation experiments. We will focus on experiments utilizing powerful (0.7-2.1 MW) Fermilab neutrino beams, either existing or in the design phase.

  1. Estimates of Fermilab Tevatron collider performance

    SciTech Connect

    Dugan, G.

    1991-09-01

    This paper describes a model which has been used to estimate the average luminosity performance of the Tevatron collider. In the model, the average luminosity is related quantitatively to various performance parameters of the Fermilab Tevatron collider complex. The model is useful in allowing estimates to be developed for the improvements in average collider luminosity to be expected from changes in the fundamental performance parameters as a result of upgrades to various parts of the accelerator complex.
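
    For reference, the quantity being modeled connects to collider parameters through the standard expression for head-on collisions of Gaussian beams; the sketch below (illustrative Python with placeholder numbers, not the paper's model) evaluates L = f_rev · n_b · N₁ · N₂ / (4π σx σy).

      import math

      def luminosity(f_rev, n_bunches, N1, N2, sigma_x, sigma_y):
          """Instantaneous luminosity for round Gaussian beams colliding head-on."""
          return f_rev * n_bunches * N1 * N2 / (4.0 * math.pi * sigma_x * sigma_y)

      # placeholder values (Hz, counts, metres); result converted to cm^-2 s^-1
      L = luminosity(f_rev=47.7e3, n_bunches=6, N1=2e11, N2=5e10,
                     sigma_x=35e-6, sigma_y=35e-6)
      print(f"{L * 1e-4:.2e} cm^-2 s^-1")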

  2. A Roadmap for the Future of Fermilab

    SciTech Connect

    Oddone, Pier

    2005-12-12

    The principal aim of this roadmap is to place the US and Fermilab in the best position to host the International Linear Collider (ILC). The strategy must be resilient against the many vicissitudes that will attend the development of such a large project. Pier Oddone will explore the tension between the needed concentration of effort to move a project as large as the ILC forward and the need to maintain the breadth of our field.

  3. Fermilab Proton Beam for Mu2e

    SciTech Connect

    Syphers, M.J.; /Fermilab

    2009-10-01

    Plans to use existing Fermilab facilities to provide beam for the Muon to Electron Conversion Experiment (Mu2e) are under development. The experiment will follow the completion of the Tevatron Collider Run II, utilizing the beam lines and storage rings used today for antiproton accumulation without considerable reconfiguration. The proposed Mu2e operating scenario is described as well as the accelerator issues being addressed to meet the experimental goals.

  4. Preparations for Muon Experiments at Fermilab

    SciTech Connect

    Syphers, M.J.; Popovic, M.; Prebys, E.; Ankenbrandt, C.; /Muons Inc., Batavia

    2009-05-01

    The use of existing Fermilab facilities to provide beams for two muon experiments--the Muon to Electron Conversion Experiment (Mu2e) and the New g-2 Experiment--is under consideration. Plans are being pursued to perform these experiments following the completion of the Tevatron Collider Run II, utilizing the beam lines and storage rings used today for antiproton accumulation without considerable reconfiguration.

  5. Land Classification of South-Central Iowa from Computer Enhanced Images

    NASA Technical Reports Server (NTRS)

    Taranik, J. V.; Lucas, J. R. (Principal Investigator); Billingsley, F. C.

    1975-01-01

    The author has identified the following significant results. Two CCT (computer compatible tapes) scenes were digitally enhanced. The IMAGE 100 system was utilized for image processing. The real time ability of this machine allowed large scale viewing of several selected areas on both CCT's.

  6. Proposal for Fermilab remote access via ISDN (Ver. 1.0)

    SciTech Connect

    Lidinsky, W.P.; Martin, D.E.

    1993-07-02

    Currently, most users at remote sites connect to the Fermilab network via dial-up over analog modems using a dumb terminal or a personal computer emulating a dumb terminal. This level of connectivity is suitable for accessing a single, character-based application. The power of personal computers that are becoming ubiquitous is under-utilized. National HEPnet Management (NHM) has been monitoring and experimenting with remote access via the integrated services digital network (ISDN) for over two years. Members of NHM felt that basic rate ISDN had the potential for providing excellent remote access capability. Initially ISDN was not able to achieve this, but recently the situation has improved. The authors feel that ISDN can now provide, at a remote site such as a user's home, a computing environment very similar to that which is available at Fermilab. Such an environment can include direct LAN access, windowing systems, graphics, networked file systems, and demanding software applications. This paper proposes using ethernet bridging over ISDN for remote connectivity. With ISDN remote bridging, a remote Macintosh, PC, X-terminal, workstation, or other computer will be transparently connected to the Fermilab LAN. Except for a slight speed difference, the remote machine should function just as if it were on the LAN at Fermilab, with all network services (file sharing, printer sharing, X-windows, etc.) fully available. There are two additional reasons for exploring technologies such as ISDN. First, by mid-decade, environmental legislation such as the Federal Clean Air Act of 1990 and Illinois Senate Bill 2177 will likely force increased remote-worker arrangements. Second, recent pilot programs and studies have shown that for many types of work there may be a substantial cost benefit to supporting work away from the site.

  7. Mechanical construction of the 805 MHz side-coupled cavities for the Fermilab Linac Upgrade

    SciTech Connect

    May, M.P.; Fritz, J.R.; Jurgens, T.G.; Miller, H.W.; Olson, J.; Snee, D.

    1990-10-01

    The manufacturing processes for the Side Coupled Structures (SCS) are intimately connected with their tuning requirements. Present computer numerical controlled (CNC) machining allows very repeatable dimensional accuracy. This has led to a manufacturing sequence which reduces the need for repeated machining steps. Surface tolerances in the high field region of the accelerating cells were assured. Tuning steps were reduced at all stages of construction. This paper will describe the mechanical steps used to fabricate the SCS structure at Fermilab.

  8. Data from Fermilab E-687 (Photoproduction of Heavy Flavours) and Fermilab E-831 (FOCUS)

    DOE Data Explorer

    The FERMILAB E687 Collaboration studies production and decay properties of heavy flavours produced in photon-hadron interactions. The experiment recorded approximately 500 million hadronic triggers in the 1990-91 fixed target run at Fermilab from which over 80 thousand charm decays were fully reconstructed. Physics publications include the precision lifetime measurements of the charm hadrons, D meson semileptonic form factors, detailed Dalitz plot analyses, charm meson and baryon decay modes and spectroscopy, searches for rare and forbidden phenomena, and tests of QCD production mechanisms. The follow-on experiment FOCUS Collaboration (Fermilab E831) successfully recorded a huge amount of data during the 1996-1997 fixed target run. The FOCUS home page is located at http://www-focus.fnal.gov/. FOCUS is an international collaboration with institutions in Brazil, Italy, South Korea, Mexico, Puerto Rico, and the U.S.

  9. QCD Results from the Fermilab Tevatron proton-antiproton Collider

    SciTech Connect

    Warburton, Andreas; for the CDF and D0 Collaborations

    2010-01-01

    Selected recent quantum chromodynamics (QCD) measurements are reviewed for Fermilab Run II Tevatron proton-antiproton collisions studied by the Collider Detector at Fermilab (CDF) and D0 Collaborations at a centre-of-mass energy of {radical}s = 1.96 TeV. Tantamount to Rutherford scattering studies at the TeV scale, inclusive jet and dijet production cross-section measurements are used to seek and constrain new particle physics phenomena, test perturbative QCD calculations, inform parton distribution function (PDF) determinations, and extract a precise value of the strong coupling constant, {alpha}{sub s}(m{sub Z}) = 0.1161{sub -0.0048}{sup +0.0041}. Inclusive photon production cross-section measurements reveal an inability of next-to-leading-order (NLO) perturbative QCD (pQCD) calculations to describe low-energy photons arising directly in the hard scatter. Events with {gamma} + 3-jet configurations are used to measure the increasingly important double parton scattering (DPS) phenomenon, with an obtained effective interaction cross section of {sigma}{sub eff} = 16.4 {+-} 2.3 mb. Observations of central exclusive particle production demonstrate the viability of observing the Standard Model Higgs boson using similar techniques at the Large Hadron Collider (LHC). Three areas of inquiry into lower energy QCD, crucial to understanding high-energy collider phenomena, are discussed: the examination of intra-jet track kinematics to infer that jet formation is dominated by pQCD, and not hadronization, effects; detailed studies of the underlying event and its universality; and inclusive minimum-bias charged-particle momentum and multiplicity measurements, which are shown to challenge the Monte Carlo generators.

  10. Progress Towards Doubling the Beam Power at Fermilab's Accelerator Complex

    SciTech Connect

    Kourbanis, Ioanis

    2014-07-01

    After a 16-month shutdown to reconfigure the Fermilab accelerators for high power operations, the Fermilab Accelerator Complex is again providing beams for numerous physics experiments. By using the Recycler to slip stack protons while the Main Injector is ramping, the beam power at 120 GeV can reach 700 kW, a factor of 2 increase. The progress towards doubling the beam power of Fermilab's accelerator complex will be presented.

  11. Comparison of Computed Tomography and Cineangiography in the Demonstration of Central Pulmonary Arteries in Cyanotic Congenital Heart Disease

    SciTech Connect

    Taneja, Karuna; Sharma, Sanjiv; Kumar, Krishan; Rajani, Mira

    1996-03-15

    Purpose: To assess the diagnostic accuracy of contrast-enhanced computed tomography (CT) for central pulmonary artery pathology in patients with cyanotic congenital heart disease (CCHD) and right ventricular outflow obstruction. Methods: We compared contrast-enhanced CT and cine pulmonary arteriography in 24 patients with CCHD to assess central pulmonary arteries including the confluence. Both investigations were interpreted by a cardiac radiologist in a double-blinded manner at an interval of 3 weeks. Angiography was used as the gold standard for comparison. Results: The sensitivity for visualization of main pulmonary artery (MPA), right pulmonary artery (RPA), left pulmonary artery (LPA), and confluence on CT was 94%, 100%, 92.8%, and 92.8%, respectively. Diagnostic specificity for the same entities was 28.5%, 100%, 80%, and 50%, respectively. The positive predictive value for each was 76.2%, 100%, 94.1%, and 72.2%, respectively. The low specificity of CT in the evaluation of the MPA and the confluence is perhaps due to distorted right ventricular outflow anatomy in CCHD. Large aortopulmonary collaterals in this region were mistaken for the MPA in some patients with pulmonary atresia. Conclusion: CT is a useful, relatively noninvasive, imaging technique for the central pulmonary arteries in selected patients. It can supplement diagnostic information from angiography but cannot replace it. LPA demonstration on axial images alone is inadequate.
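
    For readers less familiar with these diagnostic metrics, the short sketch below shows how sensitivity, specificity, and positive predictive value follow from a 2x2 confusion matrix; the counts used are purely illustrative and are not the study's data.

      # Sensitivity, specificity and positive predictive value from a 2x2
      # confusion matrix (illustrative counts only, not the study's data).
      def diagnostic_metrics(tp, fp, tn, fn):
          sensitivity = tp / (tp + fn)   # fraction of true positives detected
          specificity = tn / (tn + fp)   # fraction of true negatives correctly excluded
          ppv         = tp / (tp + fp)   # fraction of positive calls that are correct
          return sensitivity, specificity, ppv

      sens, spec, ppv = diagnostic_metrics(tp=13, fp=2, tn=5, fn=1)
      print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  PPV={ppv:.1%}")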

  12. Central Neural Circuits for Coordination of Swallowing, Breathing, and Coughing: Predictions from Computational Modeling and Simulation

    PubMed Central

    Bolser, Donald C.; Gestreau, Christian; Morris, Kendall F.; Davenport, Paul W.; Pitts, Teresa E.

    2013-01-01

    SYNOPSIS The purpose of this article is to update the otolaryngologic community on recent developments in the basic understanding of how cough, swallow, and breathing are controlled. These behaviors are coordinated to occur at specific times relative to one another to minimize the risk of aspiration. The control system that generates and coordinates these behaviors is complex and advanced computational modeling methods are useful tools to elucidate its function. PMID:24262953

  13. Land classification of south-central Iowa from computer enhanced images

    NASA Technical Reports Server (NTRS)

    Lucas, J. R. (Principal Investigator); Taranik, J. V.; Billingsley, F. C.

    1976-01-01

    The author has identified the following significant results. The Iowa Geological Survey developed its own capability for producing color products from digitally enhanced LANDSAT data. Research showed that efficient production of enhanced images required full utilization of both computer and photographic enhancement procedures. The 29 August 1972 photo-optically enhanced color composite was more easily interpreted for land classification purposes than standard color composites.

  14. Report of the Fermilab Committee for Site Studies

    SciTech Connect

    Steve Holmes, Vic Kuchler, et al.

    2001-09-10

    Fermilab is the flagship laboratory of the U.S. high-energy physics program. The Fermilab accelerator complex has occupied the energy frontier nearly continuously since its construction in the early 1970s. It will remain at the frontier until the Large Hadron Collider at CERN begins operating in 2006-7. A healthy future for Fermilab will likely require construction of a new accelerator in the post-LHC era. The process of identifying, constructing and operating a future forefront facility will require the support of the world high-energy-physics community, the governments and funding agencies of many nations and the people of surrounding communities. This report explores options for construction of a new facility on or near the existing Fermilab site. We began the study that forms the basis of this report with the idea that Fermilab, and the surrounding area of northeastern Illinois, possesses attributes that make it an attractive candidate for a new accelerator construction project: excellent geology; a Fermilab staff and local contractors who are experienced in subsurface construction; abundant energy supplies; good access to transportation networks; the presence of local universities with strong interest and participation in the Fermilab research program; Fermilab's demonstrated ability to mount large accelerator construction projects and operate complex accelerator facilities; and a surrounding community that is largely supportive of Fermilab's presence. Our report largely confirms these perceptions.

  15. Jet production in muon-proton and muon-nuclei scattering at Fermilab-E665

    SciTech Connect

    Salgado, C.W.; E665 Collaboration

    1993-08-01

    Measurements of multi-jet production rates from muon-proton and muon-nuclei scattering at Fermilab-E665 are presented. Jet rates are defined by the JADE clustering algorithm. Rates in muon-proton deep-inelastic scattering are compared to perturbative Quantum Chromodynamics (PQCD) and Monte Carlo model predictions. We observe hadronic (2+1)-jet rates which are a factor of two higher than PQCD predictions at the partonic level. Preliminary results from jet production on heavy targets, in the shadowing region, show a suppression of the jet rates as compared to deuterium. The two-forward-jet sample shows a stronger suppression than the one-forward-jet sample.

  16. Medium-range objective predictions of thunderstorms on the McIDAS/CSIS interactive computer system. [Computer Interactive Data Access System/Centralized Storm Information System

    NASA Technical Reports Server (NTRS)

    Wilson, G. S.

    1982-01-01

    Until recently, all operational meteorological data has been made available to forecasters in a variety of different forms. Predictions based upon these different data formats have been complicated by the inability of forecasters to easily assimilate, in real-time, all data to provide an optimum decision regarding future weather occurrences. By March 1980, a joint NASA/NOAA effort had been initiated to develop the Centralized Storm Information System (CSIS). The primary objectives of this joint project are related to an improvement of the overall severe storm forecast and warning procedure and to a demonstration of the operational utility of techniques developed within the applied research community. CSIS is to utilize the Man Computer Interactive Data Access System (McIDAS). The present investigation is concerned with one of the first attempts to employ the CSIS system for the evaluation of a new research technique involving the prediction of thunderstorms over a forecast period of 12-48 hours.

  17. Computational Insights into the Central Role of Nonbonding Interactions in Modern Covalent Organocatalysis.

    PubMed

    Walden, Daniel M; Ogba, O Maduka; Johnston, Ryne C; Cheong, Paul Ha-Yeon

    2016-06-21

    The flexibility, complexity, and size of contemporary organocatalytic transformations pose interesting and powerful opportunities to computational and experimental chemists alike. In this Account, we disclose our recent computational investigations of three branches of organocatalysis in which nonbonding interactions, such as C-H···O/N interactions, play a crucial role in the organization of transition states, catalysis, and selectivity. We begin with two examples of N-heterocyclic carbene (NHC) catalysis, both collaborations with the Scheidt laboratory at Northwestern. In the first example, we discuss the discovery of an unusual diverging mechanism in a catalytic kinetic resolution of a dynamic racemate that depends on the stereochemistry of the product being formed. Specifically, the major product is formed through a concerted asynchronous [2 + 2] aldol-lactonization, while the minor products come from a stepwise spiro-lactonization pathway. Stereoselectivity and catalysis are the results of electrophilic activation from C-H···O interactions between the catalyst and the substrate and conjugative stabilization of the electrophile. In the second example, we show how knowledge and understanding of the computed transition states led to the development of a more enantioselective NHC catalyst for the butyrolactonization of acyl phosphonates. The identification of mutually exclusive C-H···O interactions in the computed major and minor TSs directly resulted in structural hypotheses that would lead to targeted destabilization of the minor TS, leading to enhanced stereoinduction. Synthesis and evaluation of the newly designed NHC catalyst validated our hypotheses. Next, we discuss two works related to Lewis base catalysis involving 4-dimethylaminopyridine (DMAP) and its derivatives. In the first, we discuss our collaboration with the Smith laboratory at St Andrews, in which we discovered the origins of the regioselectivity in carboxyl transfer reactions. We

  18. Measurements of Hbeta Stark central asymmetry and its analysis through standard theory and computer simulations.

    PubMed

    Djurović, S; Cirisan, M; Demura, A V; Demchenko, G V; Nikolić, D; Gigosos, M A; González, M A

    2009-04-01

    Experimental measurements of the center of the H_{beta} Stark profile on three different installations have been done to study its asymmetry in wide ranges of electron density, temperature, and plasma conditions. Theoretical calculations for the analysis of experimental results have been performed using the standard theory and computer simulations and included separately quadrupolar and quadratic Stark effects. Earlier experimental results and theoretical calculations of other authors have been reviewed as well. The experimental results are well reproduced by the calculations at high and moderate densities. PMID:19518354

  19. Land classification of south-central Iowa from computer enhanced images

    NASA Technical Reports Server (NTRS)

    Lucas, J. R.; Taranik, J. V.; Billingsley, F. C. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. Enhanced LANDSAT imagery was most useful for land classification purposes, because these images could be photographically printed at large scales such as 1:63,360. The ability to see individual picture elements was no hindrance as long as general image patterns could be discerned. Low cost photographic processing systems for color printings have proved to be effective in the utilization of computer enhanced LANDSAT products for land classification purposes. The initial investment for this type of system was very low, ranging from $100 to $200 beyond a black and white photo lab. The technical expertise can be acquired from reading a color printing and processing manual.

  20. Computer aided graphics simulation modelling using seismogeologic approach in sequence stratigraphy of Early Cretaceous Punjab platform, Central Indus Basin, Pakistan

    SciTech Connect

    Qureshi, T.M.; Khan, K.A.

    1996-08-01

    Modelling the stratigraphic sequence using a seismo-geologic approach, integrated with cyclic transgressive-regressive deposits, helps to identify a number of non-structural subtle traps. Most of the hydrocarbons found in the Early Cretaceous of the Central Indus Basin pertain to structural entrapments of upper transgressive sands. A few wells are producing from middle and basal regressive sands, but the massive regressive sands have not been tested so far. The possibility of stratigraphic traps such as wedging or pinch-outs, lateral gradation, uplift, truncation, and overlapping of reservoir rocks is quite promising. The natural basin physiography has at times been modified by extensional episodic events into tectono-morphic terrain. Thus, seismic scanning of tectonically controlled sedimentation might delineate some subtle stratigraphic traps. Amplitude maps representing stratigraphic sequences are generated to identify the traps. Seismic expressions indicate the reservoir quality in terms of amplitude increase or decrease. The data are modelled on a computer using graphics simulation techniques.

  1. Magnetic field data on Fermilab Energy-Saver quadrupoles

    SciTech Connect

    Schmidt, E.E.; Brown, B.C.; Cooper, W.E.; Fisk, H.E.; Gross, D.A.; Hanft, R.; Ohnuma, S.; Turkot, F.T.

    1983-03-01

    The Fermilab Energy Saver/Doubler (Tevatron) accelerator contains 216 superconducting quadrupole magnets. Before installation in the Tevatron ring, these magnets plus an additional number of spares were extensively tested at the Fermilab Magnet Test Facility (MTF). Details on the results of the tests are presented here.

  2. Implementation of Stochastic Cooling Hardware at Fermilab's Tevatron Collider

    SciTech Connect

    Pasquinelli, Ralph J.; /Fermilab

    2011-08-01

    The invention of Stochastic cooling by Simon van der Meer made possible the increase in phase space density of charged particle beams. In particular, this feedback technique allowed the development of proton antiproton colliders at both CERN and Fermilab. This paper describes the development of hardware systems necessary to cool antiprotons at the Fermilab Tevatron Collider complex.

  3. Implementation of stochastic cooling hardware at Fermilab's Tevatron collider

    NASA Astrophysics Data System (ADS)

    Pasquinelli, Ralph J.

    2011-08-01

    The invention of Stochastic cooling by Simon van der Meer made possible the increase in phase space density of charged particle beams. In particular, this feedback technique allowed the development of proton antiproton colliders at both CERN and Fermilab. This paper describes the development of hardware systems necessary to cool antiprotons at the Fermilab Tevatron Collider complex.

  4. New corrector system for the Fermilab booster

    SciTech Connect

    Prebys, E.J.; Drennan, C.C.; Harding, D.J.; Kashikhin, V.; Lackey, J.R.; Makarov, A.; Pellico, W.A.; /Fermilab

    2007-06-01

    We present an ambitious ongoing project to build and install a new corrector system in the Fermilab 8 GeV Booster. The system consists of 48 corrector packages, each containing horizontal and vertical dipoles, normal and skew quadrupoles, and normal and skew sextupoles. Space limitations in the machine have motivated a unique design, which utilizes custom wound coils around a 12 pole laminated core. Each of the 288 discrete multipole elements in the system will have a dedicated power supply, the output current of which is controlled by an individual programmable ramp. This paper describes the physics considerations which drove the design, as well as issues in the control of the system.

  5. Fermilab accelerator control system: Analog monitoring facilities

    SciTech Connect

    Seino, K.; Anderson, L.; Smedinghoff, J.

    1987-10-01

    Thousands of analog signals are monitored in different areas of the Fermilab accelerator complex. For general purposes, analog signals are sent over coaxial or twinaxial cables with varying lengths, collected at fan-in boxes and digitized with 12 bit multiplexed ADCs. For higher resolution requirements, analog signals are digitized at sources and are serially sent to the control system. This paper surveys ADC subsystems that are used with the accelerator control systems and discusses practical problems and solutions, and it describes how analog data are presented on the console system.
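
    As a simple illustration of the digitization step described above, the sketch below converts a raw reading from a 12-bit ADC into a voltage; the bipolar +/-10 V input range and offset-binary coding are assumptions made for the example, not documented settings of the Fermilab hardware.

      # Convert a raw 12-bit ADC count to volts, assuming a bipolar +/-10 V
      # input range with offset-binary coding (illustrative assumption only).
      FULL_SCALE_V = 10.0          # assumed full-scale voltage
      N_BITS       = 12
      N_CODES      = 2 ** N_BITS   # 4096 codes for a 12-bit converter

      def adc_counts_to_volts(raw: int) -> float:
          # code 0 -> -10 V, code 2048 -> 0 V, code 4095 -> just under +10 V
          return (raw - N_CODES // 2) * (2 * FULL_SCALE_V / N_CODES)

      for raw in (0, 2048, 4095):
          print(raw, f"{adc_counts_to_volts(raw):+.4f} V")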

  6. The Fermilab short-baseline neutrino program

    SciTech Connect

    Camilleri, Leslie

    2015-10-15

    The Fermilab short-baseline program is a multi-faceted one. Primarily it searches for evidence of sterile neutrinos as hinted at by the MiniBooNE and LSND results. It will also measure a whole suite of ν-Argon cross sections which will be very useful in future liquid argon long-baseline projects. The program is based on MicroBooNE, already installed in the beam line, the recently approved LAr1-ND, and the future addition of the refurbished ICARUS.

  7. Dijet production in hadron collisions at Fermilab

    SciTech Connect

    Fields, T.H.

    1984-01-01

    We have studied dijet final states produced in hard hadron collisions at Fermilab using the E-609 calorimetric detector. Using dijet events produced in {pi}{sup -}p, {pi}{sup +}p, and pp collisions at 200 GeV, we have made a detailed search for the higher-twist process proposed by Berger and Brodsky. In this process, the entire energy of the incident pion goes into dijet production, leaving an event with no forward beam jet and satisfying two-body kinematics.

  8. Numerical Tests of the Improved Fermilab Action

    SciTech Connect

    Detar, C.; Kronfeld, A.S.; Oktay, M.B.

    2010-11-01

    Recently, the Fermilab heavy-quark action was extended to include dimension-six and -seven operators in order to reduce the discretization errors. In this talk, we present results of the first numerical simulations with this action (the OK action), where we study the masses of the quarkonium and heavy-light systems. We calculate combinations of masses designed to test improvement and compare results obtained with the OK action to their counterparts obtained with the clover action. Our preliminary results show a clear improvement.

  9. Radiation issues in the Fermilab booster magnets

    SciTech Connect

    Prebys, E.; /Fermilab

    2005-05-01

    The demands of the Fermilab neutrino program will require the lab's 30+ year old 8 GeV Booster to deliver higher intensities than it ever has. Total proton throughput is limited by radiation damage and activation due to beam loss in the Booster tunnel. Of particular concern is the epoxy resin that acts as the insulation in the 96 combined function lattice magnets. This paper describes a simulation study to determine the integrated radiation dose to this epoxy and a discussion of the potential effects.

  10. Slip stacking experiments at Fermilab main injector

    SciTech Connect

    Kiyomi Koba et al.

    2003-06-02

    In order to achieve an increase in proton intensity, the Fermilab Main Injector will use a stacking process called "slip stacking". The intensity will be doubled by injecting one train of bunches at a slightly lower energy, another at a slightly higher energy, then bringing them together for the final capture. Beam studies have started for this process and we have already verified that, at least for a low beam intensity, the stacking procedure works as expected. For high intensity operation, development work on the feedback and feedforward systems is under way.
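
    The azimuthal "slipping" of the two injected trains comes from running the two rf systems at slightly different frequencies: the trains drift past each other by one rf bucket every 1/Δf seconds. The sketch below illustrates that relation with assumed, order-of-magnitude numbers rather than actual Main Injector parameters.

      # Toy illustration of slip stacking: two rf systems offset in frequency
      # make the two bunch trains slip past each other azimuthally.
      # Both numbers below are assumed for illustration only.
      delta_f   = 1200.0   # frequency offset between the two rf systems [Hz] (assumed)
      n_buckets = 84       # buckets one train must slip to reach the capture position (assumed)

      # The relative phase advances by one full rf bucket every 1/delta_f seconds,
      # so alignment for the final capture takes n_buckets / delta_f seconds.
      t_align = n_buckets / delta_f
      print(f"slip rate: {delta_f:.0f} buckets per second")
      print(f"time to slip {n_buckets} buckets into alignment: {t_align*1e3:.0f} ms")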

  11. The Mu2e Experiment at Fermilab

    NASA Astrophysics Data System (ADS)

    Kutschke, Robert K.

    2009-12-01

    The Mu2e collaboration has proposed an experiment to search for the coherent conversion of a muon to an electron in the Coulomb field of a nucleus with an expected sensitivity of Rμe < 6.0×10⁻¹⁷ at the 90% confidence level. Mu2e has received strong support from the P5 panel and has received Stage I approval from Fermilab. If all resources are made available as required, the experiment could begin taking data as early as 2016.

  12. Electropolishing on small samples at Fermilab

    SciTech Connect

    Boffo, C.; Bauer, P.; Teid, T.; Geng, R.; /Cornell U., Phys. Dept.

    2005-07-01

    The electropolishing process (EP) is considered an essential step in the processing of high gradient SRF cavities. Studies on EP of small samples have been started at Fermilab as part of the SRF materials R&D program. A simple bench-top setup was developed to understand the basic variables affecting the EP. In addition, a setup for vertical EP of half cells, based on the Cornell design, was used, and another one for dumbbells was designed and tested. Results and findings are reported.

  13. The VAXONLINE software system at Fermilab

    SciTech Connect

    White, V.; Heinicke, P.; Berman, E.; Constanta-Fanourakis, P.; MacKinnon, B.; Moore, C.; Nicinski, T.; Petravick, D.; Pordes, R.; Quigg, L.

    1987-06-01

    The VAXONLINE software system, started in late 1984, is now in use at 12 experiments at Fermilab, each with at least one VAX or MicroVAX. Data acquisition features now provide for the collection and combination of data from one or more sources, via a list-driven Event Builder program. Supported sources include CAMAC, FASTBUS, front-end PDP-11s, disk, tape, DECnet, and other processors running VAXONLINE. This paper describes the functionality provided by the VAXONLINE system, gives performance figures, and discusses the ongoing program of enhancements.
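
    To make the list-driven Event Builder idea concrete, the sketch below shows a minimal event builder that merges fragments from several configured sources by event number; it is a conceptual illustration only, not VAXONLINE code or its actual interfaces, and the source names are assumed.

      # Minimal conceptual event builder: combine per-source fragments into
      # complete events keyed by event number (illustration, not VAXONLINE).
      from collections import defaultdict

      SOURCES = ["CAMAC", "FASTBUS", "front_end_pdp11"]   # assumed source list

      def build_events(fragment_stream):
          """fragment_stream yields (event_number, source, payload) tuples."""
          pending = defaultdict(dict)
          for event_number, source, payload in fragment_stream:
              pending[event_number][source] = payload
              if len(pending[event_number]) == len(SOURCES):
                  yield event_number, pending.pop(event_number)   # event is complete

      fragments = [(1, "CAMAC", b"aa"), (1, "FASTBUS", b"bb"),
                   (2, "CAMAC", b"cc"), (1, "front_end_pdp11", b"dd")]
      for evt, data in build_events(iter(fragments)):
          print("complete event", evt, sorted(data))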

  14. Superconducting radiofrequency linac development at Fermilab

    SciTech Connect

    Holmes, Stephen D.; /Fermilab

    2009-10-01

    As the Fermilab Tevatron Collider program draws to a close, a strategy has emerged of an experimental program built around the high intensity frontier. The centerpiece of this program is a superconducting H- linac that will support world leading programs in long baseline neutrino experimentation and the study of rare processes. Based on technology shared with the International Linear Collider, Project X will provide multi-MW beams at 60-120 GeV from the Main Injector, simultaneous with very high intensity beams at lower energies. Project X also supports development of a Muon Collider as a future facility at the energy frontier.

  15. Electron cooling rates characterization at Fermilab's Recycler

    SciTech Connect

    Prost, Lionel R.; Shemyakin, A.; /Fermilab

    2007-06-01

    A 0.1 A, 4.3 MeV DC electron beam is routinely used to cool 8 GeV antiprotons in Fermilab's Recycler storage ring [1]. The primary function of the electron cooler is to increase the longitudinal phase-space density of the antiprotons for storing and preparing high-density bunches for injection into the Tevatron. The longitudinal cooling rate is found to significantly depend on the transverse emittance of the antiproton beam. The paper presents the measured rates and compares them with calculations based on drag force data.

  16. Recent ground motion studies at Fermilab

    SciTech Connect

    Shiltsev, V.; Volk, J.; Singatulin, S.; /Novosibirsk, IYF

    2009-04-01

    Understanding slow and fast ground motion is important for the successful operation and design of present and future colliders. Since 2000 there have been several studies of ground motion at Fermilab. Several different types of HLS (hydrostatic level sensors) have been used to study slow ground motion (less than 1 hertz), while seismometers have been used for fast (greater than 1 hertz) motion. Data have been taken at the surface and at locations 100 meters below the surface. Data from recent slow ground motion measurements with HLSs, many years of alignment data, and results of the ATL analysis are presented and discussed.

  17. Measuring and computing natural ground-water recharge at sites in south-central Kansas

    USGS Publications Warehouse

    Sophocleous, M.A.; Perry, C.A.

    1987-01-01

    To measure the natural groundwater recharge process, two sites in south-central Kansas were instrumented with sensors and data microloggers. The atmospheric-boundary layer and the unsaturated and saturated soil zones were monitored as a single regime. Direct observations also were used to evaluate the measurements. Atmospheric sensors included an anemometer, a tipping-bucket rain gage, an air-temperature thermistor, a relative-humidity probe, a net radiometer, and a barometric-pressure transducer. Sensors in the unsaturated zone consisted of soil-temperature thermocouples, tensiometers coupled with pressure transducers and dial gages, gypsum blocks, and a neutron-moisture probe. The saturated-zone sensors consisted of a water-level pressure transducer, a conventional float gage connected to a variable potentiometer, soil thermocouples, and a number of multiple-depth piezometers. Evaluation of the operation of these sensors and recorders indicates that certain types of equipment, such as pressure transducers, are very sensitive to environmental conditions. A number of suggestions aimed at improving instrumentation of recharge investigations are outlined. Precipitation and evapotranspiration data, taken together with soil moisture profiles and storage changes, water fluxes in the unsaturated zone and hydraulic gradients in the saturated zone at various depths, soil temperature, water table hydrographs, and water level changes in nearby wells, describe the recharge process. Although the two instrumented sites are located in sand-dune environments in an area characterized by a shallow water table and a sub-humid continental climate, a significant difference was observed in the estimated total recharge. The estimates ranged from less than 2.5 mm at the Zenith site to approximately 154 mm at the Burrton site from February to June 1983. The principal reasons that the Burrton site had more recharge than the Zenith site were more precipitation, less evapotranspiration, and a

  18. Muon g-2 Experiment at Fermilab

    SciTech Connect

    Gray, Frederick

    2015-10-01

    A new experiment at Fermilab will measure the anomalous magnetic moment of the muon with a precision of 140 parts per billion (ppb). This measurement is motivated by the results of the Brookhaven E821 experiment that were first released more than a decade ago, which reached a precision of 540 ppb. As the corresponding Standard Model predictions have been refined, the experimental and theoretical values have persistently differed by about 3 standard deviations. If the Brookhaven result is confirmed at Fermilab with this improved precision, it will constitute definitive evidence for physics beyond the Standard Model. The experiment observes the muon spin precession frequency in flight in a well-calibrated magnetic field; the improvement in precision will require both 20 times as many recorded muon decay events as in E821 and a reduction by a factor of 3 in the systematic uncertainties. This paper describes the current experimental status as well as the plans for the upgraded magnet, detector and storage ring systems that are being prepared for the start of beam data collection in 2017.
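
    A quick way to see how the factor-of-four goal breaks down: the statistical error shrinks as 1/sqrt(N) with the number of recorded decays, and the reduced systematic error combines with it in quadrature. The sketch below works through that arithmetic; the split of the E821 error into statistical and systematic parts is an assumed, approximate breakdown.

      # Back-of-envelope check of the 140 ppb goal: statistics improve as
      # 1/sqrt(N); systematics are reduced by the quoted factor of 3.
      # The E821 stat/syst split below is an assumed approximate breakdown.
      import math

      e821_stat_ppb = 460.0          # assumed approximate E821 statistical error
      e821_syst_ppb = 280.0          # assumed approximate E821 systematic error
      stat_gain     = math.sqrt(20)  # 20x more recorded muon decays
      syst_gain     = 3.0            # quoted factor-of-3 systematic improvement

      new_stat = e821_stat_ppb / stat_gain
      new_syst = e821_syst_ppb / syst_gain
      total    = math.hypot(new_stat, new_syst)   # combine in quadrature
      print(f"projected total uncertainty ~ {total:.0f} ppb")   # comes out near 140 ppb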

  19. Charm physics at Fermilab E791

    SciTech Connect

    Amato, S.; Anjos, J.C.; Bediaga, I.; Costa, I.; de Mello Neto, J.R.T.; de Miranda, J.; Santoro, A.F.S.; Souza, M.H.G.; Blaylock, G.; Burchat, P.R.; Gagnon, P.; Sugano, K.; de Oliveira, A.J.; Santha, A.; Sokoloff, M.D.; Appel, J.A.; Banerjee, S.; Carter, T.; Denisenko, K.; Halling, M.; James, C.; Kwan, S.; Lundberg, B.; Thorne, K.; Burnstein, R.; Kasper, P.A.; Peng, K.C.; Rubin, H.; Summers, D.J.; Aitala, E.M.; Gounder, K.; Rafatian, A.; Reidy, J.J.; Yi, D.; Granite, D.; Nguyen, A.; Reay, N.W.; Reibel, K.; Sidwell, R.; Stanton, N.; Tripathi, A.; Witchey, N.; Purohit, M.V.; Schwartz, A.; Wiener, J.; Almeida, F.M.L.; Ramalho, A.J.; da Silva Carvalho, H.; Ashery, D.; Gerzon, S.; Lichtenstadt, J.; May-Tal-Beck, S.; Trumer, D.; Bracker, S.B.; Astorga, J.; Milburn, R.; Napier, A.; Radeztsky, S.; Sheaff, M.; Darling, C.; Slaughter, J.; Takach, S.; Wolin, E.

    1992-05-26

    Experiment 791 at Fermilab's Tagged Photon Laboratory has just accumulated a high statistics charm sample by recording 20 billion events on 24000 8mm tapes. A 500 GeV/c {pi}{sup {minus}} beam was used with a fixed target and a magnetic spectrometer which now includes 23 silicon microstrip planes for vertex reconstruction. A new data acquisition system read out 9000 events/sec during the part of the Tevatron cycle that delivered beam. Digitization and readout took 50 {mu}s per event. Data were buffered in eight large FIFO memories to allow continuous event building and continuous tape writing to a wall of 42 Exabyte drives at 9.6 MB/sec. The 50 terabytes of data buffered to tape are now being filtered on RISC CPUs. Preliminary results show D{sup 0} {yields} K{sup {minus}}{pi}{sup +} and D{sup +} {yields} K{sup {minus}}{pi}{sup +}{pi}{sup +} decays. Rarer decays will be pursued.
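
    The quoted numbers hang together as a simple throughput budget: the average event size follows from the total data volume and event count, the instantaneous readout rate then exceeds the sustained tape bandwidth, and the FIFO buffers bridge the gap across the Tevatron duty cycle. A minimal consistency check using only figures from the abstract (the duty cycle is an implied quantity, not a number given there):

      # Consistency check of the E791 data acquisition figures quoted above.
      total_bytes  = 50e12     # ~50 terabytes written to tape
      total_events = 20e9      # ~20 billion recorded events
      spill_rate   = 9000.0    # events/s read out while beam is delivered
      tape_rate    = 9.6e6     # bytes/s sustained to the tape wall

      event_size = total_bytes / total_events    # ~2.5 kB per event
      spill_bw   = spill_rate * event_size       # ~22.5 MB/s during the spill
      duty_cycle = tape_rate / spill_bw          # implied fraction of time with beam
      print(f"average event size  ~ {event_size/1e3:.1f} kB")
      print(f"instantaneous rate  ~ {spill_bw/1e6:.1f} MB/s vs {tape_rate/1e6:.1f} MB/s to tape")
      print(f"implied duty cycle  ~ {duty_cycle:.0%}  (the FIFOs absorb the difference)")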

  20. Metropolitan area network support at Fermilab

    SciTech Connect

    DeMar, Phil; Andrews, Chuck; Bobyshev, Andrey; Crawford, Matt; Colon, Orlando; Fry, Steve; Grigaliunas, Vyto; Lamore, Donna; Petravick, Don; /Fermilab

    2007-09-01

    Advances in wide area network service offerings, coupled with comparable developments in local area network technology have enabled many research sites to keep their offsite network bandwidth ahead of demand. For most sites, the more difficult and costly aspect of increasing wide area network capacity is the local loop, which connects the facility LAN to the wide area service provider(s). Fermilab, in coordination with neighboring Argonne National Laboratory, has chosen to provide its own local loop access through leasing of dark fiber to nearby network exchange points, and procuring dense wave division multiplexing (DWDM) equipment to provide data channels across those fibers. Installing and managing such optical network infrastructure has broadened the Laboratory's network support responsibilities to include operating network equipment that is located off-site, and is technically much different than classic LAN network equipment. Effectively, the Laboratory has assumed the role of a local service provider. This paper will cover Fermilab's experiences with deploying and supporting a Metropolitan Area Network (MAN) infrastructure to satisfy its offsite networking needs. The benefits and drawbacks of providing and supporting such a service will be discussed.

  1. Registration of central paths and colonic polyps between supine and prone scans in computed tomography colonography: Pilot study

    SciTech Connect

    Li Ping; Napel, Sandy; Acar, Burak; Paik, David S.; Jeffrey, R. Brooke Jr.; Beaulieu, Christopher F.

    2004-10-01

    Computed tomography colonography (CTC) is a minimally invasive method that allows the evaluation of the colon wall from CT sections of the abdomen/pelvis. The primary goal of CTC is to detect colonic polyps, precursors to colorectal cancer. Because imperfect cleansing and distension can cause portions of the colon wall to be collapsed, covered with water, and/or covered with retained stool, patients are scanned in both prone and supine positions. We believe that both reading efficiency and computer aided detection (CAD) of CTC images can be improved by accurate registration of data from the supine and prone positions. We developed a two-stage approach that first registers the colonic central paths using a heuristic and automated algorithm and then matches polyps or polyp candidates (CAD hits) by a statistical approach. We evaluated the registration algorithm on 24 patient cases. After path registration, the mean misalignment distance between prone and supine identical anatomic landmarks was reduced from 47.08 to 12.66 mm, a 73% improvement. The polyp registration algorithm was specifically evaluated using eight patient cases for which radiologists identified polyps separately for both supine and prone data sets, and then manually registered corresponding pairs. The algorithm correctly matched 78% of these pairs without user input. The algorithm was also applied to the 30 highest-scoring CAD hits in the prone and supine scans and showed a success rate of 50% in automatically registering corresponding polyp pairs. Finally, we computed the average number of CAD hits that need to be manually compared in order to find the correct matches among the top 30 CAD hits. With polyp registration, the average number of comparisons was 1.78 per polyp, as opposed to 4.28 comparisons without polyp registration.

  2. [Multislice computed tomographic angiography in the assessment of central veins for endovascular treatment planning: comparison with phlebography].

    PubMed

    Patanè, Domenico; Morale, Walter; Malfa, Pierantonio; Seminara, Giuseppe; L'Anfusa, Giuseppe; Spanti, Demetrio; Incardona, Concetta; Mandalà, Maria Luisa; Di Landro, Domenico

    2010-01-01

    The dysfunction of a vascular access for hemodialysis and its loss may depend on drainage difficulties of the superficial or deep venous system due to hemodynamically significant stenosis or obstruction of a central vein, which generally involves the innominate-subclavian veins or superior vena cava. These alterations are often neglected due to their central and deep location; when there is hemodynamic compensation, they may remain asymptomatic. For these reasons every clinical sign suggestive of central vein stenosis (gross arm syndrome or venous hypertension in an arteriovenous fistula) must not be ignored, as timely intervention is essential for functional recovery of the vessel and for the protection of the arteriovenous fistula. The modern imaging techniques ensure thorough diagnostic assessment, while the possibilities of endovascular treatment with interventional radiology allow, in a large proportion of cases, optimal minimally invasive treatment, but above all the salvage of the venous system in a hemodialysis patient. We report our experience with multislice computed tomographic angiography (MS-CTA) and reconstruction software for treatment planning of central vein stenosis or obstruction. Forty-nine patients were studied with MS-CTA (GE 16). Images were acquired in the venous phase (120-180 seconds after contrast medium injection) followed by digital vascular reconstruction (AutoBone for bone removal, vessel analysis for caliber and length measurements, thin and curved MIP, MPR). Within a week, control phlebography was performed. The venous tree was divided into seven segments and analyzed in a double-blind fashion with a distinction between patent segments, 50-70% stenosis, >70% stenosis, occlusion, and collateral vascular beds. There was excellent correspondence in all the examined segments for patency, >70% stenosis, and occlusion, with high sensitivity (98%), specificity (99.3%), and diagnostic accuracy (99.1%). The binomial test demonstrated a highly significant

  3. Beam position correction in the Fermilab Linac

    NASA Astrophysics Data System (ADS)

    Junck, K. L.; McCrory, E.

    1994-08-01

    Orbit correction has long been an essential feature of circular accelerators, storage rings, multipass linacs, and linear colliders. In a drift tube linear accelerator (DTL) such as the H- Linac at Fermilab, beam position monitors (BPMs) and dipole corrector magnets can only be located in between accelerating tanks. Within a tank many drift tubes (from 20 to 60) each house a quadrupole magnet to provide strong transverse focusing of the beam. With good alignment of the drift tubes and quadrupoles and a sufficiently large diameter for the drift tubes, beam position is not typically a major concern. In the Fermilab DTL, 95 percent of the beam occupies only 35 percent of the available physical aperture (4.4 cm). The recent upgrade of the Fermilab Linac from a final energy of 200 MeV to 400 MeV has been achieved by replacing four 201.25 MHz drift tube linac tanks with seven 805 MHz side-coupled cavity modules (the high energy portion of the linac or HEL). In order to achieve this increase in energy within the existing enclosure, an accelerating gradient is required that is a factor of 3 larger than that found in the DTL. This in turn required that the physical aperture through which the beam must pass be significantly reduced. In addition, the lattice of the side-coupled structure provides significantly less transverse focusing than the DTL. Therefore in the early portion of the HEL the beam occupies over 95 percent of the available physical aperture (3.0 cm). In order to prevent beam loss and the creation of excess radiation, the ability to correct beam position throughout the HEL is of importance. An orbit smoothing algorithm commonly used in the correction of closed orbits of circular machines has been implemented to achieve a global least-squares minimization of beam position errors. In order to accommodate several features of this accelerator a refinement in the algorithm has been made to increase its robustness and utilize correctors of varying strengths.

  4. Beam position correction in the Fermilab linac

    SciTech Connect

    Junck, K.L.; McCrory, E.

    1994-08-01

    Orbit correction has long been an essential feature of circular accelerators, storage rings, multipass linacs, and linear colliders. In a drift tube linear accelerator (DTL) such as the H- Linac at Fermilab, beam position monitors (BPMs) and dipole corrector magnets can only be located in between accelerating tanks. Within a tank many drift tubes (from 20 to 60) each house a quadrupole magnet to provide strong transverse focusing of the beam. With good alignment of the drift tubes and quadrupoles and a sufficiently large diameter for the drift tubes, beam position is not typically a major concern. In the Fermilab DTL, 95% of the beam occupies only 35% of the available physical aperture (4.4 cm). The recent upgrade of the Fermilab Linac from a final energy of 200 MeV to 400 MeV has been achieved by replacing four 201.25 MHz drift tube linac tanks with seven 805 MHz side-coupled cavity modules (the high energy portion of the linac or HEL). In order to achieve this increase in energy within the existing enclosure, an accelerating gradient is required that is a factor of 3 larger than that found in the DTL. This in turn required that the physical aperture through which the beam must pass be significantly reduced. In addition, the lattice of the side-coupled structure provides significantly less transverse focusing than the DTL. Therefore in the early portion of the HEL the beam occupies over 95% of the available physical aperture (3.0 cm). In order to prevent beam loss and the creation of excess radiation, the ability to correct beam position throughout the HEL is of importance. An orbit smoothing algorithm commonly used in the correction of closed orbits of circular machines has been implemented to achieve a global least-squares minimization of beam position errors. In order to accommodate several features of this accelerator a refinement in the algorithm has been made to increase its robustness and utilize correctors of varying strengths.
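
    As a sketch of the global least-squares idea referred to above (and not the actual Fermilab implementation), the snippet below solves for corrector kicks that minimize the rms position error at the BPMs, given an assumed orbit response matrix.

      # Sketch of global least-squares orbit correction (illustrative only):
      # given measured BPM readings x and a response matrix R (BPM displacement
      # per unit corrector kick), choose kicks theta minimizing ||x + R theta||.
      import numpy as np

      rng = np.random.default_rng(0)
      n_bpm, n_corr = 12, 5                   # assumed counts, for illustration
      R = rng.normal(size=(n_bpm, n_corr))    # assumed response matrix
      x = rng.normal(scale=2.0, size=n_bpm)   # measured position errors [mm]

      theta, *_ = np.linalg.lstsq(R, -x, rcond=None)   # least-squares corrector kicks
      residual = x + R @ theta

      print("rms before:", np.std(x).round(3), "mm")
      print("rms after :", np.std(residual).round(3), "mm")
      # Weighting rows of R and x, or bounding |theta|, is one way to accommodate
      # correctors of varying strength, as mentioned in the abstract.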

  5. The calibration system of the new g-2 experiment at Fermilab

    NASA Astrophysics Data System (ADS)

    Anastasi, A.; Babusci, D.; Cantatore, G.; Cauz, D.; Corradi, G.; Dabagov, S.; Di Meo, P.; Di Sciascio, G.; Di Stefano, R.; Ferrari, C.; Fienberg, A. T.; Fioretti, A.; Gabbanini, C.; Hampai, D.; Hertzog, D. W.; Iacovacci, M.; Karuza, M.; Kaspar, J.; Marignetti, F.; Mastroianni, S.; Moricciani, D.; Pauletta, G.; Santi, L.; Venanzoni, G.

    2016-07-01

    The muon anomaly (g - 2)μ / 2 has been measured to 0.54 parts per million by the E821 experiment at Brookhaven National Laboratory, and at present there is a 3-4 standard-deviation difference between the Standard Model prediction and the experimental value. A new muon g-2 experiment, E989, is being prepared at Fermilab that will improve the experimental error by a factor of four to clarify this difference. A central component to reach this fourfold improvement in accuracy is the high-precision laser calibration system, which should monitor the gain fluctuations of the calorimeter photodetectors at 0.04% accuracy.

  6. The calibration system of the new g-2 experiment at Fermilab

    NASA Astrophysics Data System (ADS)

    Anastasi, A.; Babusci, D.; Cantatore, G.; Cauz, D.; Corradi, G.; Dabagov, S.; Di Meo, P.; Di Sciascio, G.; Di Stefano, R.; Ferrari, C.; Fienberg, A. T.; Fioretti, A.; Gabbanini, C.; Hampai, D.; Hertzog, D. W.; Iacovacci, M.; Karuza, M.; Kaspar, J.; Marignetti, F.; Mastroianni, S.; Moricciani, D.; Pauletta, G.; Santi, L.; Venanzoni, G.

    2016-07-01

    The muon anomaly (g - 2)μ / 2 has been measured to 0.54 parts per million by the E821 experiment at Brookhaven National Laboratory, and at present there is a 3-4 standard-deviation difference between the Standard Model prediction and the experimental value. A new muon g-2 experiment, E989, is being prepared at Fermilab that will improve the experimental error by a factor of four to clarify this difference. A central component to reach this fourfold improvement in accuracy is the high-precision laser calibration system, which should monitor the gain fluctuations of the calorimeter photodetectors at 0.04% accuracy.

  7. Conceptual design of a 2 tesla superconducting solenoid for the Fermilab D0 detector upgrade

    SciTech Connect

    Brzezniak, J.; Fast, R.W.; Krempetz, K.

    1994-05-01

    This paper presents a conceptual design of a superconducting solenoid to be part of a proposed upgrade for the D0 detector. This detector was completed in 1992, and has been taking data since then. The Fermilab Tevatron had scheduled a series of luminosity enhancements prior to the startup of this detector. In response to this accelerator upgrade, efforts have been underway to design upgrades for D0 to take advantage of the new luminosity, and improvements in detector technology. This magnet is conceived as part of the new central tracking system for D0, providing a radiation-hard high-precision magnetic tracking system with excellent electron identification.

  8. The Lilongwe Central Hospital Patient Management Information System: A Success in Computer-Based Order Entry Where One Might Least Expect It

    PubMed Central

    Douglas, G.P.; Deula, R.A.; Connor, S.E.

    2003-01-01

    Computer-based order entry is a powerful tool for enhancing patient care. A pilot project in the pediatric department of the Lilongwe Central Hospital (LCH) in Malawi, Africa has demonstrated that computer-based order entry (COE): 1) can be successfully deployed and adopted in resource-poor settings, 2) can be built, deployed and sustained at relatively low cost and with local resources, and 3) has a greater potential to improve patient care in developing than in developed countries. PMID:14728338

  9. Commissioning of polarized-proton and antiproton beams at Fermilab

    SciTech Connect

    Yokosawa, A.

    1988-05-04

    The author describes the polarized-proton and polarized-antiproton beams up to 200 GeV/c at Fermilab. The beam line, called MP, consists of the 400-m long primary and 350-m long secondary beam lines followed by a 60-m long experimental hall. We discuss the characteristics of the polarized beams. The Fermilab polarization projects are designated E-581/704 and were initiated and carried out by an international collaboration: Argonne (US), Fermilab (US), Kyoto-Kyushu-Hiroshima-KEK (Japan), LAPP (France), Northwestern University (US), Los Alamos Laboratory (US), Rice (US), Saclay (France), Serpukhov (USSR), INFN Trieste (Italy), and University of Texas (US).

  10. GammaCHI: A package for the inversion and computation of the gamma and chi-square cumulative distribution functions (central and noncentral)

    NASA Astrophysics Data System (ADS)

    Gil, Amparo; Segura, Javier; Temme, Nico M.

    2015-06-01

    A Fortran 90 module GammaCHI for computing and inverting the gamma and chi-square cumulative distribution functions (central and noncentral) is presented. The main novelty of this package is the reliable and accurate inversion routines for the noncentral cumulative distribution functions. Additionally, the package also provides routines for computing the gamma function, the error function and other functions related to the gamma function. The module includes the routines cdfgamC, invcdfgamC, cdfgamNC, invcdfgamNC, errorfunction, inverfc, gamma, loggam, gamstar and quotgamm for the computation of the central gamma distribution function (and its complementary function), the inversion of the central gamma distribution function, the computation of the noncentral gamma distribution function (and its complementary function), the inversion of the noncentral gamma distribution function, the computation of the error function and its complementary function, the inversion of the complementary error function, the computation of: the gamma function, the logarithm of the gamma function, the regulated gamma function and the ratio of two gamma functions, respectively.
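
    For orientation, the calls below exercise the same distribution functions and inverses with SciPy; they mirror the functionality the GammaCHI routines provide but are not the Fortran module's own interface.

      # Central and noncentral gamma/chi-square CDFs and their inverses via
      # SciPy: equivalent functionality to GammaCHI, not its Fortran API.
      from scipy import stats
      from scipy.special import gammainc, gammaincinv, erf, erfcinv

      a, x = 3.0, 2.5
      p = gammainc(a, x)            # central (regularized) gamma CDF P(a, x)
      x_back = gammaincinv(a, p)    # inversion recovers x

      df, nc, chi2 = 4.0, 1.5, 6.0
      q = stats.ncx2.cdf(chi2, df, nc)         # noncentral chi-square CDF
      chi2_back = stats.ncx2.ppf(q, df, nc)    # and its inverse

      print(round(x_back, 6), round(chi2_back, 6))        # 2.5, 6.0
      print(round(erf(1.0), 6), round(erfcinv(0.5), 6))   # related error-function routines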

  11. Fermilab Physics Department Fastbus TDC module

    SciTech Connect

    Cancelo, G.; Hansen, S.; Cotta-Ramusino, A.

    1991-07-01

    A prototype 64 channel Fastbus TDC built at Fermilab is described. The module features a full custom CMOS four channel gated integrator chip. One level of analog buffering at the inputs is implemented on chip. A four event deep output queue at the bus interface allows a high event rate with low dead time. Each channel can record up to two hits per event. With an occupation rate of 10%, the module can operate at 40,000 events per second with dead time on the order of 15%. The TDC operates in common stop mode with a full scale of 1 {mu}sec and a resolution of 1 nsec. 5 refs., 6 figs.

  12. The LArIAT Experiment at Fermilab

    NASA Astrophysics Data System (ADS)

    Nutini, Irene; LArIAT Collaboration

    2016-02-01

    The LArIAT experiment at Fermilab is part of the International Neutrino program recently approved in the US. LArIAT aims to measure the main features of charged particle interactions in argon in the energy range (0.2 - 2.0 GeV) corresponding to the energy spectrum of the same particles when produced in a neutrino-argon interaction (neutrino energies of a few GeV) typical of the short- and long-baseline neutrino beams of the Neutrino Program. Data collected from the 1st Run are being analyzed for both physics studies and a technical characterization of the scintillation light collection system. Two analysis topics are reported: the method developed for the charged pion cross section measurement, based on the specific features of the LArTPC, and the development and test of the LArIAT custom-designed cold front-end electronics for SiPM devices to collect LAr scintillation light.

  13. Rebuild of Capture Cavity 1 at Fermilab

    SciTech Connect

    Harms, E.; Arkan, T.; Borissov, E.; Dhanaraj, N.; Hocker, A.; Orlov, Y.; Peterson, T.; Premo, K.

    2014-01-01

    The front end of the proposed Advanced Superconducting Test Accelerator at Fermilab employs two single cavity cryomodules, known as 'Capture Cavity 1' and 'Capture Cavity 2', for the first stage of acceleration. Capture Cavity 1 was previously used as the accelerating structure for the A0 Photoinjector to a peak energy of ~14 MeV. In its new location a gradient of ~25 MV/m is required. This has necessitated a major rebuild of the cryomodule including replacement of the cavity with a higher gradient one. Retrofitting the cavity and making upgrades to the module required significant redesign. The design choices and their rationale, summary of the rebuild, and early test results are presented.

  14. Some recent experimental results from Fermilab

    SciTech Connect

    Montgomery, H.E.

    1994-02-01

    The aim of this talk was to give an impression of the tremendous range and depth of the data being produced by experiments at Fermilab, both fixed target and collider. Despite the generous allotment of time, it was not possible to do more than scratch the surface of some subjects. The collider experiments, with their measurements of the W mass and their top-quark search and mass limits, are approaching the point where a statement about the Higgs mass, or a sensitive test of the consistency of the Standard Model, becomes a possibility. Subjects discussed were: (1) cross-sections and QCD measurements; (2) decay physics; (3) W/Z physics; (4) searches for new physics; and (5) the search for the top quark.

  15. Early history of the Fermilab Main Ring

    SciTech Connect

    Malamud, E.; /Fermilab

    1983-10-01

    This note is written in response to a request from Phil Livdahl for corrections and additions to a TM he is writing on Staffing Levels at Fermilab during Initial Construction Years, and to a note that Hank Hinterberger is preparing on milestones. In my spare time over the past few years I have taken the original files of the Main Ring Section, my own notes from that period, and various other collections of relevant papers, and arranged them in a set of 44 large loose-leaf binders in chronological order. I call this set of volumes the 'Main Ring Chronological Archives'. In response to Phil's request I have recently skimmed through these records of the period and extracted a small subset of documents which relate to the specific questions that Phil is addressing: staffing, administration, and milestones.

  16. Numerically controlled oscillator for the Fermilab booster

    SciTech Connect

    Crisp, J.L.; Ducar, R.J.

    1989-04-01

    In order to improve the stability of the Fermilab Booster low level rf system, a numerically controlled oscillator system is being constructed. Although the system has not been implemented to date, the design is outlined in this paper. The heart of the new system consists of a numerically synthesized frequency generator manufactured by the Sciteq Company. The 3 GHz/sec rate and 30 to 53 MHz range of the Booster frequency program required the design of a CAMAC based, fast-cycling (1 MHz), 65K x 32 bit, digital function generator. A 1 MHz digital adder and 12 bit analog to digital converter will be used to correct small program errors by phase locking the oscillator to the beam. 6 refs., 1 fig.
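
    The core of any numerically controlled oscillator is a phase accumulator: a tuning word is added to an N-bit register on every clock tick, and the top bits of the accumulator address a sine lookup table. The sketch below illustrates that principle; the register width, clock frequency, and table size are assumptions for the example, not parameters of the Sciteq hardware.

      # Minimal phase-accumulator NCO: output frequency = tuning_word * f_clk / 2^N.
      # Register width, clock, and lookup-table size are illustrative assumptions.
      import math

      N_BITS   = 32
      F_CLK    = 120e6    # accumulator clock [Hz] (assumed)
      LUT_BITS = 10
      SINE_LUT = [math.sin(2 * math.pi * i / 2**LUT_BITS) for i in range(2**LUT_BITS)]

      def tuning_word(f_out: float) -> int:
          return round(f_out * 2**N_BITS / F_CLK)

      def nco_samples(f_out: float, n: int):
          word, acc = tuning_word(f_out), 0
          for _ in range(n):
              acc = (acc + word) & (2**N_BITS - 1)        # modulo-2^N phase accumulation
              yield SINE_LUT[acc >> (N_BITS - LUT_BITS)]  # top bits index the sine table

      samples = list(nco_samples(40e6, 6))   # a frequency inside the 30-53 MHz range
      print([round(s, 3) for s in samples])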

  17. Bob Wilson and The Birth of Fermilab

    ScienceCinema

    Edwin L. Goldwasser

    2010-01-08

    In the 1960s the Lawrence Berkeley Laboratory (then The Lawrence Radiation Laboratory) submitted two proposals to build the next high energy physics research laboratory. The first included a 200 GeV accelerator and associated experimental facilities. The cost was $350 million. The Bureau of the Budget rejected that proposal as a 'budget buster'. It ruled that $250 million was the maximum that could be accepted. The second proposal was for a reduced scope laboratory that met the Bureau of the Budget's cost limitation, but it was for a lower energy accelerator and somewhat smaller and fewer experimental facilities. The powerful Congressional Joint Committee on Atomic Energy rejected the reduced scope proposal as inadequate to provide physics results of sufficient interest to justify the cost. It was then that Bob Wilson came forth with a third proposal, coping with that 'Catch 22' and leading to the creation of Fermilab. How he did it will be the subject of this colloquium.

  18. The Fermilab Holometer: Probing the Planck Scale

    NASA Astrophysics Data System (ADS)

    Kamai, Brittany; Chou, A.; Evans, M.; Glass, H.; Gustafson, R.; Hogan, C. J.; Lanza, R.; McCuller, L.; Meyer, S.; Richardson, J.; Sippel, A.; Steffen, J.; Stoughton, C.; Tomlin, R.; Volk, J.; Waldman, S.; Weiss, R.; Wester, W.; Holometer, Fermilab

    2013-01-01

    Experimentally probing the Planck scale can offer insights into understanding a quantum origin of spacetime. The Fermilab Holometer team will look for a new noise source arising from the Planck scale by using the precision of power-recycled Michelson interferometers. The two nested 40-meter interferometers may exhibit a characteristic power spectral density arising from the conjectured frequency-independent Planckian noise. By cross-correlating the dark-port signals of the two nearby interferometers, we can rule out conventional noise sources that are not common to both devices. A common source of noise could be the underlying spacetime itself. A positive result will lead to insights into theories of an emergent quantum spacetime. The Holometer team has finished construction and begun scientific commissioning. First results of the experiment are expected in Spring 2015.
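
    The cross-correlation idea described above can be illustrated with a short, hedged sketch: estimate the cross-spectral density of two simultaneously sampled interferometer channels so that noise uncorrelated between the instruments averages away while any common component survives. The sampling rate, segment length, and synthetic data are assumptions for the example, not Holometer parameters.

      import numpy as np
      from scipy.signal import csd

      FS = 50e6            # assumed digitizer sampling rate, Hz
      N = 2**20            # number of samples per channel in this toy example

      rng = np.random.default_rng(0)
      common = rng.normal(size=N)                  # correlated component shared by both channels
      ifo_a = 0.1 * common + rng.normal(size=N)    # channel A: common part plus independent noise
      ifo_b = 0.1 * common + rng.normal(size=N)    # channel B: common part plus independent noise

      # Welch-averaged cross-spectral density; the independent noise terms average
      # toward zero as more segments are accumulated, leaving the common component.
      freqs, s_ab = csd(ifo_a, ifo_b, fs=FS, nperseg=2**14)
      print(freqs[:3], np.abs(s_ab[:3]))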

  19. Full Discharges in Fermilab's Electron Cooler

    SciTech Connect

    Prost, L. R.; Shemyakin, A.

    2006-03-20

    Fermilab's 4.3 MeV electron cooler is based on an electrostatic accelerator, which generates a DC electron beam in an energy recovery mode. Effective cooling of the antiprotons in the Recycler requires that the beam remains stable for hours. While short beam interruptions do not deteriorate the performance of the Recycler ring, the beam may provoke full discharges in the accelerator, which significantly affect the duty factor of the machine as well as the reliability of various components. Although cooling of 8 GeV antiprotons has been successfully achieved, full discharges still occur in the current setup. The paper describes factors leading to full discharges and ways to prevent them.

  20. Full discharges in Fermilab's electron cooler

    SciTech Connect

    Prost, L.R.; Shemyakin, A.; /Fermilab

    2005-09-01

    Fermilab's 4.3 MeV electron cooler is based on an electrostatic accelerator, which generates a DC electron beam in an energy recovery mode. Effective cooling of the antiprotons in the Recycler requires that the beam remains stable for hours. While short beam interruptions do not deteriorate the performance of the Recycler ring, the beam may provoke full discharges in the accelerator, which significantly affect the duty factor of the machine as well as the reliability of various components. Although cooling of 8 GeV antiprotons has been successfully achieved, full discharges still occur in the current setup. The paper describes factors leading to full discharges and ways to prevent them.

  1. Serial cranial computed-tomography scans in children with leukemia given two different forms of central nervous system therapy

    SciTech Connect

    Ochs, J.J.; Parvey, L.S.; Whitaker, J.N.; Bowman, W.P.; Ch'ien, L.; Campbell, M.; Coburn, T.

    1983-12-01

    Cranial computed tomography (CT) was used to estimate the frequency and permanence of brain abnormalities in 108 consecutive children with acute lymphoblastic leukemia (ALL). Fifty-five patients received cranial irradiation (1,800 rad) with intrathecal methotrexate (RT group) and 53 patients received intravenous and intrathecal methotrexate without irradiation (IVIT group). Continuation treatment included sequential drug pairs for the RT group and periodic IVIT methotrexate for the other group. After 12 to 24 months of serial evaluation, five (9%) of the 55 patients in the RT group have had CT scan abnormalities, compared to 10 (19%) of 52 in the IVIT group (p = 0.171). Fourteen of the 15 patients with CT scan abnormalities had focal or diffuse white-matter hypodensity; these have reverted to normal in most cases, reflecting a dynamic process. While such CT findings are of concern and may be an early indicator of central nervous system toxicity, this remains to be proven. Therapy should not be altered on the basis of abnormal CT scans alone but in the context of the entire clinical situation.

  2. SUNBURN: A computer code for evaluating the economic viability of hybrid solar central receiver electric power plants

    SciTech Connect

    Chiang, C.J.

    1987-06-01

    The computer program SUNBURN simulates the annual performance of solar-only, solar-hybrid, and fuel-only electric power plants. SUNBURN calculates the levelized value of electricity generated by, and the levelized cost of, these plants. Central receiver solar technology is represented, with molten salt as the receiver coolant and thermal storage medium. For each hour of a year, the thermal energy use, or dispatch, strategy of SUNBURN maximizes the value of electricity by operating the turbine when the demand for electricity is greatest and by minimizing overflow of thermal storage. Fuel is burned to augment solar energy if the value of electricity generated by using fuel is greater than the cost of the fuel consumed. SUNBURN was used to determine the optimal power plant configuration, based on value-to-cost ratio, for dates of initial plant operation from 1990 to 1998. The turbine size for all plants was 80 MWe net. Before 1994, fuel-only was found to be the preferred plant configuration. After 1994, a solar-only plant was found to have the greatest value-to-cost ratio. A hybrid configuration was never found to be better than both fuel-only and solar-only configurations. The value of electricity was calculated as the Southern California Edison Company's avoided generation costs of electricity. These costs vary with time of day. Utility ownership of the power plants was assumed. The simulation was performed using weather data recorded in Barstow, California, in 1984.
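
    A minimal sketch of the hourly dispatch decision described above is given below, under stated assumptions: the storage model, turbine size, efficiency, and threshold logic are illustrative placeholders, not SUNBURN's actual formulation. It shows the two rules the abstract names: run the turbine when the electricity has value (limiting storage overflow), and burn fuel only when the value of the resulting electricity exceeds the cost of the fuel.

      STORAGE_CAP_MWH_T = 400.0   # assumed thermal storage capacity, MWh (thermal)
      TURBINE_IN_MWH_T = 240.0    # assumed thermal input for one hour at full turbine load
      EFFICIENCY = 0.40           # assumed thermal-to-electric conversion efficiency

      def dispatch_hour(storage, solar_mwh_t, value_per_mwh_e, fuel_cost_per_mwh_t):
          """Return (new_storage, electricity_mwh_e, fuel_mwh_t) for one hour."""
          # Add the hour's collected solar thermal energy; anything beyond the
          # storage capacity overflows and is lost.
          storage = min(storage + solar_mwh_t, STORAGE_CAP_MWH_T)

          # Run the turbine from storage whenever the electricity has positive value
          # (a fuller model would rank hours by value and by overflow risk).
          thermal_to_turbine = 0.0
          if value_per_mwh_e > 0.0:
              thermal_to_turbine = min(storage, TURBINE_IN_MWH_T)
              storage -= thermal_to_turbine

          # Burn fuel to fill the remaining turbine capacity only when the
          # electricity it yields is worth more than the fuel consumed.
          fuel_mwh_t = 0.0
          shortfall = TURBINE_IN_MWH_T - thermal_to_turbine
          if shortfall > 0.0 and value_per_mwh_e * EFFICIENCY > fuel_cost_per_mwh_t:
              fuel_mwh_t = shortfall
              thermal_to_turbine += fuel_mwh_t

          electricity_mwh_e = thermal_to_turbine * EFFICIENCY
          return storage, electricity_mwh_e, fuel_mwh_t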

  3. Focal axonal swellings and associated ultrastructural changes attenuate conduction velocity in central nervous system axons: a computer modeling study.

    PubMed

    Kolaric, Katarina V; Thomson, Gemma; Edgar, Julia M; Brown, Angus M

    2013-08-01

    The constancy of action potential conduction in the central nervous system (CNS) relies on uniform axon diameter coupled with fidelity of the overlying myelin providing high-resistance, low-capacitance insulation. Whereas the effects of demyelination on conduction have been extensively studied/modeled, equivalent studies on the repercussions for conduction of axon swelling, a common early pathological feature of (potentially reversible) axonal injury, are lacking. The recent description of experimentally acquired morphological and electrical properties of small CNS axons and oligodendrocytes prompted us to incorporate these data into a computer model, with the aim of simulating the effects of focal axon swelling on action potential conduction. A single swelling on an otherwise intact axon, as occurs in optic nerve axons of Cnp1 null mice, caused a small decrease in conduction velocity. The presence of single swellings on multiple contiguous internodal regions (INR), as likely occurs in advanced disease, caused qualitatively similar results, except the dimensions of the swellings required to produce equivalent attenuation of conduction were significantly decreased. Our simulations of the consequences of metabolic insult to axons, namely, the appearance of multiple swollen regions, accompanied by perturbation of overlying myelin and increased axolemmal permeability, contained within a single INR, revealed that conduction block occurred when the dimensions of the simulated swellings were within the limits of those measured experimentally, suggesting that multiple swellings on a single axon could contribute to axonal dysfunction, and that increased axolemmal permeability is the decisive factor that promotes conduction block. PMID:24303138

  4. The influence of central neuropathic pain in paraplegic patients on performance of a motor imagery based Brain Computer Interface

    PubMed Central

    Vuckovic, A.; Hasan, M.A.; Osuagwu, B.; Fraser, M.; Allan, D.B.; Conway, B.A.; Nasseroleslami, B.

    2015-01-01

    Objective The aim of this study was to test how the presence of central neuropathic pain (CNP) influences the performance of a motor imagery based Brain Computer Interface (BCI). Methods In this electroencephalography (EEG) based study, we tested BCI classification accuracy and analysed event related desynchronisation (ERD) in 3 groups of volunteers during imagined movements of their arms and legs. The groups comprised nine able-bodied people, ten paraplegic patients with CNP (lower abdomen and legs) and nine paraplegic patients without CNP. We tested two types of classifiers: a 3-channel bipolar montage and classifiers based on common spatial patterns (CSPs), with varying numbers of channels and CSPs. Results Paraplegic patients with CNP achieved higher classification accuracy and had stronger ERD than paraplegic patients with no pain for all classifier configurations. The highest 2-class classification accuracy was achieved for the CSP classifier covering a wider cortical area: 82 ± 7% for patients with CNP, 82 ± 4% for able-bodied and 78 ± 5% for patients with no pain. Conclusion The presence of CNP improves BCI classification accuracy due to stronger and more distinct ERD. Significance Results of the study show that CNP is an important confounding factor influencing the performance of ERD-based motor imagery BCI. PMID:25698307
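
    For readers unfamiliar with the CSP classifiers mentioned above, the sketch below outlines the standard common spatial patterns recipe: spatial filters are generalized eigenvectors of the two class covariance matrices, and the log-variance of the filtered EEG feeds a linear classifier. The data shapes, number of filters, and synthetic data are assumptions; this is not the authors' pipeline.

      import numpy as np
      from scipy.linalg import eigh
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def class_covariance(trials):
          """Average normalized spatial covariance over trials shaped (n_trials, n_channels, n_samples)."""
          covs = []
          for x in trials:
              c = x @ x.T
              covs.append(c / np.trace(c))
          return np.mean(covs, axis=0)

      def csp_filters(trials_a, trials_b, n_filters=6):
          """Return n_filters spatial filters (rows) that separate the two classes."""
          ca, cb = class_covariance(trials_a), class_covariance(trials_b)
          # Generalized eigenvalue problem: ca w = lambda (ca + cb) w.
          eigvals, eigvecs = eigh(ca, ca + cb)
          order = np.argsort(eigvals)
          # Filters from both ends of the spectrum are the most discriminative.
          picks = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
          return eigvecs[:, picks].T

      def log_variance_features(trials, filters):
          """Log of normalized variance of each spatially filtered trial."""
          filtered = np.einsum("fc,tcs->tfs", filters, trials)
          var = filtered.var(axis=2)
          return np.log(var / var.sum(axis=1, keepdims=True))

      # Usage with synthetic data: 40 trials per class, 16 channels, 512 samples.
      rng = np.random.default_rng(1)
      class_a = rng.normal(size=(40, 16, 512))
      class_b = rng.normal(size=(40, 16, 512))
      W = csp_filters(class_a, class_b)
      X = np.vstack([log_variance_features(class_a, W), log_variance_features(class_b, W)])
      y = np.r_[np.zeros(40), np.ones(40)]
      clf = LinearDiscriminantAnalysis().fit(X, y)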

  5. Focal axonal swellings and associated ultrastructural changes attenuate conduction velocity in central nervous system axons: a computer modeling study

    PubMed Central

    Kolaric, Katarina V; Thomson, Gemma; Edgar, Julia M; Brown, Angus M

    2013-01-01

    The constancy of action potential conduction in the central nervous system (CNS) relies on uniform axon diameter coupled with fidelity of the overlying myelin providing high-resistance, low-capacitance insulation. Whereas the effects of demyelination on conduction have been extensively studied/modeled, equivalent studies on the repercussions for conduction of axon swelling, a common early pathological feature of (potentially reversible) axonal injury, are lacking. The recent description of experimentally acquired morphological and electrical properties of small CNS axons and oligodendrocytes prompted us to incorporate these data into a computer model, with the aim of simulating the effects of focal axon swelling on action potential conduction. A single swelling on an otherwise intact axon, as occurs in optic nerve axons of Cnp1 null mice, caused a small decrease in conduction velocity. The presence of single swellings on multiple contiguous internodal regions (INR), as likely occurs in advanced disease, caused qualitatively similar results, except the dimensions of the swellings required to produce equivalent attenuation of conduction were significantly decreased. Our simulations of the consequences of metabolic insult to axons, namely, the appearance of multiple swollen regions, accompanied by perturbation of overlying myelin and increased axolemmal permeability, contained within a single INR, revealed that conduction block occurred when the dimensions of the simulated swellings were within the limits of those measured experimentally, suggesting that multiple swellings on a single axon could contribute to axonal dysfunction, and that increased axolemmal permeability is the decisive factor that promotes conduction block. PMID:24303138

  6. Computer-Enriched Instruction (CEI) Is Better for Preview Material Instead of Review Material: An Example of a Biostatistics Chapter, the Central Limit Theorem

    ERIC Educational Resources Information Center

    See, Lai-Chu; Huang, Yu-Hsun; Chang, Yi-Hu; Chiu, Yeo-Ju; Chen, Yi-Fen; Napper, Vicki S.

    2010-01-01

    This study examines the timing of computer-enriched instruction (CEI), before or after a traditional lecture, to determine the cross-over effect, period effect, and learning effect arising from the sequencing of instruction. A 2 x 2 cross-over design was used with CEI to teach the central limit theorem (CLT). Two sequences of graduate students in nursing…
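
    Since the chapter in question teaches the central limit theorem, a short simulation illustrating the theorem itself is sketched below: sample means of a skewed (exponential) distribution approach a normal distribution whose standard deviation shrinks as 1/sqrt(n). The distribution, sample sizes, and replication count are arbitrary choices for the demonstration and are not taken from the study's materials.

      import numpy as np

      rng = np.random.default_rng(42)

      for n in (2, 10, 50):
          # 10,000 replications of the mean of n draws from an exponential
          # distribution with mean 1 and standard deviation 1.
          means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
          # CLT prediction: mean of the sample means ~ 1, standard deviation ~ 1/sqrt(n).
          print(f"n={n:3d}  mean={means.mean():.3f}  sd={means.std():.3f}  "
                f"predicted sd={1 / np.sqrt(n):.3f}")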

  7. Hyperon polarization, crystal channeling, and E781 at Fermilab

    SciTech Connect

    Lach, J.

    1994-01-01

    Early experiments at Fermilab observed significant polarization of inclusively produced hyperons. These and subsequent experiments showed that Λ⁰ were produced polarized while Λ̄⁰ had no polarization in the same kinematical region. Other hyperons and antihyperons were also seen to be polarized. Recent Fermilab experiments have shown this to be a rich and complex phenomenon. Theoretical understanding is still lacking. Fermilab E761 has shown that bent single crystals can be used to precess the polarization of hyperons and, from the precession angle, measure the hyperon's magnetic moment. This opens the possibility of measuring the magnetic moments of charmed baryons. Finally, I will briefly discuss Fermilab E781, an experiment designed to study charmed particle production by Σ⁻ hyperons.
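
    As a hedged aside on the channeling technique mentioned above: in the relativistic limit, a particle channeled through a bent crystal has its spin rotated relative to its momentum by approximately phi = gamma * (g - 2)/2 * theta_bend, so a measured precession angle yields the g-factor and hence the magnetic moment. The sketch below applies that textbook relation with placeholder numbers; it is not the E761 analysis, and signs and small corrections are ignored.

      import math

      M_PROTON_GEV = 0.938272      # proton mass in GeV/c^2, for nuclear magnetons

      def g_factor_from_precession(phi_rad, theta_bend_rad, gamma):
          """Invert phi = gamma * (g - 2)/2 * theta_bend for the g-factor."""
          return 2.0 + 2.0 * phi_rad / (gamma * theta_bend_rad)

      def moment_in_nuclear_magnetons(g, mass_gev, charge_sign=+1):
          """mu/mu_N = (g/2) * (q/e) * (m_p/m) for a spin-1/2 baryon."""
          return 0.5 * g * charge_sign * (M_PROTON_GEV / mass_gev)

      # Illustrative numbers only: a 1.19 GeV/c^2 hyperon at 375 GeV/c momentum,
      # a 1.6 mrad crystal bend, and a hypothetical 1.0 rad measured precession.
      mass_gev, p_gev = 1.19, 375.0
      gamma = math.sqrt(p_gev**2 + mass_gev**2) / mass_gev
      g = g_factor_from_precession(1.0, 1.6e-3, gamma)
      print(g, moment_in_nuclear_magnetons(g, mass_gev))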

  8. The performance of the Tevatron collider at Fermilab

    SciTech Connect

    Gelfand, N.M.

    1991-10-01

    This paper will describe the actual operating performance of the Tevatron, operating as a collider, and will indicate the planned upgrades which will enhance the physics results coming from the experiments being performed at Fermilab.

  9. A Radiation shielding study for the Fermilab Linac

    SciTech Connect

    Rakhno, I.; Johnstone, C.; /Fermilab

    2006-02-01

    Radiation shielding calculations are performed for the Fermilab Linac enclosure and gallery. The predicted dose rates around the access labyrinth at normal operation and a comparison to measured dose rates are presented. An accident scenario is considered as well.

  10. Fermilab Recycler Ring BPM Upgrade Based on Digital Receiver Technology

    SciTech Connect

    Webber, R.; Crisp, J.; Prieto, P.; Voy, D.; Briegel, C.; McClure, C.; West, R.; Pordes, S.; Mengel, M.

    2004-11-10

    Electronics for the 237 BPMs in the Fermilab Recycler Ring have been upgraded from a log-amplifier based system to a commercially produced digitizer-digital down converter based system. The hardware consists of a pre-amplifier connected to a split-plate BPM, an analog differential receiver-filter module and an 8-channel 80-MHz digital down converter VME board. The system produces position and intensity with a dynamic range of 30 dB and a resolution of ±10 microns. The position measurements are made on 2.5-MHz bunched beam and on barrier buckets of the unbunched beam. The digital receiver system operates in one of six different signal processing modes that include 2.5-MHz average, 2.5-MHz bunch-by-bunch, 2.5-MHz narrow band, unbunched average, unbunched head/tail and 89-kHz narrow band. Receiver data are acquired on any of up to sixteen clock events related to Recycler beam transfers and other machine activities. Data from the digital receiver board are transferred to the front-end CPU for position and intensity computation on an on-demand basis through the VME bus. Data buffers are maintained for each of the acquisition events and support flash, closed-orbit and turn-by-turn measurements. A calibration system provides evaluation of the BPM signal path and application programs.
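
    As a hedged sketch of how such a receiver turns split-plate signals into a position, the code below mixes each plate's digitized waveform down at the bunch frequency, takes the resulting amplitudes, and applies the standard difference-over-sum estimate. The sampling rate, the down-conversion filter (a simple full-record average), and the sensitivity scale factor are assumptions, not the Recycler system's calibration.

      import numpy as np

      FS = 80e6          # assumed digitizer sampling rate, Hz
      F_BEAM = 2.5e6     # bunch-frequency component selected by the receiver, Hz
      K_SENS = 26.0      # assumed sensitivity, mm per unit of (A - B)/(A + B)

      def downconvert_amplitude(samples, fs=FS, f0=F_BEAM):
          """Amplitude of the f0 component via complex mixing and averaging."""
          n = np.arange(len(samples))
          lo = np.exp(-2j * np.pi * f0 * n / fs)       # local oscillator
          return 2.0 * np.abs(np.mean(samples * lo))   # crude low-pass: full-record mean

      def bpm_position(plate_a_samples, plate_b_samples):
          """Difference-over-sum position estimate (mm) and an intensity proxy."""
          a = downconvert_amplitude(plate_a_samples)
          b = downconvert_amplitude(plate_b_samples)
          return K_SENS * (a - b) / (a + b), a + b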