NASA Astrophysics Data System (ADS)
Palmer, R. B.; Gallardo, J. C.
INTRODUCTION; PHYSICS CONSIDERATIONS: GENERAL, REQUIRED LUMINOSITY FOR LEPTON COLLIDERS, THE EFFECTIVE PHYSICS ENERGIES OF HADRON COLLIDERS; HADRON-HADRON MACHINES: LUMINOSITY, SIZE AND COST; CIRCULAR e^{+}e^{-} MACHINES: LUMINOSITY, SIZE AND COST; e^{+}e^{-} LINEAR COLLIDERS: LUMINOSITY, CONVENTIONAL RF, SUPERCONDUCTING RF, AT HIGHER ENERGIES; γ-γ COLLIDERS; μ^{+}μ^{-} COLLIDERS: ADVANTAGES AND DISADVANTAGES, DESIGN STUDIES, STATUS AND REQUIRED R AND D; COMPARISON OF MACHINES; CONCLUSIONS; DISCUSSION
Induced activation studies for the LHC upgrade to High Luminosity LHC
NASA Astrophysics Data System (ADS)
Adorisio, C.; Roesler, S.
2018-06-01
The Large Hadron Collider (LHC) will be upgraded in 2019/2020 to increase its luminosity (rate of collisions) by a factor of five beyond its design value and the integrated luminosity by a factor of ten, in order to maintain scientific progress and exploit its full capacity. The novel machine configuration, called High Luminosity LHC (HL-LHC), will consequently increase the level of activation of its components. The evaluation of the radiological impact of the HL-LHC operation in the Long Straight Sections of the Insertion Region 1 (ATLAS) and Insertion Region 5 (CMS) is presented. Using the Monte Carlo code FLUKA, ambient dose equivalent rate estimations have been performed on the basis of two announced operating scenarios and using the latest available machine layout. The HL-LHC project requires new technical infrastructure with caverns and 300 m long tunnels along the Insertion Regions 1 and 5. The new underground service galleries will be accessible during the operation of the accelerator machine. The radiological risk assessment for the Civil Engineering work, foreseen to start excavating the new galleries in the next LHC Long Shutdown, and the radiological impact of the machine operation will be discussed.
Machine Protection with a 700 MJ Beam
NASA Astrophysics Data System (ADS)
Baer, T.; Schmidt, R.; Wenninger, J.; Wollmann, D.; Zerlauth, M.
After the high luminosity upgrade of the LHC, the stored energy per proton beam will increase by a factor of two as compared to the nominal LHC. Therefore, many damage studies need to be revisited to ensure safe machine operation with the new beam parameters. Furthermore, new accelerator equipment like crab cavities might cause new failure modes, which are not sufficiently covered by the current machine protection system of the LHC. These failure modes have to be carefully studied and mitigated by new protection systems. Finally, the ambitious goals for integrated luminosity delivered to the experiments during the era of the HL-LHC require an increase of the machine availability without jeopardizing equipment protection.
Crabbing System for an Electron-Ion Collider
NASA Astrophysics Data System (ADS)
Castilla, Alejandro
As high energy and nuclear physicists continue to push further the boundaries of knowledge using colliders, there is an imperative need, not only to increase the colliding beams' energies, but also to improve the accuracy of the experiments, and to collect a large quantity of events with good statistical sensitivity. To achieve the latter, it is necessary to collect more data by increasing the rate at which these processes are being produced and detected in the machine. This rate of events depends directly on the machine's luminosity. The luminosity itself is proportional to the frequency at which the beams are being delivered and the number of particles in each beam, and inversely proportional to the cross-sectional size of the colliding beams. There are several approaches that can be considered to increase the event statistics in a collider other than increasing the luminosity, such as running the experiments for a longer time. However, this also elevates the operation expenses, while increasing the frequency at which the beams are delivered implies strong physical changes along the accelerator and the detectors. Therefore, it is preferred to increase the beam intensities and reduce the beams' cross-sectional areas to achieve these higher luminosities. In the case where the goal is to push the limits, sometimes even beyond the machine's design parameters, one must develop a detailed High Luminosity Scheme. Any high luminosity scheme on a modern collider considers, in one of its versions, the use of crab cavities to correct the geometrical reduction of the luminosity due to the beams' crossing angle. In this dissertation, we present the design and testing of a proof-of-principle compact superconducting crab cavity, at 750 MHz, for the future electron-ion collider, currently under design at Jefferson Lab. In addition to the design and validation of the cavity prototype, we present the analysis of the first-order beam dynamics and the integration of the crabbing systems into the interaction region. Following this, we propose the concept of twin crabs to allow machines with variable beam transverse coupling in the interaction region to have full crabbing in only the desired plane. Finally, we present recommendations to extend this work to other frequencies.
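For reference, the dependence described above is commonly summarized, for head-on collisions of Gaussian bunches, by the standard expression (notation assumed here rather than taken from the dissertation):

\[ \mathcal{L} = \frac{f\, n_b\, N_1 N_2}{4\pi\, \sigma_x^{*} \sigma_y^{*}}, \]

where f is the revolution (or repetition) frequency, n_b the number of colliding bunch pairs, N_1 and N_2 the bunch populations, and σ*_x, σ*_y the rms transverse beam sizes at the interaction point; a crossing angle multiplies this by a geometric reduction factor smaller than one, which is precisely what crab cavities are intended to recover.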
CP Violation and the Future of Flavor Physics
NASA Astrophysics Data System (ADS)
Kiesling, Christian
2009-12-01
With the nearing completion of the first-generation experiments at asymmetric e+e- colliders running at the Υ(4S) resonance ("B-Factories"), a new era of high luminosity machines is on the horizon. We report here on the plans at KEK in Japan to upgrade the KEKB machine ("SuperKEKB") with the goal of achieving an instantaneous luminosity exceeding 8×10^35 cm^-2 s^-1, which is almost two orders of magnitude higher than KEKB. Together with the machine, the Belle detector will be upgraded as well ("Belle-II"), with significant improvements to increase its background tolerance as well as to improve its physics performance. The new generation of experiments is scheduled to take first data in the year 2013.
Crabbing system for an electron-ion collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castilla, Alejandro
2017-05-01
As high energy and nuclear physicists continue to push further the boundaries of knowledge using colliders, there is an imperative need, not only to increase the colliding beams' energies, but also to improve the accuracy of the experiments, and to collect a large quantity of events with good statistical sensitivity. To achieve the latter, it is necessary to collect more data by increasing the rate at which these processes are being produced and detected in the machine. This rate of events depends directly on the machine's luminosity. The luminosity itself is proportional to the frequency at which the beams are being delivered and the number of particles in each beam, and inversely proportional to the cross-sectional size of the colliding beams. There are several approaches that can be considered to increase the event statistics in a collider other than increasing the luminosity, such as running the experiments for a longer time. However, this also elevates the operation expenses, while increasing the frequency at which the beams are delivered implies strong physical changes along the accelerator and the detectors. Therefore, it is preferred to increase the beam intensities and reduce the beams' cross-sectional areas to achieve these higher luminosities. In the case where the goal is to push the limits, sometimes even beyond the machine's design parameters, one must develop a detailed High Luminosity Scheme. Any high luminosity scheme on a modern collider considers, in one of its versions, the use of crab cavities to correct the geometrical reduction of the luminosity due to the beams' crossing angle. In this dissertation, we present the design and testing of a proof-of-principle compact superconducting crab cavity, at 750 MHz, for the future electron-ion collider, currently under design at Jefferson Lab. In addition to the design and validation of the cavity prototype, we present the analysis of the first-order beam dynamics and the integration of the crabbing systems into the interaction region. Following this, we propose the concept of twin crabs to allow machines with variable beam transverse coupling in the interaction region to have full crabbing in only the desired plane. Finally, we present recommendations to extend this work to other frequencies.
Introduction to the HL-LHC Project
NASA Astrophysics Data System (ADS)
Rossi, L.; Brüning, O.
The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. It has been exploring the new energy frontier since 2010, gathering a global user community of 7,000 scientists. To extend its discovery potential, the LHC will need a major upgrade in the 2020s to increase its luminosity (rate of collisions) by a factor of five beyond its design value and the integrated luminosity by a factor of ten. As a highly complex and optimized machine, such an upgrade of the LHC must be carefully studied and requires about ten years to implement. The novel machine configuration, called High Luminosity LHC (HL-LHC), will rely on a number of key innovative technologies, representing exceptional technological challenges, such as cutting-edge 11-12 tesla superconducting magnets, very compact superconducting cavities for beam rotation with ultra-precise phase control, new technology for beam collimation and 300-meter-long high-power superconducting links with negligible energy dissipation. HL-LHC federates the efforts and R&D of a large community in Europe, in the US and in Japan, which will facilitate the implementation of the construction phase as a global project.
IBS and Potential Luminosity Improvement for RHIC Operation Below Transition Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedotov,A.
There is a strong interest in low-energy RHIC operations in the single-beam total energy range of 2.5-25 GeV/nucleon [1-3]. Collisions in this energy range, much of which is below the nominal RHIC injection energy, will help to answer one of the key questions in the field of QCD about the existence and location of a critical point on the QCD phase diagram [4]. There have been several short test runs during 2006-2008 RHIC operations to evaluate RHIC operational challenges at these low energies [5]. Beam lifetimes observed during the test runs were limited by machine nonlinearities. This performance limit can be improved with sufficient machine tuning. The next luminosity limitation comes from transverse and longitudinal Intra-beam Scattering (IBS), and ultimately from the space-charge limit. A detailed discussion of limiting beam dynamics effects and possible luminosity improvement with electron cooling can be found in Refs. [6-8]. For low-energy RHIC operation, particle losses from the RF bucket are of particular concern since the longitudinal beam size is comparable to the existing RF bucket at low energies. However, operation below transition energy allows us to exploit an Intra-beam Scattering (IBS) feature that drives the transverse and longitudinal beam temperatures towards equilibrium, so that the longitudinal diffusion rate can be minimized by using a high RF voltage. Simulation studies were performed with the goal of understanding whether one can use this feature of IBS to improve the luminosity of the RHIC collider at low energies. This Note presents results of simulations which show that additional luminosity improvement for the low-energy RHIC project may be possible with high RF voltage from a 56 MHz superconducting RF cavity that is presently under development for RHIC.
LHC Status and Upgrade Challenges
NASA Astrophysics Data System (ADS)
Smith, Jeffrey
2009-11-01
The Large Hadron Collider has had a trying start-up and a challenging operational future lies ahead. Critical to the machine's performance is controlling a beam of particles whose stored energy is equivalent to 80 kg of TNT. Unavoidable beam losses result in energy deposition throughout the machine and, without adequate protection, this power would result in quenching of the superconducting magnets. A brief overview of the machine layout and principles of operation will be given, including a summary of the September 2008 accident. The current status of the LHC, the startup schedule and upgrade options to achieve the target luminosity will be presented.
Future hadron colliders: From physics perspectives to technology R&D
NASA Astrophysics Data System (ADS)
Barletta, William; Battaglia, Marco; Klute, Markus; Mangano, Michelangelo; Prestemon, Soren; Rossi, Lucio; Skands, Peter
2014-11-01
High energy hadron colliders have been instrumental to discoveries in particle physics at the energy frontier and their role as discovery machines will remain unchallenged for the foreseeable future. The full exploitation of the LHC is now the highest priority of the energy frontier collider program. This includes the high luminosity LHC project, which is made possible by a successful technology-readiness program for Nb3Sn superconductor and magnet engineering based on long-term high-field magnet R&D programs. These programs open the path towards collisions with a luminosity of 5×10^34 cm^-2 s^-1 and represent the foundation for considering future proton colliders of higher energies. This paper discusses physics requirements, experimental conditions, technological aspects and design challenges for the development towards proton colliders of increasing energy and luminosity.
Support Structure Design of the Nb3Sn Quadrupole for the High Luminosity LHC
Juchno, M.; Ambrosio, G.; Anerella, M.; ...
2014-10-31
New low-β quadrupole magnets are being developed within the scope of the High Luminosity LHC (HL-LHC) project in collaboration with the US LARP program. The aim of the HL-LHC project is to study and implement machine upgrades necessary for increasing the luminosity of the LHC. The new quadrupoles, which are based on the Nb₃Sn superconducting technology, will be installed in the LHC Interaction Regions and will have to generate a gradient of 140 T/m in a coil aperture of 150 mm. In this paper, we describe the design of the short model magnet support structure and discuss results of the detailed 3D numerical analysis performed in preparation for the first short model test.
Frequency domain analysis of knock images
NASA Astrophysics Data System (ADS)
Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin
2014-12-01
High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark-triggered flame speed, the time when end-gas auto-ignition occurs and the end-gas flame speed after auto-ignition. This study presents a frequency domain analysis of the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying a fast Fourier transform (FFT) to the three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of the luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
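As an illustration of the kind of processing described above (a hedged sketch, not the authors' code; the frame rate, synthetic signal and filter length are invented for demonstration), the per-frame luminosity of one colour channel can be high-pass filtered and Fourier transformed to locate the resonant frequency:

```python
# Illustrative sketch: frequency-domain analysis of a synthetic "luminosity"
# signal from one colour channel of high-speed images (assumed parameters).
import numpy as np

fps = 100_000.0                      # assumed high-speed camera frame rate [Hz]
n_frames = 2048
t = np.arange(n_frames) / fps

# Synthetic per-frame mean intensity of the green channel: slow combustion
# brightening plus a 6 kHz oscillation mimicking a pressure-wave resonance.
green_mean = 100.0 + 50.0 * np.arange(n_frames) / n_frames \
             + 2.0 * np.sin(2 * np.pi * 6000.0 * t)

# High-pass filter by subtracting a moving average (removes the slow trend).
kernel = np.ones(64) / 64.0
trend = np.convolve(green_mean, kernel, mode="same")
oscillation = green_mean - trend

# FFT of the high-pass-filtered luminosity; the peak marks the resonant mode.
spectrum = np.abs(np.fft.rfft(oscillation * np.hanning(n_frames)))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
print("dominant oscillation frequency ~ %.0f Hz" % freqs[np.argmax(spectrum[1:]) + 1])
```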
CEPC-SPPC accelerator status towards CDR
NASA Astrophysics Data System (ADS)
Gao, J.
2017-12-01
In this paper we give an introduction to the Circular Electron Positron Collider (CEPC). The scientific background, physics goals, collider design requirements and conceptual design principles of the CEPC are described. On the CEPC accelerator, the optimization of parameter designs for different energies, machine lengths and layout options (single ring, crab-waist-collision partial double ring, advanced partial double ring, and fully partial double ring) has been discussed systematically and compared. The CEPC accelerator baseline and alternative designs have been proposed based on the luminosity potential in relation to the design goals. The CEPC sub-systems, such as the collider main ring, booster and electron-positron injector, have also been introduced. The detector and the Machine-Detector Interface (MDI) design are briefly mentioned. Finally, the optimization design of the Super Proton-Proton Collider (SppC), and its energy and luminosity potentials in the same tunnel as the CEPC, are also discussed. The CEPC-SppC Progress Report (2015-2016) has been published.
The upgraded ATLAS and CMS detectors and their physics capabilities.
Wells, Pippa S
2015-01-13
The update of the European Strategy for Particle Physics from 2013 states that Europe's top priority should be the exploitation of the full potential of the LHC, including the high-luminosity upgrade of the machine and detectors with a view to collecting 10 times more data than in the initial design. The plans for upgrading the ATLAS and CMS detectors so as to maintain their performance and meet the challenges of increasing luminosity are presented here. A cornerstone of the physics programme is to measure the properties of the 125 GeV Higgs boson with the highest possible precision, to test its consistency with the Standard Model. The high-luminosity data will allow precise measurements of the dominant production and decay modes, and offer the possibility of observing rare modes including Higgs boson pair production. Direct and indirect searches for additional Higgs bosons beyond the Standard Model will also continue.
Possible limits of plasma linear colliders
NASA Astrophysics Data System (ADS)
Zimmermann, F.
2017-07-01
Plasma linear colliders have been proposed as next or next-next generation energy-frontier machines for high-energy physics. I investigate possible fundamental limits on energy and luminosity of such type of colliders, considering acceleration, multiple scattering off plasma ions, intrabeam scattering, bremsstrahlung, and betatron radiation. The question of energy efficiency is also addressed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hand, L.N.
Some proposed techniques for using laser beams to accelerate charged particles are reviewed. Two specific ideas for 'grating-type' accelerating structures are discussed. Speculations are presented about how a successful laser accelerator could be used in a 'multi-pass collider', a type of machine which would have characteristics intermediate between those of synchrotrons and linear (single-pass) colliders. No definite conclusions about practical structures for laser accelerators are reached, but it is suggested that a serious effort be made to design a small prototype machine. Achieving a reasonable luminosity demands that the accelerator either be a cw machine or that laser peak power requirements be much higher than those presently available. Use of superconducting gratings requires a wavelength in the sub-millimeter range.
The laser accelerator-another unicorn in the garden
NASA Astrophysics Data System (ADS)
Hand, L. N.
1981-07-01
Some proposed techniques for using laser beams to accelerate charged particles are reviewed. Two specific ideas for grating-type accelerating structures are discussed. Speculations are presented about how a successful laser accelerator could be used in a multipass collider, a type of machine which would have characteristics intermediate between those of synchrotrons and linear (single-pass) colliders. No definite conclusions about practical structures for laser accelerators are reached, but it is suggested that a serious effort be made to design a small prototype machine. Achieving a reasonable luminosity demands that the accelerator either be a cw machine or that laser peak power requirements be much higher than those presently available. Use of superconducting gratings requires a wavelength in the sub-millimeter range.
Modeling of beam-induced damage of the LHC tertiary collimators
NASA Astrophysics Data System (ADS)
Quaranta, E.; Bertarelli, A.; Bruce, R.; Carra, F.; Cerutti, F.; Lechner, A.; Redaelli, S.; Skordis, E.; Gradassi, P.
2017-09-01
Modern hadron machines with high beam intensity may suffer from material damage in the case of large beam losses, and even beam-intercepting devices, such as collimators, can be harmed. A systematic method to evaluate thresholds of damage owing to the impact of high energy particles is therefore crucial for safe operation and for predicting possible limitations in the overall machine performance. For this, a three-step simulation approach is presented, based on tracking simulations followed by calculations of the energy deposited in the impacted material and hydrodynamic simulations to predict the thermomechanical effect of the impact. This approach is applied to metallic collimators at the CERN Large Hadron Collider (LHC), which in standard operation intercept halo protons, but risk being damaged in the case of an extraction kicker malfunction. In particular, tertiary collimators protect the aperture bottlenecks; their settings constrain the reach in β* and hence the achievable luminosity at the LHC experiments. Our calculated damage levels provide a very important input on how close to the beam these collimators can be operated without risk of damage. The results of this approach have already been used to push further the performance of the present machine. The risk of damage is even higher in the upgraded high-luminosity LHC with higher beam intensity, for which we quantify the existing margins before equipment damage for the proposed baseline settings.
Machine learning of network metrics in ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration
2017-10-01
The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
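As a purely illustrative sketch of the forecasting step described above (not the ATLAS Analytics Platform code; the metric, window length and model choice are assumptions), one can train a regressor on lagged values of a network metric and score its forecasts:

```python
# Minimal sketch: forecast a network metric (e.g. transfer throughput) from its
# recent history with a scikit-learn regressor. All data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
throughput = 100 + 10 * np.sin(np.arange(2000) / 50.0) + rng.normal(0, 2, 2000)

window = 24  # use the last 24 samples to predict the next one
X = np.array([throughput[i:i + window] for i in range(len(throughput) - window)])
y = throughput[window:]

split = int(0.8 * len(X))
model = GradientBoostingRegressor().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"mean absolute forecast error: {mae:.2f}")
```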
Conceptual design of the cryostat for the new high luminosity (HL-LHC) triplet magnets
NASA Astrophysics Data System (ADS)
Ramos, D.; Parma, V.; Moretti, M.; Eymin, C.; Todesco, E.; Van Weelderen, R.; Prin, H.; Berkowitz Zamora, D.
2017-12-01
The High Luminosity LHC (HL-LHC) is a project to upgrade the LHC collider after 2020-2025 to increase the integrated luminosity by about one order of magnitude and extend the physics production until 2035. An upgrade of the focusing triplets insertion system for the ATLAS and CMS experiments is foreseen using superconducting magnets operating in a pressurised superfluid helium bath at 1.9 K. This will require the design and construction of four continuous cryostats, each about sixty meters in length and one meter in diameter, for the final beam focusing quadrupoles, corrector magnets and beam separation dipoles. The design is constrained by the dimensions of the existing tunnel and accessibility restrictions imposing the integration of cryogenic piping inside the cryostat, thus resulting in a very compact integration. As the alignment and position stability of the magnets is crucial for the luminosity performance of the machine, the magnet support system must be carefully designed in order to cope with parasitic forces and thermo-mechanical load cycles. In this paper, we present the conceptual design of the cryostat and discuss the approach to address the stringent and often conflicting requirements of alignment, integration and thermal aspects.
Long term dynamics of the high luminosity Large Hadron Collider with crab cavities
NASA Astrophysics Data System (ADS)
Barranco García, J.; De Maria, R.; Grudiev, A.; Tomás García, R.; Appleby, R. B.; Brett, D. R.
2016-10-01
The High Luminosity upgrade of the Large Hadron Collider (HL-LHC) aims to achieve an integrated luminosity of 200-300 fb^-1 per year, including the contribution from the upgrade of the injector chain. For the HL-LHC, the larger crossing angle together with a smaller beta function at the collision point would result in more than 70% luminosity loss due to the incomplete geometric overlap of colliding bunches. To recover head-on collisions at the high-luminosity particle-physics detectors ATLAS and CMS and benefit from the very low β* provided by the Achromatic Telescopic Squeezing (ATS) optics, a local crab cavity scheme provides transverse kicks to the proton bunches. The tight space constraints at the location of these cavities lead to designs which are axially non-symmetric, giving rise to high-order multipole components of the main deflecting mode, and, since these kicks are harmonic in time, we expand them in a series of multipoles in a similar fashion as is done for static-field magnets. In this work we calculate, for the first time, the higher-order multipoles and their impact on beam dynamics for three different crab cavity prototypes. Different approaches to calculate the multipoles are presented. Furthermore, we perform the first calculation of their impact on the long-term stability of the machine using the concept of dynamic aperture.
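As a schematic of the expansion mentioned above (the notation is assumed here and does not reproduce the paper's conventions), the transverse kick of the deflecting mode can be written as a multipole series in close analogy with a static magnetic multipole expansion, with an additional harmonic time dependence:

\[ \Delta p_\perp(x,y,t) \;\propto\; \mathrm{Re}\!\left[\sum_{n\ge 1} (b_n + i\, a_n)\,(x + i y)^{\,n-1}\right] \sin(\omega t + \phi), \]

where b_1 corresponds to the main crabbing (dipole) kick and the higher-order coefficients b_n, a_n play the role of the RF multipoles whose impact on long-term beam dynamics is evaluated in the paper.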
Documentation for the machine-readable version of the catalog of galactic O type stars
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1982-01-01
The Catalog of Galactic O-Type Stars (Garmany, Conti and Chiosi 1982), a compilation from the literature of all O-type stars for which spectral types, luminosity classes and UBV photometry exist, contains 765 stars, for each of which designation (HD, DM, etc.), spectral type, V, B-V, cluster membership, Galactic coordinates, and source references are given. Derived values of absolute visual and bolometric magnitudes, and distances are included. The source reference should be consulted for additional details concerning the derived quantities. This description of the machine-readable version of the catalog seeks to enable users to read and process the data with a minimum of guesswork. A copy of this document should be distributed with any machine readable version of the catalog.
Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider
NASA Astrophysics Data System (ADS)
Yee-Rendon, Bruce; Lopez-Fernandez, Ricardo; Barranco, Javier; Calaga, Rama; Marsili, Aurelien; Tomás, Rogelio; Zimmermann, Frank; Bouly, Frédéric
2014-05-01
Crab cavities (CCs) are a key ingredient of the high luminosity Large Hadron Collider (HL-LHC) project for increasing the luminosity of the LHC. At KEKB, CCs have exhibited abrupt changes of phase and voltage during a time period of the order of a few LHC turns and, considering the significant stored energy in the HL-LHC beam, CC failures represent a serious threat in regard to LHC machine protection. In this paper, we discuss the effect of CC voltage or phase changes on a time interval similar to, or longer than, the one needed to dump the beam. The simulations assume a quasistationary-state distribution to assess the particle losses for the HL-LHC. These distributions produce beam losses below the safe operation threshold for Gaussian tails, while those for non-Gaussian tails are of the same order as the limit. Additionally, some mitigation strategies are studied for reducing the damage caused by the CC failures.
Luminosity correlations in quasars
NASA Technical Reports Server (NTRS)
Chanan, G. A.
1983-01-01
Simulations are conducted with and without flux thresholds in an investigation of quasar luminosity correlations by means of a Monte Carlo analysis, for various model distributions of quasars in X-ray and optical luminosity. For the case where the X-ray photons are primary, an anticorrelation between the X-ray-to-optical luminosity ratio and the optical luminosity arises as a natural consequence which resembles observations. The low optical luminosities of X-ray selected quasars can be understood as a consequence of the same effect, and similar conclusions may hold if the X-ray and optical luminosities are determined independently by a third parameter, although they do not hold if the optical photons are primary. The importance of such considerations is demonstrated through a reanalysis of the published X-ray-to-optical flux ratios for the 3CR sample.
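A minimal toy version of such a Monte Carlo (a hedged illustration only; the distributions, scatter and flux cut below are arbitrary assumptions and do not reproduce the paper's models) shows how a ratio-versus-luminosity anticorrelation can emerge when X-ray luminosities are primary and a flux threshold is applied:

```python
# Toy Monte Carlo: X-ray luminosities are primary and optical luminosities are
# derived from them with scatter; the X-ray-to-optical ratio then anticorrelates
# with L_opt, with or without an X-ray flux threshold. All values are assumed.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
log_lx = rng.normal(44.5, 0.5, n)                   # primary X-ray luminosity (log10 erg/s)
log_lopt = log_lx + 1.0 + rng.normal(0.0, 0.4, n)   # optical derived with scatter
d = rng.uniform(1.0, 10.0, n)                       # arbitrary distance measure

ratio = log_lx - log_lopt                           # log(L_x / L_opt)
flux_x = 10**log_lx / d**2
sel = flux_x > np.percentile(flux_x, 80)            # X-ray flux-limited subsample

for label, mask in [("no threshold", np.ones(n, bool)), ("X-ray flux-limited", sel)]:
    r = np.corrcoef(log_lopt[mask], ratio[mask])[0, 1]
    print(f"{label}: corr(log L_opt, log L_x/L_opt) = {r:+.2f}")
```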
The Herschel ATLAS: Evolution of the 250 Micrometer Luminosity Function Out to z = 0.5
NASA Technical Reports Server (NTRS)
Dye, S.; Dunne, L.; Eales, S.; Smith, D. J. B.; Amblard, A.; Auld, R.; Baes, M.; Baldry, I. K.; Bamford, S.; Blain, A. W.;
2010-01-01
We have determined the luminosity function of 250 micrometer-selected galaxies detected in the approximately 14 deg^2 science demonstration region of the Herschel-ATLAS project out to a redshift of z = 0.5. Our findings very clearly show that the luminosity function evolves steadily out to this redshift. By selecting a sub-group of sources within a fixed luminosity interval where incompleteness effects are minimal, we have measured a smooth increase in the comoving 250 micrometer luminosity density out to z = 0.2, where it is 3.6^{+1.4}_{-0.9} times higher than the local value.
NASA Astrophysics Data System (ADS)
Brodzinski, K.; Claudet, S.; Ferlin, G.; Tavian, L.; Wagner, U.; Van Weelderen, R.
The discovery of a Higgs boson at CERN in 2012 is the start of a major program of work to measure this particle's properties with the highest possible precision for testing the validity of the Standard Model and to search for further new physics at the energy frontier. The LHC is in a unique position to pursue this program. Europe's top priority is the exploitation of the full potential of the LHC, including the high-luminosity upgrade of the machine and detectors with an objective to collect ten times more data than in the initial design, by around 2030. To reach this objective, the LHC cryogenic system must be upgraded to withstand higher beam current and higher luminosity at top energy while keeping the same operation availability by improving the collimation system and the protection of electronics sensitive to radiation. This paper will present the conceptual design of the cryogenic system upgrade with recent updates in performance requirements, the corresponding layout and architecture of the system as well as the main technical challenges which have to be met in the coming years.
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1984-01-01
The machine readable library as it is currently being distributed from the Astronomical Data Center is described. The library contains digital spectra for 161 stars of spectral classes O through M and luminosity classes 1, 3 and 5 in the wavelength range 3510 A to 7427 A. The resolution is approximately 4.5 A, while the typical photometric uncertainty of each resolution element is approximately 1 percent and broadband variations are 3 percent. The documentation includes a format description, a table of the indigenous characteristics of the magnetic tape file, and a sample listing of logical records exactly as they are recorded on the tape.
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1984-01-01
The machine-readable version of the Atlas as it is currently being distributed from the Astronomical Data Center is described. The data were obtained with the Oke multichannel scanner on the 5-meter Hale reflector for purposes of synthesizing galaxy spectra, and the digitized Atlas contains normalized spectral energy distributions, computed colors, scan line and continuum indices for 175 selected stars covering the complete ranges of spectral type and luminosity class. The documentation includes a byte-by-byte format description, a table of the indigenous characteristics of the magnetic tape file, and a sample listing of logical records exactly as they are recorded on the tape.
Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Torrico, Damir D; Dunshea, Frank R
2018-06-03
Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect results in the sensory analysis of food, as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop models to predict the real HR and BP based on raw video analysis (RVA), with an example application in chocolate consumption using machine learning (ML). The RVA used a computer vision algorithm based on luminosity changes in the different RGB color channels using three face regions (forehead and both cheeks). To validate the proposed method and ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations with the G color channel (R² = 0.83). Two ML models were developed using the three face regions: (i) Model 1, which predicts HR and BP from the RVA outputs with R = 0.85, and (ii) Model 2, based on time-series prediction, which uses HR, magnitude and luminosity from the RVA as inputs to predict HR values every second with R = 0.97. An application to the sensory analysis of chocolate showed significant correlations of changes in HR and BP with chocolate hardness and purchase intention.
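A minimal sketch of the non-contact PPG idea described above (not the authors' pipeline; the frame rate, region selection and synthetic signal are assumptions for illustration): the mean green-channel value of a face region is detrended and its spectral peak within the plausible heart-rate band gives the HR estimate.

```python
# Illustrative sketch: heart-rate estimate from the green-channel luminosity of
# a face region in video frames (synthetic data standing in for the ROI mean).
import numpy as np

fps = 30.0
n_frames = 900                      # 30 s of video
t = np.arange(n_frames) / fps

# Stand-in for frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2)): mean green value
# of the forehead ROI, modulated at ~1.2 Hz (72 bpm) by the pulse.
green_roi_mean = 120.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t) \
                 + np.random.normal(0, 0.2, n_frames)

# Detrend and restrict to the plausible heart-rate band (0.7-4 Hz ~ 42-240 bpm).
signal = green_roi_mean - green_roi_mean.mean()
spectrum = np.abs(np.fft.rfft(signal * np.hanning(n_frames)))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_bpm:.0f} bpm")
```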
High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apollinari, G.; Béjar Alonso, I.; Brüning, O.
2015-12-17
The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
Radiation Hard Silicon Particle Detectors for Phase-II LHC Trackers
NASA Astrophysics Data System (ADS)
Oblakowska-Mucha, A.
2017-02-01
The major LHC upgrade is planned after ten years of accelerator operation. It is foreseen to significantly increase the luminosity of the current machine up to 10^35 cm^-2 s^-1 and to operate as the upcoming High Luminosity LHC (HL-LHC). The major detector upgrade, called the Phase-II Upgrade, is also planned, a main reason being the aging processes caused by severe particle radiation. Within the RD50 Collaboration, a large Research and Development program has been underway to develop silicon sensors with sufficient radiation tolerance for HL-LHC trackers. In this summary, several results obtained during the testing of the devices after irradiation to HL-LHC levels are presented. Among the studied structures, one can find advanced sensor types such as 3D silicon detectors, High-Voltage CMOS technologies, or sensors with intrinsic gain (LGAD). Based on these results, the RD50 Collaboration gives recommendations for the silicon detectors to be used in the detector upgrade.
On the Feasibility of a Pulsed 14 TeV C.M.E. Muon Collider in the LHC Tunnel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiltsev, Vladimir; Neuffer, D.
We discuss the technical feasibility, key machine parameters and major challenges of a 14 TeV c.m.e. muon-muon collider in the LHC tunnel [1]. The luminosity of the collider is evaluated for three alternative muon sources – the PS synchrotron, one of the type developed by the US Muon Accelerator Program (MAP), and a low-emittance option based on resonant μ-pair production.
Luminosity Limitations of Linear Colliders Based on Plasma Acceleration
Lebedev, Valeri; Burov, Alexey; Nagaitsev, Sergei
2016-01-01
Particle acceleration in plasma creates the possibility of exceptionally high accelerating gradients and appears to be a very attractive option for future linear electron-positron and/or photon-photon colliders. These high accelerating gradients have already been demonstrated in a number of experiments. However, a linear collider also requires exceptionally high beam brightness, which still needs to be demonstrated. In this article we discuss the major phenomena which limit the brightness of the accelerated beam and, consequently, the collider luminosity.
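For context (standard collider relations quoted here for orientation, not expressions taken from this article), the luminosity of a linear collider can be written in terms of the beam power, which makes explicit why both the achievable beam brightness and the wall-plug efficiency limit the luminosity:

\[ \mathcal{L} = \frac{f_{\rm rep}\, n_b\, N^2}{4\pi\, \sigma_x^{*} \sigma_y^{*}}\, H_D = \frac{P_b}{E_b}\,\frac{N}{4\pi\, \sigma_x^{*} \sigma_y^{*}}\, H_D, \qquad P_b = f_{\rm rep}\, n_b\, N\, E_b, \]

with f_rep the repetition rate, n_b the bunches per pulse, N the bunch population, E_b the beam energy, σ*_x,y the spot sizes at the interaction point, and H_D the disruption enhancement factor.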
NASA Astrophysics Data System (ADS)
Sopczak, André; Ali, Babar; Asawatavonvanich, Thanawat; Begera, Jakub; Bergmann, Benedikt; Billoud, Thomas; Burian, Petr; Caicedo, Ivan; Caforio, Davide; Heijne, Erik; Janeček, Josef; Leroy, Claude; Mánek, Petr; Mochizuki, Kazuya; Mora, Yesid; Pacík, Josef; Papadatos, Costa; Platkevič, Michal; Polanský, Štěpán; Pospíšil, Stanislav; Suk, Michal; Svoboda, Zdeněk
2017-03-01
A network of Timepix (TPX) devices installed in the ATLAS cavern measures the LHC luminosity as a function of time as a stand-alone system. The data were recorded from 13-TeV proton-proton collisions in 2015. Using two TPX devices, the number of hits created by particles passing the pixel matrices was counted. A van der Meer scan of the LHC beams was analyzed using bunch-integrated luminosity averages over the different bunch profiles for an approximate absolute luminosity normalization. It is demonstrated that the TPX network has the capability to measure the reduction of LHC luminosity with precision. Comparative studies were performed among four sensors (two sensors in each TPX device) and the relative short-term precision of the luminosity measurement was determined to be 0.1% for 10-s time intervals. The internal long-term time stability of the measurements was below 0.5% for the data-taking period.
Bruning, Oliver
2018-05-23
Overview of the operation and upgrade plans for the machine. Upgrade studies and task forces. The Chamonix 2010 discussions led to five new task forces: planning for a long shutdown in 2012 for splice consolidation; long-term consolidation planning for the injector complex; SPS upgrade task force (accelerated program for SPS upgrade); PSB upgrade and its implications for the PS (e.g. radiation etc.); LHC High Luminosity project (investigate planning for ONE upgrade by 2018-2020); launch of a dedicated study for doubling the beam energy in the LHC -> HE-LHC.
ATCA for Machines-- Advanced Telecommunications Computing Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, R.S.; /SLAC
2008-04-22
The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.
Summary of the Optics, IR, Injection, Operations, Reliability and Instrumentation Working Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wienands, U.; /SLAC; Funakoshi, Y.
2012-04-20
The facilities reported on are all in a fairly mature state of operation, as evidenced by the very detailed studies and correction schemes that all groups are working on. First- and higher-order aberrations are diagnosed and planned to be corrected. Very detailed beam measurements are done to get a global picture of the beam dynamics. More than other facilities, the high-luminosity colliders are struggling with experimental background issues, mitigation of which is a permanent challenge. The working group dealt with a very wide range of practical issues which limit performance of the machines and compared their techniques of operation and their performance. We anticipate this to be a first attempt. In a future workshop in this series, we propose to attempt more fundamental comparisons of each machine, including design parameters. For example, DAPHNE and KEKB employ a finite crossing angle. The minimum value of β*_y attainable at KEKB seems to relate to this scheme. Effectiveness of compensation solenoids and turn-by-turn BPMs etc. should be examined in more detail. In the near future, CESR-C and VEPP-2000 will start their operation. We expect to hear important new experiences from these machines; in particular VEPP-2000 will be the first machine to have adopted round beams. At SLAC and KEK, next generation B Factories are being considered. It will be worthwhile to discuss the design issues of these machines based on the experiences of the existing factory machines.
Design of the large hadron electron collider interaction region
NASA Astrophysics Data System (ADS)
Cruz-Alaniz, E.; Newton, D.; Tomás, R.; Korostelev, M.
2015-11-01
The large hadron electron collider (LHeC) is a proposed upgrade of the Large Hadron Collider (LHC) within the high luminosity LHC (HL-LHC) project, to provide electron-nucleon collisions and explore a new regime of energy and luminosity for deep inelastic scattering. The design of an interaction region for any collider is always a challenging task, given that the beams are brought into crossing with the smallest beam sizes in a region where there are tight detector constraints. In this case, integrating the LHeC into the existing HL-LHC lattice, to allow simultaneous proton-proton and electron-proton collisions, increases the difficulty of the task. A nominal design was presented in the LHeC conceptual design report in 2012, featuring an optical configuration that focuses one of the proton beams of the LHC to β* = 10 cm at the LHeC interaction point to reach the desired luminosity of L = 10^33 cm^-2 s^-1. This value is achieved with the aid of a new inner triplet of quadrupoles at a distance L* = 10 m from the interaction point. However, the chromatic beta beating was found to be intolerable with regard to machine protection issues. An advanced chromatic correction scheme was required. This paper explores the feasibility of the extension of a novel optical technique called the achromatic telescopic squeezing scheme and the flexibility of the interaction region design, in order to find the optimal solution that would produce the highest luminosity while controlling the chromaticity, minimizing the synchrotron radiation power and maintaining the dynamic aperture required for stability.
The Lhc Collider:. Status and Outlook to Operation
NASA Astrophysics Data System (ADS)
Schmidt, Rüdiger
2006-04-01
For the LHC to provide particle physics with proton-proton collisions at the centre-of-mass energy of 14 TeV with a luminosity of 10^34 cm^-2 s^-1, the machine will operate with high-field dipole magnets using NbTi superconductors cooled to below the lambda point of helium. In order to reach design performance, the LHC requires both the use of existing technologies pushed to their limits and the application of novel technologies. The construction follows a decade of intensive R&D and technical validation of major collider sub-systems. This paper will focus on the required LHC performance, and on the implications for the technologies used. The consequences of the unprecedented quantity of energy stored in both magnets and beams will be discussed. A brief outlook to operation and its consequences for machine protection will be given.
Exploring the Faint End of the Luminosity-Metallicity Relation with Hα Dots
NASA Astrophysics Data System (ADS)
Hirschauer, Alec S.; Salzer, John J.
2015-01-01
The well-known correlation between a galaxy's luminosity and its gas-phase oxygen abundance (the luminosity-metallicity (L-Z) relation) offers clues toward our understanding of chemical enrichment histories and evolution. Bright galaxies are comparatively better studied than faint ones, leaving a relative dearth of observational data points to constrain the L-Z relation in the low-luminosity regime. We present high S/N nebular spectroscopy of low-luminosity star-forming galaxies observed with the KPNO 4m using the new KOSMOS spectrograph to derive direct-method metallicities. Our targets are strong point-like emission-line sources discovered serendipitously in continuum-subtracted narrowband images from the ALFALFA Hα survey. Follow-up spectroscopy of these "Hα dots" shows that these objects represent some of the lowest luminosity star-forming systems in the local Universe. Our KOSMOS spectra cover the full optical region and include detection of [O III] λ4363 in roughly a dozen objects. This paper presents some of the first scientific results obtained using this new spectrograph, and demonstrates its capabilities and effectiveness in deriving direct-method metallicities of faint objects.
Analytical N beam position monitor method
NASA Astrophysics Data System (ADS)
Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.
2017-11-01
Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling of the luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
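One ingredient of such phase-advance based optics measurements is extracting the betatron phase at each BPM from turn-by-turn data. The sketch below (a hedged illustration with synthetic signals and an assumed tune, not the N-BPM implementation itself) projects each BPM signal onto the betatron frequency and forms the phase advance between two BPMs:

```python
# Illustrative sketch: betatron phase advance between two BPMs from synthetic
# turn-by-turn data, via projection onto the betatron frequency.
import numpy as np

turns = np.arange(2048)
tune = 0.31                               # assumed fractional betatron tune
phase_bpm1, phase_bpm2 = 0.0, 1.9         # true phases [rad] for the toy data
x1 = 0.5 * np.cos(2 * np.pi * tune * turns + phase_bpm1) + np.random.normal(0, 0.01, turns.size)
x2 = 0.8 * np.cos(2 * np.pi * tune * turns + phase_bpm2) + np.random.normal(0, 0.01, turns.size)

def phase_at_tune(signal, tune, turns):
    # Project the signal onto the betatron frequency to get its complex amplitude.
    amp = np.sum(signal * np.exp(-2j * np.pi * tune * turns))
    return np.angle(amp)

dphi = (phase_at_tune(x2, tune, turns) - phase_at_tune(x1, tune, turns)) % (2 * np.pi)
print(f"measured phase advance: {dphi:.3f} rad (true {phase_bpm2 - phase_bpm1:.3f} rad)")
```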
NASA Astrophysics Data System (ADS)
Borg, M.; Bertarelli, A.; Carra, F.; Gradassi, P.; Guardia-Valenzuela, J.; Guinchard, M.; Izquierdo, G. Arnau; Mollicone, P.; Sacristan-de-Frutos, O.; Sammut, N.
2018-03-01
The CERN Large Hadron Collider is currently being upgraded to operate at a stored beam energy of 680 MJ through the High Luminosity upgrade. The LHC performance is dependent on the functionality of the beam collimation systems, essential for safe beam cleaning and machine protection. A dedicated beam experiment at the CERN High Radiation to Materials facility was created under the HRMT-23 experimental campaign. This experiment investigates the behavior of three collimation jaws having novel composite absorbers made of copper diamond, molybdenum carbide graphite, and carbon fiber carbon, under accidental scenarios involving direct beam impact on the material. Material characterization is imperative for the design, execution, and analysis of such experiments. This paper presents new data and analysis of the thermostructural characteristics of some of the absorber materials commissioned within CERN facilities. In turn, the characterized elastic properties are optimized through the development and implementation of a mixed numerical-experimental optimization technique.
Gauge-invariance and infrared divergences in the luminosity distance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biern, Sang Gyu; Yoo, Jaiyul, E-mail: sgbiern@physik.uzh.ch, E-mail: jyoo@physik.uzh.ch
2017-04-01
Measurements of the luminosity distance have played a key role in discovering the late-time cosmic acceleration. However, when accounting for inhomogeneities in the Universe, its interpretation has been plagued with infrared divergences in its theoretical predictions, which are in some cases used to explain the cosmic acceleration without dark energy. The infrared divergences in most calculations are artificially removed by imposing an infrared cut-off scale. We show that a gauge-invariant calculation of the luminosity distance is devoid of such divergences and consistent with the equivalence principle, eliminating the need to impose a cut-off scale. We present proper numerical calculations of the luminosity distance using the gauge-invariant expression and demonstrate that the numerical results with an ad hoc cut-off scale in previous calculations have negligible systematic errors as long as the cut-off scale is larger than the horizon scale. We discuss the origin of infrared divergences and their cancellation in the luminosity distance.
Ground Motion Studies for Large Future Accelerator
NASA Astrophysics Data System (ADS)
Takeda, Shigeru; Oide, Katsunobu
1997-05-01
A future large accelerator, such as a TeV linear collider, must have an extremely small emittance to reach the required luminosity. Precise alignment of machine components is essential to prevent emittance dilution. Ground motion spoils the alignment of accelerator elements and results in emittance growth. Ground motion in the frequency range of seismic vibration is mostly coherent over the scale of the accelerator, but incoherent, diffusive, Brownian-like motion becomes dominant at frequencies below the seismic range [1, 2, 3]. Slow ground motion and its impact on machine performance are discussed, including the method of tunnel construction. Our experimental results and recent excavation results show that tunnel boring machines (TBMs) are a better excavation method than NATM (drill and blast) for an accelerator tunnel in terms of preventing emittance dilution. ([1] V. Shiltsev, Proc. of IWAA95 Tsukuba, 1995. [2] Shigeru Takeda et al., Proc. of EPAC96, 1996. [3] A. Sery, Proc. of LINAC96, 1996.)
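The diffusive component mentioned above is often parameterized by the empirical ATL rule (see e.g. Shiltsev, ref. [1]); quoted here as a hedged reminder rather than a result of this paper, it states that the mean-square relative vertical displacement of two points separated by a distance L grows linearly with the elapsed time T:

\[ \langle \Delta y^2(T, L) \rangle \simeq A\, T\, L, \]

where A is a site-dependent constant, so slow, uncorrelated settling dominates over coherent seismic motion at long time scales and large separations.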
Ultraluminous X-ray sources as neutrino pulsars
NASA Astrophysics Data System (ADS)
Mushtukov, Alexander A.; Tsygankov, Sergey S.; Suleimanov, Valery F.; Poutanen, Juri
2018-05-01
The classical limit on the accretion luminosity of a neutron star is given by the Eddington luminosity. Advanced models of accretion on to magnetized neutron stars account for the appearance of magnetically confined accretion columns and allow the accretion luminosity to be higher than the Eddington value by a factor of tens. However, the recent discovery of pulsations from the ultraluminous X-ray source (ULX) in NGC 5907 demonstrates that the accretion luminosity can exceed the Eddington value by up to a factor of 500. We propose a model explaining the observational properties of ULX-1 in NGC 5907 without any ad hoc assumptions. We show that the accretion column at extreme luminosity becomes advective. The enormous energy release within a small geometrical volume, together with advection, results in very high temperatures at the bottom of the accretion column, which demands accounting for energy losses due to neutrino emission, which can be even more effective than the radiative energy losses. We show that the total luminosity at mass accretion rates above 10^21 g s^-1 is dominated by the neutrino emission, similarly to the case of core-collapse supernovae. We argue that accretion rate measurements based on the detected photon luminosity of bright ULXs powered by neutron stars can be largely underestimated due to intense neutrino emission. The recently discovered pulsating ULX-1 in the galaxy NGC 5907, with a photon luminosity of ~10^41 erg s^-1, is expected to be even brighter in neutrinos and is thus the first known Neutrino Pulsar.
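For scale (a standard relation, not a derivation from the paper), the Eddington luminosity for hydrogen accretion onto a neutron star of mass M is

\[ L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.8\times 10^{38}\left(\frac{M}{1.4\,M_\odot}\right)\ \mathrm{erg\,s^{-1}}, \]

so a photon luminosity of ~10^41 erg s^-1 corresponds to roughly 500 times the Eddington value, consistent with the factor quoted above.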
NASA Astrophysics Data System (ADS)
Bruce, R.; Bracco, C.; De Maria, R.; Giovannozzi, M.; Mereghetti, A.; Mirarchi, D.; Redaelli, S.; Quaranta, E.; Salvachua, B.
2017-03-01
The Large Hadron Collider (LHC) at CERN is built to collide intense proton beams with an unprecedented energy of 7 TeV. The design stored energy per beam of 362 MJ makes the LHC beams highly destructive, so that any beam losses risk causing quenches of superconducting magnets or damage to accelerator components. Collimators are installed to protect the machine and they define a minimum normalized aperture, below which no other element is allowed. This imposes a limit on the achievable luminosity, since when squeezing β* (the β-function at the collision point) to smaller values for increased luminosity, the β-function in the final focusing system increases. This leads to a smaller normalized aperture that risks falling below the allowed collimation aperture. In the first run of the LHC, this was the main limitation on β*, which was constrained to values above the design specification. In this article, we show through theoretical and experimental studies how tighter collimator openings and a new optics with specific phase-advance constraints allow a β* as small as 40 cm, a factor of 2 smaller than the β* = 80 cm used in 2015 and significantly below the design value of β* = 55 cm, in spite of a lower beam energy. The proposed configuration with β* = 40 cm has been successfully put into operation and has been used throughout 2016 as the LHC baseline. The decrease in β* compared to 2015 has been an essential contribution to reaching and surpassing, in 2016, the LHC design luminosity for the first time, and to accumulating a record-high integrated luminosity of around 40 fb^-1 in one year, in spite of using fewer bunches than in the design.
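The interplay described above can be summarized with familiar scalings (standard accelerator relations quoted for orientation, not specific expressions from the paper): the luminosity grows as β* shrinks, while the β-function in the final-focus triplet, and hence the local beam size, grows roughly as the inverse of β*,

\[ \mathcal{L} \propto \frac{1}{\beta^*}, \qquad \sigma(s) = \sqrt{\beta(s)\,\varepsilon}, \qquad \beta_{\rm triplet} \sim \frac{L^{*2}}{\beta^*}, \]

so for a fixed mechanical aperture the available aperture in units of the beam size σ shrinks as β* is reduced, which is why the collimation hierarchy sets the minimum usable β*.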
Transverse emittance growth due to rf noise in the high-luminosity LHC crab cavities
NASA Astrophysics Data System (ADS)
Baudrenghien, P.; Mastoridis, T.
2015-10-01
The high-luminosity LHC (HiLumi LHC) upgrade with planned operation from 2025 onward has a goal of achieving a tenfold increase in the number of recorded collisions thanks to a doubling of the intensity per bunch (2.2e11 protons) and a reduction of β* to 15 cm. Such an increase would significantly expedite new discoveries and exploration. To avoid detrimental effects from long-range beam-beam interactions, the half crossing angle must be increased to 295 microrad. Without bunch crabbing, this large crossing angle and small transverse beam size would result in a luminosity reduction factor of 0.3 (Piwinski angle). Therefore, crab cavities are an important component of the LHC upgrade, and will contribute strongly to achieving an increase in the number of recorded collisions. The proposed crab cavities are electromagnetic devices with a resonance in the radio frequency (rf) region of the spectrum (400.789 MHz). They cause a kick perpendicular to the direction of motion (transverse kick) to restore an effective head-on collision between the particle beams, thereby restoring the geometric factor to 0.8 [K. Oide and K. Yokoya, Phys. Rev. A 40, 315 (1989).]. Noise injected through the rf/low level rf (llrf) system could cause significant transverse emittance growth and limit luminosity lifetime. In this work, a theoretical relationship between the phase and amplitude rf noise spectrum and the transverse emittance growth rate is derived, for a hadron machine assuming zero synchrotron radiation damping and broadband rf noise, excluding infinitely narrow spectral lines. This derivation is for a single beam. Both amplitude and phase noise are investigated. The potential improvement in the presence of the transverse damper is also investigated.
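The geometric reduction mentioned above is conventionally quantified by the Piwinski angle (a standard expression, with symbols as usually defined rather than taken verbatim from this paper):

\[ R \simeq \frac{1}{\sqrt{1+\phi^2}}, \qquad \phi = \frac{\theta_c}{2}\,\frac{\sigma_z}{\sigma_x^{*}}, \]

where θ_c is the full crossing angle, σ_z the rms bunch length and σ*_x the transverse beam size in the crossing plane; with the HL-LHC parameters quoted above this factor is about 0.3 without crabbing, and the crab cavities restore an effectively head-on collision geometry.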
A catalogue of faint local radio AGN and the properties of their host galaxies
NASA Astrophysics Data System (ADS)
Lofthouse, E. K.; Kaviraj, S.; Smith, D. JB; Hardcastle, M. J.
2018-05-01
We present a catalogue of local (z < 0.1) galaxies that contain faint AGN. We select these objects by identifying galaxies that exhibit a significant excess in their radio luminosities, compared to what is expected from the observed levels of star-formation activity in these systems. This is achieved by comparing the optical (spectroscopic) star formation rate (SFR) to the 1.4 GHz luminosity measured from the FIRST survey. The majority of the AGN identified in this study are fainter than those in previous work, such as in the Best and Heckman (2012) catalogue. We show that these faint AGN make a non-negligible contribution to the radio luminosity function at low luminosities (below 10^22.5 W Hz^-1), and host ˜13 per cent of the local radio luminosity budget. Their host galaxies are predominantly high stellar-mass systems (with a median stellar mass of 10^11 M⊙), are found across a range of environments (but typically in denser environments than star-forming galaxies) and have early-type morphologies. This study demonstrates a general technique to identify AGN in galaxy populations where reliable optical SFRs can be extracted using spectro-photometry and where radio data are also available so that a radio excess can be measured. Our results also demonstrate that it is unsafe to infer SFRs from radio emission alone, even if bright AGN have been excluded from a sample, since there is a significant population of faint radio AGN which may contaminate the radio-derived SFRs.
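A schematic version of the radio-excess selection described above might look like the following; the SFR-to-L(1.4 GHz) calibration constant and the excess threshold are placeholders chosen for illustration, not the values adopted for the catalogue.

```python
# Illustrative sketch of a radio-excess AGN selection.  The calibration
# constant and threshold below are placeholders, not the paper's values.
import numpy as np

SFR_TO_L14 = 1.8e21   # W/Hz per (Msun/yr); placeholder calibration constant
THRESHOLD  = 0.7      # dex of radio excess required to flag an AGN (placeholder)

def radio_excess(sfr_optical, l_14ghz):
    """log10 excess of observed 1.4 GHz luminosity over the SF expectation."""
    l_expected = SFR_TO_L14 * np.asarray(sfr_optical)
    return np.log10(np.asarray(l_14ghz) / l_expected)

def is_radio_agn(sfr_optical, l_14ghz):
    return radio_excess(sfr_optical, l_14ghz) > THRESHOLD

# Example: a galaxy with SFR = 2 Msun/yr but L(1.4 GHz) = 1e23 W/Hz
print(is_radio_agn(2.0, 1e23))   # True -> radio emission far exceeds SF expectation
```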
NASA Astrophysics Data System (ADS)
Ballantyne, David R.
2016-04-01
Deep X-ray surveys have provided a comprehensive and largely unbiased view of AGN evolution stretching back to z˜5. However, it has been challenging to use the survey results to connect this evolution to the cosmological environment that AGNs inhabit. Exploring this connection will be crucial to understanding the triggering mechanisms of AGNs and how these processes manifest in observations at all wavelengths. In anticipation of upcoming wide-field X-ray surveys that will allow quantitative analysis of AGN environments, we present a method to observationally constrain the Conditional Luminosity Function (CLF) of AGNs at a specific z. Once measured, the CLF allows the calculation of the AGN bias, mean dark matter halo mass, AGN lifetime, halo occupation number, and AGN correlation function - all as a function of luminosity. The CLF can be constrained using a measurement of the X-ray luminosity function and the correlation length at different luminosities. The method is demonstrated at z ≈0 and 0.9, and clear luminosity dependence in the AGN bias and mean halo mass is predicted at both z. The results support the idea that there are at least two different modes of AGN triggering: one, at high luminosity, that only occurs in high mass, highly biased haloes, and one that can occur over a wide range of halo masses and leads to luminosities that are correlated with halo mass. This latter mode dominates at z<0.9. The CLFs for Type 2 and Type 1 AGNs are also constrained at z ≈0, and we find evidence that unobscured quasars are more likely to be found in higher mass halos than obscured quasars. Thus, the AGN unification model seems to fail at quasar luminosities.
LUMINOSITY FUNCTIONS OF LMXBs IN CENTAURUS A: GLOBULAR CLUSTERS VERSUS THE FIELD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voss, Rasmus; Gilfanov, Marat; Sivakoff, Gregory R.
2009-08-10
We study the X-ray luminosity function (XLF) of low-mass X-ray binaries (LMXB) in the nearby early-type galaxy Centaurus A, concentrating primarily on two aspects of binary populations: the XLF behavior at the low-luminosity limit and the comparison between globular cluster and field sources. The 800 ksec exposure of the deep Chandra VLP program allows us to reach a limiting luminosity of ≈8 × 10^35 erg s^-1, about 2-3 times deeper than previous investigations. We confirm the presence of the low-luminosity break of the overall LMXB XLF at log(L_X) ≈ 37.2-37.6, below which the luminosity distribution follows a dN/d(ln L) ≈ const law. Separating globular cluster and field sources, we find a statistically significant difference between the two luminosity distributions with a relative underabundance of faint sources in the globular cluster population. This demonstrates that the samples are drawn from distinct parent populations and may disprove the hypothesis that the entire LMXB population in early-type galaxies is created dynamically in globular clusters. As a plausible explanation for this difference in the XLFs, we suggest an enhanced fraction of helium-accreting systems in globular clusters, which are created in collisions between red giants and neutron stars. Due to the four times higher ionization temperature of He, such systems are subject to accretion disk instabilities at ≈20 times higher mass accretion rate and, therefore, are not observed as persistent sources at low luminosities.
Does the obscured AGN fraction really depend on luminosity?
NASA Astrophysics Data System (ADS)
Sazonov, S.; Churazov, E.; Krivonos, R.
2015-12-01
We use a sample of 151 local non-blazar active galactic nuclei (AGN) selected from the INTEGRAL all-sky hard X-ray survey to investigate if the observed declining trend of the fraction of obscured (i.e. showing X-ray absorption) AGN with increasing luminosity is mostly an intrinsic or selection effect. Using a torus-obscuration model, we demonstrate that in addition to negative bias, due to absorption in the torus, in finding obscured AGN in hard X-ray flux-limited surveys, there is also positive bias in finding unobscured AGN, due to Compton reflection in the torus. These biases can be even stronger taking into account plausible intrinsic collimation of hard X-ray emission along the axis of the obscuring torus. Given the AGN luminosity function, which steepens at high luminosities, these observational biases lead to a decreasing observed fraction of obscured AGN with increasing luminosity even if this fraction has no intrinsic luminosity dependence. We find that if the central hard X-ray source in AGN is isotropic, the intrinsic (i.e. corrected for biases) obscured AGN fraction still shows a declining trend with luminosity, although the intrinsic obscured fraction is significantly larger than the observed one: the actual fraction is larger than ˜85 per cent at L ≲ 10^42.5 erg s^-1 (17-60 keV), and decreases to ≲60 per cent at L ≳ 10^44 erg s^-1. In terms of the half-opening angle θ of an obscuring torus, this implies that θ ≲ 30° in lower luminosity AGN, and θ ≳ 45° in higher luminosity ones. If, however, the emission from the central supermassive black hole is collimated as dL/dΩ ∝ cos α, the intrinsic dependence of the obscured AGN fraction is consistent with a luminosity-independent torus half-opening angle θ ˜ 30°.
A study of excess H-alpha emission in chromospherically active M dwarf stars
NASA Technical Reports Server (NTRS)
Young, Arthur; Skumanich, Andrew; Stauffer, John R.; Harlan, Eugene; Bopp, Bernard W.
1989-01-01
Spectroscopic observations from three observatories are combined to study the properties of the excess H-alpha emission which characterizes the most chromospherically active subset of the M dwarf stars, known as the dMe stars. It is demonstrated that the excess H-alpha luminosity from these stars is a monotonically decreasing function of their (R-I) color, and evidence is presented which suggests that the product of the mean surface brightness and the mean filling factor of the emissive regions is essentially constant with color. Another significant result of the study is a linear correlation between the excess luminosity in H-alpha and the coronal X-ray luminosity.
Muon collider interaction region design
Alexahin, Y. I.; Gianfelice-Wendt, E.; Kashikhin, V. V.; ...
2011-06-02
Design of a muon collider interaction region (IR) presents a number of challenges arising from low β* < 1 cm, correspondingly large beta-function values and beam sizes at IR magnets, as well as the necessity to protect superconducting magnets and collider detectors from muon decay products. As a consequence, the designs of the IR optics, magnets and machine-detector interface are strongly interlaced and iterative. A consistent solution for the 1.5 TeV center-of-mass muon collider IR is presented. It provides an average luminosity of 10^34 cm^-2 s^-1 with adequate protection of the magnet and detector components.
Numerical Analysis of Parasitic Crossing Compensation with Wires in DAΦNE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valishev, A.; Shatilov, D.; Milardi, C.
2015-06-24
Current-bearing wire compensators were successfully used in the 2005-2006 run of the DAΦNE collider to mitigate the detrimental effects of parasitic beam-beam interactions. A marked improvement of the positron beam lifetime was observed in machine operation with the KLOE detector. In view of the possible application of wire beam-beam compensators for the High Luminosity LHC upgrade, we revisit the DAΦNE experiments. We use an improved model of the accelerator with the goal to validate the modern simulation tools and provide valuable input for the LHC upgrade project.
NASA Astrophysics Data System (ADS)
Tavian, L.; Brodzinski, K.; Claudet, S.; Ferlin, G.; Wagner, U.; van Weelderen, R.
The discovery of a Higgs boson at CERN in 2012 is the start of a major program of work to measure this particle's properties with the highest possible precision for testing the validity of the Standard Model and to search for further new physics at the energy frontier. The LHC is in a unique position to pursue this program. Europe's top priority is the exploitation of the full potential of the LHC, including the high-luminosity upgrade of the machine and detectors, with an objective to collect ten times more data than in the initial design by around 2030. To reach this objective, the LHC cryogenic system must be upgraded to withstand higher beam current and higher luminosity at top energy while keeping the same operational availability, by improving the collimation system and the protection of radiation-sensitive electronics. This chapter will present the conceptual design of the cryogenic system upgrade with recent updates in performance requirements, the corresponding layout and architecture of the system, as well as the main technical challenges which have to be met in the coming years.
NASA Astrophysics Data System (ADS)
Shipley, Heath; Papovich, Casey
2015-08-01
We provide a new robust star-formation rate (SFR) calibration using the luminosity from polycyclic aromatic hydrogen (PAH) molecules. The PAH features emit strongly in the mid-infrared (mid-IR; 3-19μm), mitigating dust extinction, and they are very luminous, containing 5-10% of the total IR luminosity in galaxies. We derive the calibration of the PAH luminosity as a SFR indicator using a sample of 105 star-forming galaxies covering a range of total IR luminosity, LIR = L(8-1000μm) = 10^9 - 10^12 L⊙, and redshift 0 < z < 0.6. The PAH luminosity correlates linearly with the SFR as measured by the dust-corrected Hα luminosity (using the sum of the Hα and rest-frame 24μm luminosity from Kennicutt et al. 2009), with tight scatter of ~0.15 dex, comparable to the scatter in the dust-corrected Hα SFRs and Paα SFRs. We show this relation is sensitive to galaxy metallicity, where the PAH luminosity of galaxies with Z < 0.7 Z⊙ departs from the linear SFR relationship but in a well-behaved manner. We derive a corresponding correction for galaxies below solar metallicity. As a case study for observations with JWST, we apply the PAH SFR calibration to a sample of lensed galaxies at 1 < z < 3 with Spitzer Infrared Spectrograph (IRS) data, and we demonstrate the utility of PAHs to derive SFRs as accurate as those available from any other indicator. This new SFR indicator will be useful for probing the peak of the SFR density of the universe (1 < z < 3) and for studying the coevolution of star-formation and supermassive black hole accretion contemporaneously in a galaxy.
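Purely as an illustration of the calibration's form, the sketch below applies a linear SFR(L_PAH) relation with a toy low-metallicity correction; the coefficients C_PAH and A_24 and the correction's functional form are placeholders, not the values derived in the paper.

```python
# Hedged sketch of a linear PAH-luminosity SFR estimator.  C_PAH, A_24 and the
# metallicity correction are placeholders; the actual calibration is in the paper.
import numpy as np

C_PAH = 1.0e-43   # Msun/yr per (erg/s) of PAH luminosity -- placeholder
A_24  = 0.020     # weight of rest-frame 24um luminosity in the Halpha correction -- placeholder

def sfr_from_pah(l_pah, metallicity_solar=1.0):
    """Linear SFR(L_PAH) with a toy boost below 0.7 Zsun (PAH emission suppressed)."""
    sfr = C_PAH * np.asarray(l_pah)
    low_z = metallicity_solar < 0.7
    return np.where(low_z, sfr * (0.7 / metallicity_solar), sfr)

def halpha_dust_corrected(l_halpha_obs, l_24um):
    """Dust-corrected Halpha luminosity as a weighted sum (Kennicutt-style form)."""
    return l_halpha_obs + A_24 * l_24um

print(sfr_from_pah(5e44))   # ~50 Msun/yr with the placeholder calibration
```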
Calculations of safe collimator settings and β* at the CERN Large Hadron Collider
NASA Astrophysics Data System (ADS)
Bruce, R.; Assmann, R. W.; Redaelli, S.
2015-06-01
The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β*. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β*. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β*, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012 and it was found that β* could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.
A global view on the Higgs self-coupling at lepton colliders
Di Vita, Stefano; Durieux, Gauthier; Grojean, Christophe; ...
2018-02-28
We perform a global effective-field-theory analysis to assess the precision on the determination of the Higgs trilinear self-coupling at future lepton colliders. Two main scenarios are considered, depending on whether the center-of-mass energy of the colliders is sufficient or not to access Higgs pair production processes. Low-energy machines allow for ~40% precision on the extraction of the Higgs trilinear coupling through the exploitation of next-to-leading-order effects in single Higgs measurements, provided that runs at both 240/250 GeV and 350 GeV are available with luminosities in the few attobarns range. A global fit, including possible deviations in other SM couplings, is essential in this case to obtain a robust determination of the Higgs self-coupling. High-energy machines can easily achieve a ~20% precision through Higgs pair production processes. In this case, the impact of additional coupling modifications is milder, although not completely negligible.
A global view on the Higgs self-coupling at lepton colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Vita, Stefano; Durieux, Gauthier; Grojean, Christophe
We perform a global effective-field-theory analysis to assess the precision on the determination of the Higgs trilinear self-coupling at future lepton colliders. Two main scenarios are considered, depending on whether the center-of-mass energy of the colliders is sufficient or not to access Higgs pair production processes. Low-energy machines allow for ~40% precision on the extraction of the Higgs trilinear coupling through the exploitation of next-to-leading-order effects in single Higgs measurements, provided that runs at both 240/250 GeV and 350 GeV are available with luminosities in the few attobarns range. A global fit, including possible deviations in other SM couplings, is essential in this case to obtain a robust determination of the Higgs self-coupling. High-energy machines can easily achieve a ~20% precision through Higgs pair production processes. In this case, the impact of additional coupling modifications is milder, although not completely negligible.
NASA Astrophysics Data System (ADS)
Schmidt, R.; Blanco Sancho, J.; Burkart, F.; Grenier, D.; Wollmann, D.; Tahir, N. A.; Shutov, A.; Piriz, A. R.
2014-08-01
A novel experiment has been performed at the CERN HiRadMat test facility to study the impact of the 440 GeV proton beam generated by the Super Proton Synchrotron on extended solid copper cylindrical targets. Substantial hydrodynamic tunneling of the protons in the target material has been observed that leads to significant lengthening of the projectile range, which confirms our previous theoretical predictions [N. A. Tahir et al., Phys. Rev. Spec. Top.-Accel. Beams 15, 051003 (2012)]. Simulation results show very good agreement with the experimental measurements. These results have very important implications on the machine protection design for powerful machines like the Large Hadron Collider (LHC), the future High Luminosity LHC, and the proposed huge 80 km circumference Future Circular Collider, which is currently being discussed at CERN. Another very interesting outcome of this work is that one may also study the field of High Energy Density Physics at this test facility.
Dust Grains and the Luminosity of Circumnuclear Water Masers in Active Galaxies
NASA Technical Reports Server (NTRS)
Collison, Alan J.; Watson, William D.
1995-01-01
In previous calculations for the luminosities of 22 GHz water masers, the pumping is reduced and ultimately quenched with increasing depth into the gas because of trapping of the infrared (approximately 30-150 micrometers) spectral line radiation of the water molecule. When the absorption (and reemission) of infrared radiation by dust grains is included, we demonstrate that the pumping is no longer quenched but remains constant with increasing optical depth. A temperature difference between the grains and the gas is required. Such conditions are expected to occur, for example, in the circumnuclear masing environments created by X-rays in active galaxies. Here, the calculated 22 GHz maser luminosities are increased by more than an order of magnitude. Application to the well-studied, circumnuclear masing disk in the galaxy NGC 4258 yields a maser luminosity near that inferred from observations if the observed X-ray flux is assumed to be incident onto only the inner surface of the disk.
NASA Astrophysics Data System (ADS)
Bernhard, E.; Mullaney, J. R.; Aird, J.; Hickox, R. C.; Jones, M. L.; Stanley, F.; Grimmett, L. P.; Daddi, E.
2018-05-01
The lack of a strong correlation between AGN X-ray luminosity (LX; a proxy for AGN power) and the star formation rate (SFR) of their host galaxies has recently been attributed to stochastic AGN variability. Studies using population synthesis models have incorporated this by assuming a broad, universal (i.e. does not depend on the host galaxy properties) probability distribution for AGN specific X-ray luminosities (i.e. the ratio of LX to host stellar mass; a common proxy for Eddington ratio). However, recent studies have demonstrated that this universal Eddington ratio distribution fails to reproduce the observed X-ray luminosity functions beyond z ˜ 1.2. Furthermore, empirical studies have recently shown that the Eddington ratio distribution may instead depend upon host galaxy properties, such as SFR and/or stellar mass. To investigate this further, we develop a population synthesis model in which the Eddington ratio distribution is different for star-forming and quiescent host galaxies. We show that, although this model is able to reproduce the observed X-ray luminosity functions out to z ˜ 2, it fails to simultaneously reproduce the observed flat relationship between SFR and X-ray luminosity. We can solve this, however, by incorporating a mass dependency in the AGN Eddington ratio distribution for star-forming host galaxies. Overall, our models indicate that a relative suppression of low Eddington ratios (λEdd ≲ 0.1) in lower mass galaxies (M* ≲ 10^10-10^11 M⊙) is required to reproduce both the observed X-ray luminosity functions and the observed flat SFR/X-ray relationship.
A new method for finding and characterizing galaxy groups via low-frequency radio surveys
NASA Astrophysics Data System (ADS)
Croston, J. H.; Ineson, J.; Hardcastle, M. J.; Mingo, B.
2017-09-01
We describe a new method for identifying and characterizing the thermodynamic state of large samples of evolved galaxy groups at high redshifts using high-resolution, low-frequency radio surveys, such as those that will be carried out with LOFAR and the Square Kilometre Array. We identify a sub-population of morphologically regular powerful [Fanaroff-Riley type II (FR II)] radio galaxies and demonstrate that, for this sub-population, the internal pressure of the radio lobes is a reliable tracer of the external intragroup/intracluster medium (ICM) pressure, and that the assumption of a universal pressure profile for relaxed groups enables the total mass and X-ray luminosity to be estimated. Using a sample of well-studied FR II radio galaxies, we demonstrate that our method enables the estimation of group/cluster X-ray luminosities over three orders of magnitude in luminosity to within a factor of ˜2 from low-frequency radio properties alone. Our method could provide a powerful new tool for building samples of thousands of evolved galaxy groups at z > 1 and characterizing their ICM.
Influence of the Solar Luminosity on the Glaciations, sea Level Changes and Resulting Earthquakes.
NASA Astrophysics Data System (ADS)
Shopov, Y. Y.; Stoykova, D. A.; Tsankov, L. T.; Sanabria, M. E.; Georgieva, D. I.; Ford, D. C.; Georgiev, L. N.
2002-12-01
Glaciations have been attributed to variations of the Earth's orbit (Milankovitch cycles). However, the best-dated paleoclimatic record (from Devils Hole, Nevada) showed that the end of the last glacial period (termination II) happened 10,000 years before the one suggested by the orbital variations, i.e. the effect appeared before its supposed cause. This suggests that something is wrong with the theory. The luminescence of organic matter in calcite speleothems depends exponentially upon soil temperature, which is determined primarily by the solar radiation, so the microzonality of luminescence of speleothems may be used as an indirect solar insolation (radiation) proxy index. We obtained luminescence solar insolation proxy records in speleothems (from Jewel Cave, South Dakota, US and Duhlata Cave, Bulgaria). These records exhibit a very rapid increase of the solar insolation at 139 kyr BP, responsible for termination II (the end of the last glaciation), and demonstrate that solar luminosity variations contribute to Earth's heating almost as much as the orbital variations of the Earth's orbit (Milankovitch cycles). The most powerful cycle of the solar luminosity (11,500 yrs) is responsible for almost half of the variations in the solar insolation experimental records. Changes in the speed of Earth's rotation during glacial-interglacial transitions produce fracturing of the Earth's crust and major earthquakes along the fractures. The intensity of this process increases with the rate of sea-level change and with its amplitude. Glaciations and deglaciations drive changes of the sea level. A much larger version of this process could be caused by an eruptive increase of the solar luminosity, which may be caused only by the collision of large asteroids with the Sun. We demonstrate that such a collision may cause a "Bible Deluge" type of event.
High redshift QSOs and the x ray background
NASA Technical Reports Server (NTRS)
Impey, Chris
1993-01-01
ROSAT pointed observations were made of 9 QSO's from the Large Bright Quasar Survey (LBQS). The LBQS is based on machine measurement of objective prism plates taken with the UK Schmidt Telescope. Software has been used to select QSO's by both color and by the presence of spectral features and continuum breaks. The probability of detection can be calculated as a function of magnitude, redshift and spectral features, and the completeness of the survey can be accurately estimated. Nine out of 1040 QSO's in the LBQS have z greater than 3. The observations will provide an important data point in the X-ray luminosity function of QSO's at high redshift. The QSO's with z greater than 3 span less than a magnitude in M_B, so can be combined as a homogeneous sample. This analysis is only possible with a sample drawn from a large and complete catalog such as the LBQS. Four of the 9 QSO's that were observed with the ROSAT PSPC for this proposal were detected, including one of the most luminous X-ray sources ever observed. The April 1992 version of the PROS DETECT package was used to reduce the data. The results have been used to search for evolution of the X-ray properties of QSO's in redshift. The 9 QSO's lie in the range -28.7 < M_B < -27.8. When combined with data for 16 QSO's in a similar luminosity range at lower redshift, correlations with luminosity and redshift can be separated out. The LBQS sample also yields a new constraint on the contribution of high redshift QSO's to the X-ray background. An initial requirement is knowledge of the X-ray properties (α_OX) as a function of redshift. Integration over the evolving luminosity function of the LBQS then gives the QSO contribution to the source counts.
Enhancing RHIC luminosity capabilities with in-situ beam pipe coating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herschcovitch,A.; Blaskiewicz, M.; Fischer, W.
Electron clouds have been observed in many accelerators, including the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory (BNL). They can limit the machine performance through pressure degradation, beam instabilities or incoherent emittance growth. The formation of electron clouds can be suppressed with beam pipe surfaces that have low secondary electron yield. At the same time, high wall resistivity in accelerators can result in levels of ohmic heating unacceptably high for superconducting magnets. This is a concern for the RHIC machine, as its vacuum chamber in the superconducting dipoles is made from relatively high resistivity 316LN stainless steel. The high resistivity can be addressed with a copper (Cu) coating; a reduction in the secondary electron yield can be achieved with a titanium nitride (TiN) or amorphous carbon (a-C) coating. Applying such coatings in an already constructed machine is rather challenging. We started developing a robotic plasma deposition technique for in-situ coating of long, small diameter tubes. The technique entails fabricating a device comprised of staged magnetrons and/or cathodic arcs mounted on a mobile mole for deposition of about 5 μm (a few skin depths) of Cu followed by about 0.1 μm of TiN (or a-C).
ERIC Educational Resources Information Center
Gilhousen, David
2004-01-01
In this article, the author discusses a tornado-producing machine that he used in teacher-led, student assisted demonstrations in order to reinforce concepts learned during a unit on weather. The machine, or simulator, was powered by a hair dryer, fan, and cool-mist humidifier. The machine consists of a demonstration table containing a plenum box,…
The Critical Importance of Russell's Diagram
NASA Astrophysics Data System (ADS)
Gingerich, O.
2013-04-01
The idea of dwarf and giant stars, but not the nomenclature, was first established by Ejnar Hertzsprung in 1905; his first diagrams in support appeared in 1911. In 1913 Henry Norris Russell could demonstrate the effect far more strikingly because he measured the parallaxes of many stars at Cambridge, and could plot absolute magnitude against spectral type for many points. The general concept of dwarf and giant stars was essential in the galactic structure work of Harlow Shapley, Russell's first graduate student. In order to calibrate the period-luminosity relation of Cepheid variables, he was obliged to fall back on statistical parallax using only 11 Cepheids, a very sparse sample. Here the insight provided by the Russell diagram became critical. The presence of yellow K giant stars in globular clusters credentialed his calibration of the period-luminosity relation by showing that the calibrated luminosity of the Cepheids was comparable to the luminosity of the K giants. It is well known that in 1920 Shapley did not believe in the cosmological distances of Heber Curtis' spiral nebulae. It is not so well known that in 1920 Curtis' plot of the period-luminosity relation suggests that he didn't believe it was a physical relation and also he failed to appreciate the significance of the Russell diagram for understanding the large size of the Milky Way.
DISCOVERY OF BRIGHT GALACTIC R CORONAE BOREALIS AND DY PERSEI VARIABLES: RARE GEMS MINED FROM ACVS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, A. A.; Richards, J. W.; Bloom, J. S.
2012-08-20
We present the results of a machine-learning (ML)-based search for new R Coronae Borealis (RCB) stars and DY Persei-like stars (DYPers) in the Galaxy using cataloged light curves from the All-Sky Automated Survey (ASAS) Catalog of Variable Stars (ACVS). RCB stars, a rare class of hydrogen-deficient carbon-rich supergiants, are of great interest owing to the insights they can provide on the late stages of stellar evolution. DYPers are possibly the low-temperature, low-luminosity analogs to the RCB phenomenon, though additional examples are needed to fully establish this connection. While RCB stars and DYPers are traditionally identified by epochs of extreme dimming that occur without regularity, the ML search framework more fully captures the richness and diversity of their photometric behavior. We demonstrate that our ML method can use newly discovered RCB stars to identify additional candidates within the same data set. Our search yields 15 candidates that we consider likely RCB stars/DYPers: new spectroscopic observations confirm that four of these candidates are RCB stars and four are DYPers. Our discovery of four new DYPers increases the number of known Galactic DYPers from two to six; noteworthy is that one of the new DYPers has a measured parallax and is m ≈ 7 mag, making it the brightest known DYPer to date. Future observations of these new DYPers should prove instrumental in establishing the RCB connection. We consider these results, derived from a machine-learned probabilistic classification catalog, as an important proof-of-concept for the efficient discovery of rare sources with time-domain surveys.
Machine-learned Identification of RR Lyrae Stars from Sparse, Multi-band Data: The PS1 Sample
NASA Astrophysics Data System (ADS)
Sesar, Branimir; Hernitschek, Nina; Mitrović, Sandra; Ivezić, Željko; Rix, Hans-Walter; Cohen, Judith G.; Bernard, Edouard J.; Grebel, Eva K.; Martin, Nicolas F.; Schlafly, Edward F.; Burgett, William S.; Draper, Peter W.; Flewelling, Heather; Kaiser, Nick; Kudritzki, Rolf P.; Magnier, Eugene A.; Metcalfe, Nigel; Tonry, John L.; Waters, Christopher
2017-05-01
RR Lyrae stars may be the best practical tracers of Galactic halo (sub-)structure and kinematics. The PanSTARRS1 (PS1) 3π survey offers multi-band, multi-epoch, precise photometry across much of the sky, but a robust identification of RR Lyrae stars in this data set poses a challenge, given PS1's sparse, asynchronous multi-band light curves (≲ 12 epochs in each of five bands, taken over a 4.5 year period). We present a novel template fitting technique that uses well-defined and physically motivated multi-band light curves of RR Lyrae stars, and demonstrate that we get accurate period estimates, precise to 2 s in > 80 % of cases. We augment these light-curve fits with other features from photometric time-series and provide them to progressively more detailed machine-learned classification models. From these models, we are able to select the widest (three-fourths of the sky) and deepest (reaching 120 kpc) sample of RR Lyrae stars to date. The PS1 sample of ˜45,000 RRab stars is pure (90%) and complete (80% at 80 kpc) at high galactic latitudes. It also provides distances that are precise to 3%, measured with newly derived period-luminosity relations for optical/near-infrared PS1 bands. With the addition of proper motions from Gaia and radial velocity measurements from multi-object spectroscopic surveys, we expect the PS1 sample of RR Lyrae stars to become the premier source for studying the structure, kinematics, and the gravitational potential of the Galactic halo. The techniques presented in this study should translate well to other sparse, multi-band data sets, such as those produced by the Dark Energy Survey and the upcoming Large Synoptic Survey Telescope Galactic plane sub-survey.
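As an illustration of the final step of such a pipeline, the sketch below turns a period and mean apparent magnitude into a distance via a period-luminosity relation; the PL coefficients are placeholders, not the PS1-band relations derived in the paper.

```python
# Minimal sketch of how a period-luminosity (PL) relation yields a distance.
# The PL coefficients A, B below are placeholders, not the paper's values.
import math

A, B = -1.6, -0.6   # placeholder PL relation  M = A*log10(P/day) + B

def rr_lyrae_distance_kpc(period_days, mean_mag, extinction=0.0):
    abs_mag = A * math.log10(period_days) + B        # absolute magnitude from the PL relation
    mu = mean_mag - extinction - abs_mag             # distance modulus
    return 10 ** (mu / 5.0 + 1.0) / 1000.0           # distance in kpc

print(f"{rr_lyrae_distance_kpc(0.55, 17.3):.1f} kpc")
```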
MUSE deep-fields: the Ly α luminosity function in the Hubble Deep Field-South at 2.91 < z < 6.64
NASA Astrophysics Data System (ADS)
Drake, Alyssa B.; Guiderdoni, Bruno; Blaizot, Jérémy; Wisotzki, Lutz; Herenz, Edmund Christian; Garel, Thibault; Richard, Johan; Bacon, Roland; Bina, David; Cantalupo, Sebastiano; Contini, Thierry; den Brok, Mark; Hashimoto, Takuya; Marino, Raffaella Anna; Pelló, Roser; Schaye, Joop; Schmidt, Kasper B.
2017-10-01
We present the first estimate of the Ly α luminosity function using blind spectroscopy from the Multi Unit Spectroscopic Explorer, MUSE, in the Hubble Deep Field-South. Using automatic source-detection software, we assemble a homogeneously detected sample of 59 Ly α emitters covering a flux range of -18.0 < log10 (F) < -16.3 (erg s^-1 cm^-2), corresponding to luminosities of 41.4 < log10 (L) < 42.8 (erg s^-1). As recent studies have shown, Ly α fluxes can be underestimated by a factor of 2 or more via traditional methods, and so we undertake a careful assessment of each object's Ly α flux using a curve-of-growth analysis to account for extended emission. We describe our self-consistent method for determining the completeness of the sample, and present an estimate of the global Ly α luminosity function between redshifts 2.91 < z < 6.64 using the 1/Vmax estimator. We find that the luminosity function is higher than many number densities reported in the literature by a factor of 2-3, although our result is consistent at the 1σ level with most of these studies. Our observed luminosity function is also in good agreement with predictions from semi-analytic models, and shows no evidence for strong evolution between the high- and low-redshift halves of the data. We demonstrate that one's approach to Ly α flux estimation does alter the observed luminosity function, and caution that accurate flux assessments will be crucial in measurements of the faint-end slope. This is a pilot study for the Ly α luminosity function in the MUSE deep-fields, to be built on with data from the Hubble Ultra Deep Field that will increase the size of our sample by almost a factor of 10.
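For reference, a minimal implementation of the 1/Vmax estimator used above could look like the following; the binning, completeness weighting, and toy numbers are generic illustrations rather than the survey-specific values.

```python
# Sketch of the classical 1/Vmax luminosity-function estimator with an
# optional per-object completeness weight (generic illustration).
import numpy as np

def vmax_lf(log_l, v_max, bins, completeness=None):
    """Return bin centres and Phi [per dex per Mpc^3] from 1/Vmax sums."""
    log_l = np.asarray(log_l)
    weights = 1.0 / np.asarray(v_max)
    if completeness is not None:
        weights /= np.asarray(completeness)      # correct for detection incompleteness
    hist, edges = np.histogram(log_l, bins=bins, weights=weights)
    dlogl = np.diff(edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, hist / dlogl                 # number density per dex

# Toy example: 5 emitters with Vmax in Mpc^3 and 80% completeness
log_l = [41.5, 41.7, 42.0, 42.3, 42.6]
v_max = [2e4, 3e4, 8e4, 1.5e5, 2.5e5]
print(vmax_lf(log_l, v_max, bins=np.linspace(41.4, 42.9, 4), completeness=0.8))
```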
Data acquisition and processing in the ATLAS tile calorimeter phase-II upgrade demonstrator
NASA Astrophysics Data System (ADS)
Valero, A.; Tile Calorimeter System, ATLAS
2017-10-01
The LHC has planned a series of upgrades culminating in the High Luminosity LHC which will have an average luminosity 5-7 times larger than the nominal Run 2 value. The ATLAS Tile Calorimeter will undergo an upgrade to accommodate the HL-LHC parameters. The TileCal readout electronics will be redesigned, introducing a new readout strategy. A Demonstrator program has been developed to evaluate the new proposed readout architecture and prototypes of all the components. In the Demonstrator, the detector data received in the Tile PreProcessors (PPr) are stored in pipeline buffers and upon the reception of an external trigger signal the data events are processed, packed and readout in parallel through the legacy ROD system, the new Front-End Link eXchange system and an ethernet connection for monitoring purposes. This contribution describes in detail the data processing and the hardware, firmware and software components of the TileCal Demonstrator readout system.
Design and performance studies of a hadronic calorimeter for a FCC-hh experiment
NASA Astrophysics Data System (ADS)
Faltova, J.
2018-03-01
The hadron-hadron Future Circular Collider (FCC-hh) project studies the physics reach of a proton-proton machine with a centre-of-mass energy of 100 TeV and five times greater peak luminosities than at the High-Luminosity LHC (HL-LHC). The high-energy regime of the FCC-hh opens new opportunities for the discovery of physics beyond the standard model. At 100 TeV a large fraction of the W, Z, H bosons and top quarks are produced with a significant boost. This requires efficient reconstruction of very energetic objects decaying hadronically. The reconstruction of those boosted objects sets the calorimeter performance requirements in terms of energy resolution, containment of highly energetic hadron showers, and high transverse granularity. We present the current baseline technologies for the calorimeter system in the barrel region of the FCC-hh reference detector: a liquid-argon electromagnetic calorimeter and a scintillator-steel hadronic calorimeter. The focus of this paper is on the hadronic calorimeter and the performance studies for hadrons. The reconstruction of single particles and the achieved energy resolution for the combined system of the electromagnetic and hadronic calorimeters are discussed.
V and K-band Mass-Luminosity Relations for M Dwarf Stars
NASA Astrophysics Data System (ADS)
Benedict, George Frederick; Henry, Todd J.; McArthur, Barbara E.; Franz, Otto; Wasserman, Larry H.; Dieterich, Sergio
2015-08-01
Applying Hubble Space Telescope Fine Guidance Sensor astrometric techniques developed to establish relative orbits for binary stars (Franz et al. 1998, AJ, 116, 1432), determine masses of binary components (Benedict et al. 2001, AJ, 121, 1607), and measure companion masses of exoplanet host stars (McArthur et al. 2010, ApJ, 715, 1203), we derive masses with an average 2% error for 28 components of 14 M dwarf binary star systems. With these and other published masses we update the lower Main Sequence V-band Mass-Luminosity Relation first shown in Henry et al. 1999, ApJ, 512, 864. We demonstrate that a Mass-Luminosity Relation in the K-band has far less scatter. These relations can be used to estimate the masses of the ubiquitous red dwarfs (75% of all stars) to an accuracy of better than 5%.
Diffuse γ-ray emission from misaligned active galactic nuclei
Di Mauro, M.; Calore, F.; Donato, F.; ...
2013-12-20
Active galactic nuclei (AGNs) with jets seen at small viewing angles are the most luminous and abundant objects in the γ-ray sky. AGNs with jets misaligned along the line of sight appear fainter in the sky but are more numerous than the brighter blazars. Here, we calculate the diffuse γ-ray emission due to the population of misaligned AGNs (MAGNs) unresolved by the Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope (Fermi). Furthermore, a correlation between the γ-ray luminosity and the radio-core luminosity is established and demonstrated to be physical by statistical tests, as well as compatible with upper limits based on Fermi-LAT data for a large sample of radio-loud MAGNs. We constrain the derived γ-ray luminosity function by means of the source-count distribution of the radio galaxies detected by the Fermi-LAT. We finally calculate the diffuse γ-ray flux due to the whole MAGN population. These results demonstrate that MAGNs can contribute from 10% up to nearly the entire measured isotropic gamma-ray background. We evaluate a theoretical uncertainty on the flux of almost an order of magnitude.
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the capability of the proposed framework for machine-learning-assisted turbulence modeling. By improving the prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the predictive capability demanded of turbulence models in real applications.
A Solar-luminosity Model and Climate
NASA Technical Reports Server (NTRS)
Perry, Charles A.
1990-01-01
Although the mechanisms of climatic change are not completely understood, the potential causes include changes in the Sun's luminosity. Solar activity in the form of sunspots, flares, proton events, and radiation fluctuations has displayed periodic tendencies. Two types of proxy climatic data that can be related to periodic solar activity are varved geologic formations and freshwater diatom deposits. A model for solar luminosity was developed by using the geometric progression of harmonic cycles that is evident in solar and geophysical data. The model assumes that variation in global energy input is a result of many periods of individual solar-luminosity variations. The 0.1-percent variation of the solar constant measured during the last sunspot cycle provided the basis for determining the amplitude of each luminosity cycle. Model output is a summation of the amplitudes of each cycle of a geometric progression of harmonic sine waves that are referenced to the 11-year average solar cycle. When the last eight cycles in Emiliani's oxygen-18 variations from deep-sea cores were standardized to the average length of glaciations during the Pleistocene (88,000 years), correlation coefficients with the model output ranged from 0.48 to 0.76. In order to calibrate the model to real time, model output was graphically compared to indirect records of glacial advances and retreats during the last 24,000 years and with sea-level rises during the Holocene. Carbon-14 production during the last millennium and elevations of the Great Salt Lake for the last 140 years demonstrate significant correlations with modeled luminosity. Major solar flares during the last 90 years match well with the time-calibrated model.
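A conceptual sketch of the kind of model described above follows: the luminosity anomaly is summed over sine waves whose periods form a geometric progression anchored to the 11-year cycle. The progression ratio, number of cycles, and equal-amplitude scaling are assumptions made only for illustration; the abstract supplies the 11-year reference period and the 0.1-percent amplitude scale.

```python
# Conceptual sketch: fractional luminosity anomaly as a sum of sine waves whose
# periods form a geometric progression anchored to the 11-year solar cycle.
# Progression ratio, number of cycles and amplitude scaling are assumptions.
import numpy as np

BASE_PERIOD = 11.0        # years, average solar cycle (from the abstract)
RATIO       = 2.0         # assumed ratio of the geometric progression of periods
N_CYCLES    = 12          # assumed number of harmonic cycles summed
BASE_AMP    = 0.001       # 0.1% solar-constant variation over one cycle (abstract)

def modeled_luminosity(t_years):
    """Fractional luminosity anomaly at times t (years), summed over cycles."""
    t = np.asarray(t_years, dtype=float)
    total = np.zeros_like(t)
    for k in range(N_CYCLES):
        period = BASE_PERIOD * RATIO**k
        total += BASE_AMP * np.sin(2.0 * np.pi * t / period)   # equal amplitudes (assumed)
    return total

t = np.linspace(0, 24000, 2001)          # last 24,000 years, as in the calibration
anomaly = modeled_luminosity(t)
print(anomaly.min(), anomaly.max())
```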
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruschi, Marco
The new ATLAS luminosity monitor has many innovative aspects. Its photomultiplier tubes are used as detector elements, exploiting the Cherenkov light produced by charged particles above threshold crossing the quartz windows. The analog shaping of the readout chain has been improved in order to cope with the 25 ns bunch spacing of the LHC machine. The main readout card is a quite general processing unit based on a 12-bit, 500 MS/s flash ADC and on FPGAs, delivering the processed data to 1.3 Gb/s optical links. The article will describe all these aspects and will outline future perspectives of the card for next-generation high energy physics experiments. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, R.; Grenier, D.; Wollmann, D.
2014-08-15
A novel experiment has been performed at the CERN HiRadMat test facility to study the impact of the 440 GeV proton beam generated by the Super Proton Synchrotron on extended solid copper cylindrical targets. Substantial hydrodynamic tunneling of the protons in the target material has been observed that leads to significant lengthening of the projectile range, which confirms our previous theoretical predictions [N. A. Tahir et al., Phys. Rev. Spec. Top.-Accel. Beams 15, 051003 (2012)]. Simulation results show very good agreement with the experimental measurements. These results have very important implications on the machine protection design for powerful machines like the Large Hadron Collider (LHC), the future High Luminosity LHC, and the proposed huge 80 km circumference Future Circular Collider, which is currently being discussed at CERN. Another very interesting outcome of this work is that one may also study the field of High Energy Density Physics at this test facility.
The design of the new LHC connection cryostats
NASA Astrophysics Data System (ADS)
Vande Craen, A.; Barlow, G.; Eymin, C.; Moretti, M.; Parma, V.; Ramos, D.
2017-12-01
As part of the High Luminosity upgrade of the LHC, improved collimation schemes are needed to cope with the superconducting magnet quench limitations due to the increasing beam intensities and the particle debris produced at the collision points. Two new TCLD collimators have to be installed on either side of the ALICE experiment to intercept heavy-ion particle debris. Beam optics solutions were found to place these collimators in the continuous cryostat of the machine, in the locations where connection cryostats, bridging a gap of about 13 m between adjacent magnets, are already present. It is therefore planned to replace these connection cryostats with two new shorter ones separated by a bypass cryostat, allowing the collimators to be placed close to the beam pipes. The connection cryostats, of a new design compared to the existing ones, will still have to ensure the continuity of the technical systems of the machine cryostat (i.e. beam lines, cryogenic and electrical circuits, insulation vacuum). This paper describes the functionalities and the design solutions implemented, as well as the plans for their construction.
Multipole and field uniformity tailoring of a 750 MHz rf dipole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delayen, Jean R.; Castillo, Alejandro
2014-12-01
In recent years great interest has been shown in developing rf structures for beam separation, correction of the geometrical degradation of luminosity, and diagnostic applications in both lepton and hadron machines, with the rf dipole being a very promising candidate among them. The rf dipole has been tested and proven to have attractive properties that include high shunt impedance, low and balanced surface fields, the absence of lower-order modes, and far-spaced higher-order modes that simplify the damping scheme. It is also a compact and versatile design over a considerable range of frequencies, and its fairly simple geometry is suitable for both fabrication and surface treatment. The rf dipole geometry can also be optimized to lower the multipacting risk and to tailor multipoles to meet machine-specific field uniformity tolerances. In the present work, a survey of field uniformities and multipole contents for a set of 750 MHz rf dipole designs is presented as both a qualitative and quantitative analysis of the inherent flexibility of the structure and its limitations.
The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking
Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; ...
2017-08-08
Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. Furthermore, we will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.
The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, Steven; Anderson, Dustin; Calafiura, Paolo
Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. Furthermore, we will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.
The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking
NASA Astrophysics Data System (ADS)
Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; Cerati, Giuseppe; Gray, Lindsey; Kowalkowski, Jim; Mudigonda, Mayur; Prabhat; Spentzouris, Panagiotis; Spiropoulou, Maria; Tsaris, Aristeidis; Vlimant, Jean-Roch; Zheng, Stephan
2017-08-01
Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. We will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.
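To make the LSTM idea concrete, here is a toy sketch (not the HEP.TrkX code): an LSTM reads the hit positions of a track candidate on successive detector layers and regresses the position expected on the next layer. The layer count, network size, and straight-line toy data are assumptions for this illustration only.

```python
# Toy illustration of LSTM-based next-hit prediction (not the HEP.TrkX code).
import torch
import torch.nn as nn

class NextHitLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # regress the next transverse position

    def forward(self, hits):                 # hits: (batch, layers, 1)
        out, _ = self.lstm(hits)
        return self.head(out[:, -1, :])      # prediction from the last layer's state

# Toy data: straight tracks whose position grows linearly with layer index
torch.manual_seed(0)
layers = torch.arange(8, dtype=torch.float32)
slopes = torch.rand(256, 1)                  # one slope per toy track
tracks = slopes * layers                     # (batch, layers)
x, y = tracks[:, :-1].unsqueeze(-1), tracks[:, -1:]

model = NextHitLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                         # short training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(loss.item())
```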
NASA Astrophysics Data System (ADS)
Banerji, Manda; Alaghband-Zadeh, S.; Hewett, Paul C.; McMahon, Richard G.
2015-03-01
We present a new population of z > 2 dust-reddened, type 1 quasars with 0.5 ≲ E(B - V) ≲ 1.5, selected using near-infrared (NIR) imaging data from the UKIDSS-LAS (Large Area Survey), ESO-VHS (European Southern Observatory-VISTA Hemisphere Survey) and WISE surveys. NIR spectra obtained using the Very Large Telescope for 24 new objects bring our total sample of spectroscopically confirmed hyperluminous (>10^13 L⊙), high-redshift dusty quasars to 38. There is no evidence for reddened quasars having significantly different Hα equivalent widths relative to unobscured quasars. The average black hole masses (˜10^9-10^10 M⊙) and bolometric luminosities (˜10^47 erg s^-1) are comparable to the most luminous unobscured quasars at the same redshift, but with a tail extending to very high luminosities of ˜10^48 erg s^-1. 66 per cent of the reddened quasars are detected at >3σ at 22 μm by WISE. The average 6-μm rest-frame luminosity is log10(L_6μm / erg s^-1) = 47.1 ± 0.4, making the objects among the mid-infrared brightest active galactic nuclei (AGN) currently known. The extinction-corrected space density estimate now extends over three magnitudes (-30 < M_i < -27) and demonstrates that the reddened quasar luminosity function is significantly flatter than that of the unobscured quasar population at z = 2-3. At the brightest magnitudes, M_i ≲ -29, the space density of our dust-reddened population exceeds that of unobscured quasars. A model where the probability that a quasar becomes dust-reddened increases at high luminosity is consistent with the observations, and such a dependence could be explained by an increase in luminosity and extinction during AGN-fuelling phases. The properties of our obscured type 1 quasars are distinct from the heavily obscured, Compton-thick AGN that have been identified at much fainter luminosities, and we conclude that they likely correspond to a brief evolutionary phase in massive galaxy formation.
The luminosity function for different morphological types in the CfA Redshift Survey
NASA Technical Reports Server (NTRS)
Marzke, Ronald O.; Geller, Margaret J.; Huchra, John P.; Corwin, Harold G., Jr.
1994-01-01
We derive the luminosity function for different morphological types in the original CfA Redshift Survey (CfA1) and in the first two slices of the CfA Redshift Survey Extension (CfA2). CfA1 is a complete sample containing 2397 galaxies distributed over 2.7 steradians with m_z ≤ 14.5. The first two complete slices of CfA2 contain 1862 galaxies distributed over 0.42 steradians with m_z = 15.5. The shapes of the E-S0 and spiral luminosity functions (LF) are indistinguishable. We do not confirm the steeply decreasing faint end in the E-S0 luminosity function found by Loveday et al. for an independent sample in the southern hemisphere. We demonstrate that incomplete classification in deep redshift surveys can lead to underestimates of the faint end of the elliptical luminosity function and could be partially responsible for the difference between the CfA survey and other local field surveys. The faint end of the LF for the Magellanic spirals and irregulars is very steep. The Sm-Im luminosity function is well fit by a Schechter function with M* = -18.79, α = -1.87, and φ* = 0.6 × 10^-3 for M_z ≤ -13. These galaxies are largely responsible for the excess at the faint end of the general CfA luminosity function. The abundance of intrinsically faint, blue galaxies nearby affects the interpretation of deep number counts. The dwarf population increases the expected counts at B = 25 in a no-evolution, q_0 = 0.05 model by a factor of two over standard no-evolution estimates. These dwarfs change the expected median redshift in deep redshift surveys by less than 10 percent. Thus the steep Sm-Im LF may contribute to the reconciliation of deep number counts with deep redshift surveys.
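For convenience, the Schechter fit quoted above can be evaluated directly; the sketch below implements the magnitude form φ(M) dM = 0.4 ln(10) φ* x^(α+1) e^(-x) dM with x = 10^(0.4(M*-M)), using the Sm-Im parameters from the text (units and normalization conventions follow the paper).

```python
# Sketch of the Schechter luminosity function in absolute-magnitude form,
# evaluated with the Sm-Im parameters quoted in the abstract.
import numpy as np

def schechter_mag(M, M_star=-18.79, alpha=-1.87, phi_star=0.6e-3):
    """Number density per magnitude for the quoted Sm-Im Schechter fit."""
    x = 10.0 ** (0.4 * (M_star - np.asarray(M)))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

for M in (-13, -15, -17, -19):
    print(M, f"{schechter_mag(M):.2e}")   # steep rise toward the faint end
```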
Wind Variability in Intermediate Luminosity B Supergiants
NASA Technical Reports Server (NTRS)
Massa, Derck
1996-01-01
This study used the unique spectroscopic diagnostics of intermediate luminosity B supergiants to determine the ubiquity and nature of wind variability. Specifically, (1) A detailed analysis of HD 64760 demonstrated massive ejections into its wind, provided the first clear demonstration of a 'photospheric connection' and ionization shifts in a stellar wind; (2) The international 'IUE MEGA campaign' obtained unprecedented temporal coverage of wind variability in rapidly rotating stars and demonstrated regularly repeating wind features originating in the photosphere; (3) A detailed analysis of wind variability in the rapidly rotating B1 Ib, gamma Ara demonstrated a two component wind with distinctly different mean states at different epochs; (4) A follow-on campaign to the MEGA project to study slowly rotating stars was organized and deemed a key project by ESA/NASA, and will obtain 30 days of IUE observations in May-June 1996; and (5) A global survey of archival IUE time series identified recurring spectroscopic signatures, identified with different physical phenomena. Items 4 and 5 above are still in progress and will be completed this summer in collaboration with Raman Prinja at University College, London.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larbalestier, David C.; Lee, Peter J.; Tarantini, Chiara
All present circular accelerators use superconducting magnets to bend and to focus the particle beams. The most powerful of these machines is the Large Hadron Collider (LHC) at CERN. The main ring dipole magnets of the LHC are made from Nb-Ti but, as the machine is upgraded to higher luminosity, more powerful magnets made of Nb3Sn will be required. Our work addresses how to make the Nb3Sn conductors more effective and more suitable for use in the LHC. The most important property of the superconducting conductor used for an accelerator magnet is that it must have very high critical current density, the property that allows the generation of high magnetic fields in small spaces. Nb3Sn is the original high field superconductor, the material which was discovered in 1960 to allow a high current density in a field of about 9 T. For the high luminosity upgrade of the LHC, much higher current densities in fields of about 12 T will be required. The critical value of the current density is of order 2600 A/mm^2 in a field of 12 T. But there are very important secondary factors that complicate the attainment of this critical current density. The first is that the effective filament diameter must be no larger than about 40 µm. The second is that the 50% of the cross-section of the Nb3Sn conductor that is pure copper must be protected from poisoning by any Sn leakage through the diffusion barrier that protects the package of niobium and tin from which the Nb3Sn is formed by a high temperature reaction. These three somewhat conflicting requirements mean that optimization of the conductor is complex. The work described in this contract report addresses these conflicting requirements. It shows that very sophisticated characterizations can uncover the way to satisfy all three requirements, and it also suggests that the ultimate optimization of Nb3Sn is still not in sight.
NASA Astrophysics Data System (ADS)
Stanley, F.; Alexander, D. M.; Harrison, C. M.; Rosario, D. J.; Wang, L.; Aird, J. A.; Bourne, N.; Dunne, L.; Dye, S.; Eales, S.; Knudsen, K. K.; Michałowski, M. J.; Valiante, E.; De Zotti, G.; Furlanetto, C.; Ivison, R.; Maddox, S.; Smith, M. W. L.
2017-12-01
We investigate the mean star formation rates (SFRs) in the host galaxies of ∼3000 optically selected quasi-stellar objects (QSOs) from the Sloan Digital Sky Survey within the Herschel-ATLAS fields, and a radio-luminous subsample covering the redshift range of z = 0.2-2.5. Using Wide-field Infrared Survey Explorer (WISE) and Herschel photometry (12-500 μm) we construct composite spectral energy distributions (SEDs) in bins of redshift and active galactic nucleus (AGN) luminosity. We perform SED fitting to measure the mean infrared luminosity due to star formation, removing the contamination from AGN emission. We find that the mean SFRs show a weak positive trend with increasing AGN luminosity. However, we demonstrate that the observed trend could be due to an increase in black hole (BH) mass (and a consequent increase of inferred stellar mass) with increasing AGN luminosity. We compare to a sample of X-ray selected AGN and find that the two populations have consistent mean SFRs when matched in AGN luminosity and redshift. On the basis of the available virial BH masses, and the evolving BH mass to stellar mass relationship, we find that the mean SFRs of our QSO sample are consistent with those of main sequence star-forming galaxies. Similarly the radio-luminous QSOs have mean SFRs that are consistent with both the overall QSO sample and with star-forming galaxies on the main sequence. In conclusion, on average QSOs reside on the main sequence of star-forming galaxies, and the observed positive trend between the mean SFRs and AGN luminosity can be attributed to BH mass and redshift dependencies.
NASA Astrophysics Data System (ADS)
Schindler, Jan-Torge; Fan, Xiaohui; McGreer, Ian
2018-01-01
Studies of the most luminous quasars at high redshift directly probe the evolution of the most massive black holes in the early Universe and their connection to massive galaxy formation. Unfortunately, extremely luminous quasars at high redshift are very rare objects. Only wide area surveys have a chance to constrain their population. The Sloan Digital Sky Survey (SDSS) and the Baryon Oscillation Spectroscopic Survey (BOSS) have so far provided the most widely adopted measurements of the type I quasar luminosity function (QLF) at z > 3. However, a careful re-examination of the SDSS quasar sample revealed that the SDSS quasar selection is in fact missing a significant fraction of z ~ 3 quasars at the brightest end. We identified the purely optical color selection of SDSS, in which quasars at these redshifts are strongly contaminated by late-type dwarfs, and the spectroscopic incompleteness of the SDSS footprint as the main reasons. Therefore we have designed the Extremely Luminous Quasar Survey (ELQS), based on a novel near-infrared JKW2 color cut using WISE AllWISE and 2MASS all-sky photometry, to yield high completeness for very bright (i < 18.0) quasars in the redshift range of 2.8 ≤ z ≤ 5.0. It effectively uses Random Forest machine-learning algorithms on SDSS and WISE photometry for quasar-star classification and photometric redshift estimation. The ELQS is spectroscopically following up ~230 new quasar candidates in an area of ~12,000 deg^2 in the SDSS footprint, to obtain a well-defined and complete quasar sample for an accurate measurement of the bright-end quasar luminosity function (QLF) at 2.8 ≤ z ≤ 5.0. So far the ELQS has identified 75 bright new quasars in this redshift range, and observations of the fall sky will continue until the end of the year. At the AAS winter meeting we will present the full spectroscopic results of the survey, including a re-estimation and extension of the high-z QLF toward higher luminosities.
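The classification step described above can be sketched, in hedged form, as a random forest trained on photometric colours. The two synthetic colour distributions below merely stand in for real SDSS/WISE training catalogues, and the feature choices are illustrative assumptions rather than the ELQS selection.

```python
# Hedged sketch of quasar-star classification with a random forest on colours.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
quasars = rng.normal(loc=[0.3, 0.8, 1.6], scale=0.3, size=(1000, 3))  # toy colours, e.g. i-z, z-J, J-W2
stars   = rng.normal(loc=[0.8, 0.5, 0.9], scale=0.3, size=(1000, 3))
X = np.vstack([quasars, stars])
y = np.array([1] * 1000 + [0] * 1000)                                 # 1 = quasar, 0 = star

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```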
redMaGiC: selecting luminous red galaxies from the DES Science Verification data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozo, E.
We introduce redMaGiC, an automated algorithm for selecting Luminous Red Galaxies (LRGs). The algorithm was developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the color cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. Additionally, we demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine-learning based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalog sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10^-3 (h^-1 Mpc)^-3, and a median photo-z bias (z_spec - z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4%. We also test our algorithm with Sloan Digital Sky Survey (SDSS) Data Release 8 (DR8) and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1% level.
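For concreteness, the photo-z quality metrics quoted above (median bias, scatter in Δz/(1+z), and the 5σ outlier fraction) can be computed as in the short sketch below, applied here to synthetic redshifts rather than the DES catalogue.

```python
# Illustrative photo-z quality metrics on a hypothetical sample.
import numpy as np

def photoz_metrics(z_spec, z_photo):
    dz = z_spec - z_photo
    bias = np.median(dz)                                   # median bias
    scatter = np.std(dz / (1.0 + z_spec))                  # sigma_z / (1+z)
    outliers = np.mean(np.abs(dz / (1.0 + z_spec)) > 5.0 * scatter)
    return bias, scatter, outliers

rng = np.random.default_rng(1)
z_spec = rng.uniform(0.2, 0.8, 10000)
z_photo = z_spec + 0.017 * (1 + z_spec) * rng.standard_normal(10000)
print(photoz_metrics(z_spec, z_photo))
```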
Upgrade of Tile Calorimeter of the ATLAS Detector for the High Luminosity LHC.
NASA Astrophysics Data System (ADS)
Valdes Santurio, Eduardo; Tile Calorimeter System, ATLAS
2017-11-01
The Tile Calorimeter (TileCal) is the hadronic calorimeter of ATLAS covering the central region of the ATLAS experiment. TileCal is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are read out by wavelength shifting fibers coupled to photomultiplier tubes (PMT). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The High Luminosity Large Hadron Collider (HL-LHC) will have a peak luminosity of 5 × 10^34 cm^-2 s^-1, five times higher than the design luminosity of the LHC. TileCal will undergo a major replacement of its on- and off-detector electronics for the high luminosity programme of the LHC in 2026. The calorimeter signals will be digitized and sent directly to the off-detector electronics, where the signals are reconstructed and shipped to the first level of trigger at a rate of 40 MHz. This will provide a better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. Three different options are presently being investigated for the front-end electronic upgrade. Extensive test beam studies will determine which option will be selected. Field Programmable Gate Arrays (FPGAs) are extensively used for the logic functions of the off- and on-detector electronics. One hybrid demonstrator prototype module with the new calorimeter module electronics, but still compatible with the present system, may be inserted in ATLAS at the end of 2016.
Formation and Recondensation of Complex Organic Molecules During Protostellar Luminosity Outbursts
NASA Technical Reports Server (NTRS)
Taquet, Vianney; Wirstrom, Eva S.; Charnley, Steven B.
2016-01-01
During the formation of stars, the accretion of surrounding material toward the central object is thought to undergo strong luminosity outbursts followed by long periods of relative quiescence, even at the early stages of star formation when the protostar is still embedded in a large envelope. We investigated the gas-phase formation and recondensation of the complex organic molecules (COMs) di-methyl ether and methyl formate, induced by sudden ice evaporation processes occurring during luminosity outbursts of different amplitudes in protostellar envelopes. For this purpose, we updated a gas-phase chemical network forming COMs in which ammonia plays a key role. The model calculations presented here demonstrate that ion-molecule reactions alone could account for the observed presence of di-methyl ether and methyl formate in a large fraction of protostellar cores without recourse to grain-surface chemistry, although they depend on uncertain ice abundances and gas-phase reaction branching ratios. In spite of the short outburst timescales of about 100 years, abundance ratios of the considered species higher than 10% with respect to methanol are predicted during outbursts due to their low binding energies relative to water and methanol which delay their recondensation during cooling. Although the current luminosity of most embedded protostars would be too low to produce complex organics in the hot-core regions that are observable with current sub-millimetric interferometers, previous luminosity outburst events would induce the formation of COMs in extended regions of protostellar envelopes with sizes increasing by up to one order of magnitude.
Formation and Recondensation of Complex Organic Molecules during Protostellar Luminosity Outbursts
NASA Astrophysics Data System (ADS)
Taquet, Vianney; Wirström, Eva S.; Charnley, Steven B.
2016-04-01
During the formation of stars, the accretion of surrounding material toward the central object is thought to undergo strong luminosity outbursts followed by long periods of relative quiescence, even at the early stages of star formation when the protostar is still embedded in a large envelope. We investigated the gas-phase formation and recondensation of the complex organic molecules (COMs) di-methyl ether and methyl formate, induced by sudden ice evaporation processes occurring during luminosity outbursts of different amplitudes in protostellar envelopes. For this purpose, we updated a gas-phase chemical network forming COMs in which ammonia plays a key role. The model calculations presented here demonstrate that ion-molecule reactions alone could account for the observed presence of di-methyl ether and methyl formate in a large fraction of protostellar cores without recourse to grain-surface chemistry, although they depend on uncertain ice abundances and gas-phase reaction branching ratios. In spite of the short outburst timescales of about 100 years, abundance ratios of the considered species higher than 10% with respect to methanol are predicted during outbursts due to their low binding energies relative to water and methanol which delay their recondensation during cooling. Although the current luminosity of most embedded protostars would be too low to produce complex organics in the hot-core regions that are observable with current sub-millimetric interferometers, previous luminosity outburst events would induce the formation of COMs in extended regions of protostellar envelopes with sizes increasing by up to one order of magnitude.
Multipacting optimization of a 750 MHz rf dipole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delayen, Jean R.; Castillo, Alejandro
2014-12-01
Crab crossing schemes have been proposed to recover the luminosity degradation caused by crossing angles at the interaction points of next generation colliders, where the crossing angles are used to avoid sharp bending magnets and their resulting large synchrotron radiation, which is highly undesirable in the detector region. The rf dipole has been considered for a number of different applications in several machines, both rings and linear colliders. We present in this paper a study of how geometrical variations of the design affect the multipacting levels and their location, for a crabbing/deflecting application in a high current (3/0.5 A), high repetition rate (750 MHz) electron/proton collider, in order to provide a comparison point for similar applications of rf dipoles.
The future of the Large Hadron Collider and CERN.
Heuer, Rolf-Dieter
2012-02-28
This paper presents the Large Hadron Collider (LHC) and its current scientific programme and outlines options for high-energy colliders at the energy frontier for the years to come. The immediate plans include the exploitation of the LHC at its design luminosity and energy, as well as upgrades to the LHC and its injectors. This may be followed by a linear electron-positron collider, based on the technology being developed by the Compact Linear Collider and the International Linear Collider collaborations, or by a high-energy electron-proton machine. This contribution describes the past, present and future directions, all of which have a unique value to add to experimental particle physics, and concludes by outlining key messages for the way forward.
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
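A hedged sketch of the kind of pipeline described above is shown below: free-text order indications converted to TF-IDF features and fed to a gradient boosting classifier. The example orders, protocol labels, and hyperparameters are invented for illustration and are not the published model.

```python
# Minimal sketch: protocol selection from free-text MRI order indications.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

orders = ["headache, rule out mass", "low back pain with radiculopathy",
          "seizure, first episode", "chronic neck pain"]
protocols = ["brain_with_contrast", "lumbar_spine", "brain_epilepsy",
             "cervical_spine"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      GradientBoostingClassifier(n_estimators=50, random_state=0))
model.fit(orders, protocols)
print(model.predict(["new onset headache, rule out mass"]))
```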
NASA Astrophysics Data System (ADS)
Bowers, Ariel; Whitmore, B. C.; Chandar, R.; Larsen, S. S.
2014-01-01
Luminosity functions have been determined for star cluster populations in 20 nearby (4-30 Mpc), star-forming galaxies based on ACS source lists generated by the Hubble Legacy Archive (http://hla.stsci.edu). These cluster catalogs provide one of the largest sets of uniform, automatically-generated cluster candidates available in the literature at present. Comparisons are made with other recently generated cluster catalogs, demonstrating that the HLA-generated catalogs are of similar quality but in general do not go as deep. A typical cluster luminosity function can be approximated by a power law, dN/dL ∝ L^α, with an average value for α of -2.37 and rms scatter of 0.18. A comparison of fitting results based on methods which use binned and unbinned data shows good agreement, although there may be a systematic tendency for the unbinned (maximum-likelihood) method to give slightly more negative values of α for galaxies with steeper luminosity functions. Our uniform database results in a small scatter (0.5 magnitude) in the correlation between the magnitude of the brightest cluster (M_brightest) and the log of the number of clusters brighter than M_I = -9 (log N). We also examine the magnitude of the brightest cluster vs. log SFR for a sample including LIRGs and ULIRGs.
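The unbinned (maximum-likelihood) power-law fit mentioned above has a simple closed-form estimator for a pure power law dN/dL ∝ L^α above a completeness limit. The sketch below shows the generic estimator applied to a synthetic sample with α = -2.37; it is not the authors' fitting code.

```python
# Unbinned MLE of a power-law slope for p(L) ∝ L^alpha on [L_min, inf), alpha < -1.
import numpy as np

def powerlaw_mle(L, L_min):
    L = np.asarray(L, dtype=float)
    L = L[L >= L_min]
    n = L.size
    alpha_hat = -1.0 - n / np.sum(np.log(L / L_min))
    err = abs(alpha_hat + 1.0) / np.sqrt(n)        # asymptotic 1-sigma error
    return alpha_hat, err

# generate a fake luminosity sample with alpha = -2.37 and recover it
rng = np.random.default_rng(2)
alpha_true, L_min = -2.37, 1.0
u = rng.uniform(size=5000)
L = L_min * (1.0 - u) ** (1.0 / (alpha_true + 1.0))  # inverse-CDF sampling
print(powerlaw_mle(L, L_min))
```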
NASA Astrophysics Data System (ADS)
Koliopanos, F.; Ciambur, B.; Graham, A.; Webb, N.; Coriat, M.; Mutlu-Pakdil, B.; Davis, B.; Godet, O.; Barret, D.; Seigar, M.
2017-10-01
Intermediate mass black holes (IMBHs) are predicted by a variety of models and are the likely seeds for supermassive BHs (SMBHs). However, we have yet to establish their existence. One method by which we can discover IMBHs is to measure the mass of an accreting BH using X-ray and radio observations, drawing on the correlation between radio luminosity, X-ray luminosity and BH mass known as the fundamental plane of BH activity (FP-BH). Furthermore, the mass of BHs in the centers of galaxies can be estimated using scaling relations between BH mass and galactic properties. We are initiating a campaign to search for IMBH candidates in dwarf galaxies with low-luminosity AGN (LLAGN), using - for the first time - three different scaling relations and the FP-BH simultaneously. In this first stage of our campaign, we measure the masses of seven LLAGN that have been previously suggested to host central IMBHs, investigate the consistency between the predictions of the BH scaling relations and the FP-BH in the low mass regime, and demonstrate that this multiple-method approach provides a robust average mass prediction. In my talk, I will discuss our methodology, results and the next steps of this campaign.
Status of the Future Circular Collider Study
NASA Astrophysics Data System (ADS)
Benedikt, Michael
2016-03-01
Following the 2013 update of the European Strategy for Particle Physics, the international Future Circular Collider (FCC) Study has been launched by CERN as host institute, to design an energy frontier hadron collider (FCC-hh) in a new 80-100 km tunnel with a centre-of-mass energy of about 100 TeV, an order of magnitude beyond the LHC's, as a long-term goal. The FCC study also includes the design of a 90-350 GeV high-luminosity lepton collider (FCC-ee) installed in the same tunnel, serving as Higgs, top and Z factory, as a potential intermediate step, as well as an electron-proton collider option (FCC-he). The physics cases for such machines will be assessed and concepts for experiments will be developed in time for the next update of the European Strategy for Particle Physics by the end of 2018. The presentation will summarize the status of machine designs and parameters and discuss the essential technical components to be developed in the frame of the FCC study. Key elements are superconducting accelerator-dipole magnets with a field of 16 T for the hadron collider and high-power, high-efficiency RF systems for the lepton collider. In addition the unprecedented beam power presents special challenges for the hadron collider for all aspects of beam handling and machine protection. First conclusions of geological investigations and implementation studies will be presented. The status of the FCC collaboration and the further planning for the study will be outlined.
Merritt, Stephanie M; Ilgen, Daniel R
2008-04-01
We provide an empirical demonstration of the importance of attending to human user individual differences in examinations of trust and automation use. Past research has generally supported the notions that machine reliability predicts trust in automation, and trust in turn predicts automation use. However, links between user personality and perceptions of the machine with trust in automation have not been empirically established. On our X-ray screening task, 255 students rated trust and made automation use decisions while visually searching for weapons in X-ray images of luggage. We demonstrate that individual differences affect perceptions of machine characteristics when actual machine characteristics are constant, that perceptions account for 52% of trust variance above the effects of actual characteristics, and that perceptions mediate the effects of actual characteristics on trust. Importantly, we also demonstrate that when administered at different times, the same six trust items reflect two types of trust (dispositional trust and history-based trust) and that these two trust constructs are differentially related to other variables. Interactions were found among user characteristics, machine characteristics, and automation use. Our results suggest that increased specificity in the conceptualization and measurement of trust is required, future researchers should assess user perceptions of machine characteristics in addition to actual machine characteristics, and incorporation of user extraversion and propensity to trust machines can increase prediction of automation use decisions. Potential applications include the design of flexible automation training programs tailored to individuals who differ in systematic ways.
ERIC Educational Resources Information Center
Nietupski, John; And Others
1984-01-01
Four elementary age moderately disabled students were taught to use a picture-prompt prosthetic to make vending machine purchases. All students reached criterion on the vending machine use task, demonstrated partial generalization to untrained machines, and three Ss exhibited maintenance as much as six weeks beyond the termination of instruction.…
Stellar Parameters in an Instant with Machine Learning. Application to Kepler LEGACY Targets
NASA Astrophysics Data System (ADS)
Bellinger, Earl P.; Angelou, George C.; Hekker, Saskia; Basu, Sarbani; Ball, Warrick H.; Guggenberger, Elisabet
2017-10-01
With the advent of dedicated photometric space missions, the ability to rapidly process huge catalogues of stars has become paramount. Bellinger and Angelou et al. [1] recently introduced a new method based on machine learning for inferring the stellar parameters of main-sequence stars exhibiting solar-like oscillations. The method makes precise predictions that are consistent with other methods, but with the advantages of being able to explore many more parameters while costing practically no time. Here we apply the method to 52 so-called "LEGACY" main-sequence stars observed by the Kepler space mission. For each star, we present estimates and uncertainties of mass, age, radius, luminosity, core hydrogen abundance, surface helium abundance, surface gravity, initial helium abundance, and initial metallicity as well as estimates of their evolutionary model parameters of mixing length, overshooting coefficient, and diffusion multiplication factor. We obtain median uncertainties in stellar age, mass, and radius of 14.8%, 3.6%, and 1.7%, respectively. The source code for all analyses and for all figures appearing in this manuscript can be found electronically at
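The general idea of mapping observables to stellar parameters with a trained regressor can be sketched as below. The toy "grid" is generated from crude scaling relations purely for illustration and is not the published pipeline of ref. [1]; all variable names and relations are assumptions.

```python
# Hedged sketch: regression from observables to stellar parameters on a toy model grid.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
mass = rng.uniform(0.8, 1.4, 5000)                     # Msun
age = rng.uniform(1.0, 12.0, 5000)                     # Gyr
radius = mass ** 0.9 * (1.0 + 0.02 * age)              # toy relation
teff = 5777.0 * mass ** 0.5 / radius ** 0.25           # toy effective temperature
dnu = 135.1 * np.sqrt(mass / radius ** 3)              # toy large frequency separation

X = np.column_stack([teff, dnu])                       # "observables"
Y = np.column_stack([mass, age, radius])               # targets

reg = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, Y)
print(reg.predict([[5777.0, 135.1]]))                  # Sun-like inputs
```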
NASA Technical Reports Server (NTRS)
Ferkinhoff, C.; Hailey-Dunsheath, S.; Nikola, T.; Parshley, S. C.; Stacey, G. J.; Benford, D. J.; Staguhn, J. G.
2010-01-01
We have made the first detections of the 88 μm [O III] line from galaxies in the early universe, detecting the line from the lensed active galactic nucleus (AGN)/starburst composite systems APM 08279+5255 at z = 3.911 and SMM J02399-0136 at z = 2.8076. The line is exceptionally bright from both systems, with apparent (lensed) luminosities of approximately 10^11 L⊙. For APM 08279, the [O III] line flux can be modeled in a star formation paradigm, with the stellar radiation field dominated by stars with effective temperatures T_eff > 36,000 K, similar to the starburst found in M82. The model implies that approximately 35% of the total far-IR luminosity of the system is generated by the starburst, with the remainder arising from dust heated by the AGN. The 88 μm line can also be generated in the narrow-line region of the AGN if gas densities are around a few 1000 cm^-3. For SMM J02399, the [O III] line likely arises from HII regions formed by hot (T_eff > 40,000 K) young stars in a massive starburst that dominates the far-IR luminosity of the system. The present work demonstrates the utility of the [O III] line for characterizing starbursts and AGN within galaxies in the early universe. These are the first detections of this astrophysically important line from galaxies beyond a redshift of 0.05.
THE RED SUPERGIANT CONTENT OF M31
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massey, Philip; Evans, Kate Anne, E-mail: kevans@caltech.edu, E-mail: phil.massey@lowell.edu
2016-08-01
We investigate the red supergiant (RSG) population of M31, obtaining the radial velocities of 255 stars. These data substantiate membership of our photometrically selected sample, demonstrating that Galactic foreground stars and extragalactic RSGs can be distinguished on the basis of B-V, V-R two-color diagrams. In addition, we use these spectra to measure effective temperatures and assign spectral types, deriving physical properties for 192 RSGs. Comparison with the solar metallicity Geneva evolutionary tracks indicates astonishingly good agreement. The most luminous RSGs in M31 are likely evolved from 25-30 M_⊙ stars, while the vast majority evolved from stars with initial masses of 20 M_⊙ or less. There is an interesting bifurcation in the distribution of RSGs with effective temperatures that increases with higher luminosities, with one sequence consisting of early K-type supergiants, and with the other consisting of M-type supergiants that become later (cooler) with increasing luminosities. This separation is only partially reflected in the evolutionary tracks, although that might be due to the mismatch in metallicities between the solar Geneva models and the higher-than-solar metallicity of M31. As the luminosities increase the median spectral type also increases; i.e., the higher mass RSGs spend more time at cooler temperatures than do those of lower luminosities, a result which is new to this study. Finally we discuss what would be needed observationally to successfully build a luminosity function that could be used to constrain the mass-loss rates of RSGs as our Geneva colleagues have suggested.
FORMATION AND RECONDENSATION OF COMPLEX ORGANIC MOLECULES DURING PROTOSTELLAR LUMINOSITY OUTBURSTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taquet, Vianney; Wirström, Eva S.; Charnley, Steven B.
2016-04-10
During the formation of stars, the accretion of surrounding material toward the central object is thought to undergo strong luminosity outbursts followed by long periods of relative quiescence, even at the early stages of star formation when the protostar is still embedded in a large envelope. We investigated the gas-phase formation and recondensation of the complex organic molecules (COMs) di-methyl ether and methyl formate, induced by sudden ice evaporation processes occurring during luminosity outbursts of different amplitudes in protostellar envelopes. For this purpose, we updated a gas-phase chemical network forming COMs in which ammonia plays a key role. The model calculations presented here demonstrate that ion–molecule reactions alone could account for the observed presence of di-methyl ether and methyl formate in a large fraction of protostellar cores without recourse to grain-surface chemistry, although they depend on uncertain ice abundances and gas-phase reaction branching ratios. In spite of the short outburst timescales of about 100 years, abundance ratios of the considered species higher than 10% with respect to methanol are predicted during outbursts due to their low binding energies relative to water and methanol which delay their recondensation during cooling. Although the current luminosity of most embedded protostars would be too low to produce complex organics in the hot-core regions that are observable with current sub-millimetric interferometers, previous luminosity outburst events would induce the formation of COMs in extended regions of protostellar envelopes with sizes increasing by up to one order of magnitude.
A general-purpose machine learning framework for predicting properties of inorganic materials
Ward, Logan; Agrawal, Ankit; Choudhary, Alok; ...
2016-08-26
A very active area of materials research is to devise methods that use machine learning to automatically extract predictive models from existing materials data. While prior examples have demonstrated successful models for some applications, many more applications exist where machine learning can make a strong impact. To enable faster development of machine-learning-based models for such applications, we have created a framework capable of being applied to a broad range of materials data. Our method works by using a chemically diverse list of attributes, which we demonstrate are suitable for describing a wide variety of properties, and a novel method for partitioning the data set into groups of similar materials to boost the predictive accuracy. In this manuscript, we demonstrate how this new method can be used to predict diverse properties of crystalline and amorphous materials, such as band gap energy and glass-forming ability.
A general-purpose machine learning framework for predicting properties of inorganic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan; Agrawal, Ankit; Choudhary, Alok
A very active area of materials research is to devise methods that use machine learning to automatically extract predictive models from existing materials data. While prior examples have demonstrated successful models for some applications, many more applications exist where machine learning can make a strong impact. To enable faster development of machine-learning-based models for such applications, we have created a framework capable of being applied to a broad range of materials data. Our method works by using a chemically diverse list of attributes, which we demonstrate are suitable for describing a wide variety of properties, and a novel method for partitioning the data set into groups of similar materials to boost the predictive accuracy. In this manuscript, we demonstrate how this new method can be used to predict diverse properties of crystalline and amorphous materials, such as band gap energy and glass-forming ability.
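A greatly simplified, hedged illustration of the composition-attribute idea described in the two preceding records is sketched below: a few element-property statistics are built for each compound and a tree-based regressor is fit to a target property. The tiny element table, the band-gap values, and the feature set are invented placeholders, not the published attribute set.

```python
# Hedged sketch: composition-based features + random forest for a material property.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

ELEMENT_PROPS = {   # (atomic number, electronegativity, covalent radius / pm) - toy table
    "Ga": (31, 1.81, 122), "As": (33, 2.18, 119),
    "Zn": (30, 1.65, 122), "O": (8, 3.44, 66), "Si": (14, 1.90, 111),
}

def featurize(composition):
    """composition: dict element -> atomic fraction; returns mean and spread of properties."""
    fr = np.array(list(composition.values()), dtype=float)
    props = np.array([ELEMENT_PROPS[el] for el in composition])
    mean = (fr[:, None] * props).sum(axis=0)
    spread = (fr[:, None] * np.abs(props - mean)).sum(axis=0)
    return np.concatenate([mean, spread])

X = np.array([featurize(c) for c in ({"Ga": 0.5, "As": 0.5},
                                     {"Zn": 0.5, "O": 0.5},
                                     {"Si": 1.0})])
y = np.array([1.4, 3.4, 1.1])        # illustrative band gaps in eV
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([featurize({"Ga": 0.5, "As": 0.5})]))
```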
Fog Machines, Vapors, and Phase Diagrams
ERIC Educational Resources Information Center
Vitz, Ed
2008-01-01
A series of demonstrations is described that elucidate the operation of commercial fog machines by using common laboratory equipment and supplies. The formation of fogs, or "mixing clouds", is discussed in terms of the phase diagram for water and other chemical principles. The demonstrations can be adapted for presentation suitable for elementary…
Addressing uncertainty in atomistic machine learning.
Peterson, Andrew A; Christensen, Rune; Khorshidi, Alireza
2017-05-10
Machine-learning regression has been demonstrated to precisely emulate the potential energy and forces that are output from more expensive electronic-structure calculations. However, to predict new regions of the potential energy surface, an assessment must be made of the credibility of the predictions. In this perspective, we address the types of errors that might arise in atomistic machine learning, the unique aspects of atomistic simulations that make machine-learning challenging, and highlight how uncertainty analysis can be used to assess the validity of machine-learning predictions. We suggest this will allow researchers to more fully use machine learning for the routine acceleration of large, high-accuracy, or extended-time simulations. In our demonstrations, we use a bootstrap ensemble of neural network-based calculators, and show that the width of the ensemble can provide an estimate of the uncertainty when the width is comparable to that in the training data. Intriguingly, we also show that the uncertainty can be localized to specific atoms in the simulation, which may offer hints for the generation of training data to strategically improve the machine-learned representation.
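The ensemble-width idea described above can be illustrated with a short, hedged sketch: a bootstrap ensemble of small neural-network regressors fit to a toy one-dimensional potential, with the spread of ensemble predictions serving as the uncertainty estimate. This is not the authors' atomistic calculator.

```python
# Minimal sketch: bootstrap ensemble of NN regressors, ensemble std as uncertainty.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, 60)[:, None]
y_train = x_train[:, 0] ** 2 + 0.05 * rng.standard_normal(60)    # toy energy curve

ensemble = []
for i in range(10):
    idx = rng.integers(0, len(x_train), len(x_train))             # bootstrap resample
    m = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=i)
    ensemble.append(m.fit(x_train[idx], y_train[idx]))

x_test = np.linspace(-3.0, 3.0, 7)[:, None]                       # includes extrapolation
preds = np.array([m.predict(x_test) for m in ensemble])
for x, mu, sig in zip(x_test[:, 0], preds.mean(0), preds.std(0)):
    print(f"x = {x:+.1f}  E = {mu:6.3f} +/- {sig:.3f}")
```

Note that the ensemble spread grows outside the training interval, which is the qualitative behaviour the uncertainty estimate is meant to capture.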
Testing of the Support Vector Machine for Binary-Class Classification
NASA Technical Reports Server (NTRS)
Scholten, Matthew
2011-01-01
The Support Vector Machine is a powerful algorithm, useful in classifying data into species. The Support Vector Machines implemented in this research were used as classifiers for the final stage in a Multistage Autonomous Target Recognition system. A single-kernel SVM known as SVMlight, and a modified version known as a Support Vector Machine with K-Means Clustering, were used. These SVM algorithms were tested as classifiers under varying conditions. Image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential as classifiers. Results demonstrate the reliability of SVM as a method for classification. From trial to trial, SVM produces consistent results.
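In the same spirit, a minimal sketch of binary classification with a kernel SVM is shown below; the synthetic "target/clutter" feature vectors are placeholders for real image-derived features, and this is not the SVMlight implementation used in the study.

```python
# Hedged sketch: binary target/clutter classification with an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
targets = rng.normal(loc=1.0, scale=0.6, size=(200, 5))
clutter = rng.normal(loc=-1.0, scale=0.6, size=(200, 5))
X = np.vstack([targets, clutter])
y = np.array([1] * 200 + [0] * 200)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```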
Predicting the Redshift 2 H-Alpha Luminosity Function Using [OIII] Emission Line Galaxies
NASA Technical Reports Server (NTRS)
Mehta, Vihang; Scarlata, Claudia; Colbert, James W.; Dai, Y. S.; Dressler, Alan; Henry, Alaina; Malkan, Matt; Rafelski, Marc; Siana, Brian; Teplitz, Harry I.;
2015-01-01
Upcoming space-based surveys such as Euclid and WFIRST-AFTA plan to measure Baryonic Acoustic Oscillations (BAOs) in order to study dark energy. These surveys will use IR slitless grism spectroscopy to measure redshifts of a large number of galaxies over a significant redshift range. In this paper, we use the WFC3 Infrared Spectroscopic Parallel Survey (WISP) to estimate the expected number of H-alpha emitters observable by these future surveys. WISP is an ongoing Hubble Space Telescope slitless spectroscopic survey, covering the 0.8-1.65 μm wavelength range and allowing the detection of H-alpha emitters up to z ≈ 1.5 and [OIII] emitters to z ≈ 2.3. We derive the H-alpha-[OIII] bivariate line luminosity function for WISP galaxies at z ≈ 1 using a maximum likelihood estimator that properly accounts for uncertainties in line luminosity measurement, and demonstrate how it can be used to derive the H-alpha luminosity function from exclusively fitting [OIII] data. Using the z ≈ 2 [OIII] line luminosity function, and assuming that the relation between H-alpha and [OIII] luminosity does not change significantly over the redshift range, we predict the H-alpha number counts at z ≈ 2, the upper end of the redshift range of interest for the future surveys. For the redshift range 0.7 < z < 2, we expect approximately 3000 galaxies per square degree for a flux limit of 3 × 10^-16 erg s^-1 cm^-2 (the proposed depth of the Euclid galaxy redshift survey) and approximately 20,000 galaxies per square degree for a flux limit of approximately 10^-16 erg s^-1 cm^-2 (the baseline depth of the WFIRST galaxy redshift survey).
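The number-count forecast described above amounts to integrating an assumed luminosity function above the survey flux limit over the comoving volume per square degree. The hedged sketch below shows that calculation with illustrative Schechter parameters (not the fitted WISP values) and a standard flat cosmology.

```python
# Hedged sketch: galaxies per deg^2 above a line-flux limit from an assumed Schechter LF.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
phi_star, L_star, alpha = 1.0e-3, 10 ** 42.8, -1.4     # assumed LF parameters (Mpc^-3, erg/s)
F_lim = 3.0e-16                                        # erg s^-1 cm^-2

def n_above(L_min):
    """Comoving density [Mpc^-3] of galaxies with L > L_min."""
    logL = np.linspace(np.log10(L_min), np.log10(L_star) + 3.0, 400)
    x = 10 ** logL / L_star
    phi = phi_star * x ** (alpha + 1) * np.exp(-x) * np.log(10)   # per dlogL
    return np.sum(phi) * (logL[1] - logL[0])

z = np.linspace(0.7, 2.0, 60)
dz = z[1] - z[0]
D_L = cosmo.luminosity_distance(z).to(u.cm).value
L_lim = 4.0 * np.pi * D_L ** 2 * F_lim
dV = cosmo.differential_comoving_volume(z).value       # Mpc^3 per sr per unit z
counts_per_sr = np.sum([n_above(L) * dv for L, dv in zip(L_lim, dV)]) * dz
print("galaxies per deg^2:", counts_per_sr * (np.pi / 180.0) ** 2)
```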
48 CFR 9904.409-60 - Illustrations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the provisions of this Standard. (a) Companies X, Y, and Z purchase identical milling machines to be... individual asset basis. Its experience with similar machines is that the average replacement period is 14... for the milling machine unless it can demonstrate changed circumstances or new circumstances to...
48 CFR 9904.409-60 - Illustrations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the provisions of this Standard. (a) Companies X, Y, and Z purchase identical milling machines to be... individual asset basis. Its experience with similar machines is that the average replacement period is 14... for the milling machine unless it can demonstrate changed circumstances or new circumstances to...
48 CFR 9904.409-60 - Illustrations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the provisions of this Standard. (a) Companies X, Y, and Z purchase identical milling machines to be... individual asset basis. Its experience with similar machines is that the average replacement period is 14... for the milling machine unless it can demonstrate changed circumstances or new circumstances to...
Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras
NASA Astrophysics Data System (ADS)
Quinn, Mark Kenneth
2018-05-01
Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
Machine learning enhanced optical distance sensor
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, N. A.
2018-01-01
Presented for the first time is a machine learning enhanced optical distance sensor. The distance sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. The spot, with varying sizes, is viewed by an off-axis camera and the spot size data are processed to compute the distance. In particular, proposed and demonstrated in this paper is the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and corresponding labels, which are the actual target distance values, to train a machine learning model. The optimized model is trained over a 1000 mm (or 1 m) experimental target distance range. Using the machine learning algorithm produces training set and testing set distance measurement errors of <0.8 mm and <2.2 mm, respectively. The test measurement error is at least a factor of 4 better than our prior sensor demonstration without the use of machine learning. Applications for the proposed sensor include industrial distance sensing scenarios, where target-material-specific training models can be generated to realize distance measurements with errors below 1%.
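The regression step described above can be sketched as a regularized (ridge) polynomial fit mapping a measured spot-size feature to target distance. The synthetic spot-size model below stands in for the camera-derived features and is not the published calibration.

```python
# Hedged sketch: ridge regression on polynomial features of a spot-size measurement.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
distance = rng.uniform(0.0, 1000.0, 300)                      # mm, training labels
spot_size = (2.0 + 0.01 * distance + 2e-6 * distance ** 2
             + 0.05 * rng.standard_normal(300))               # toy measured feature

model = make_pipeline(PolynomialFeatures(degree=4), Ridge(alpha=1e-3))
model.fit(spot_size[:, None], distance)

test_spot = np.array([[4.5]])
print("predicted distance (mm):", model.predict(test_spot)[0])
```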
Evidence for the associated production of a W boson and a top quark at ATLAS
NASA Astrophysics Data System (ADS)
Koll, James
This thesis discusses a search for the Standard Model single top Wt-channel process. An analysis has been performed searching for the Wt-channel process using 4.7 fb^-1 of integrated luminosity collected with the ATLAS detector at the Large Hadron Collider. A boosted decision tree is trained using machine learning techniques to increase the separation between signal and background. A profile likelihood fit is used to measure the cross-section of the Wt-channel process at σ(pp → Wt + X) = 16.8 ± 2.9 (stat) ± 4.9 (syst) pb, consistent with the Standard Model prediction. This fit is also used to generate pseudoexperiments to calculate the significance, finding an observed (expected) 3.3σ (3.4σ) excess over background.
Design, prototyping, and testing of a compact superconducting double quarter wave crab cavity
NASA Astrophysics Data System (ADS)
Xiao, Binping; Alberty, Luis; Belomestnykh, Sergey; Ben-Zvi, Ilan; Calaga, Rama; Cullen, Chris; Capatina, Ofelia; Hammons, Lee; Li, Zenghai; Marques, Carlos; Skaritka, John; Verdu-Andres, Silvia; Wu, Qiong
2015-04-01
We proposed a novel design for a compact superconducting crab cavity with a double quarter wave (DQWCC) shape. After fabrication and surface treatments, this niobium proof-of-principle cavity was tested cryogenically in a vertical cryostat. The cavity is extremely compact yet has a low frequency of 400 MHz, an essential property for service in the Large Hadron Collider luminosity upgrade. The cavity's electromagnetic properties are well suited for this demanding task. The demonstrated deflecting voltage of 4.6 MV is well above the required 3.34 MV for a crab cavity in the future High Luminosity LHC. In this paper, we present the design, prototyping, and results from testing the DQWCC.
NASA Astrophysics Data System (ADS)
Lin, Yen-Ting; Hsieh, Bau-Ching; Lin, Sheng-Chieh; Oguri, Masamune; Chen, Kai-Feng; Tanaka, Masayuki; Chiu, I.-non; Huang, Song; Kodama, Tadayuki; Leauthaud, Alexie; More, Surhud; Nishizawa, Atsushi J.; Bundy, Kevin; Lin, Lihwai; Miyazaki, Satoshi; HSC Collaboration
2018-01-01
The unprecedented depth and area surveyed by the Subaru Strategic Program with the Hyper Suprime-Cam (HSC-SSP) have enabled us to construct and publish the largest distant cluster sample out to z~1 to date. In this exploratory study of cluster galaxy evolution from z=1 to z=0.3, we investigate the stellar mass assembly history of brightest cluster galaxies (BCGs), and evolution of stellar mass and luminosity distributions, stellar mass surface density profile, as well as the population of radio galaxies. Our analysis is the first high redshift application of the top N richest cluster selection, which is shown to allow us to trace the cluster galaxy evolution faithfully. Our stellar mass is derived from a machine-learning algorithm, which we show to be unbiased and accurate with respect to the COSMOS data. We find very mild stellar mass growth in BCGs, and no evidence for evolution in both the total stellar mass-cluster mass correlation and the shape of the stellar mass surface density profile. The clusters are found to contain more red galaxies compared to the expectations from the field, even after the differences in density between the two environments have been taken into account. We also present the first measurement of the radio luminosity distribution in clusters out to z~1.
Automated solar panel assembly line
NASA Technical Reports Server (NTRS)
Somberg, H.
1981-01-01
The initial stage of the automated solar panel assembly line program was devoted to concept development and proof of approach through simple experimental verification. In this phase, laboratory bench models were built to demonstrate and verify concepts. Following this phase was machine design and integration of the various machine elements. The third phase was machine assembly and debugging. In this phase, the various elements were operated as a unit and modifications were made as required. The final stage of development was the demonstration of the equipment in a pilot production operation.
Mining the Infrared Sky for High-Redshift Quasars
NASA Astrophysics Data System (ADS)
Richards, Gordon
The Spitzer and WISE satellites have opened up new avenues for the study of active galactic nuclei (AGN) by peering through the dust shrouding half (or more) of AGNs. However, despite being more sensitive to shrouded AGNs, current selection methods being used in the mid-IR are still largely blind to the highest redshift quasars: both those that are shrouded and those that are not (and should therefore be easy to find). We describe projects to identify both unobscured (at z>3) and obscured quasars (at z>2) that have heretofore been missed in significant numbers. Finding the high-z obscured quasars in large numbers is crucial for fulfilling the legacy of NASA missions in the IR and X-ray. With these quasars we will be able to perform clustering analyses that break the degeneracy of models describing how black holes can "feed back" energy to the large-scale host galaxy, significantly influencing its evolution. We will further trace the luminosity function of galaxies undergoing active accretion from low-luminosity AGNs to luminous quasars, probing the growth of the supermassive black holes that we see today in the local universe. Our new insights come about from leveraging new Spitzer data, primarily from the PI's Spitzer IRAC Equatorial Survey (SpIES). The Spitzer data are 2.5 magnitudes deeper than the "AllWISE" survey in a 125 square degree, multiwavelength-rich, equatorial region known as SDSS "Stripe 82". These data are crucial for extending mid-IR investigations to higher redshifts, both for unobscured and obscured sources. The PI's team is among the world's experts in using the proposed machine learning techniques to find both unobscured (type-1) and obscured (type-2) quasars and in using quasar clustering and luminosity functions to do cutting-edge science. The luminosity function and clustering algorithms are already in place, allowing for timely completion of this project once the multi-wavelength NASA data have been incorporated. This project is directly relevant to our understanding of the evolution of galaxies and to NASA's goal of better understanding the Universe. Moreover, NASA's data archive is crucial to the project: only by using data from Spitzer and WISE can we more fully understand the physics of quasars, by probing them at the epochs where they are both most difficult to find and most influential.
redMaGiC: Selecting luminous red galaxies from the DES Science Verification data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozo, E.; Rykoff, E. S.; Abate, A.
Here, we introduce redMaGiC, an automated algorithm for selecting luminous red galaxies (LRGs). The algorithm was specifically developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the colour cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. We demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine learning-based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalogue sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10^-3 (h^-1 Mpc)^-3, and a median photo-z bias (z_spec - z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4 per cent. We also test our algorithm with Sloan Digital Sky Survey Data Release 8 and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1 per cent level.
redMaGiC: Selecting luminous red galaxies from the DES Science Verification data
Rozo, E.; Rykoff, E. S.; Abate, A.; ...
2016-05-30
Here, we introduce redMaGiC, an automated algorithm for selecting luminous red galaxies (LRGs). The algorithm was specifically developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the colour cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. We demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine learning-based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalogue sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10^-3 (h^-1 Mpc)^-3, and a median photo-z bias (z_spec - z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4 per cent. We also test our algorithm with Sloan Digital Sky Survey Data Release 8 and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1 per cent level.
Coronal Heating and the Magnetic Field in Solar Active Regions
NASA Astrophysics Data System (ADS)
Falconer, D. A.; Tiwari, S. K.; Winebarger, A. R.; Moore, R. L.
2017-12-01
A strong dependence of active-region (AR) coronal heating on the magnetic field is demonstrated by the strong correlation of AR X-ray luminosity with AR total magnetic flux (Fisher et al 1998 ApJ). AR X-ray luminosity is also correlated with AR length of strong-shear neutral line in the photospheric magnetic field (Falconer 1997). These two whole-AR magnetic parameters are also correlated with each other. From 150 ARs observed within 30 heliocentric degrees from disk center by AIA and HMI on SDO, using AR luminosity measured from the hot component of the AIA 94 Å band (Warren et al 2012, ApJ) near the time of each of 3600 measured HMI vector magnetograms of these ARs and a wide selection of whole-AR magnetic parameters from each vector magnetogram after it was deprojected to disk center, we find: (1) The single magnetic parameter having the strongest correlation with AR 94-hot luminosity is the length of strong-field neutral line. (2) The two-parameter combination having the strongest still-stronger correlation with AR 94-hot luminosity is a combination of AR total magnetic flux and AR neutral-line length weighted by the vertical-field gradient across the neutral line. We interpret these results to be consistent with the results of both Fisher et al (1998) and Falconer (1997), and with the correlation of AR coronal loop heating with loop field strength recently found by Tiwari et al (2017, ApJ Letters). Our interpretation is that, in addition to depending strongly on coronal loop field strength, AR coronal heating has a strong secondary positive dependence on the rate of flux cancelation at neutral lines at coronal loop feet. This work was funded by the Living With a Star Science and Heliophysics Guest Investigators programs of NASA's Heliophysics Division.
Characterization of Hall effect thruster propellant distributors with flame visualization
NASA Astrophysics Data System (ADS)
Langendorf, S.; Walker, M. L. R.
2013-01-01
A novel method for the characterization and qualification of Hall effect thruster propellant distributors is presented. A quantitative measurement of the azimuthal number density uniformity, a metric which impacts propellant utilization, is obtained from photographs of a premixed flame anchored on the exit plane of the propellant distributor. The technique is demonstrated for three propellant distributors using a propane-air mixture at reservoir pressure of 40 psi (gauge) (377 kPa) exhausting to atmosphere, with volumetric flow rates ranging from 15-145 cfh (7.2-68 l/min) with equivalence ratios from 1.2 to 2.1. The visualization is compared with in-vacuum pressure measurements 1 mm downstream of the distributor exit plane (chamber pressure held below 2.7 × 10^-5 Torr-Xe at all flow rates). Both methods indicate a non-uniformity in line with the propellant inlet, supporting the validity of the technique of flow visualization with flame luminosity for propellant distributor characterization. The technique is applied to a propellant distributor with a manufacturing defect in a known location and is able to identify the defect and characterize its impact. The technique is also applied to a distributor with numerous small orifices at the exit plane and is able to resolve the resulting non-uniformity. Luminosity data are collected with a spatial resolution of 48.2-76.1 μm (pixel width). The azimuthal uniformity is characterized in the form of standard deviation of azimuthal luminosities, normalized by the mean azimuthal luminosity. The distributors investigated achieve standard deviations of 0.346 ± 0.0212, 0.108 ± 0.0178, and 0.708 ± 0.0230 mean-normalized luminosity units respectively, where a value of 0 corresponds to perfect uniformity and a value of 1 represents a standard deviation equivalent to the mean.
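The uniformity metric described above (standard deviation of azimuthal luminosities normalized by the mean) is simple to compute; the sketch below applies it to a synthetic ring of pixel intensities rather than a real flame photograph.

```python
# Illustrative mean-normalized standard deviation of azimuthal luminosity.
import numpy as np

rng = np.random.default_rng(5)
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
luminosity = 1.0 + 0.1 * np.cos(theta) + 0.02 * rng.standard_normal(theta.size)

uniformity = np.std(luminosity) / np.mean(luminosity)   # 0 = perfectly uniform
print(f"mean-normalized std of azimuthal luminosity: {uniformity:.3f}")
```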
Design of a High Luminosity 100 TeV Proton-Antiproton Collider
NASA Astrophysics Data System (ADS)
Oliveros Tautiva, Sandra Jimena
Currently new physics is being explored with the Large Hadron Collider at CERN and with Intensity Frontier programs at Fermilab and KEK. The energy scale for new physics is known to be in the multi-TeV range, signaling the need for a future collider which well surpasses this energy scale. A 10^34 cm^-2 s^-1 luminosity 100 TeV proton-antiproton collider is explored with 7× the energy of the LHC. The dipoles are 4.5 T to reduce cost. A proton-antiproton collider is selected as a future machine for several reasons. The cross section for many high mass states is 10 times higher in pp̄ than pp collisions. Antiquarks for production can come directly from an antiproton rather than indirectly from gluon splitting. The higher cross sections reduce the synchrotron radiation in superconducting magnets and the number of events per bunch crossing, because lower beam currents can produce the same rare event rates. Events are also more centrally produced, allowing a more compact detector with less space between quadrupole triplets and a smaller β* for higher luminosity. To adjust to antiproton beam losses (burn rate), a Fermilab-like antiproton source would be adapted to disperse the beam into 12 different momentum channels, using electrostatic septa, to increase antiproton momentum capture 12 times. At Fermilab, antiprotons were stochastically cooled in one Debuncher and one Accumulator ring. Because the stochastic cooling time scales as the number of particles, two options of 12 independent cooling systems are presented. One electron cooling ring might follow the stochastic cooling rings for antiproton stacking. Finally antiprotons in the collider ring would be recycled during runs without leaving the collider ring, by joining them to new bunches with snap bunch coalescence and synchrotron damping. These basic ideas are explored in this work on a future 100 TeV proton-antiproton collider and the main parameters are presented.
Design of a High Luminosity 100 TeV Proton Antiproton Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveros Tautiva, Sandra Jimena
2017-04-01
Currently new physics is being explored with the Large Hadron Collider at CERN and with Intensity Frontier programs at Fermilab and KEK. The energy scale for new physics is known to be in the multi-TeV range, signaling the need for a future collider which well surpasses this energy scale. A 10^34 cm^-2 s^-1 luminosity 100 TeV proton-antiproton collider is explored with 7× the energy of the LHC. The dipoles are 4.5 T to reduce cost. A proton-antiproton collider is selected as a future machine for several reasons. The cross section for many high mass states is 10 times higher in pp̄ than pp collisions. Antiquarks for production can come directly from an antiproton rather than indirectly from gluon splitting. The higher cross sections reduce the synchrotron radiation in superconducting magnets and the number of events per bunch crossing, because lower beam currents can produce the same rare event rates. Events are also more centrally produced, allowing a more compact detector with less space between quadrupole triplets and a smaller β* for higher luminosity. To adjust to antiproton beam losses (burn rate), a Fermilab-like antiproton source would be adapted to disperse the beam into 12 different momentum channels, using electrostatic septa, to increase antiproton momentum capture 12 times. At Fermilab, antiprotons were stochastically cooled in one Debuncher and one Accumulator ring. Because the stochastic cooling time scales as the number of particles, two options of 12 independent cooling systems are presented. One electron cooling ring might follow the stochastic cooling rings for antiproton stacking. Finally antiprotons in the collider ring would be recycled during runs without leaving the collider ring, by joining them to new bunches with snap bunch coalescence and synchrotron damping. These basic ideas are explored in this work on a future 100 TeV proton-antiproton collider and the main parameters are presented.
Super-Eddington accreting massive black holes as long-lived cosmological standards.
Wang, Jian-Min; Du, Pu; Valls-Gabaud, David; Hu, Chen; Netzer, Hagai
2013-02-22
Super-Eddington accreting massive black holes (SEAMBHs) reach saturated luminosities above a certain accretion rate due to photon trapping and advection in slim accretion disks. We show that these SEAMBHs could provide a new tool for estimating cosmological distances if they are properly identified by hard x-ray observations, in particular by the slope of their 2-10 keV continuum. To verify this idea we obtained black hole mass estimates and x-ray data for a sample of 60 narrow line Seyfert 1 galaxies that we consider to be the most promising SEAMBH candidates. We demonstrate that the distances derived by the new method for the objects in the sample get closer to the standard luminosity distances as the hard x-ray continuum gets steeper. The results allow us to analyze the requirements for using the method in future samples of active black holes and to demonstrate that the expected uncertainty, given large enough samples, can make them into a useful, new cosmological ruler.
Parker, David L; Yamin, Samuel C; Brosseau, Lisa M; Xi, Min; Gordon, Robert; Most, Ivan G; Stanley, Rodney
2015-11-01
Metal fabrication workers experience high rates of traumatic occupational injuries. Machine operators in particular face high risks, often stemming from the absence or improper use of machine safeguarding or the failure to implement lockout procedures. The National Machine Guarding Program (NMGP) was a translational research initiative implemented in conjunction with two workers' compensation insurers. Insurance safety consultants trained in machine guarding used standardized checklists to conduct a baseline inspection of machine-related hazards in 221 businesses. Safeguards at the point of operation were missing or inadequate on 33% of machines. Safeguards for other mechanical hazards were missing on 28% of machines. Older machines were both widely used and less likely than newer machines to be properly guarded. Lockout/tagout procedures were posted at only 9% of machine workstations. The NMGP demonstrates a need for improvement in many aspects of machine safety and lockout in small metal fabrication businesses. © 2015 The Authors. American Journal of Industrial Medicine published by Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoynev, S.; et al.
The development of Nb3Sn quadrupole magnets for the High-Luminosity LHC upgrade is a joint venture between the US LHC Accelerator Research Program (LARP) and CERN with the goal of fabricating large-aperture quadrupoles for the LHC interaction regions (IR). The inner triplet (low-β) NbTi quadrupoles in the IR will be replaced by the stronger Nb3Sn magnets, supporting the LHC program of a 10-fold increase in integrated luminosity after the foreseen upgrades. Previously, LARP conducted successful tests of short and long models with up to 120 mm aperture. The first short 150 mm aperture quadrupole model MQXFS1 was assembled with coils fabricated by both CERN and LARP. The magnet demonstrated strong performance at Fermilab's vertical magnet test facility, reaching the LHC operating limits. This paper reports the latest results from MQXFS1 tests with changed pre-stress levels. The overall magnet performance, including quench training and memory, ramp rate and temperature dependence, is also summarized.
NASA Astrophysics Data System (ADS)
Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene
2017-03-01
Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity Lν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.
DISSECTING THE QUASAR MAIN SEQUENCE: INSIGHT FROM HOST GALAXY PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Jiayi; Shen, Yue
2015-05-01
The diverse properties of broad-line quasars appear to follow a well-defined main sequence along which the optical Fe ii strength increases. It has been suggested that this sequence is mainly driven by the Eddington ratio (L/L_Edd) of the black hole (BH) accretion. Shen and Ho demonstrated with quasar clustering analysis that the average BH mass decreases with increasing Fe ii strength when quasar luminosity is fixed, consistent with this suggestion. Here we perform an independent test by measuring the stellar velocity dispersion σ* (hence, the BH mass via the M–σ* relation) from decomposed host spectra in low-redshift Sloan Digital Sky Survey quasars. We found that at fixed quasar luminosity, σ* systematically decreases with increasing Fe ii strength, confirming that the Eddington ratio increases with Fe ii strength. We also found that at fixed luminosity and Fe ii strength, there is little dependence of σ* on the broad Hβ FWHM. These new results reinforce the framework that the Eddington ratio and orientation govern most of the diversity seen in broad-line quasar properties.
Improving Energy Efficiency in CNC Machining
NASA Astrophysics Data System (ADS)
Pavanaskar, Sushrut S.
We present our work on analyzing and improving the energy efficiency of the multi-axis CNC milling process. Due to the differences in energy consumption behavior, we treat 3- and 5-axis CNC machines separately in our work. For 3-axis CNC machines, we first propose an energy model that estimates the energy requirement for machining a component on a specified 3-axis CNC milling machine. Our model makes machine-specific predictions of energy requirements while also considering the geometric aspects of the machining toolpath. Our model - and the associated software tool - facilitate direct comparison of various alternative toolpath strategies based on their energy-consumption performance. Further, we identify key factors in toolpath planning that affect energy consumption in CNC machining. We then use this knowledge to propose and demonstrate a novel toolpath planning strategy that may be used to generate new toolpaths that are inherently energy-efficient, inspired by research on digital micrography -- a form of computational art. For 5-axis CNC machines, the process planning problem consists of several sub-problems that researchers have traditionally solved separately to obtain an approximate solution. After illustrating the need to solve all sub-problems simultaneously for a truly optimal solution, we propose a unified formulation based on configuration space theory. We apply our formulation to solve a problem variant that retains key characteristics of the full problem but has lower dimensionality, allowing visualization in 2D. Given the complexity of the full 5-axis toolpath planning problem, our unified formulation represents an important step towards obtaining a truly optimal solution. With this work on the two types of CNC machines, we demonstrate that without changing the current infrastructure or business practices, machine-specific, geometry-based, customized toolpath planning can save energy in CNC machining.
Tests with beam setup of the TileCal phase-II upgrade electronics
NASA Astrophysics Data System (ADS)
Reward Hlaluku, Dingane
2017-09-01
The LHC has planned a series of upgrades culminating in the High Luminosity LHC which will have an average luminosity 5-7 times larger than the nominal Run-2 value. The ATLAS Tile calorimeter plans to introduce a new readout architecture by completely replacing the back-end and front-end electronics for the High Luminosity LHC. The photomultiplier signals will be fully digitized and transferred for every bunch crossing to the off-detector Tile PreProcessor. The Tile PreProcessor will further provide preprocessed digital data to the first level of trigger with improved spatial granularity and energy resolution in contrast to the current analog trigger signals. A single super-drawer module commissioned with the phase-II upgrade electronics is to be inserted into the real detector to evaluate and qualify the new readout and trigger concepts in the overall ATLAS data acquisition system. This new super-drawer, so-called hybrid Demonstrator, must provide analog trigger signals for backward compatibility with the current system. This Demonstrator drawer has been inserted into a Tile calorimeter module prototype to evaluate the performance in the lab. In parallel, one more module has been instrumented with two other front-end electronics options based on custom ASICs (QIE and FATALIC) which are under evaluation. These two modules together with three other modules composed of the current system electronics were exposed to different particles and energies in three test-beam campaigns during 2015 and 2016.
Alumina additions may improve the damage tolerance of soft machined zirconia-based ceramics.
Oilo, Marit; Tvinnereim, Helene M; Gjerdet, Nils Roar
2011-01-01
The aim of this study was to evaluate the damage tolerance of different zirconia-based materials. Bars of one hard machined and one soft machined dental zirconia and an experimental 95% zirconia 5% alumina ceramic were subjected to 100,000 stress cycles (n = 10), indented to provoke cracks on the tensile stress side (n = 10), and left untreated as controls (n = 10). The experimental material demonstrated a higher relative damage tolerance, with a 40% reduction compared to 68% for the hard machined zirconia and 84% for the soft machined zirconia.
Learn about Physical Science: Simple Machines. [CD-ROM].
ERIC Educational Resources Information Center
2000
This CD-ROM, designed for students in grades K-2, explores the world of simple machines. It allows students to delve into the mechanical world and learn the ways in which simple machines make work easier. Animated demonstrations are provided of the lever, pulley, wheel, screw, wedge, and inclined plane. Activities include practical matching and…
Chaotic behaviour of Zeeman machines at introductory course of mechanics
NASA Astrophysics Data System (ADS)
Nagy, Péter; Tasnádi, Péter
2016-05-01
Investigation of chaotic motions and cooperative systems offers a magnificent opportunity to incorporate modern physics into the basic course of mechanics taught to engineering students. In the present paper it will be demonstrated that the Zeeman Machine can be a versatile and motivating tool for students to get introductory knowledge about chaotic motion via interactive simulations. It works in a relatively simple way and its properties can be understood very easily. Since the machine can be built easily and the simulation of its movement is also simple, the experimental investigation and the theoretical description can be connected intuitively. Although the Zeeman Machine is known mainly for its quasi-static and catastrophic behaviour, its dynamic properties are also of interest, with typical chaotic features. By means of a periodically driven Zeeman Machine a wide range of chaotic properties of simple systems can be demonstrated, such as bifurcation diagrams, chaotic attractors, transient chaos and so on. The main goal of this paper is the presentation of an interactive learning material for teaching the basic features of chaotic systems through the investigation of the Zeeman Machine.
2011-05-26
12.7mm NATO Nominated Weapon, United States – General Dynamics M2 Heavy Barrel Machine Gun. Presentations include: Explosively-Clad Refractory Barrel Liners for Small Caliber Machine Guns, Dr. Douglas Taylor, TPL, Inc.; 12299 - The HAMR Project, Mr. Xavier Gavage, FN Herstal; 12330 - 40mm Low Velocity Air-Burst Munitions System, Mr.
Simulating the assembly of galaxies at redshifts z = 6-12
NASA Astrophysics Data System (ADS)
Dayal, Pratika; Dunlop, James S.; Maio, Umberto; Ciardi, Benedetta
2013-09-01
We use state-of-the-art simulations to explore the physical evolution of galaxies in the first billion years of cosmic time. First, we demonstrate that our model reproduces the basic statistical properties of the observed Lyman-break galaxy (LBG) population at z = 6-8, including the evolving ultraviolet (UV) luminosity function (LF), the stellar mass density (SMD) and the average specific star-formation rates (sSFRs) of LBGs with MUV < -18 (AB mag). Encouraged by this success we present predictions for the behaviour of fainter LBGs extending down to MUV ≃ -15 (as will be probed with the James Webb Space Telescope) and have interrogated our simulations to try to gain insight into the physical drivers of the observed population evolution. We find that mass growth due to star formation in the mass-dominant progenitor builds up about 90 per cent of the total z ~ 6 LBG stellar mass, dominating over the mass contributed by merging throughout this era. Our simulation suggests that the apparent 'luminosity evolution' depends on the luminosity range probed: the steady brightening of the bright end of the LF is driven primarily by genuine physical luminosity evolution and arises due to a fairly steady increase in the UV luminosity (and hence star-formation rates) in the most massive LBGs; for example the progenitors of the z ≃ 6 galaxies with MUV < -18.5 comprised ≃90 per cent of the galaxies with MUV < -18 at z ≃ 7 and ≃75 per cent at z ≃ 8. However, at fainter luminosities the situation is more complex, due in part to the more stochastic star-formation histories of lower mass objects; the progenitors of a significant fraction of z ≃ 6 LBGs with MUV > -18 were in fact brighter at z ≃ 7 (and even at z ≃ 8) despite obviously being less massive at earlier times. At this end, the evolution of the UV LF involves a mix of positive and negative luminosity evolution (as low-mass galaxies temporarily brighten and then fade) coupled with both positive and negative density evolution (as new low-mass galaxies form, and other low-mass galaxies are consumed by merging). We also predict that the average sSFR of LBGs should rise from sSFR ≃ 4.5 Gyr⁻¹ at z ≃ 6 to sSFR ≃ 11 Gyr⁻¹ by z ≃ 9.
NASA Astrophysics Data System (ADS)
Zhang, Yu-Ying; Reiprich, Thomas H.; Schneider, Peter; Clerc, Nicolas; Merloni, Andrea; Schwope, Axel; Borm, Katharina; Andernach, Heinz; Caretta, César A.; Wu, Xiang-Ping
2017-03-01
We present the relation of X-ray luminosity versus dynamical mass for 63 nearby clusters of galaxies in a flux-limited sample, the HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS, consisting of 64 clusters). The luminosity measurements are obtained based on 1.3 Ms of clean XMM-Newton data and ROSAT pointed observations. The masses are estimated using optical spectroscopic redshifts of 13647 cluster galaxies in total. We classify clusters into disturbed and undisturbed based on a combination of the X-ray luminosity concentration and the offset between the brightest cluster galaxy and X-ray flux-weighted center. Given sufficient numbers (i.e., ≥45) of member galaxies when the dynamical masses are computed, the luminosity versus mass relations agree between the disturbed and undisturbed clusters. The cool-core clusters still dominate the scatter in the luminosity versus mass relation even when a core-corrected X-ray luminosity is used, which indicates that the scatter of this scaling relation mainly reflects the structure formation history of the clusters. As shown by the clusters with only a few spectroscopically confirmed members, the dynamical masses can be underestimated and thus lead to a biased scaling relation. To investigate the potential of spectroscopic surveys to follow up high-redshift galaxy clusters or groups observed in X-ray surveys for the identifications and mass calibrations, we carried out Monte Carlo resampling of the cluster galaxy redshifts and calibrated the uncertainties of the redshift and dynamical mass estimates when only reduced numbers of galaxy redshifts per cluster are available. The resampling considers the SPIDERS and 4MOST configurations, designed for the follow-up of the eROSITA clusters, and was carried out for each cluster in the sample at the actual cluster redshift as well as at the assigned input cluster redshifts of 0.2, 0.4, 0.6, and 0.8. To follow up very distant clusters or groups, we also carried out the mass calibration based on the resampling with only ten redshifts per cluster, and redshift calibration based on the resampling with only five and ten redshifts per cluster, respectively. Our results demonstrate the power of combining upcoming X-ray and optical spectroscopic surveys for mass calibration of clusters. The scatter in the dynamical mass estimates for the clusters with at least ten members is within 50%.
NASA Astrophysics Data System (ADS)
Hsiao, Ming-Chih; Su, Ling-Huey
2018-02-01
This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and another type is a single machine. A job is processed either on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs so as to minimize the makespan. The problem is NP-hard, since the two-parallel-machine problem was proved to be NP-hard. Simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer programming (MIP) model is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; the quality of their solutions is also reported.
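The abstract does not spell out the annealing procedure; the following is a minimal illustrative sketch in Python of simulated annealing for this type of makespan objective, with hypothetical data and parameter values rather than those of the paper. The search state is the assignment of each job to the flowshop or the single machine together with the flowshop sequence.

import math
import random

def flowshop_makespan(seq, p1, p2):
    # Classic two-machine flowshop completion-time recursion.
    c1 = c2 = 0.0
    for j in seq:
        c1 += p1[j]
        c2 = max(c1, c2) + p2[j]
    return c2

def makespan(assign, seq, p1, p2, q):
    # assign[j] is 'F' (flowshop) or 'S' (single machine); seq orders the jobs.
    flow_jobs = [j for j in seq if assign[j] == 'F']
    single_load = sum(q[j] for j in range(len(assign)) if assign[j] == 'S')
    return max(flowshop_makespan(flow_jobs, p1, p2), single_load)

def anneal(p1, p2, q, t0=50.0, cooling=0.995, iters=20000, seed=0):
    rng = random.Random(seed)
    n = len(q)
    assign = [rng.choice('FS') for _ in range(n)]
    seq = list(range(n))
    cur = best = makespan(assign, seq, p1, p2, q)
    best_state = (assign[:], seq[:])
    t = t0
    for _ in range(iters):
        new_assign, new_seq = assign[:], seq[:]
        if rng.random() < 0.5:
            # Move: reassign a random job to the other machine type.
            j = rng.randrange(n)
            new_assign[j] = 'S' if new_assign[j] == 'F' else 'F'
        else:
            # Move: swap two positions in the flowshop sequence.
            i, j = rng.sample(range(n), 2)
            new_seq[i], new_seq[j] = new_seq[j], new_seq[i]
        cand = makespan(new_assign, new_seq, p1, p2, q)
        # Metropolis acceptance: always accept improvements, sometimes accept worse moves.
        if cand <= cur or rng.random() < math.exp((cur - cand) / t):
            assign, seq, cur = new_assign, new_seq, cand
            if cur < best:
                best, best_state = cur, (assign[:], seq[:])
        t *= cooling
    return best, best_state

if __name__ == "__main__":
    rng = random.Random(1)
    n = 12
    p1 = [rng.randint(1, 9) for _ in range(n)]   # stage-1 times on the flowshop
    p2 = [rng.randint(1, 9) for _ in range(n)]   # stage-2 times on the flowshop
    q = [rng.randint(2, 15) for _ in range(n)]   # times on the single machine
    print("best makespan:", anneal(p1, p2, q)[0])

A MIP model of the same problem, as used by the authors for benchmarking, would provide the exact optimum against which such a heuristic can be checked on small instances.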
Commissioning of the first chambers of the CMS GE1/1 muon station
NASA Astrophysics Data System (ADS)
Ressegotti, Martina; CMS Muon Group
2017-12-01
The upgrades of the LHC planned in the next years will increase the instantaneous luminosity up to 5 × 10³⁴ cm⁻² s⁻¹ after Long Shutdown 3, a value about five times higher than the nominal one for which the CMS experiment was designed. The resulting larger rate of interactions will produce a higher pileup environment that will challenge the trigger system of the CMS experiment in its original configuration, in particular in the endcap region. As part of the upgrade program of the CMS muon endcaps, additional muon detectors based on Gas Electron Multiplier (GEM) technology will be installed, in order to be able to sustain a physics program during high-luminosity operation without performance losses. The installation of the GE1/1 station is scheduled for Long Shutdown 2 in 2019-2020; a demonstrator composed of five superchambers has already been installed during the Extended Year-End Technical Stop at the beginning of 2017. Its goal is to test the system's operational conditions and also to demonstrate the integration of the GE1/1 chambers into the CMS online system. The status of the installation and commissioning of the GE1/1 demonstrator is presented.
NASA Astrophysics Data System (ADS)
Bouchami, Jihene
The LHC proton-proton collisions create a hard radiation environment in the ATLAS detector. In order to quantify the effects of this environment on the detector performance and human safety, several Monte Carlo simulations have been performed. However, direct measurement is indispensable to monitor radiation levels in ATLAS and also to verify the simulation predictions. For this purpose, sixteen ATLAS-MPX devices have been installed at various positions in the ATLAS experimental and technical areas. They are composed of a pixelated silicon detector called MPX whose active surface is partially covered with converter layers for the detection of thermal, slow and fast neutrons. The ATLAS-MPX devices perform real-time measurement of radiation fields by recording the detected particle tracks as raster images. The analysis of the acquired images allows the identification of the detected particle types by the shapes of their tracks. For this aim, a pattern recognition software called MAFalda has been conceived. Since the tracks of strongly ionizing particles are influenced by charge sharing between adjacent pixels, a semi-empirical model describing this effect has been developed. Using this model, the energy of strongly ionizing particles can be estimated from the size of their tracks. The converter layers covering each ATLAS-MPX device form six different regions. The efficiency of each region to detect thermal, slow and fast neutrons has been determined by calibration measurements with known sources. The study of the ATLAS-MPX devices response to the radiation produced by proton-proton collisions at a center of mass energy of 7 TeV has demonstrated that the number of recorded tracks is proportional to the LHC luminosity. This result allows the ATLAS-MPX devices to be employed as luminosity monitors. To perform an absolute luminosity measurement and calibration with these devices, the van der Meer method based on the LHC beam parameters has been proposed. Since the ATLAS-MPX devices response and the luminosity are correlated, the results of measuring radiation levels are expressed in terms of particle fluences per unit integrated luminosity. A significant deviation has been obtained when comparing these fluences with those predicted by GCALOR, which is one of the ATLAS detector simulations. In addition, radiation measurements performed at the end of proton-proton collisions have demonstrated that the decay of radionuclides produced during collisions can be observed with the ATLAS-MPX devices. The residual activation of ATLAS components can be measured with these devices by means of ambient dose equivalent calibration. Keywords: pattern recognition, charge sharing effect, neutron detection efficiency, luminosity, van der Meer method, particle fluences, GCALOR simulation, residual activation, ambient dose equivalent.
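For context, the van der Meer technique mentioned here determines the absolute luminosity from beam-separation scans; in the usual notation (a standard accelerator-physics relation, not specific to this thesis), the luminosity of two Gaussian-like bunched beams is

    L = f_rev n_b N1 N2 / (2π Σx Σy),

where f_rev is the revolution frequency, n_b the number of colliding bunch pairs, N1 and N2 the bunch populations, and Σx and Σy the convolved beam widths extracted from the horizontal and vertical scan curves.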
Design, prototyping, and testing of a compact superconducting double quarter wave crab cavity
Xiao, Binping; Alberty, Luis; Belomestnykh, Sergey; ...
2015-04-01
We proposed a novel design for a compact superconducting crab cavity with a double quarter wave (DQWCC) shape. After fabrication and surface treatments, this niobium proof-of-principle cavity was tested cryogenically in a vertical cryostat. The cavity is extremely compact yet has a low frequency of 400 MHz, an essential property for service in the Large Hadron Collider luminosity upgrade. The cavity's electromagnetic properties are well suited for this demanding task. The demonstrated deflecting voltage of 4.6 MV is well above the required 3.34 MV for a crab cavity in the future High Luminosity LHC. In this paper, we present the design, prototyping, and results from testing the DQWCC.
The in-situ 3D measurement system combined with CNC machine tools
NASA Astrophysics Data System (ADS)
Zhao, Huijie; Jiang, Hongzhi; Li, Xudong; Sui, Shaochun; Tang, Limin; Liang, Xiaoyue; Diao, Xiaochun; Dai, Jiliang
2013-06-01
With the development of manufacturing industry, the in-situ 3D measurement for the machining workpieces in CNC machine tools is regarded as the new trend of efficient measurement. We introduce a 3D measurement system based on the stereovision and phase-shifting method combined with CNC machine tools, which can measure 3D profile of the machining workpieces between the key machining processes. The measurement system utilizes the method of high dynamic range fringe acquisition to solve the problem of saturation induced by specular lights reflected from shiny surfaces such as aluminum alloy workpiece or titanium alloy workpiece. We measured two workpieces of aluminum alloy on the CNC machine tools to demonstrate the effectiveness of the developed measurement system.
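As a reminder of the principle involved (a generic relation, not necessarily the exact algorithm of this system), four-step phase shifting recovers the wrapped fringe phase at each pixel from intensities I1 to I4 recorded with phase offsets of 0, π/2, π and 3π/2:

    φ = arctan[(I4 − I2) / (I1 − I3)].

High-dynamic-range fringe acquisition of the kind described above typically combines exposures of different lengths so that an unsaturated intensity sample is available at every pixel before this formula is applied.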
Coupling Correction and Beam Dynamics at Ultralow Vertical Emittance in the ALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steier, Christoph; Robin, D.; Wolski, A.
2008-03-17
For synchrotron light sources and for damping rings of linear colliders it is important to be able to minimize the vertical emittance and to correct the spurious vertical dispersion. This allows one to maximize the brightness and/or the luminosity. A commonly used tool to measure the skew error distribution is the analysis of orbit response matrices using codes like LOCO. Using the new Matlab version of LOCO and 18 newly installed power supplies for individual skew quadrupoles at the ALS, the emittance ratio could be reduced below 0.1% at 1.9 GeV, yielding a vertical emittance of about 5 pm. At those very low emittances, additional effects like intra-beam scattering become more important, potentially limiting the minimum emittance for machines like the damping rings of linear colliders.
ERIC Educational Resources Information Center
Weatherly, Jeffrey N.; Thompson, Bradley J.; Hodny, Marisa; Meier, Ellen
2009-01-01
In a simulated casino environment, 6 nonpathological women played concurrently available commercial slot machines programmed to pay out at different rates. Participants did not always demonstrate preferences for the higher paying machine. The data suggest that factors other than programmed or obtained rate of reinforcement may control gambling…
Code of Federal Regulations, 2011 CFR
2011-07-01
... performance test of one representative magnet wire coating machine for each group of identical or very similar... you complete the performance test of a representative magnet wire coating machine. The requirements in... operations, you may, with approval, conduct a performance test of a single magnet wire coating machine that...
Critical radiation fluxes and luminosities of black holes and relativistic stars
NASA Technical Reports Server (NTRS)
Lamb, Frederick K.; Miller, M. Coleman
1995-01-01
The critical luminosity at which the outward force of radiation balances the inward force of gravity plays an important role in many astrophysical systems. We present expressions for the radiation force on particles with arbitrary cross sections and analyze the radiation field produced by radiating matter, such as a disk, ring, boundary layer, or stellar surface, that rotates slowly around a slowly rotating gravitating mass. We then use these results to investigate the critical radiation flux and, where possible, the critical luminosity of such a system in general relativity. We demonstrate that if the radiation source is axisymmetric and emission is back-front symmetric with respect to the local direction of motion of the radiating matter, as seen in the comoving frame, then the radial component of the radiation flux and the diagonal components of the radiation stress-energy tensor outside the source are the same, to first order in the rotation rates, as they would be if the radiation source and gravitating mass were not rotating. We argue that the critical radiation flux for matter at rest in the locally nonrotating frame is often satisfactory as an astrophysical benchmark flux and show that if this benchmark is adopted, many of the complications potentially introduced by rotation of the radiation source and the gravitating mass are avoided. We show that if the radiation field in the absence of rotation would be spherically symmetric and the opacity is independent of frequency and direction, one can define a critical luminosity for the system that is independent of the spectrum and angular size of the radiation source and is unaffected by rotation of the source and mass and orbital motion of the matter, to first order. Finally, we analyze the conditions under which the maximum possible luminosity of a star or black hole powered by steady spherically symmetric radial accretion is the same in general relativity as in the Newtonian limit.
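For reference, in the simplest Newtonian limit of fully ionized hydrogen with frequency-independent Thomson-scattering opacity (a textbook special case, not the general result derived in the paper), the critical luminosity reduces to the familiar Eddington value

    L_Edd = 4π G M m_p c / σ_T ≈ 1.3 × 10³⁸ (M / M_⊙) erg/s,

where M is the gravitating mass, m_p the proton mass and σ_T the Thomson cross section.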
Multicutter machining of compound parametric surfaces
NASA Astrophysics Data System (ADS)
Hatna, Abdelmadjid; Grieve, R. J.; Broomhead, P.
2000-10-01
Parametric free forms are used in industries as disparate as footwear, toys, sporting goods, ceramics, digital content creation, and conceptual design. Optimizing tool path patterns and minimizing the total machining time are key issues in numerically controlled (NC) machining of free-form surfaces. We demonstrate in the present work that multi-cutter machining can achieve as much as a 60% reduction in total machining time for compound sculptured surfaces. The given approach is based upon the pre-processing, as opposed to the usual post-processing, of surfaces for the detection and removal of interference, followed by precise tracking of unmachined areas.
THE LOCAL [C ii] 158 μm EMISSION LINE LUMINOSITY FUNCTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemmati, Shoubaneh; Yan, Lin; Capak, Peter
We present, for the first time, the local [C ii] 158 μm emission line luminosity function measured using a sample of more than 500 galaxies from the Revised Bright Galaxy Sample. [C ii] luminosities are measured from the Herschel PACS observations of the Luminous Infrared Galaxies (LIRGs) in the Great Observatories All-sky LIRG Survey and estimated for the rest of the sample based on the far-infrared (far-IR) luminosity and color. The sample covers 91.3% of the sky and is complete at S_60μm > 5.24 Jy. We calculate the completeness as a function of [C ii] line luminosity and distance, based on the far-IR color and flux densities. The [C ii] luminosity function is constrained in the range ~10^7-10^9 L_⊙ from both the 1/V_max and maximum likelihood methods. The shape of our derived [C ii] emission line luminosity function agrees well with the IR luminosity function. For the CO(1-0) and [C ii] luminosity functions to agree, we propose a varying ratio of [C ii]/CO(1-0) as a function of CO luminosity, with larger ratios for fainter CO luminosities. Limited [C ii] high-redshift observations as well as estimates based on the IR and UV luminosity functions are suggestive of an evolution in the [C ii] luminosity function similar to the evolution trend of the cosmic star formation rate density. Deep surveys using the Atacama Large Millimeter Array with full capability will be able to confirm this prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.
Evolution of the luminosity function of quasar accretion disks
NASA Technical Reports Server (NTRS)
Caditz, David M.; Petrosian, Vahe; Wandel, Amri
1991-01-01
Using an accretion-disk model, accretion-disk luminosities are calculated for a grid of black hole masses and accretion rates. It is shown that, as the black-hole mass increases with time, the monochromatic luminosity at a given frequency first increases and then decreases rapidly as this frequency is crossed by the Wien cutoff. The upper limit on the monochromatic luminosity, which is characteristic for a given epoch, constrains the evolution of quasar luminosities and determines the evolution of the quasar luminosity function.
Single molecule detection, thermal fluctuation and life
YANAGIDA, Toshio; ISHII, Yoshiharu
2017-01-01
Single molecule detection has contributed to our understanding of the unique mechanisms of life. Unlike artificial man-made machines, biological molecular machines integrate thermal noises rather than avoid them. For example, single molecule detection has demonstrated that myosin motors undergo biased Brownian motion for stepwise movement and that single protein molecules spontaneously change their conformation, for switching to interactions with other proteins, in response to thermal fluctuation. Thus, molecular machines have flexibility and efficiency not seen in artificial machines. PMID:28190869
LUMINOSITY FUNCTIONS OF SPITZER-IDENTIFIED PROTOSTARS IN NINE NEARBY MOLECULAR CLOUDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kryukova, E.; Megeath, S. T.; Allen, T. S.
2012-08-15
We identify protostars in Spitzer surveys of nine star-forming (SF) molecular clouds within 1 kpc: Serpens, Perseus, Ophiuchus, Chamaeleon, Lupus, Taurus, Orion, Cep OB3, and Mon R2, which combined host over 700 protostar candidates. These clouds encompass a variety of SF environments, including both low-mass and high-mass SF regions, as well as dense clusters and regions of sparsely distributed star formation. Our diverse cloud sample allows us to compare protostar luminosity functions in these varied environments. We combine near- and mid-infrared photometry from the Two Micron All Sky Survey and Spitzer to create 1-24 μm spectral energy distributions (SEDs). Using protostars from the c2d survey with well-determined bolometric luminosities, we derive a relationship between bolometric luminosity, mid-IR luminosity (integrated from 1-24 μm), and SED slope. Estimations of the bolometric luminosities for protostar candidates are combined to create luminosity functions for each cloud. Contamination due to edge-on disks, reddened Class II sources, and galaxies is estimated and removed from the luminosity functions. We find that luminosity functions for high-mass SF clouds (Orion, Mon R2, and Cep OB3) peak near 1 L_⊙ and show a tail extending toward luminosities above 100 L_⊙. The luminosity functions of the low-mass SF clouds (Serpens, Perseus, Ophiuchus, Taurus, Lupus, and Chamaeleon) do not exhibit a common peak; however, the combined luminosity function of these regions peaks below 1 L_⊙. Finally, we examine the luminosity functions as a function of the local surface density of young stellar objects. In the Orion molecular clouds, we find a significant difference between the luminosity functions of protostars in regions of high and low stellar density, the former of which is biased toward more luminous sources. This may be the result of primordial mass segregation, although this interpretation is not unique. We compare our luminosity functions to those predicted by models and find that our observed luminosity functions are best matched by models that invoke competitive accretion, although we do not find strong agreement between the high-mass SF clouds and any of the models.
Yamin, Samuel C.; Brosseau, Lisa M.; Xi, Min; Gordon, Robert; Most, Ivan G.; Stanley, Rodney
2015-01-01
Background: Metal fabrication workers experience high rates of traumatic occupational injuries. Machine operators in particular face high risks, often stemming from the absence or improper use of machine safeguarding or the failure to implement lockout procedures. Methods: The National Machine Guarding Program (NMGP) was a translational research initiative implemented in conjunction with two workers' compensation insurers. Insurance safety consultants trained in machine guarding used standardized checklists to conduct a baseline inspection of machine-related hazards in 221 businesses. Results: Safeguards at the point of operation were missing or inadequate on 33% of machines. Safeguards for other mechanical hazards were missing on 28% of machines. Older machines were both widely used and less likely than newer machines to be properly guarded. Lockout/tagout procedures were posted at only 9% of machine workstations. Conclusions: The NMGP demonstrates a need for improvement in many aspects of machine safety and lockout in small metal fabrication businesses. Am. J. Ind. Med. 58:1174-1183, 2015. © 2015 The Authors. American Journal of Industrial Medicine published by Wiley Periodicals, Inc. PMID:26332060
The SuperB Accelerator: Overview and Lattice Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biagini, M.E.; Boni, R.; Boscolo, M.
2011-11-22
SuperB aims at the construction of a very high luminosity (10³⁶ cm⁻² s⁻¹) asymmetric e⁺e⁻ Flavour Factory, with possible location at the campus of the University of Rome Tor Vergata, near the INFN Frascati National Laboratory. In this paper the basic principles of the design and details on the lattice are given. SuperB is a new machine that can exploit novel, very promising design approaches: (1) a large Piwinski angle scheme will allow for peak luminosity of the order of 10³⁶ cm⁻² s⁻¹, well beyond the current state of the art, without a significant increase in beam currents or shorter bunch lengths; (2) 'crab waist' sextupoles will be used for suppression of dangerous resonances; (3) the low beam currents design presents reduced detector and background problems, and affordable operating costs; (4) a polarized electron beam can produce polarized τ leptons, opening an entirely new realm of exploration in lepton flavor physics. SuperB studies are already proving useful to the accelerator and particle physics communities. The principle of operation is being tested at DAFNE. The baseline lattice, based on the reuse of all PEP-II hardware, fits in the Tor Vergata University campus site, near Frascati. A CDR is being reviewed by an International Review Committee, chaired by J. Dainton (UK). A Technical Design Report will be prepared to be ready by the beginning of 2010.
ERIC Educational Resources Information Center
Crossley, Scott A.
2013-01-01
This paper provides an agenda for replication studies focusing on second language (L2) writing and the use of natural language processing (NLP) tools and machine learning algorithms. Specifically, it introduces a range of the available NLP tools and machine learning algorithms and demonstrates how these could be used to replicate seminal studies…
NASA Astrophysics Data System (ADS)
Bagchi, Manjari
2013-08-01
Luminosity is an intrinsic property of radio pulsars related to the properties of the magnetospheric plasma and the beam geometry, and inversely proportional to the observing frequency. In traditional models, luminosity has been considered as a function of the spin parameters of pulsars. On the other hand, parameter-independent models like the power law and lognormal have also been used to fit the observed luminosities. Some of the older studies on pulsar luminosities neglected observational biases, but all of the recent studies tried to model observational effects as accurately as possible. Luminosities of pulsars in globular clusters (GCs) and in the Galactic disk have been studied separately. Older studies concluded that these two categories of pulsars have different luminosity distributions, but the most recent study concluded that they are the same. This paper reviews all significant works on pulsar luminosities and discusses open questions.
Remarks on the maximum luminosity
NASA Astrophysics Data System (ADS)
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
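For scale (a simple unit conversion, not part of the paper itself), the Planck luminosity quoted above evaluates to

    L_P = c⁵/G ≈ (3.0 × 10⁸ m/s)⁵ / (6.67 × 10⁻¹¹ m³ kg⁻¹ s⁻²) ≈ 3.6 × 10⁵² W ≈ 3.6 × 10⁵⁹ erg/s,

so even the most luminous observed astrophysical events remain well below this bound.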
ERIC Educational Resources Information Center
Adams, Stephen
1998-01-01
Describes a project in which students create wind machines to harness the wind's power and do mechanical work. Demonstrates kinetic and potential energy conversions and makes work and power calculations meaningful. Students conduct hands-on investigations with their machines. (DDR)
Hard X-Ray Emission and the Ionizing Source in LINERs
NASA Technical Reports Server (NTRS)
Terashima, Yuichi; Ho, Luis C.; Ptak, Andrew F.
2000-01-01
We report X-ray fluxes in the 2-10 keV band from LINERs (low-ionization nuclear emission-line regions) and low-luminosity Seyfert galaxies obtained with the ASCA satellite. Observed X-ray luminosities are in the range between 4 × 10³⁹ and 5 × 10⁴¹ erg/s, which are significantly smaller than that of the "classical" low-luminosity Seyfert 1 galaxy NGC 4051. We found that 2-10 keV X-ray luminosities of LINERs with broad Hα emission in their optical spectra (LINER 1s) are proportional to their Hα luminosities. This correlation strongly supports the hypothesis that the dominant ionizing source in LINER 1s is photoionization by hard photons from low-luminosity AGNs. On the other hand, the X-ray luminosities of most LINERs without broad Hα emission (LINER 2s) in our sample are lower than those of LINER 1s at a given Hα luminosity. The observed X-ray luminosities in these objects are insufficient to power their Hα luminosities, suggesting that their primary ionizing source is other than an AGN, or that an AGN, if present, is obscured even at energies above 2 keV.
Optimizing integrated luminosity of future hadron colliders
NASA Astrophysics Data System (ADS)
Benedikt, Michael; Schulte, Daniel; Zimmermann, Frank
2015-10-01
The integrated luminosity, a key figure of merit for any particle-physics collider, is closely linked to the peak luminosity and to the beam lifetime. The instantaneous peak luminosity of a collider is constrained by a number of boundary conditions, such as the available beam current, the maximum beam-beam tune shift with acceptable beam stability and reasonable luminosity lifetime (i.e., the empirical "beam-beam limit"), or the event pileup in the physics detectors. The beam lifetime at high-luminosity hadron colliders is largely determined by particle burn off in the collisions. In future highest-energy circular colliders synchrotron radiation provides a natural damping mechanism, which can be exploited for maximizing the integrated luminosity. In this article, we derive analytical expressions describing the optimized integrated luminosity, the corresponding optimum store length, and the time evolution of relevant beam parameters, without or with radiation damping, while respecting a fixed maximum value for the total beam-beam tune shift or for the event pileup in the detector. Our results are illustrated by examples for the proton-proton luminosity of the existing Large Hadron Collider (LHC) at its design parameters, of the High-Luminosity Large Hadron Collider (HL-LHC), and of the Future Circular Collider (FCC-hh).
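A minimal sketch of the quantities involved, with notation assumed here rather than taken from the article: for round beams colliding head-on, the instantaneous luminosity is approximately

    L ≈ f_rev n_b N² / (4π σ*²) · F,

where f_rev is the revolution frequency, n_b the number of bunches, N the bunch population, σ* the transverse beam size at the interaction point, and F a geometric reduction factor for the crossing angle. Burn-off alone depletes the bunches as dN/dt = −σ_tot n_IP L / n_b for total cross section σ_tot and n_IP interaction points, and maximizing the integrated luminosity then amounts to choosing the store length that balances the decaying L(t) against the fixed turnaround time between stores.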
Multiple Cylinder Free-Piston Stirling Machinery
NASA Astrophysics Data System (ADS)
Berchowitz, David M.; Kwon, Yong-Rak
In order to improve the specific power of piston-cylinder type machinery, there is a point in capacity or power where an advantage accrues with increasing number of piston-cylinder assemblies. In the case of Stirling machinery where primary energy is transferred across the casing wall of the machine, this consideration is even more important. This is due primarily to the difference in scaling of basic power and the required heat transfer. Heat transfer is found to be progressively limited as the size of the machine increases. Multiple cylinder machines tend to preserve the surface area to volume ratio at more favorable levels. In addition, the spring effect of the working gas in the so-called alpha configuration is often sufficient to provide a high frequency resonance point that improves the specific power. There are a number of possible multiple cylinder configurations. The simplest is an opposed pair of piston-displacer machines (beta configuration). A three-cylinder machine requires stepped pistons to obtain proper volume phase relationships. Four to six cylinder configurations are also possible. A small demonstrator inline four cylinder alpha machine has been built to demonstrate both cooling operation and power generation. Data from this machine verifies theoretical expectations and is used to extrapolate the performance of future machines. Vibration levels are discussed and it is argued that some multiple cylinder machines have no linear component to the casing vibration but may have a nutating couple. Example applications are discussed ranging from general purpose coolers, computer cooling, exhaust heat power extraction and some high power engines.
Testing and Improving the Luminosity Relations for Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Collazzi, Andrew C.
2012-01-01
Gamma Ray Bursts (GRBs) have several luminosity relations where a measurable property of a burst light curve or spectrum is correlated with the burst luminosity. These luminosity relations are calibrated for the fraction of bursts with spectroscopic redshifts and hence the known luminosities. GRBs have thus become known as a type of "standard candle” where standard candle is meant in the usual sense that luminosities can be derived from measurable properties of the bursts. GRBs can therefore be used for the same cosmology applications as Type Ia supernovae, including the construction of the Hubble Diagram and measuring massive star formation rate. The greatest disadvantage of using GRBs as standard candles is that their accuracy is lower than desired. With the recent advent of GRBs as a new standard candle, every effort must be made to test and improve the distance measures. Here, methods are employed to do just that. First, generalized forms of two tests are performed on the luminosity relations. All the luminosity relations pass one of these tests, and all but two pass the other. Even with this failure, redundancies in using multiple luminosity relations allows all the luminosity relations to retain value. Next, the "Firmani relation” is shown to have poorer accuracy than first advertised. It is also shown to be derivable from two other luminosity relations. For these reasons, the Firmani relation is useless for cosmology. The Amati relation is then revisited and shown to be an artifact of a combination of selection effects. Therefore, the Amati relation is also not good for cosmology. Fourthly, the systematic errors involved in measuring a luminosity indicator (Epeak) are measured. The result is an irreducible systematic error of 28%. Finally, the work concludes with a discussion about the impact of the work and the future of GRB luminosity relations.
Particle content, radio-galaxy morphology, and jet power: all radio-loud AGN are not equal
NASA Astrophysics Data System (ADS)
Croston, J. H.; Ineson, J.; Hardcastle, M. J.
2018-05-01
Ongoing and future radio surveys aim to trace the evolution of black hole growth and feedback from active galactic nuclei (AGNs) throughout cosmic time; however, there remain major uncertainties in translating radio luminosity functions into a reliable assessment of the energy input as a function of galaxy and/or dark matter halo mass. A crucial and long-standing problem is the composition of the radio-lobe plasma that traces AGN jet activity. In this paper, we carry out a systematic comparison of the plasma conditions in Fanaroff & Riley class I and II radio galaxies to demonstrate conclusively that their internal composition is systematically different. This difference is best explained by the presence of an energetically dominant proton population in the FRI, but not the FRII radio galaxies. We show that, as expected from this systematic difference in particle content, radio morphology also affects the jet-power/radio-luminosity relationship, with FRII radio galaxies having a significantly lower ratio of jet power to radio luminosity than the FRI cluster radio sources used to derive jet-power scaling relations via X-ray cavity measurements. Finally, we also demonstrate conclusively that lobe composition is unconnected to accretion mode (optical excitation class): the internal conditions of low- and high-excitation FRII radio lobes are indistinguishable. We conclude that inferences of population-wide AGN impact require careful assessment of the contribution of different jet subclasses, particularly given the increased diversity of jet evolutionary states expected to be present in deep, low-frequency radio surveys such as the LOFAR Two-Metre Sky Survey.
Illuminating Gravitational Waves
NASA Astrophysics Data System (ADS)
Kasliwal, Mansi; GROWTH (Global Relay of Observatories Watching Transients Happen) Team
2018-01-01
On August 17 2017, for the first time, an electromagnetic counterpart to gravitational waves was detected. Two neutron stars merged and lit up the entire electromagnetic spectrum, from gamma-rays to the radio. The infrared signature vividly demonstrates that neutron star mergers are indeed the long-sought production sites that forge heavy elements by r-process nucleosynthesis. The weak gamma-rays are dissimilar to classical short gamma-ray bursts with ultra-relativistic jets. Instead, by synthesizing a panchromatic dataset, we suggest that break-out of a wide-angle, mildly-relativistic cocoon engulfing the jet elegantly explains the low-luminosity gamma-rays, the high-luminosity ultraviolet-optical-infrared and the delayed radio/X-ray emission. I conclude with the promise of a literally bright and loud future, thanks to even more sensitive survey telescopes and gravitational wave interferometers.
High speed operation of permanent magnet machines
NASA Astrophysics Data System (ADS)
El-Refaie, Ayman M.
This work proposes methods to extend the high-speed operating capabilities of both the interior PM (IPM) and surface PM (SPM) machines. For interior PM machines, this research has developed and presented the first thorough analysis of how a new bi-state magnetic material can be usefully applied to the design of IPM machines. Key elements of this contribution include identifying how the unique properties of the bi-state magnetic material can be applied most effectively in the rotor design of an IPM machine by "unmagnetizing" the magnet cavity center posts rather than the outer bridges. The importance of elevated rotor speed in making the best use of the bi-state magnetic material while recognizing its limitations has been identified. For surface PM machines, this research has provided, for the first time, a clear explanation of how fractional-slot concentrated windings can be applied to SPM machines in order to achieve the necessary conditions for optimal flux weakening. A closed-form analytical procedure for analyzing SPM machines designed with concentrated windings has been developed. Guidelines for designing SPM machines using concentrated windings in order to achieve optimum flux weakening are provided. Analytical and numerical finite element analysis (FEA) results have provided promising evidence of the scalability of the concentrated winding technique with respect to the number of poles, machine aspect ratio, and output power rating. Useful comparisons between the predicted performance characteristics of SPM machines equipped with concentrated windings and both SPM and IPM machines designed with distributed windings are included. Analytical techniques have been used to evaluate the impact of the high pole number on various converter performance metrics. Both analytical techniques and FEA have been used for evaluating the eddy-current losses in the surface magnets due to the stator winding subharmonics. Techniques for reducing these losses have been investigated. A 6kW, 36slot/30pole prototype SPM machine has been designed and built. Experimental measurements have been used to verify the analytical and FEA results. These test results have demonstrated that wide constant-power speed range can be achieved. Other important machine features such as the near-sinusoidal back-emf, high efficiency, and low cogging torque have also been demonstrated.
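For orientation, the surface-PM flux-weakening condition usually invoked in this context (stated here with assumed notation, not quoted from the dissertation) is that the characteristic current equal the rated current,

    I_ch = ψ_m / L_s = I_rated,

where ψ_m is the magnet flux linkage and L_s the stator (synchronous) inductance; fractional-slot concentrated windings raise L_s enough to approach this condition, which is what enables the wide constant-power speed range demonstrated by the prototype.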
HerMES: Redshift Evolution of the Cosmic Infrared Background from Herschel/SPIRE
NASA Astrophysics Data System (ADS)
Vieira, Joaquin; HerMES
2013-01-01
We report on the redshift evolution of the cosmic infrared background (CIB) at wavelengths of 70-1100 microns. Using data from the Herschel Multi-tiered Extragalactic Survey (HerMES) of the GOODS-N field, we statistically correlate fluctuations in the CIB with external catalogs. We use a deep Spitzer-MIPS 24 micron flux-limited catalog complete with redshifts and stack on MIPS 70 and 160 micron, Herschel-SPIRE 250, 350, and 500 micron, and JCMT-AzTEC 1100 micron maps. We measure the co-moving infrared luminosity density at 0.1
Matching into the Helical Bunch Coalescing Channel for a High Luminosity Muon Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sy, Amy; Ankenbrandt, Charles; Derbenev, Yaroslav
2015-09-01
For high luminosity in a muon collider, muon bunches that have been cooled in the six-dimensional helical cooling channel (HCC) must be merged into a single bunch and further cooled in preparation for acceleration and transport to the collider ring. The helical bunch coalescing channel has been previously simulated and provides the most natural match from helical upstream and downstream subsystems. This work focuses on the matching from the exit of the multiple bunch HCC into the start of the helical bunch coalescing channel. The simulated helical matching section simultaneously matches the helical spatial period λ in addition to providing the necessary acceleration for efficient bunch coalescing. Previous studies assumed that the acceleration of muon bunches from p = 209.15 MeV/c to 286.816 MeV/c and matching of λ from 0.5 m to 1.0 m could be accomplished with zero particle losses and zero emittance growth in the individual bunches. This study demonstrates nonzero values for both particle loss and emittance growth, and provides considerations for reducing these adverse effects to best preserve high luminosity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk
Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity L_ν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.
V and K-band Mass-Luminosity Relations for M dwarf Stars
NASA Astrophysics Data System (ADS)
Benedict, G. Fritz; Henry, Todd J.; McArthur, Barbara; Franz, Otto G.; Wasserman, Lawrence H.; Dieterich, Sergio
2015-01-01
Applying Hubble Space Telescope Fine Guidance Sensor astrometric techniques developed to establish relative orbits for binary stars (Franz et al. 1998, AJ, 116, 1432), determine masses of binary components (Benedict et al. 2001, AJ, 121, 1607), and measure companion masses of exoplanet host stars (McArthur et al. 2010, ApJ, 715, 1203), we derive masses with an average 2.1% error for 24 components of 12 M dwarf binary star systems. Masses range from 0.08 to 0.40 solar masses. With these we update the lower Main Sequence V-band Mass-Luminosity Relation first shown in Henry et al. (1999, ApJ, 512, 864). We demonstrate that a Mass-Luminosity Relation in the K-band has far less scatter than in the V-band. For the eight binary components for which we have component magnitude differences in the K-band, the RMS residual drops from 0.5 magnitude in the V-band to 0.05 magnitude in the K-band. These relations can be used to estimate the masses of the ubiquitous red dwarfs that account for 75% of all stars, to an accuracy of 5%, which is much better than ever before.
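As a reminder of the relation underlying such mass determinations (standard orbital mechanics, not a result specific to this work): once the relative orbit and the parallax give the semimajor axis a in AU and the period P in years, Kepler's third law fixes the total system mass in solar units,

    M_A + M_B = a³ / P²,

and the measured motion of each component about the barycentre then divides this total into the individual component masses that enter the mass-luminosity relations.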
How Common are the Magellanic Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Lulu; Gerke, Brian F.; Wechsler, Risa H.
2011-05-20
We introduce a probabilistic approach to the problem of counting dwarf satellites around host galaxies in databases with limited redshift information. This technique is used to investigate the occurrence of satellites with luminosities similar to the Magellanic Clouds around hosts with properties similar to the Milky Way in the object catalog of the Sloan Digital Sky Survey. Our analysis uses data from SDSS Data Release 7, selecting candidate Milky-Way-like hosts from the spectroscopic catalog and candidate analogs of the Magellanic Clouds from the photometric catalog. Our principal result is the probability for a Milky-Way-like galaxy to host N_sat close satellites with luminosities similar to the Magellanic Clouds. We find that 81 percent of galaxies like the Milky Way have no such satellites within a radius of 150 kpc, 11 percent have one, and only 3.5 percent of hosts have two. The probabilities are robust to changes in host and satellite selection criteria, background-estimation technique, and survey depth. These results demonstrate that the Milky Way has significantly more satellites than a typical galaxy of its luminosity; this fact is useful for understanding the larger cosmological context of our home galaxy.
Effect of Machining Parameters on Oxidation Behavior of Mild Steel
NASA Astrophysics Data System (ADS)
Majumdar, P.; Shekhar, S.; Mondal, K.
2015-01-01
This study aims to find out a correlation between machining parameters, resultant microstructure, and isothermal oxidation behavior of lathe-machined mild steel in the temperature range of 660-710 °C. The tool rake angles "α" used were +20°, 0°, and -20°, and cutting speeds used were 41, 232, and 541 mm/s. Under isothermal conditions, non-machined and machined mild steel samples follow parabolic oxidation kinetics with activation energy of 181 and ~400 kJ/mol, respectively. Exaggerated grain growth of the machined surface was observed, whereas, the center part of the machined sample showed minimal grain growth during oxidation at higher temperatures. Grain growth on the surface was attributed to the reduction of strain energy at high temperature oxidation, which was accumulated on the sub-region of the machined surface during machining. It was also observed that characteristic surface oxide controlled the oxidation behavior of the machined samples. This study clearly demonstrates the effect of equivalent strain, roughness, and grain size due to machining, and subsequent grain growth on the oxidation behavior of the mild steel.
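As a brief aside on the quoted kinetics: the parabolic law and Arrhenius rate constant below are the standard textbook forms assumed here (the paper's exact notation may differ), with the abstract's activation energies entering through Q.

```latex
% Parabolic oxidation kinetics with an Arrhenius rate constant (standard assumed forms):
\[
  x^2 = k_p\,t , \qquad k_p = k_0 \exp\!\left(-\frac{Q}{RT}\right),
\]
% so the slope of ln(k_p) versus 1/T gives Q: about 181 kJ/mol for the
% non-machined samples and roughly 400 kJ/mol for the machined ones.
```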
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Adriana; et al.
Long-range beam-beam (LRBB) interactions can be a source of emittance growth and beam losses in the LHC during physics and will become even more relevant with the smaller β* and higher bunch intensities foreseen for the High Luminosity LHC upgrade (HL-LHC), in particular if operated without crab cavities. Both beam losses and emittance growth could be mitigated by compensating the non-linear LRBB kick with a correctly placed current carrying wire. Such a compensation scheme is currently being studied in the LHC through a demonstration test using current-bearing wires embedded into collimator jaws, installed either side of the high luminosity interaction regions. For HL-LHC two options are considered, a current-bearing wire as for the demonstrator, or electron lenses, as the ideal distance between the particle beam and compensating current may be too small to allow the use of solid materials. This paper reports on the ongoing activities for both options, covering the progress of the wire-in-jaw collimators, the foreseen LRBB experiments at the LHC, and first considerations for the design of the electron lenses to ultimately replace material wires for HL-LHC.
TEACHING PHYSICS: A computer-based revitalization of Atwood's machine
NASA Astrophysics Data System (ADS)
Trumper, Ricardo; Gelbman, Moshe
2000-09-01
Atwood's machine is used in a microcomputer-based experiment to demonstrate Newton's second law with considerable precision. The friction force on the masses and the moment of inertia of the pulley can also be estimated.
An intelligent CNC machine control system architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, D.J.; Loucks, C.S.
1996-10-01
Intelligent, agile manufacturing relies on automated programming of digitally controlled processes. Currently, processes such as Computer Numerically Controlled (CNC) machining are difficult to automate because of highly restrictive controllers and poor software environments. It is also difficult to utilize sensors and process models for adaptive control, or to integrate machining processes with other tasks within a factory floor setting. As part of a Laboratory Directed Research and Development (LDRD) program, a CNC machine control system architecture based on object-oriented design and graphical programming has been developed to address some of these problems and to demonstrate automated agile machining applications using platform-independent software.
NASA Technical Reports Server (NTRS)
Waldron, W. L.
1985-01-01
The observed X-ray emission from early-type stars can be explained by the recombination stellar wind model (or base coronal model). The model predicts that the true X-ray luminosity from the base coronal zone can be 10 to 1000 times greater than the observed X-ray luminosity. From the models, scaling laws were found for the true and observed X-ray luminosities. These scaling laws predict that the ratio of the observed X-ray luminosity to the bolometric luminosity is functionally dependent on several stellar parameters. When applied to several other O and B stars, it is found that the values of the predicted ratio agree very well with the observed values.
Implications of the Observed Ultraluminous X-Ray Source Luminosity Function
NASA Technical Reports Server (NTRS)
Swartz, Douglas A.; Tennant, Allyn; Soria, Roberto; Yukita, Mihoko
2012-01-01
We present the X-ray luminosity function (XLF) of ultraluminous X-ray (ULX) sources with 0.3-10.0 keV luminosities in excess of 10^39 erg/s in a complete sample of nearby galaxies. The XLF shows a break or cut-off at high luminosities that deviates from its pure power law distribution at lower luminosities. The cut-off is at roughly the Eddington luminosity for a 90-140 solar mass accretor. We examine the effects on the observed XLF of sample biases, of small-number statistics (at the high luminosity end) and of measurement uncertainties. We consider the physical implications of the shape and normalization of the XLF. The XLF is also compared and contrasted to results of other recent surveys.
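For context, the quoted cut-off can be cross-checked against the standard Eddington luminosity formula; the expression and numbers below are textbook values, not taken from the paper itself.

```latex
\[
  L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T}
  \approx 1.26\times10^{38}\left(\frac{M}{M_\odot}\right)\,{\rm erg\,s^{-1}},
\]
% so a 90-140 solar-mass accretor corresponds to roughly 1.1-1.8 x 10^40 erg/s,
% consistent with the high-luminosity break reported for the ULX XLF.
```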
Source localization in an ocean waveguide using supervised machine learning.
Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter
2017-09-01
Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
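A minimal scikit-learn sketch of the workflow described: normalized sample-covariance entries as input features and an SVM classifier over discretized range bins. The simulated array data, geometry, and all hyperparameters are placeholders, not the Noise09 configuration or the authors' code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sensors, n_snapshots, n_samples = 16, 32, 400

def covariance_features(range_km):
    """Simulate toy array snapshots for a source range and return the
    normalized sample covariance matrix flattened into real features."""
    # Placeholder 'physics': the phase gradient across the array varies with range.
    phases = np.arange(n_sensors) * (1.0 / range_km)
    signal = np.exp(1j * 2 * np.pi * phases)[:, None] * rng.standard_normal((1, n_snapshots))
    noise = 0.3 * (rng.standard_normal((n_sensors, n_snapshots))
                   + 1j * rng.standard_normal((n_sensors, n_snapshots)))
    snapshots = signal + noise
    c = snapshots @ snapshots.conj().T / n_snapshots          # sample covariance matrix
    c /= np.linalg.norm(c)                                    # normalization step
    iu = np.triu_indices(n_sensors)                           # keep unique entries only
    return np.concatenate([c[iu].real, c[iu].imag])

ranges = rng.uniform(1.0, 10.0, n_samples)                    # toy source ranges in km
labels = np.digitize(ranges, bins=np.linspace(1.0, 10.0, 10)) # discretized range classes
X = np.array([covariance_features(r) for r in ranges])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("toy range-class accuracy:", clf.score(X_te, y_te))
```

The same feature matrix could equally be fed to a feed-forward network or random forest, or to a regressor for continuous range estimates, mirroring the comparison made in the paper.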
Experimental Realization of a Quantum Support Vector Machine
NASA Astrophysics Data System (ADS)
Li, Zhaokai; Liu, Xiaomei; Xu, Nanyang; Du, Jiangfeng
2015-04-01
The fundamental principle of artificial intelligence is the ability of machines to learn from previous experience and do future work accordingly. In the age of big data, classical learning machines often require huge computational resources in many practical cases. Quantum machine learning algorithms, on the other hand, could be exponentially faster than their classical counterparts by utilizing quantum parallelism. Here, we demonstrate a quantum machine learning algorithm to implement handwriting recognition on a four-qubit NMR test bench. The quantum machine learns standard character fonts and then recognizes handwritten characters from a set with two candidates. Because of the widespread importance of artificial intelligence and its tremendous consumption of computational resources, quantum speedup would be extremely attractive against the challenges of big data.
A STUDY OF RO-VIBRATIONAL OH EMISSION FROM HERBIG Ae/Be STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brittain, Sean D.; Reynolds, Nickalas; Najita, Joan R.
2016-10-20
We present a study of ro-vibrational OH and CO emission from 21 disks around Herbig Ae/Be stars. We find that the OH and CO luminosities are proportional over a wide range of stellar ultraviolet luminosities. The OH and CO line profiles are also similar, indicating that they arise from roughly the same radial region of the disk. The CO and OH emission are both correlated with the far-ultraviolet luminosity of the stars, while the polycyclic aromatic hydrocarbon (PAH) luminosity is correlated with the longer wavelength ultraviolet luminosity of the stars. Although disk flaring affects the PAH luminosity, it is not a factor in the luminosity of the OH and CO emission. These properties are consistent with models of UV-irradiated disk atmospheres. We also find that the transition disks in our sample, which have large optically thin inner regions, have lower OH and CO luminosities than non-transition disk sources with similar ultraviolet luminosities. This result, while tentative given the small sample size, is consistent with the interpretation that transition disks lack a gaseous disk close to the star.
NASA Astrophysics Data System (ADS)
Parsa, Shaghayegh; Dunlop, James S.; McLure, Ross J.; Mortlock, Alice
2016-03-01
We present a new, robust measurement of the evolving rest-frame ultraviolet (UV) galaxy luminosity function (LF) over the key redshift range from z ≃ 2 to z ≃ 4. Our results are based on the high dynamic range provided by combining the Hubble Ultra Deep Field (HUDF), CANDELS/GOODS-South, and UltraVISTA/COSMOS surveys. We utilize the unparalleled multifrequency photometry available in this survey `wedding cake' to compile complete galaxy samples at z ≃ 2, 3, 4 via photometric redshifts (calibrated against the latest spectroscopy) rather than colour-colour selection, and to determine accurate rest-frame UV absolute magnitudes (M_1500) from spectral energy distribution (SED) fitting. Our new determinations of the UV LF extend from M_1500 ≃ -22 (AB mag) down to M_1500 = -14.5, -15.5 and -16 at z ≃ 2, 3 and 4, respectively (thus, reaching ≃ 3-4 mag fainter than previous blank-field studies at z ≃ 2,3). At z ≃ 2, 3, we find a much shallower faint-end slope (α = -1.32 ± 0.03) than reported in some previous studies (α ≃ -1.7), and demonstrate that this new measurement is robust. By z ≃ 4, the faint-end slope has steepened slightly, to α = -1.43 ± 0.04, and we show that these measurements are consistent with the overall evolutionary trend from z = 0 to 8. Finally, we find that while characteristic number density (φ*) drops from z ≃ 2 to z ≃ 4, characteristic luminosity (M*) brightens by ≃ 1 mag. This, combined with the new flatter faint-end slopes, has the consequence that UV luminosity density (and hence unobscured star formation density) peaks at z ≃ 2.5-3, when the Universe was ≃ 2.5 Gyr old.
A New Determination of the Luminosity Function of the Galactic Halo.
NASA Astrophysics Data System (ADS)
Dawson, Peter Charles
The luminosity function of the galactic halo is determined by subtracting from the observed numbers of proper motion stars in the LHS Catalogue the expected numbers of main-sequence, degenerate, and giant stars of the disk population. Selection effects are accounted for by Monte Carlo simulations based upon realistic colour-luminosity relations and kinematic models. The catalogue is shown to be highly complete, and a calibration of the magnitude estimates therein is presented. It is found that, locally, the ratio of disk to halo material is close to 950, and that the mass density in main sequence and subgiant halo stars with 3 < M_V < 14 is about 2 × 10^-5 M_⊙ pc^-3. With due allowance for white dwarfs and binaries, and taking into account the possibility of a moderate rate of halo rotation, it is argued that the total density does not much exceed 5 × 10^-5 M_⊙ pc^-3, in which case the total mass interior to the sun is of the order of 5 × 10^8 M_⊙ for a density distribution which projects to a de Vaucouleurs r^(1/4) law. It is demonstrated that if the Wielen luminosity function is a faithful representation of the stellar distribution in the solar neighbourhood, then the observed numbers of large proper motion stars are inconsistent with the presence of an intermediate population at the level, and with the kinematics advocated recently by Gilmore and Reid. The initial mass function (IMF) of the halo is considered, and weak evidence is presented that its slope is at least not shallower than that of the disk population IMF. A crude estimate of the halo's age, based on a comparison of the main sequence turnoff in the reduced proper motion diagram with theoretical models is obtained; a tentative lower limit is 15 Gyr with a best estimate of between 15 and 18 Gyr. Finally, the luminosity function obtained here is compared with those determined in other investigations.
NASA Technical Reports Server (NTRS)
Wood, Brian E.; Brown, Alexander; Linsky, Jeffrey L.; Kellett, Barry J.; Bromage, Gordon E.; Hodgkin, Simon T.; Pye, John P.
1994-01-01
We report the results of a volume-limited ROSAT Wide Field Camera (WFC) survey of all nondegenerate stars within 10 pc. Of the 220 known star systems within 10 pc, we find that 41 are positive detections in at least one of the two WFC filter bandpasses (S1 and S2), while we consider another 14 to be marginal detections. We compute X-ray luminosities for the WFC detections using Einstein Imaging Proportional Counter (IPC) data, and these IPC luminosities are discussed along with the WFC luminosities throughout the paper for purposes of comparison. Extreme ultraviolet (EUV) luminosity functions are computed for single stars of different spectral types using both S1 and S2 luminosities, and these luminosity functions are compared with X-ray luminosity functions derived by previous authors using IPC data. We also analyze the S1 and S2 luminosity functions of the binary stars within 10 pc. We find that most stars in binary systems do not emit EUV radiation at levels different from those of single stars, but there may be a few EUV-luminous multiple-star systems which emit excess EUV radiation due to some effect of binarity. In general, the ratio of X-ray luminosity to EUV luminosity increases with increasing coronal emission, suggesting that coronally active stars have higher coronal temperatures. We find that our S1, S2, and IPC luminosities are well correlated with rotational velocity, and we compare activity-rotation relations determined using these different luminosities. Late M stars are found to be significantly less luminous in the EUV than other late-type stars. The most natural explanation for this result is the concept of coronal saturation -- the idea that late-type stars can emit only a limited fraction of their total luminosity in X-ray and EUV radiation, which means stars with very low bolometric luminosities must have relatively low X-ray and EUV luminosities as well. The maximum level of coronal emission from stars with earlier spectral types is studied also. To understand the saturation levels for these stars, we have compiled a large number of IPC luminosities for stars with a wide variety of spectral types and luminosity classes. We show quantitatively that if the Sun were completely covered with X-ray-emitting coronal loops, it would be near the saturation limit implied by this compilation, supporting the idea that stars near upper limits in coronal activity are completely covered with active regions.
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
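As an illustrative sketch of the generative-model ingredient only (not the cluster-update construction of the paper), the following trains a small restricted Boltzmann machine with one-step contrastive divergence on toy binary "spin" configurations and then draws samples by Gibbs sweeps; the sizes, data, and hyperparameters are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden, lr = 64, 32, 0.05      # e.g. an 8x8 spin lattice, flattened

# Toy training data: mostly-aligned binary configurations with ~15% of sites flipped.
base = rng.integers(0, 2, size=(200, 1))
flips = rng.random((200, n_visible)) < 0.15
data = np.where(flips, 1 - base, base).astype(float)

W = 0.01 * rng.standard_normal((n_visible, n_hidden))
a = np.zeros(n_visible)                     # visible biases
b = np.zeros(n_hidden)                      # hidden biases
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for epoch in range(50):
    # Positive phase: hidden probabilities given the data.
    ph = sigmoid(data @ W + b)
    h = (rng.random(ph.shape) < ph).astype(float)
    # Negative phase (CD-1): one Gibbs step back to visibles and up again.
    pv = sigmoid(h @ W.T + a)
    v_neg = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v_neg @ W + b)
    # Contrastive-divergence parameter updates.
    W += lr * (data.T @ ph - v_neg.T @ ph_neg) / len(data)
    a += lr * (data - v_neg).mean(axis=0)
    b += lr * (ph - ph_neg).mean(axis=0)

# Generative sampling: a few Gibbs sweeps starting from a random configuration.
v = rng.integers(0, 2, size=(1, n_visible)).astype(float)
for _ in range(100):
    h = (rng.random((1, n_hidden)) < sigmoid(v @ W + b)).astype(float)
    v = (rng.random((1, n_visible)) < sigmoid(h @ W.T + a)).astype(float)
print("sampled configuration mean 'magnetization':", v.mean())
```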
The effects of a nutrition education intervention on vending machine sales on a university campus.
Brown, Mary V; Flint, Matthew; Fuqua, James
2014-01-01
To determine the effects of a nutrition information intervention on the vending machine purchases on a college campus. Five high-use vending machines were selected for the intervention, which was conducted in the fall of 2011. Baseline sales data were collected in the 5 machines prior to the intervention. At the time of the intervention, color-coded stickers were placed near each item selection to identify less healthy (red), moderately healthy (yellow), and more healthy (green) snack items. Sales data were collected during the 2-week intervention. Purchases of red- and yellow-stickered foods were reduced in most of the machines; moreover, sales of the green-stickered items increased in all of the machines. The increased purchases of healthier snack options demonstrate encouraging patterns that support more nutritious and healthy alternatives in vending machines.
NASA Astrophysics Data System (ADS)
Huertas-Company, M.; Primack, J. R.; Dekel, A.; Koo, D. C.; Lapiner, S.; Ceverino, D.; Simons, R. C.; Snyder, G. F.; Bernardi, M.; Chen, Z.; Domínguez-Sánchez, H.; Lee, C. T.; Margalef-Bentabol, B.; Tuccillo, D.
2018-05-01
We use machine learning to identify in color images of high-redshift galaxies an astrophysical phenomenon predicted by cosmological simulations. This phenomenon, called the blue nugget (BN) phase, is the compact star-forming phase in the central regions of many growing galaxies that follows an earlier phase of gas compaction and is followed by a central quenching phase. We train a convolutional neural network (CNN) with mock “observed” images of simulated galaxies at three phases of evolution— pre-BN, BN, and post-BN—and demonstrate that the CNN successfully retrieves the three phases in other simulated galaxies. We show that BNs are identified by the CNN within a time window of ∼0.15 Hubble times. When the trained CNN is applied to observed galaxies from the CANDELS survey at z = 1–3, it successfully identifies galaxies at the three phases. We find that the observed BNs are preferentially found in galaxies at a characteristic stellar mass range, 10^9.2–10^10.3 M_⊙ at all redshifts. This is consistent with the characteristic galaxy mass for BNs as detected in the simulations and is meaningful because it is revealed in the observations when the direct information concerning the total galaxy luminosity has been eliminated from the training set. This technique can be applied to the classification of other astrophysical phenomena for improved comparison of theory and observations in the era of large imaging surveys and cosmological simulations.
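A minimal PyTorch sketch of a three-class image CNN of the kind described (pre-BN / BN / post-BN). The architecture, image size, and random tensors standing in for mock galaxy cutouts are placeholders, not the network or training set used in the paper.

```python
import torch
import torch.nn as nn

class PhaseCNN(nn.Module):
    """Small convolutional classifier for three evolutionary phases."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One toy training step on random "images" standing in for mock galaxy cutouts.
model = PhaseCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)            # batch of 3-colour 64x64 cutouts
labels = torch.randint(0, 3, (8,))            # pre-BN=0, BN=1, post-BN=2
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("one-step training loss:", float(loss))
```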
How much information is in a jet?
NASA Astrophysics Data System (ADS)
Datta, Kaustuv; Larkoski, Andrew
2017-06-01
Machine learning techniques are increasingly being applied toward data analyses at the Large Hadron Collider, especially with applications for discrimination of jets with different originating particles. Previous studies of the power of machine learning applied to jet physics have typically employed image recognition, natural language processing, or other algorithms that have been extensively developed in computer science. While these studies have demonstrated impressive discrimination power, often exceeding that of widely-used observables, they have been formulated in a non-constructive manner and it is not clear what additional information the machines are learning. In this paper, we study machine learning for jet physics constructively, expressing all of the information in a jet onto sets of observables that completely and minimally span N-body phase space. For concreteness, we study the application of machine learning for discrimination of boosted, hadronic decays of Z bosons from jets initiated by QCD processes. Our results demonstrate that the information in a jet that is useful for discrimination power of QCD jets from Z bosons is saturated by only considering observables that are sensitive to 4-body (8 dimensional) phase space.
Intelligence-Augmented Rat Cyborgs in Maze Solving.
Yu, Yipeng; Pan, Gang; Gong, Yongyue; Xu, Kedi; Zheng, Nenggan; Hua, Weidong; Zheng, Xiaoxiang; Wu, Zhaohui
2016-01-01
Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.
Hands-free human-machine interaction with voice
NASA Astrophysics Data System (ADS)
Juang, B. H.
2004-05-01
Voice is a natural communication interface between a human and a machine. The machine, when placed in today's communication networks, may be configured to provide automation to save substantial operating cost, as demonstrated in AT&T's VRCP (Voice Recognition Call Processing), or to facilitate intelligent services, such as virtual personal assistants, to enhance individual productivity. These intelligent services often need to be accessible anytime, anywhere (e.g., in cars when the user is in a hands-busy-eyes-busy situation or during meetings where constantly talking to a microphone is either undesirable or impossible), and thus call for advanced signal processing and automatic speech recognition techniques which support what we call ``hands-free'' human-machine communication. These techniques entail a broad spectrum of technical ideas, ranging from use of directional microphones and acoustic echo cancellation to robust speech recognition. In this talk, we highlight a number of key techniques that were developed for hands-free human-machine communication in the mid-1990s after Bell Labs became a unit of Lucent Technologies. A video clip will be played to demonstrate the accomplishment.
Three-dimensionally printed biological machines powered by skeletal muscle.
Cvetkovic, Caroline; Raman, Ritu; Chan, Vincent; Williams, Brian J; Tolish, Madeline; Bajaj, Piyush; Sakar, Mahmut Selman; Asada, H Harry; Saif, M Taher A; Bashir, Rashid
2014-07-15
Combining biological components, such as cells and tissues, with soft robotics can enable the fabrication of biological machines with the ability to sense, process signals, and produce force. An intuitive demonstration of a biological machine is one that can produce motion in response to controllable external signaling. Whereas cardiac cell-driven biological actuators have been demonstrated, the requirements of these machines to respond to stimuli and exhibit controlled movement merit the use of skeletal muscle, the primary generator of actuation in animals, as a contractile power source. Here, we report the development of 3D printed hydrogel "bio-bots" with an asymmetric physical design and powered by the actuation of an engineered mammalian skeletal muscle strip to result in net locomotion of the bio-bot. Geometric design and material properties of the hydrogel bio-bots were optimized using stereolithographic 3D printing, and the effect of collagen I and fibrin extracellular matrix proteins and insulin-like growth factor 1 on the force production of engineered skeletal muscle was characterized. Electrical stimulation triggered contraction of cells in the muscle strip and net locomotion of the bio-bot with a maximum velocity of ∼156 μm s^-1, which is over 1.5 body lengths per min. Modeling and simulation were used to understand both the effect of different design parameters on the bio-bot and the mechanism of motion. This demonstration advances the goal of realizing forward-engineered integrated cellular machines and systems, which can have a myriad array of applications in drug screening, programmable tissue engineering, drug delivery, and biomimetic machine design.
Electromechanical converters for electric vehicles
NASA Astrophysics Data System (ADS)
Ambros, T.; Burduniuc, M.; Deaconu, S. I.; Rujanschi, N.
2018-01-01
The paper presents the analysis of various constructive schemes of synchronous electromechanical converters with permanent magnets fixed on the rotor and of asynchronous converters with a short-circuited rotor. Various electrical stator winding schemes have also been compared, demonstrating the efficiency of copper utilization in toroidal windings. The electromagnetic calculation of the axial machine has particularities compared with that of the cylindrical machine; the paper presents a method for correlating the geometry of the cylindrical and axial machines. In this case, the methods and recommendations used in the design of such machines may be applied.
X-ray studies of quasars with the Einstein Observatory. IV - X-ray dependence on radio emission
NASA Technical Reports Server (NTRS)
Worrall, D. M.; Tananbaum, H.; Giommi, P.; Zamorani, G.
1987-01-01
The X-ray properties of a sample of 114 radio-loud quasars observed with the Einstein Observatory are examined, and the results are compared with those obtained from a large sample of radio-quiet quasars. The results of statistical analysis of the dependence of X-ray luminosity on combined functions of optical and radio luminosity show that the dependence on both luminosities is important. However, statistically significant differences are found between subsamples of flat radio spectra quasars and steep radio spectra quasars with regard to dependence of X-ray luminosity on only radio luminosity. The data are consistent with radio-loud quasars having a physical component, not directly related to the optical luminosity, which produces the core radio luminosity plus 'extra' X-ray emission.
NASA Astrophysics Data System (ADS)
Santiago-Alvarado, A.; Cruz-Félix, A.; Hernández Méndez, A.; Pérez-Maldonado, Y.; Domínguez-Osante, C.
2015-05-01
Tunable lenses have attracted much attention due to their potential applications in areas such as machine vision, laser projection, ophthalmology, etc. In this work we present the design of a tunable opto-mechatronic system capable of focusing and of regulating the entrance illumination, mimicking the performance of the iris and the crystalline lens of the human eye. A solid elastic lens made of PDMS has been used to mimic the crystalline lens, and an automatic diaphragm has been used to mimic the iris of the human eye. Also, a characterization of the system has been performed, in which standard luminosity values for the human eye have been taken into account to calibrate and validate the entrance illumination levels of the overall optical system.
First Attempts at using Active Halo Control at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Joschka; Bruce, Roderik; Garcia Morales, Hector
2016-06-01
The beam halo population is a non-negligible factor for the performance of the LHC collimation system and the machine protection. In particular this could become crucial for aiming at stored beam energies of 700 MJ in the High Luminosity LHC (HL-LHC) project, in order to avoid beam dumps caused by orbit jitter and to ensure safety during a crab cavity failure. Therefore several techniques to safely deplete the halo, i.e. active halo control, are under development. In a first attempt a novel way for safe halo depletion was tested with particle narrow-band excitation employing the LHC Transverse Damper (ADT). At an energy of 450 GeV a bunch selective beam tail scraping without affecting the core distribution was attempted. This paper presents the first measurement results, as well as a simple simulation to model the underlying dynamics.
Design and Simulation of a Matching System into the Helical Cooling Channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshikawa, C.; Ankenbrandt, C.; Johnson, R. P.
2014-07-01
Muon colliders could provide the most sensitive measurement of the Higgs mass and return the US back to the Energy Frontier. Central to the capabilities of muon colliders are the cooling channels that provide the extraordinary reduction in emittance required for the precise Higgs mass measurement and increased luminosity for enhanced discovery potential of an Energy Frontier Machine. The Helical Cooling Channel (HCC) is able to achieve such emittance reduction and matching sections within the HCC have been successfully designed in the past with lossless transmission and no emittance growth. However, matching into the HCC from a straight solenoid poses a challenge, since a large emittance beam must cross transition. We elucidate on the challenge and present evaluations of two solutions, along with concepts to integrate the operations of a Charge Separator and match into the HCC.
Discovery of a low-luminosity spiral DRAGN
NASA Astrophysics Data System (ADS)
Mulcahy, D. D.; Mao, M. Y.; Mitsuishi, I.; Scaife, A. M. M.; Clarke, A. O.; Babazaki, Y.; Kobayashi, H.; Suganuma, R.; Matsumoto, H.; Tawara, Y.
2016-11-01
Standard galaxy formation models predict that large-scale double-lobed radio sources, known as DRAGNs, will always be hosted by elliptical galaxies. In spite of this, in recent years a small number of spiral galaxies have also been found to host such sources. These so-called spiral DRAGNs are still extremely rare, with only 5 cases being widely accepted. Here we report on the serendipitous discovery of a new spiral DRAGN in data from the Giant Metrewave Radio Telescope (GMRT) at 322 MHz. The host galaxy, MCG+07-47-10, is a face-on late-type Sbc galaxy with distinctive spiral arms and prominent bulge suggesting a high black hole mass. Using WISE infra-red and GALEX UV data we show that this galaxy has a star formation rate of 0.16-0.75 M_⊙ yr^-1, and that the radio luminosity is dominated by star-formation. We demonstrate that this spiral DRAGN has similar environmental properties to others of this class, but has a comparatively low radio luminosity of L_1.4 GHz = 1.12 × 10^22 W Hz^-1, two orders of magnitude smaller than other known spiral DRAGNs. We suggest that this may indicate the existence of a previously unknown low-luminosity population of spiral DRAGNs. FITS cutout image of the observed spiral DRAGN MCG+07-47-10 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/595/L8
Modelling the luminosity function of long gamma-ray bursts using Swift and Fermi
NASA Astrophysics Data System (ADS)
Paul, Debdutta
2018-01-01
I have used a sample of long gamma-ray bursts (GRBs) common to both Swift and Fermi to re-derive the parameters of the Yonetoku correlation. This allowed me to self-consistently estimate pseudo-redshifts of all the bursts with unknown redshifts. This is the first time such a large sample of GRBs from these two instruments is used, both individually and in conjunction, to model the long GRB luminosity function. The GRB formation rate is modelled as the product of the cosmic star formation rate and a GRB formation efficiency for a given stellar mass. An exponential cut-off power-law luminosity function fits the data reasonably well, with ν = 0.6 and L_b = 5.4 × 10^52 erg s^-1, and does not require a cosmological evolution. In the case of a broken power law, it is required to incorporate a sharp evolution of the break given by L_b ∼ 0.3 × 10^52 (1 + z)^2.90 erg s^-1, and the GRB formation efficiency (degenerate up to a beaming factor of GRBs) decreases with redshift as ∝ (1 + z)^-0.80. However, it is not possible to distinguish between the two models. The derived models are then used as templates to predict the distribution of GRBs detectable by CZT Imager onboard AstroSat as a function of redshift and luminosity. This demonstrates that via a quick localization and redshift measurement of even a few CZT Imager GRBs, AstroSat will help in improving the statistics of GRBs both typical and peculiar.
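Written out explicitly, the exponential cut-off power law quoted above takes the standard form below, with the abstract's best-fitting values inserted; the notation is assumed, not copied from the paper.

```latex
\[
  \phi(L) \;\propto\; \left(\frac{L}{L_b}\right)^{-\nu} \exp\!\left(-\frac{L}{L_b}\right),
  \qquad \nu = 0.6, \quad L_b = 5.4\times10^{52}\ {\rm erg\,s^{-1}},
\]
% while the broken power-law alternative needs an evolving break,
% L_b(z) ~ 0.3 x 10^52 (1+z)^{2.90} erg/s, and an efficiency scaling as (1+z)^{-0.80}.
```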
Laser processing of ceramics for microelectronics manufacturing
NASA Astrophysics Data System (ADS)
Sposili, Robert S.; Bovatsek, James; Patel, Rajesh
2017-03-01
Ceramic materials are used extensively in the microelectronics, semiconductor, and LED lighting industries because of their electrically insulating and thermally conductive properties, as well as for their high-temperature-service capabilities. However, their brittleness presents significant challenges for conventional machining processes. In this paper we report on a series of experiments that demonstrate and characterize the efficacy of pulsed nanosecond UV and green lasers in machining ceramics commonly used in microelectronics manufacturing, such as aluminum oxide (alumina) and aluminum nitride. With a series of laser pocket milling experiments, fundamental volume ablation rate and ablation efficiency data were generated. In addition, techniques for various industrial machining processes, such as shallow scribing and deep scribing, were developed and demonstrated. We demonstrate that lasers with higher average powers offer higher processing rates with the one exception of deep scribes in aluminum nitride, where a lower average power but higher pulse energy source outperformed a higher average power laser.
1984-06-29
sheet metal, machined and composite parts and assembling the components into final products ... Planning, evaluating, testing, inspecting and ... Research showed that current programs were pursuing the design and demonstration of integrated centers for sheet metal, machining and composite ... determine any metal parts required and to schedule these requirements from the machining center. Figure 3-33, Planned Composite Production, shows ...
Pellet to Part Manufacturing System for CNCs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roschli, Alex C.; Love, Lonnie J.; Post, Brian K.
Oak Ridge National Laboratory’s Manufacturing Demonstration Facility worked with Hybrid Manufacturing Technologies to develop a compact prototype composite additive manufacturing head that can effectively extrude injection molding pellets. The head interfaces with conventional CNC machine tools enabling rapid conversion of conventional machine tools to additive manufacturing tools. The intent was to enable wider adoption of Big Area Additive Manufacturing (BAAM) technology and combine BAAM technology with conventional machining systems.
NASA Astrophysics Data System (ADS)
Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2018-04-01
A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely-used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of a machine learning known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
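A minimal NumPy echo-state-network sketch of the reservoir-computing ingredient described; the hybrid scheme in the paper additionally blends in a knowledge-based model prediction, which is omitted here, and the toy signal, reservoir size, and scalings are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res, n_in = 300, 1
leak, spectral_radius, ridge = 0.3, 0.9, 1e-6

# Random input and recurrent weights; rescale recurrence to the target spectral radius.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

# Toy quasi-periodic signal standing in for a measured time series.
t = np.arange(3000) * 0.05
u = (np.sin(t) * np.sin(0.23 * t)).reshape(-1, 1)

# Drive the reservoir and collect its states.
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for i in range(len(u) - 1):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u[i])
    states[i + 1] = x

# Train the linear readout by ridge regression; states[k] has seen u[0..k-1],
# so predicting u[k] from states[k] is one-step-ahead forecasting.
X, Y = states[200:], u[200:]                 # discard a washout transient
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = X @ W_out
print("one-step-ahead RMS error:", float(np.sqrt(np.mean((pred - Y) ** 2))))
```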
Very low luminosity active galaxies and the X-ray background
NASA Technical Reports Server (NTRS)
Elvis, M.; Soltan, A.; Keel, W. C.
1984-01-01
The properties of very low luminosity active galactic nuclei are not well studied, and, in particular, their possible contribution to the diffuse X-ray background is not known. In the present investigation, an X-ray luminosity function for the range from 10^39 to 10^42.5 erg/s is constructed. The obtained X-ray luminosity function is integrated to estimate the contribution of these very low luminosity active galaxies to the diffuse X-ray background. The construction of the X-ray luminosity function is based on data obtained by Keel (1983) and some simple assumptions about optical and X-ray properties.
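In its simplest local form, the background contribution follows from integrating the luminosity function to obtain a volume emissivity; the expression below is the standard one assumed here, with the abstract's luminosity limits.

```latex
\[
  \epsilon_X = \int_{10^{39}}^{10^{42.5}} L\,\phi(L)\,\mathrm{d}L
  \quad [{\rm erg\,s^{-1}\,Mpc^{-3}}],
\]
% which, integrated over volume (and redshift, in the adopted cosmology), yields
% the estimated contribution of these nuclei to the diffuse X-ray background.
```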
The X-ray luminosity functions of Abell clusters from the Einstein Cluster Survey
NASA Technical Reports Server (NTRS)
Burg, R.; Giacconi, R.; Forman, W.; Jones, C.
1994-01-01
We have derived the present epoch X-ray luminosity function of northern Abell clusters using luminosities from the Einstein Cluster Survey. The sample is sufficiently large that we can determine the luminosity function for each richness class separately with sufficient precision to study and compare the different luminosity functions. We find that, within each richness class, the range of X-ray luminosity is quite large and spans nearly a factor of 25. Characterizing the luminosity function for each richness class with a Schechter function, we find that the characteristic X-ray luminosity, L_*, scales with richness class as L_* ∝ N_*^γ, where N_* is the corrected, mean number of galaxies in a richness class, and the best-fitting exponent is γ = 1.3 +/- 0.4. Finally, our analysis suggests that there is a lower limit to the X-ray luminosity of clusters which is determined by the integrated emission of the cluster member galaxies, and this also scales with richness class. The present sample forms a baseline for testing cosmological evolution of Abell-like clusters when an appropriate high-redshift cluster sample becomes available.
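For reference, the Schechter parametrization used for each richness class can be written in the standard form below (generic symbols assumed), with the richness scaling quoted in the abstract.

```latex
\[
  \phi(L)\,\mathrm{d}L = \phi_* \left(\frac{L}{L_*}\right)^{\alpha}
  \exp\!\left(-\frac{L}{L_*}\right)\frac{\mathrm{d}L}{L_*},
  \qquad L_* \propto N_*^{\gamma},\ \ \gamma = 1.3 \pm 0.4 .
\]
```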
Carboch, Jan; Süss, Vladimir; Kocib, Tomas
2014-01-01
Practicing with the use of a ball machine could handicap a player compared to playing against an actual opponent. Recent studies have shown some differences in swing timing and movement coordination, when a player faces a ball projection machine as opposed to a human opponent. We focused on the time of movement initiation and on stroke timing during returning tennis serves (simulated by a ball machine or by a real server). Receivers’ movements were measured on a tennis court. In spite of using a serving ball speed from 90 kph to 135 kph, results showed significant differences in movement initiation and backswing duration between serves received from a ball machine and serves received from a real server. Players had shorter movement initiation when they faced a ball machine. Backswing duration was longer for the group using a ball machine. That demonstrates different movement timing of tennis returns when players face a ball machine. Use of ball machines in tennis practice should be limited as it may disrupt stroke timing. Key points Players have shorter initial move time when they are facing the ball machine. Using the ball machine results in different swing timing and movement coordination. The use of the ball machine should be limited. PMID:24790483
Evidence for different accretion regimes in GRO J1008-57
NASA Astrophysics Data System (ADS)
Kühnel, Matthias; Fürst, Felix; Pottschmidt, Katja; Kreykenbohm, Ingo; Ballhausen, Ralf; Falkner, Sebastian; Rothschild, Richard E.; Klochkov, Dmitry; Wilms, Jörn
2017-11-01
We present a comprehensive spectral analysis of the BeXRB GRO J1008-57 over a luminosity range of three orders of magnitude using NuSTAR, Suzaku, and RXTE data. We find significant evolution of the spectral parameters with luminosity. In particular, the photon index hardens with increasing luminosity at intermediate luminosities in the range 10^36-10^37 erg s^-1. This evolution is stable and repeatedly observed over different outbursts. However, at the extreme ends of the observed luminosity range, we find that the correlation breaks down, with a significance level of at least 3.7σ. We conclude that these changes indicate transitions to different accretion regimes, which are characterized by different deceleration processes, such as Coulomb or radiation braking. We compare our observed luminosity levels of these transitions to theoretical predictions and discuss the variation of those theoretical luminosity values with fundamental neutron star parameters. Finally, we present detailed spectroscopy of the unique "triple peaked" outburst in 2014/15 which does not fit in the general parameter evolution with luminosity. The pulse profile on the other hand is consistent with what is expected at this luminosity level, arguing against a change in accretion geometry. In summary, GRO J1008-57 is an ideal target to study different accretion regimes due to the well-constrained evolution of its broad-band spectral continuum over several orders of magnitude in luminosity.
Quantum Machine Learning over Infinite Dimensions
Lau, Hoi-Kwan; Pooser, Raphael; Siopsis, George; ...
2017-02-21
Machine learning is a fascinating and exciting field within computer science. Recently, this excitement has been transferred to the quantum information realm. Currently, all proposals for the quantum version of machine learning utilize the finite-dimensional substrate of discrete variables. Here we generalize quantum machine learning to the more complex, but still remarkably practical, infinite-dimensional systems. We present the critical subroutines of quantum machine learning algorithms for an all-photonic continuous-variable quantum computer that achieve an exponential speedup compared to their equivalent classical counterparts. Finally, we also map out an experimental implementation which can be used as a blueprint for future photonic demonstrations.
An iterative learning control method with application for CNC machine tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D.I.; Kim, S.
1996-01-01
A proportional, integral, and derivative (PID) type iterative learning controller is proposed for precise tracking control of industrial robots and computer numerical controller (CNC) machine tools performing repetitive tasks. The convergence of the output error by the proposed learning controller is guaranteed under a certain condition even when the system parameters are not known exactly and unknown external disturbances exist. As the proposed learning controller is repeatedly applied to the industrial robot or the CNC machine tool with the path-dependent repetitive task, the distance difference between the desired path and the actual tracked or machined path, which is one of the most significant factors in the evaluation of control performance, is progressively reduced. The experimental results demonstrate that the proposed learning controller can improve machining accuracy when the CNC machine tool performs repetitive machining tasks.
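A minimal NumPy sketch of a PID-type iterative learning update applied to a toy first-order plant executing a repetitive trajectory; the learning gains, plant model, and reference are placeholders, not the controller or machine dynamics of the paper.

```python
import numpy as np

dt, n_steps, n_trials = 0.01, 200, 30
kp, ki, kd = 1.0, 0.5, 0.06                   # learning gains (placeholders)

t = np.arange(n_steps) * dt
reference = np.sin(2 * np.pi * t)             # repetitive desired trajectory

def run_plant(u):
    """Toy first-order plant y' = -2y + 3u, integrated with forward Euler."""
    y = np.zeros(n_steps)
    for k in range(n_steps - 1):
        y[k + 1] = y[k] + dt * (-2.0 * y[k] + 3.0 * u[k])
    return y

u = np.zeros(n_steps)                         # feedforward input, refined every trial
for trial in range(n_trials):
    y = run_plant(u)
    e = reference - y
    # PID-type iterative learning update:
    # u_{j+1}(t) = u_j(t) + Kp*e_j(t) + Ki*integral(e_j) + Kd*de_j/dt
    u = u + kp * e + ki * np.cumsum(e) * dt + kd * np.gradient(e, dt)
    if trial % 10 == 0 or trial == n_trials - 1:
        print(f"trial {trial:2d}: RMS tracking error = {np.sqrt(np.mean(e**2)):.4f}")
```

The printed RMS error shrinks from trial to trial, which is the defining feature of iterative learning control on a repetitive task.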
Entanglement-Based Machine Learning on a Quantum Computer
NASA Astrophysics Data System (ADS)
Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.
2015-03-01
Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv:1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.
NASA Astrophysics Data System (ADS)
Kaynak, Y.; Huang, B.; Karaca, H. E.; Jawahir, I. S.
2017-07-01
This experimental study focuses on the phase state and phase transformation response of the surface and subsurface of machined NiTi alloys. X-ray diffraction (XRD) analysis and differential scanning calorimeter techniques were utilized to measure the phase state and the transformation response of machined specimens, respectively. Specimens were machined under dry machining at ambient temperature, preheated conditions, and cryogenic cooling conditions at various cutting speeds. The findings from this research demonstrate that cryogenic machining substantially alters the austenite finish temperature of martensitic NiTi alloy. The austenite finish (A_f) temperature shows a more than 25 percent increase resulting from cryogenic machining compared with the austenite finish temperature of as-received NiTi. Dry and preheated conditions do not substantially alter the austenite finish temperature. XRD analysis shows that a distinctive transformation from martensite to austenite occurs during the machining process in all three conditions. Complete transformation from martensite to austenite is observed in dry cutting at all selected cutting speeds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Applegate, J.H.
1988-06-01
It is shown that a radiative envelope in which the Kramers opacity law holds cannot transport a luminosity larger than a critical value, and it is argued that the transition to red giant structure is triggered by the star's luminosity exceeding the critical value. If the Kramers law is used for all temperatures and densities, the radius of the star diverges as the critical luminosity is approached. In real stars the radiative envelope expands as the luminosity increases until the star intersects the Hayashi track. Once on the Hayashi track, luminosities in excess of the critical luminosity can be accommodated by forcing most of the mass of the envelope into the convection zone.
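The argument rests on two standard textbook relations, reproduced below for orientation (these are the assumed forms, not equations quoted from the paper): the Kramers opacity law and the radiative diffusion equation.

```latex
\[
  \kappa_{\rm Kr} = \kappa_0\,\rho\,T^{-7/2},
  \qquad
  L(r) = -\,\frac{16\pi a c\,r^{2} T^{3}}{3\,\kappa\rho}\,\frac{\mathrm{d}T}{\mathrm{d}r},
\]
% combining them bounds the luminosity that a Kramers-opacity radiative envelope
% can carry; exceeding that critical value is what is argued to drive the
% expansion to red-giant dimensions.
```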
Routine human-competitive machine intelligence by means of genetic programming
NASA Astrophysics Data System (ADS)
Koza, John R.; Streeter, Matthew J.; Keane, Martin
2004-01-01
Genetic programming is a systematic method for getting computers to automatically solve a problem. Genetic programming starts from a high-level statement of what needs to be done and automatically creates a computer program to solve the problem. The paper demonstrates that genetic programming (1) now routinely delivers high-return human-competitive machine intelligence; (2) is an automated invention machine; (3) can automatically create a general solution to a problem in the form of a parameterized topology; and (4) has delivered a progression of qualitatively more substantial results in synchrony with five approximately order-of-magnitude increases in the expenditure of computer time. Recent results involving the automatic synthesis of the topology and sizing of analog electrical circuits and controllers demonstrate these points.
Physarum machines: encapsulating reaction-diffusion to compute spanning tree
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew
2007-12-01
The Physarum machine is a biological computing device, which employs plasmodium of Physarum polycephalum as an unconventional computing substrate. A reaction-diffusion computer is a chemical computing device that computes by propagating diffusive or excitation wave fronts. Reaction-diffusion computers, despite being computationally universal machines, are unable to construct certain classes of proximity graphs without the assistance of an external computing device. I demonstrate that the problem can be solved if the reaction-diffusion system is enclosed in a membrane with few ‘growth points’, sites guiding the pattern propagation. Experimental approximation of spanning trees by P. polycephalum slime mold demonstrates the feasibility of the approach. The findings advance the theory of reaction-diffusion computation by enriching it with ideas of slime mold computation.
Evolution of the luminosity function of extragalactic objects
NASA Technical Reports Server (NTRS)
Petrosian, V.
1985-01-01
A nonparametric procedure for determination of the evolution of the luminosity function of extragalactic objects and use of this for prediction of expected redshift and luminosity distribution of objects is described. The relation between this statistical evolution of the population and their physical evolution, such as the variation with cosmological epoch of their luminosity and formation rate, is presented. This procedure, when applied to a sample of optically selected quasars with redshifts less than two, shows that the luminosity function evolves more strongly for higher luminosities, indicating a larger quasar activity at earlier epochs and a more rapid evolution of the objects during their higher luminosity phases. It is also shown that the absence of many quasars at redshifts greater than three implies slowing down of this evolution in the conventional cosmological models, perhaps indicating that this is near the epoch of the birth of quasars (and galaxies).
Signal detection using support vector machines in the presence of ultrasonic speckle
NASA Astrophysics Data System (ADS)
Kotropoulos, Constantine L.; Pitas, Ioannis
2002-04-01
Support Vector Machines are a general algorithm based on guaranteed risk bounds of statistical learning theory. They have found numerous applications, such as in classification of brain PET images, optical character recognition, object detection, face verification, text categorization and so on. In this paper we propose the use of support vector machines to segment lesions in ultrasound images and we assess thoroughly their lesion detection ability. We demonstrate that trained support vector machines with a Radial Basis Function kernel segment satisfactorily (unseen) ultrasound B-mode images as well as clinical ultrasonic images.
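A minimal scikit-learn sketch in the spirit of the approach described: an RBF-kernel SVM classifying pixels of a speckled toy image into lesion and background using simple neighbourhood statistics as features. The synthetic image, features, and hyperparameters are placeholders, not the clinical data or feature set of the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Toy B-mode-like image: multiplicative speckle with a darker circular "lesion".
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
lesion = ((yy - 32) ** 2 + (xx - 32) ** 2) < 12 ** 2
echogenicity = np.where(lesion, 0.4, 1.0)
image = echogenicity * rng.rayleigh(scale=1.0, size=(h, w))   # Rayleigh speckle model

def patch_features(img, r=2):
    """Mean and standard deviation of the (2r+1)x(2r+1) neighbourhood of each pixel."""
    feats, labels = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1]
            feats.append([patch.mean(), patch.std()])
            labels.append(int(lesion[i, j]))
    return np.array(feats), np.array(labels)

X, y = patch_features(image)
idx = rng.permutation(len(X))
train, test = idx[:1800], idx[1800:]

clf = SVC(kernel="rbf", C=5.0, gamma="scale").fit(X[train], y[train])
print("toy lesion/background pixel accuracy:", clf.score(X[test], y[test]))
```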
Scalable Machine Learning for Massive Astronomical Datasets
NASA Astrophysics Data System (ADS)
Ball, Nicholas M.; Gray, A.
2014-04-01
We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
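For one of the algorithms listed above, the fragment below shows a minimal k-nearest-neighbour outlier score (mean distance to the k nearest neighbours) on synthetic points; it is a generic sketch with invented data, not the Skytree Server implementation or the 2MASS analysis.

```python
# Hedged sketch: kNN-distance outlier scoring on synthetic data (not Skytree Server).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (1000, 3)),   # bulk population
               rng.normal(8, 1, (5, 3))])     # a few points far from the bulk

nn = NearestNeighbors(n_neighbors=6).fit(X)    # 6 = self + 5 neighbours
dist, _ = nn.kneighbors(X)
score = dist[:, 1:].mean(axis=1)               # mean neighbour distance, excluding self
print("highest-scoring indices:", np.argsort(score)[-5:])
```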
Scalable Machine Learning for Massive Astronomical Datasets
NASA Astrophysics Data System (ADS)
Ball, Nicholas M.; Astronomy Data Centre, Canadian
2014-01-01
We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors, and the local outlier factor. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
Testing and Improving the Luminosity Relations for Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Collazzi, Andrew
2011-08-01
Gamma Ray Bursts (GRBs) have several luminosity relations in which a measurable property of a burst light curve or spectrum is correlated with the burst luminosity. These luminosity relations are calibrated for the fraction of bursts with spectroscopic redshifts and hence known luminosities. GRBs have thus become known as a type of 'standard candle', where standard candle is meant in the usual sense that their luminosities can be derived from measurable properties of the bursts. GRBs can therefore be used for the same cosmology applications as Type Ia supernovae, including the construction of the Hubble Diagram and measuring the massive star formation rate. The greatest disadvantage of using GRBs as standard candles is that their accuracy is lower than desired. With the recent advent of GRBs as a new standard candle, every effort must be made to test and improve the distance measures. Here, several methods are employed to do just that. First, generalized forms of two tests are performed on all of the luminosity relations. All the luminosity relations pass the second of these tests, and all but two pass the first. Even with this failure, the redundancy in using multiple luminosity relations allows all the luminosity relations to retain value. Next, the 'Firmani relation' is shown to have poorer accuracy than first advertised. In addition, it is shown to be exactly derivable from two other luminosity relations. For these reasons, the Firmani relation is useless for cosmology. The Amati relation is then revisited and shown to be an artifact of a combination of selection effects. Therefore, the Amati relation is also not good for cosmology. Fourthly, the systematic errors involved in measuring a popular luminosity indicator (E_peak) are quantified. The result is that an irreducible systematic error of 28% exists. After that, a preliminary investigation into the usefulness of breaking GRBs into individual pulses is conducted. The results from an 'ideal' set of data do not allow for confident conclusions due to large error bars. Finally, the work concludes with a discussion of the impact of the work and the future of GRB luminosity relations.
Tomography and generative training with quantum Boltzmann machines
NASA Astrophysics Data System (ADS)
Kieferová, Mária; Wiebe, Nathan
2017-12-01
The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that also provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP = BPP, provide lower bounds on the complexity of the training procedures and numerically investigate training for small nonstoquastic Hamiltonians.
The CMS Data Acquisition - Architectures for the Phase-2 Upgrade
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. F.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Zejdl, P.
2017-10-01
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10^34 cm^-2 s^-1 (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS Detector will also undergo a major upgrade to prepare for the phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis. In this paper we discuss the baseline design of the DAQ and HLT systems for the phase-2, taking into account the projected evolution of high speed network fabrics for event building and distribution, and the anticipated performance of general purpose CPU. Implications on hardware and infrastructure requirements for the DAQ “data center” are analysed. Emerging technologies for data reduction are considered. Novel possible approaches to event building and online processing, inspired by trending developments in other areas of computing dealing with large masses of data, are also examined. We conclude by discussing the opportunities offered by reading out and processing parts of the detector, wherever the front-end electronics allows, at the machine clock rate (40 MHz). This idea presents interesting challenges and its physics potential should be studied.
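Taken at face value, the quoted rates imply an average event size and storage bandwidth that can be estimated as below; this back-of-envelope arithmetic is ours, derived only from the numbers in the abstract.

```python
# Rough figures implied by the quoted Phase-2 DAQ parameters (our arithmetic, not the paper's).
readout_rate_bps = 50e12      # detector readout: 50 Tb/s
event_rate_hz = 750e3         # events entering the HLT: 750 kHz
storage_rate_hz = 10e3        # events kept for offline: 10 kHz

event_size_mb = readout_rate_bps / event_rate_hz / 8 / 1e6   # ~8.3 MB per event
storage_bw_GB_s = event_size_mb * storage_rate_hz / 1e3      # ~83 GB/s to permanent storage
print(f"~{event_size_mb:.1f} MB/event, ~{storage_bw_GB_s:.0f} GB/s to storage")
```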
The Belle II Silicon Vertex Detector
NASA Astrophysics Data System (ADS)
Friedl, M.; Ackermann, K.; Aihara, H.; Aziz, T.; Bergauer, T.; Bozek, A.; Campbell, A.; Dingfelder, J.; Drasal, Z.; Frankenberger, A.; Gadow, K.; Gfall, I.; Haba, J.; Hara, K.; Hara, T.; Higuchi, T.; Himori, S.; Irmler, C.; Ishikawa, A.; Joo, C.; Kah, D. H.; Kang, K. H.; Kato, E.; Kiesling, C.; Kodys, P.; Kohriki, T.; Koike, S.; Kvasnicka, P.; Marinas, C.; Mayekar, S. N.; Mibe, T.; Mohanty, G. B.; Moll, A.; Negishi, K.; Nakayama, H.; Natkaniec, Z.; Niebuhr, C.; Onuki, Y.; Ostrowicz, W.; Park, H.; Rao, K. K.; Ritter, M.; Rozanska, M.; Saito, T.; Sakai, K.; Sato, N.; Schmid, S.; Schnell, M.; Shimizu, N.; Steininger, H.; Tanaka, S.; Tanida, K.; Taylor, G.; Tsuboyama, T.; Ueno, K.; Uozumi, S.; Ushiroda, Y.; Valentan, M.; Yamamoto, H.
2013-12-01
The KEKB machine and the Belle experiment in Tsukuba (Japan) are now undergoing an upgrade, leading to an ultimate luminosity of 8 × 10^35 cm^-2 s^-1 in order to measure rare decays in the B system with high statistics. The previous vertex detector cannot cope with this 40-fold increase of luminosity and thus needs to be replaced. Belle II will be equipped with a two-layer Pixel Detector surrounding the beam pipe, and four layers of double-sided silicon strip sensors at higher radii than the old detector. The Silicon Vertex Detector (SVD) will have a total sensitive area of 1.13 m^2 and 223,744 channels, twice as many as its predecessor. All silicon sensors will be made from 150 mm wafers in order to maximize their size and thus to reduce the relative contribution of the support structure. The forward part has slanted sensors of trapezoidal shape to improve the measurement precision and to minimize the amount of material as seen by particles from the vertex. Fast-shaping front-end amplifiers will be used in conjunction with an online hit time reconstruction algorithm in order to reduce the occupancy to the level of a few percent at most. A novel “Origami” chip-on-sensor scheme is used to minimize both the distance between strips and amplifier (thus reducing the electronic noise) as well as the overall material budget. This report gives an overview of the status of the Belle II SVD and its components, including sensors, front-end detector ladders, mechanics, cooling and the readout electronics.
Galaxy and Mass Assembly (GAMA): galaxies at the faint end of the Hα luminosity function
NASA Astrophysics Data System (ADS)
Brough, S.; Hopkins, A. M.; Sharp, R. G.; Gunawardhana, M.; Wijesinghe, D.; Robotham, A. S. G.; Driver, S. P.; Baldry, I. K.; Bamford, S. P.; Liske, J.; Loveday, J.; Norberg, P.; Peacock, J. A.; Bland-Hawthorn, J.; Brown, M. J. I.; Cameron, E.; Croom, S. M.; Frenk, C. S.; Foster, C.; Hill, D. T.; Jones, D. H.; Kelvin, L. S.; Kuijken, K.; Nichol, R. C.; Parkinson, H. R.; Pimbblet, K.; Popescu, C. C.; Prescott, M.; Sutherland, W. J.; Taylor, E.; Thomas, D.; Tuffs, R. J.; van Kampen, E.
2011-05-01
We present an analysis of the properties of the lowest Hα-luminosity galaxies (L_Hα ≤ 4 × 10^32 W; SFR < 0.02 M⊙ yr^-1, with SFR denoting the star formation rate) in the Galaxy And Mass Assembly survey. These galaxies make up the rise above a Schechter function in the number density of systems seen at the faint end of the Hα luminosity function. Above our flux limit, we find that these galaxies are principally composed of intrinsically low stellar mass systems (median stellar mass = 2.5 × 10^8 M⊙) with only 5/90 having stellar masses M > 10^10 M⊙. The low-SFR systems are found to exist predominantly in the lowest-density environments (median density ~0.02 galaxies Mpc^-2) with none in environments denser than ~1.5 galaxies Mpc^-2. Their current specific SFRs (SSFRs; -12 < log [SSFR (yr^-1)] < -8.5) are consistent with their having had a variety of star formation histories. The low-density environments of these galaxies demonstrate that such low-mass, star-forming systems can only remain low mass and continue forming stars if they reside sufficiently far from other galaxies to avoid being accreted, dispersed through tidal effects or having their gas reservoirs rendered ineffective through external processes.
Upgrade of the ATLAS Hadronic Tile Calorimeter for the High Luminosity LHC
NASA Astrophysics Data System (ADS)
Tortajada, Ignacio Asensi
2018-01-01
The Large Hadron Collider (LHC) has envisaged a series of upgrades towards a High Luminosity LHC (HL-LHC) delivering five times the LHC nominal instantaneous luminosity. The ATLAS Phase II upgrade, in 2024, will accommodate the upgrade of the detector and data acquisition system for the HL-LHC. The Tile Calorimeter (TileCal) will undergo a major replacement of its on- and off-detector electronics. In the new architecture, all signals will be digitized and then transferred directly to the off-detector electronics, where the signals will be reconstructed, stored, and sent to the first-level trigger at a rate of 40 MHz. This will provide better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. Changes to the electronics will also contribute to the reliability and redundancy of the system. Three different front-end options are presently being investigated for the upgrade, two of them based on ASICs, and a final solution will be chosen after extensive laboratory and test beam studies that are in progress. A hybrid demonstrator module is being developed using the new electronics while preserving compatibility with the current system. The status of these developments will be presented, including results from several tests with particle beams.
Luminosity variations of protostars at the Hayashi stage
NASA Astrophysics Data System (ADS)
Abdulmyanov, T. R.
2017-09-01
In the present paper, the luminosity variations of protostars at the Hayashi stage are considered. According to the density wave model, the luminosity of protostars will vary significantly throughout the Hayashi stage. The initial moments of formation of the protoplanetary rings of the Solar system, and the luminosity of the protostar at these moments, are obtained.
NLC Luminosity as a Function of Beam Parameters
NASA Astrophysics Data System (ADS)
Nosochkov, Y.
2002-06-01
A realistic calculation of the NLC luminosity has been performed using particle tracking in DIMAD and beam-beam simulations with the GUINEA-PIG code for various values of beam emittance, energy and beta functions at the Interaction Point (IP). Results of the simulations are compared with analytic luminosity calculations. The optimum range of IP beta functions for high luminosity is identified.
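The analytic estimate that such simulations are compared against is, in its simplest form, the geometric luminosity formula; the sketch below evaluates it with illustrative placeholder parameters (not the actual NLC design values), assuming Gaussian beams and ignoring disruption enhancement and hourglass factors.

```python
import math
# Minimal sketch of the geometric luminosity estimate:
#   L = f_rep * n_b * N^2 / (4*pi*sigma_x*sigma_y),  sigma = sqrt(beta* * eps_N / gamma).
# All parameter values below are illustrative placeholders, not the NLC design numbers.
def beam_size_m(beta_star_m, eps_norm_m, gamma):
    return math.sqrt(beta_star_m * eps_norm_m / gamma)

def luminosity(N, f_rep_hz, n_bunches, sx_cm, sy_cm):
    return f_rep_hz * n_bunches * N**2 / (4 * math.pi * sx_cm * sy_cm)  # cm^-2 s^-1

gamma = 250e9 / 0.511e6                        # Lorentz factor for a 250 GeV beam
sx = beam_size_m(8e-3, 3.6e-6, gamma) * 100    # metres -> cm
sy = beam_size_m(0.10e-3, 0.04e-6, gamma) * 100
print(f"L ~ {luminosity(0.75e10, 120, 190, sx, sy):.2e} cm^-2 s^-1")
```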
Metal release from coffee machines and electric kettles.
Müller, Frederic D; Hackethal, Christin; Schmidt, Roman; Kappenstein, Oliver; Pfaff, Karla; Luch, Andreas
2015-01-01
The release of elemental ions from 8 coffee machines and 11 electric kettles into food simulants was investigated. Three different types of coffee machines were tested: portafilter espresso machines, pod machines and capsule machines. All machines were tested subsequently on 3 days before and on 3 days after decalcification. Decalcification of the machines was performed with agents according to procedures as specified in the respective manufacturer's manuals. The electric kettles showed only a low release of the elements analysed. For the coffee machines, decreasing concentrations of elements were found from the first to the last sample taken in the course of 1 day. Metal release on consecutive days showed a decreasing trend as well. After decalcification, a large increase in the amounts of elements released was encountered. In addition, the different machine types investigated clearly differed in their extent of element release. By far the highest leaching, both quantitatively and qualitatively, was found for the portafilter machines. With these products, releases of Pb, Ni, Mn, Cr and Zn were in the range of, and beyond, the release limits proposed by the Council of Europe. Therefore, a careful rinsing routine, especially after decalcification, is recommended for these machines. The comparably lower release from one particular portafilter machine demonstrates that metal release at levels above the threshold that triggers health concerns is technically avoidable.
Slimeware: engineering devices with slime mold.
Adamatzky, Andrew
2013-01-01
The plasmodium of the acellular slime mold Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioral patterns in response to environmental conditions. In a series of simple experiments we demonstrate how to make computing, sensing, and actuating devices from the slime mold. We show how to program living slime mold machines by configurations of repelling and attracting gradients and demonstrate the workability of the living machines on tasks of computational geometry, logic, and arithmetic.
Anytime query-tuned kernel machine classifiers via Cholesky factorization
NASA Technical Reports Server (NTRS)
DeCoste, D.
2002-01-01
We recently demonstrated 2- to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste, 2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
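The linear-algebra primitive named here is a Cholesky factorization of a (regularized) kernel matrix; the fragment below shows that primitive on synthetic data with NumPy/SciPy purely as a generic illustration, not the paper's anytime output-bound algorithm.

```python
# Generic illustration: Cholesky factorization of a regularized RBF kernel matrix,
# followed by a triangular solve for dual coefficients (kernel-ridge style).
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0])

gamma, lam = 0.5, 1e-3
K = np.exp(-gamma * cdist(X, X, "sqeuclidean"))   # RBF kernel matrix
c, low = cho_factor(K + lam * np.eye(len(X)))     # K + lam*I = L L^T
alpha = cho_solve((c, low), y)                     # dual coefficients
print("training residual:", np.abs(K @ alpha + lam * alpha - y).max())
```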
A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamann, Hendrik F.
The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy as measured by existing and new metrics, which were themselves developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.
Speckle-learning-based object recognition through scattering media.
Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun
2015-12-28
We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
PSRPOPPy: an open-source package for pulsar population simulations
NASA Astrophysics Data System (ADS)
Bates, S. D.; Lorimer, D. R.; Rane, A.; Swiggum, J.
2014-04-01
We have produced a new software package for the simulation of pulsar populations, PSRPOPPY, based on the PSRPOP package. The codebase has been re-written in Python (save for some external libraries, which remain in their native Fortran), utilizing the object-oriented features of the language, and improving the modularity of the code. Pre-written scripts are provided for running the simulations in 'standard' modes of operation, but the code is flexible enough to support the writing of personalised scripts. The modular structure also makes the addition of experimental features (such as new models for period or luminosity distributions) more straightforward than with the previous code. We also discuss potential additions to the modelling capabilities of the software. Finally, we demonstrate some potential applications of the code; first, using results of surveys at different observing frequencies, we find pulsar spectral indices are best fitted by a normal distribution with mean -1.4 and standard deviation 1.0. Secondly, we model pulsar spin evolution to calculate the best fit for a relationship between a pulsar's luminosity and spin parameters. We used the code to replicate the analysis of Faucher-Giguère & Kaspi, and have subsequently optimized their power-law dependence of radio luminosity, L, on period, P, and period derivative, Ṗ. We find that the underlying population is best described by L ∝ P^(-1.39±0.09) Ṗ^(0.48±0.04) and is very similar to that found for γ-ray pulsars by Perera et al. Using this relationship, we generate a model population and examine the age-luminosity relation for the entire pulsar population, which may be measurable after future large-scale surveys with the Square Kilometre Array.
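A hedged sketch of how such a population-level luminosity law could be applied: the code below draws toy spin parameters and assigns pseudo-luminosities with the quoted exponents; the period and Ṗ distributions and the normalisation L0 are invented placeholders, not PSRPOPPY defaults.

```python
import numpy as np
# Toy population following L ∝ P^-1.39 * Pdot^0.48 (exponents quoted above);
# the distributions and normalisation below are illustrative placeholders only.
rng = np.random.default_rng(1)
P = rng.lognormal(mean=np.log(0.5), sigma=1.0, size=10000)     # spin period [s]
Pdot = 10 ** rng.normal(-15.0, 1.0, size=10000)                # period derivative [s/s]
L0 = 0.18                                                      # mJy kpc^2, placeholder
L = L0 * P**-1.39 * (Pdot / 1e-15)**0.48                       # pseudo-luminosity
print(f"median pseudo-luminosity = {np.median(L):.2f} mJy kpc^2")
```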
Constraints on Omega_0 and cluster evolution using the ROSAT log N-log S relation
NASA Astrophysics Data System (ADS)
Mathiesen, B.; Evrard, A. E.
1998-04-01
We examine the likelihoods of different cosmological models and cluster evolutionary histories by comparing semi-analytical predictions of X-ray cluster number counts with observational data from the ROSAT satellite. We model cluster abundance as a function of mass and redshift using a Press-Schechter distribution, and assume that the temperature T(M,z) and bolometric luminosity L_X(M,z) scale as power laws in mass and epoch, in order to construct expected counts as a function of X-ray flux. The L_X-M scaling is fixed using the local luminosity function, while the degree of evolution of the X-ray luminosity with redshift, L_X ~ (1+z)^s, is left open, with s an interesting free parameter which we investigate. We examine open and flat cosmologies with initial, scale-free fluctuation spectra having indices n = 0, -1 and -2. An independent constraint arising from the slope of the luminosity-temperature relation strongly favours the n = -2 spectrum. The expected counts demonstrate a strong dependence on Omega_0 and s, with lesser dependence on lambda_0 and n. Comparison with the observed counts reveals a 'ridge' of acceptable models in the Omega_0-s plane, roughly following the relation s ~ 6 Omega_0 and spanning low-density models with a small degree of evolution to Omega = 1 models with strong evolution. Models with moderate evolution are found to have a strong lower limit of Omega_0 >~ 0.3, and low-evolution models imply that Omega_0 < 1 at a very high confidence level. We suggest observational tests for breaking the degeneracy along this ridge, and discuss implications for evolutionary histories of the intracluster medium.
Evaluation of GPUs as a level-1 track trigger for the High-Luminosity LHC
NASA Astrophysics Data System (ADS)
Mohr, H.; Dritschler, T.; Ardila, L. E.; Balzer, M.; Caselle, M.; Chilingaryan, S.; Kopmann, A.; Rota, L.; Schuh, T.; Vogelgesang, M.; Weber, M.
2017-04-01
In this work, we investigate the use of GPUs as a way of realizing a low-latency, high-throughput track trigger, using CMS as a showcase example. The CMS detector at the Large Hadron Collider (LHC) will undergo a major upgrade after the long shutdown from 2024 to 2026, when it will enter the high luminosity era. During this upgrade, the silicon tracker will have to be completely replaced. In the High Luminosity operation mode, luminosities of 5-7 × 10^34 cm^-2 s^-1 and pileup averaging 140 events, with a maximum of up to 200 events, will be reached. These changes will require a major update of the triggering system. The systems demonstrated so far rely on dedicated hardware such as associative memory ASICs and FPGAs. We investigate the use of GPUs as an alternative way of realizing the requirements of the L1 track trigger. To this end we implemented a Hough transformation track finding step on GPUs and established a low-latency RDMA connection using the PCIe bus. To showcase the benefits of floating point operations, made possible by the use of GPUs, we present a modified algorithm. It uses hexagonal bins for the parameter space and leads to a more truthful representation of the possible track parameters of the individual hits in Hough space. This leads to fewer duplicate candidates and reduces fake track candidates compared to the regular approach. With data-transfer latencies of 2 μs and processing times for the Hough transformation as low as 3.6 μs, we can show that latencies are not as critical as expected. However, computing throughput proves to be challenging due to hardware limitations.
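The Hough voting step itself is easy to sketch on a CPU; the fragment below shows a generic straight-line Hough transform with rectangular bins in NumPy, only as an illustration of the technique named above, not the paper's hexagonal-bin r-φ track finder or its GPU/RDMA implementation.

```python
import numpy as np
# Minimal CPU sketch of Hough voting: points on a line vote in (theta, rho) space
# via rho = x*cos(theta) + y*sin(theta); the accumulator peak gives the line.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 0.7 * x + 0.1 + rng.normal(0, 0.005, x.size)      # noisy hits on one "track"

thetas = np.linspace(0, np.pi, 180)
rhos = x[:, None] * np.cos(thetas) + y[:, None] * np.sin(thetas)   # (hit, theta)
rho_bins = np.linspace(-1.5, 1.5, 200)
acc = np.zeros((thetas.size, rho_bins.size - 1), dtype=np.int64)
for j in range(thetas.size):
    acc[j] += np.histogram(rhos[:, j], bins=rho_bins)[0]           # vote

j, k = np.unravel_index(acc.argmax(), acc.shape)
print(f"peak: theta = {thetas[j]:.2f} rad, rho = {rho_bins[k]:.2f}, votes = {acc[j, k]}")
```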
SOLAR-LIKE OSCILLATIONS IN LOW-LUMINOSITY RED GIANTS: FIRST RESULTS FROM KEPLER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedding, T. R.; Huber, D.; Stello, D.
2010-04-20
We have measured solar-like oscillations in red giants using time-series photometry from the first 34 days of science operations of the Kepler Mission. The light curves, obtained with 30 minute sampling, reveal clear oscillations in a large sample of G and K giants, extending in luminosity from the red clump down to the bottom of the giant branch. We confirm a strong correlation between the large separation of the oscillations (Δν) and the frequency of maximum power (ν_max). We focus on a sample of 50 low-luminosity stars (ν_max > 100 μHz, L ≲ 30 L_sun) having high signal-to-noise ratios and showing the unambiguous signature of solar-like oscillations. These are H-shell-burning stars, whose oscillations should be valuable for testing models of stellar evolution and for constraining the star formation rate in the local disk. We use a new technique to compare stars on a single echelle diagram by scaling their frequencies and find well-defined ridges corresponding to radial and non-radial oscillations, including clear evidence for modes with angular degree l = 3. Measuring the small separation between l = 0 and l = 2 allows us to plot the so-called C-D diagram of δν_02 versus Δν. The small separation δν_01 of l = 1 from the midpoint of adjacent l = 0 modes is negative, contrary to the Sun and solar-type stars. The ridge for l = 1 is notably broadened, which we attribute to mixed modes, confirming theoretical predictions for low-luminosity giants. Overall, the results demonstrate the tremendous potential of Kepler data for asteroseismology of red giants.
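For context, the relations commonly applied to such Δν and ν_max measurements are the standard asteroseismic scaling relations below; they are widely used conventions, not equations taken from this abstract.

```latex
\frac{\Delta\nu}{\Delta\nu_\odot} \simeq
  \left(\frac{M}{M_\odot}\right)^{1/2}\left(\frac{R}{R_\odot}\right)^{-3/2},
\qquad
\frac{\nu_{\max}}{\nu_{\max,\odot}} \simeq
  \left(\frac{M}{M_\odot}\right)\left(\frac{R}{R_\odot}\right)^{-2}
  \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{-1/2}
```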
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tony Y.; Wechsler, Risa H.; Devaraj, Kiruthika
Intensity mapping, which images a single spectral line from unresolved galaxies across cosmological volumes, is a promising technique for probing the early universe. Here we present predictions for the intensity map and power spectrum of the CO(1–0) line from galaxies at z ~ 2.4–2.8, based on a parameterized model for the galaxy–halo connection, and demonstrate the extent to which properties of high-redshift galaxies can be directly inferred from such observations. We find that our fiducial prediction should be detectable by a realistic experiment. Motivated by significant modeling uncertainties, we demonstrate the effect on the power spectrum of varying each parameter in our model. Using simulated observations, we infer constraints on our model parameter space with an MCMC procedure, and show corresponding constraints on the L_IR–L_CO relation and the CO luminosity function. These constraints would be complementary to current high-redshift galaxy observations, which can detect the brightest galaxies but not complete samples from the faint end of the luminosity function. Furthermore, by probing these populations in aggregate, CO intensity mapping could be a valuable tool for probing molecular gas and its relation to star formation in high-redshift galaxies.
Space Density Of Optically-Selected Type II Quasars From The SDSS
NASA Astrophysics Data System (ADS)
Reyes, Reinabelle; Zakamska, N. L.; Strauss, M. A.; Green, J.; Krolik, J. H.; Shen, Y.; Richards, G. T.
2007-12-01
Type II quasars are luminous Active Galactic Nuclei (AGN) whose central regions are obscured by large amounts of gas and dust. In this poster, we present a catalog of 887 type II quasars with redshifts z < 0.83 from the Sloan Digital Sky Survey (SDSS), selected based on their emission lines, and derive the 1/Vmax [OIII] 5007 luminosity function from this sample. Since some objects may not be included in the sample because they lack strong emission lines, the derived luminosity function is only a lower limit. We also derive the [OIII] 5007 luminosity function for a sample of type I (broad-line) quasars in the same redshift range. Taking [OIII] 5007 luminosity as a tracer of intrinsic luminosity in both type I and type II quasars, we obtain lower limits to the type II quasar fraction as a function of [OIII] 5007 luminosity, from L[OIII] = 10^8.3 to 10^10 L_sun, which roughly correspond to bolometric luminosities of 10^44 to 10^46 erg/s.
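A 1/Vmax estimator of the kind named above can be sketched in a few lines; the luminosities, Vmax volumes and binning below are fabricated for illustration and are not the SDSS type II quasar sample.

```python
import numpy as np
# Toy 1/Vmax luminosity function: each object contributes 1/Vmax to the comoving
# density of its luminosity bin, where Vmax is the volume within which the object
# would still pass the survey flux limit. All numbers below are invented.
logL = np.array([8.5, 8.7, 9.0, 9.2, 9.6, 9.9])      # log10 L[OIII] / Lsun
vmax = np.array([2e7, 4e7, 9e7, 1.5e8, 3e8, 5e8])    # Mpc^3, survey-area weighted
bins = np.linspace(8.3, 9.9, 5)
phi, _ = np.histogram(logL, bins=bins, weights=1.0 / vmax)
phi /= np.diff(bins)                                  # Mpc^-3 dex^-1
for lo, hi, p in zip(bins[:-1], bins[1:], phi):
    print(f"{lo:.1f} < logL < {hi:.1f}: phi = {p:.2e} Mpc^-3 dex^-1")
```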
The High Luminosity LHC Project
NASA Astrophysics Data System (ADS)
Rossi, Lucio
The High Luminosity LHC is one of the major scientific projects of the next decade. It aims at increasing the luminosity reach of the LHC by a factor of five in peak luminosity and a factor of ten in integrated luminosity. The project, now fully approved and funded, will be completed within ten years and will prolong the life of the LHC until 2035-2040. It implies deep modifications of the LHC over about 1.2 km around the high-luminosity insertions of ATLAS and CMS and relies on new cutting-edge technologies. We are developing new advanced superconducting magnets capable of reaching a 12 T field; superconducting RF crab cavities capable of rotating the beams with great accuracy; 100 kA, hundred-metre-long superconducting links for moving the power converters out of the tunnel; new collimator concepts; and more. Besides the important physics goals, the High Luminosity LHC project is an ideal test bed for new technologies for the next hadron collider in the post-LHC era.
Luminosity function and cosmological evolution of X-ray selected quasars
NASA Technical Reports Server (NTRS)
Maccacaro, T.; Gioia, I. M.
1983-01-01
The preliminary analysis of a complete sample of 55 X-ray sources is presented as part of the Medium Sensitivity Survey of the Einstein Observatory. A pure luminosity evolution law is derived in order to recover a uniform distribution of the sources, and the rates of evolution for Active Galactic Nuclei (AGNs) observed by X-ray and optical techniques are compared. A nonparametric representation of the luminosity function is fitted to the observational data. On the basis of the reduced data, it is determined that: (1) AGNs evolve cosmologically; (2) less evolution is required to explain the X-ray data than the optical data; (3) the high-luminosity portion of the X-ray luminosity function can be described by a power law with a slope of gamma = 3.6; and (4) the X-ray luminosity function flattens at low luminosities. Some of the implications of the results for conventional theoretical models of the evolution of quasars and Seyfert galaxies are discussed.
Luminosity and Stellar Mass Functions from the 6dF Galaxy Survey
NASA Astrophysics Data System (ADS)
Colless, M.; Jones, D. H.; Peterson, B. A.; Campbell, L.; Saunders, W.; Lah, P.
2007-12-01
The completed 6dF Galaxy Survey includes redshifts for over 124,000 galaxies. We present luminosity functions in optical and near-infrared passbands that span a range of 10^4 in luminosity. These luminosity functions show systematic deviations from the Schechter form. The corresponding luminosity densities in the optical and near-infrared are consistent with an old stellar population and a moderately declining star formation rate. Stellar mass functions, derived from the K band luminosities and simple stellar population models selected by b_J-r_F colour, lead to an estimate of the present-day stellar mass density of ρ_* = (5.00 ± 0.11) × 10^8 h M_⊙ Mpc^{-3}, corresponding to Ω_* h = (1.80 ± 0.04) × 10^{-3}.
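For reference, the Schechter form against which these deviations are measured is the standard parameterization (quoted from common usage, not from the abstract itself):

```latex
\phi(L)\,\mathrm{d}L \;=\; \phi^{*}
  \left(\frac{L}{L^{*}}\right)^{\alpha}
  \exp\!\left(-\frac{L}{L^{*}}\right)\frac{\mathrm{d}L}{L^{*}}
```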
Direct Machining of Low-Loss THz Waveguide Components With an RF Choke.
Lewis, Samantha M; Nanni, Emilio A; Temkin, Richard J
2014-12-01
We present results for the successful fabrication of low-loss THz metallic waveguide components using direct machining with a CNC end mill. The approach uses a split-block machining process with the addition of an RF choke running parallel to the waveguide. The choke greatly reduces coupling to the parasitic mode of the parallel-plate waveguide produced by the split block. This method has demonstrated loss as low as 0.2 dB/cm at 280 GHz for a copper WR-3 waveguide. It has also been used in the fabrication of 3 and 10 dB directional couplers in brass, demonstrating excellent agreement with design simulations from 240 to 260 GHz. The method may be adapted to structures with features on the order of 200 μm.
Dixon, Steven L; Duan, Jianxin; Smith, Ethan; Von Bargen, Christopher D; Sherman, Woody; Repasky, Matthew P
2016-10-01
We introduce AutoQSAR, an automated machine-learning application to build, validate and deploy quantitative structure-activity relationship (QSAR) models. The process of descriptor generation, feature selection and the creation of a large number of QSAR models has been automated into a single workflow within AutoQSAR. The models are built using a variety of machine-learning methods, and each model is scored using a novel approach. The effectiveness of the method is demonstrated through comparison with literature QSAR models using identical datasets for six endpoints: protein-ligand binding affinity, solubility, blood-brain barrier permeability, carcinogenicity, mutagenicity and bioaccumulation in fish. AutoQSAR demonstrates similar or better predictive performance compared with published results for four of the six endpoints while requiring minimal human time and expertise.
Luminosity determination in pp collisions at √{s} = 8 TeV using the ATLAS detector at the LHC
NASA Astrophysics Data System (ADS)
Aaboud, M.; Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; Abeloos, B.; Aben, R.; AbouZeid, O. S.; Abraham, N. L.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Affolder, A. A.; Agatonovic-Jovin, T.; Agricola, J.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Verzini, M. J. Alconada; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Alstaty, M.; Gonzalez, B. Alvarez; Piqueras, D. Álvarez; Alviggi, M. G.; Amadio, B. T.; Amako, K.; Coutinho, Y. Amaral; Amelung, C.; Amidei, D.; Santos, S. P. Amor Dos; Amorim, A.; Amoroso, S.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonelli, M.; Antonov, A.; Anulli, F.; Aoki, M.; Bella, L. Aperio; Arabidze, G.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baak, M. A.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Baines, J. T.; Baker, O. K.; Baldin, E. M.; Balek, P.; Balestri, T.; Balli, F.; Balunas, W. K.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Navarro, L. Barranco; Barreiro, F.; da Costa, J. Barreiro Guimarães; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, M.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, J. K.; Belanger-Champagne, C.; Bell, A. S.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Noccioli, E. Benhar; Benitez, J.; Benjamin, D. P.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Kuutmann, E. Bergeaas; Berger, N.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertram, I. A.; Bertsche, C.; Bertsche, D.; Besjes, G. J.; Bylund, O. Bessidskaia; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bevan, A. J.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Biedermann, D.; Bielski, R.; Biesuz, N. V.; Biglietti, M.; De Mendizabal, J. Bilbao; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biondi, S.; Bjergaard, D. 
M.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blanco, J. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Boerner, D.; Bogaerts, J. A.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bokan, P.; Bold, T.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Sola, J. D. Bossio; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Madden, W. D. Breaden; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Broughton, J. H.; de Renstrom, P. A. Bruckman; Bruncko, D.; Bruneliere, R.; Bruni, A.; Bruni, G.; Bruni, L. S.; Brunt, BH; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burckhart, H.; Burdin, S.; Burgard, C. D.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Urbán, S. Cabrera; Caforio, D.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Caloba, L. P.; Calvet, D.; Calvet, S.; Calvet, T. P.; Toro, R. Camacho; Camarda, S.; Camarri, P.; Cameron, D.; Armadans, R. Caminal; Camincher, C.; Campana, S.; Campanelli, M.; Camplani, A.; Campoverde, A.; Canale, V.; Canepa, A.; Bret, M. Cano; Cantero, J.; Cantrill, R.; Cao, T.; Garrido, M. D. M. Capeans; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, I.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Casper, D. W.; Castaneda-Miranda, E.; Castelijn, R.; Castelli, A.; Gimenez, V. Castillo; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavallaro, E.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Alberich, L. Cerda; Cerio, B. C.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, S. K.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chatterjee, A.; Chau, C. C.; Barajas, C. A. Chavez; Che, S.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheng, Y.; Cheplakov, A.; Cheremushkina, E.; Moursli, R. Cherkaoui El; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Chow, B. K. B.; Christodoulou, V.; Chromek-Burckhart, D.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Cinca, D.; Cindro, V.; Cioara, I. A.; Ciocio, A.; Cirotto, F.; Citron, Z. H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, B. 
L.; Clark, M. R.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Colasurdo, L.; Cole, B.; Colijn, A. P.; Collot, J.; Colombo, T.; Compostella, G.; Muiño, P. Conde; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Consorti, V.; Constantinescu, S.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cormier, K. J. R.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Ortuzar, M. Crispin; Cristinziani, M.; Croft, V.; Crosetti, G.; Donszelmann, T. Cuhadar; Cummings, J.; Curatolo, M.; Cúth, J.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; D'amen, G.; D'Auria, S.; D'Onofrio, M.; De Sousa, M. J. Da Cunha Sargedas; Via, C. Da; Dabrowski, W.; Dado, T.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Dang, N. P.; Daniells, A. C.; Dann, N. S.; Danninger, M.; Hoffmann, M. Dano; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, M.; Davison, P.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Maria, A.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Regie, J. B. De Vivie; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Dehghanian, N.; Deigaard, I.; Del Gaudio, M.; Del Peso, J.; Del Prete, T.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Clemente, W. K.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Doglioni, C.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Duffield, E. M.; Duflot, L.; Duguid, L.; Dührssen, M.; Dumancic, M.; Dunford, M.; Yildiz, H. Duran; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Dyndal, M.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Edwards, N. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; Kacimi, M. El; Ellajosyula, V.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Ennis, J. S.; Erdmann, J.; Ereditato, A.; Ernis, G.; Ernst, J.; Ernst, M.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, F.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. 
J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, C.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Giannelli, M. Faucci; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Feremenga, L.; Martinez, P. Fernandez; Perez, S. Fernandez; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; de Lima, D. E. Ferreira; Ferrer, A.; Ferrere, D.; Ferretti, C.; Parodi, A. Ferretto; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, G. T.; Fletcher, R. R. M.; Flick, T.; Floderus, A.; Castillo, L. R. Flores; Flowerdew, M. J.; Forcolin, G. T.; Formica, A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Torregrosa, E. Fullana; Fusayasu, T.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y.; Gao, Y. S.; Walls, F. M. Garay; García, C.; Navarro, J. E. García; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Bravo, A. Gascon; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gecse, Z.; Gee, C. N. P.; Geich-Gimbel, Ch.; Geisen, M.; Geisler, M. P.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; George, S.; Gerbaudo, D.; Gershon, A.; Ghasemi, S.; Ghazlane, H.; Ghneimat, M.; Giacobbe, B.; Giagu, S.; Giannetti, P.; Gibbard, B.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugni, D.; Giuli, F.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Costa, J. Goncalves Pinto Firmino Da; Gonella, G.; Gonella, L.; Gongadze, A.; de la Hoz, S. González; Parra, G. Gonzalez; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Graber, L.; Grabowska-Bold, I.; Gradin, P. O. J.; Grafström, P.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gravila, P. M.; Gray, H. M.; Graziani, E.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Grohs, J. P.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Guan, L.; Guan, W.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, Y.; Gupta, S.; Gustavino, G.; Gutierrez, P.; Ortiz, N. G. Gutierrez; Gutschow, C.; Guyot, C.; Gwenlan, C.; Gwilliam, C. 
B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Hadef, A.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Haley, J.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrington, R. D.; Harrison, P. F.; Hartjes, F.; Hartmann, N. M.; Hasegawa, M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Hellman, S.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Correia, A. M. Henriques; Henrot-Versille, S.; Herbert, G. H.; Jiménez, Y. Hernández; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hinman, R. R.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohn, D.; Holmes, T. R.; Homann, M.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, Q.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Huo, P.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Ince, T.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Ito, F.; Ponce, J. M. Iturbe; Iuppa, R.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, B.; Jackson, M.; Jackson, P.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansky, R.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiggins, S.; Pena, J. Jimenez; Jin, S.; Jinaru, A.; Jinnouchi, O.; Johansson, P.; Johns, K. A.; Johnson, W. J.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Rozas, A. Juste; Köhler, M. K.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneti, S.; Kanjir, L.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kapliy, A.; Kar, D.; Karakostas, K.; Karamaoun, A.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. 
S.; Kempster, J. J.; Kawade, K.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khalil-zada, F.; Khanov, A.; Kharlamov, A. G.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; King, M.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kiuchi, K.; Kivernyk, O.; Kladiva, E.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Knapik, J.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Koi, T.; Kolanoski, H.; Kolb, M.; Koletsou, I.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotwal, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kouskoura, V.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozakai, C.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunigo, T.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; Rosa, A. La; Navarro, J. L. La Rosa; Rotonda, L. La; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lange, J. C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Manghi, F. Lasagni; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Lazzaroni, M.; Le, B.; Dortz, O. Le; Guirriec, E. Le; Quilleuc, E. P. Le; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Miotto, G. Lehmann; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Lerner, G.; Leroy, C.; Lesage, A. A. J.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, D.; Leyko, A. M.; Leyton, M.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liberti, B.; Liblong, A.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limosani, A.; Lin, S. C.; Lin, T. H.; Lindquist, B. E.; Lionti, A. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, H.; Liu, H.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y. L.; Liu, Y.; Livan, M.; Lleres, A.; Merino, J. Llorente; Lloyd, S. L.; Sterzo, F. Lo; Lobodzinska, E.; Loch, P.; Lockman, W. 
S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loew, K. M.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopes, L.; Mateos, D. Lopez; Paredes, B. Lopez; Paz, I. Lopez; Solis, A. Lopez; Lorenz, J.; Martinez, N. Lorenzo; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Luzi, P. M.; Lynn, D.; Lysak, R.; Lytken, E.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Ma, Y.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Miguens, J. Machado; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahlstedt, J.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maier, T.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandelli, B.; Mandelli, L.; Mandić, I.; Maneira, J.; Filho, L. Manhaes de Andrade; Ramos, J. Manjarres; Mann, A.; Manousos, A.; Mansoulie, B.; Mansour, J. D.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Mapelli, L.; Marceca, G.; March, L.; Marchiori, G.; Marcisovsky, M.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti-Garcia, S.; Martin, B.; Martin, T. A.; Martin, V. J.; Latour, B. Martin dit; Martinez, M.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marx, M.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazza, S. M.; Fadden, N. C. Mc; Goldrick, G. Mc; Kee, S. P. Mc; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McClymont, L. I.; McDonald, E. F.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Melini, D.; Garcia, B. R. Mellado; Melo, M.; Meloni, F.; Menary, S. B.; Mengarelli, A.; Menke, S.; Meoni, E.; Mergelmeyer, S.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Theenhausen, H. Meyer Zu; Miano, F.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Molander, S.; Moles-Valls, R.; Monden, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Berlingen, J. Montejo; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Llácer, M. Moreno; Morettini, P.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Mortensen, S. S.; Morvaj, L.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Mueller, T.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Sanchez, F. J. Munoz; Quijada, J. A. Murillo; Murray, W. 
J.; Musheghyan, H.; Muškinja, M.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nagai, K.; Nagai, R.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Garcia, R. F. Naranjo; Narayan, R.; Villar, D. I. Narrias; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Manh, T. Nguyen; Nickerson, R. B.; Nicolaidou, R.; Nielsen, J.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nooney, T.; Norberg, S.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'grady, F.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Seabra, L. F. Oleiro; Pino, S. A. Olivares; Damazio, D. Oliveira; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Garzon, G. Otero y.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pages, A. Pacheco; Aranda, C. Padilla; Pagáčová, M.; Griso, S. Pagan; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Panagiotopoulou, E. St.; Pandini, C. E.; Vazquez, J. G. Panduro; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Hernandez, D. Paredes; Parker, A. J.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V. R.; Pasqualucci, E.; Passaggio, S.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Pater, J. R.; Pauly, T.; Pearce, J.; Pearson, B.; Pedersen, L. E.; Pedersen, M.; Lopez, S. Pedraza; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. V.; Codina, E. Perez; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Pickering, M. A.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pin, A. W. J.; Pinamonti, M.; Pinfold, J. L.; Pingel, A.; Pires, S.; Pirumov, H.; Pitt, M.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Pluth, D.; Poettgen, R.; Poggioli, L.; Pohl, D.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Astigarraga, M. E. Pozo; Pralavorio, P.; Pranko, A.; Prell, S.; Price, D.; Price, L. 
E.; Primavera, M.; Prince, S.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puddu, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Raine, J. A.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Ratti, M. G.; Rauscher, F.; Rave, S.; Ravenscroft, T.; Ravinovich, I.; Raymond, M.; Read, A. L.; Readioff, N. P.; Reale, M.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reisin, H.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rimoldi, M.; Rinaldi, L.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodina, Y.; Perez, A. Rodriguez; Rodriguez, D. Rodriguez; Roe, S.; Rogan, C. S.; Røhne, O.; Romaniouk, A.; Romano, M.; Saez, S. M. Romano; Adam, E. Romero; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, P.; Rosenthal, O.; Rosien, N.-A.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Rzehorz, G. F.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Tehrani, F. Safai; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Loyola, J. E. Salazar; Salek, D.; De Bruin, P. H. Sales; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Martinez, V. Sanchez; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sandhoff, M.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sannino, M.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Castillo, I. Santoyo; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sasaki, Y.; Sato, K.; Sauvage, G.; Sauvan, E.; Savage, G.; Savard, P.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schachtner, B. M.; Schaefer, D.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schier, S.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt-Sommerfeld, K. R.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schneider, B.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schott, M.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwartzman, A.; Schwarz, T. A.; Schwegler, Ph.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. 
M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Seliverstov, D. M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Saadi, D. Shoaleh; Shochet, M. J.; Shojaii, S.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sickles, A. M.; Sidebo, P. E.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinner, M. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smiesko, J.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Sokhrannyi, G.; Sanchez, C. A. Solans; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Son, H.; Song, H. Y.; Sood, A.; Sopczak, A.; Sopko, V.; Sorin, V.; Sosa, D.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Denis, R. D. St.; Stabile, A.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Staroba, P.; Starovoitov, P.; Stärz, S.; Staszewski, R.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tannenwald, B. B.; Araya, S. Tapia; Tapprogge, S.; Tarem, S.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Delgado, A. Tavares; Tayalati, Y.; Taylor, A. C.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teischinger, F. A.; Teixeira-Dias, P.; Temming, K. K.; Temple, D.; Kate, H. Ten; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Tibbetts, M. J.; Torres, R. E. Ticse; Tikhomirov, V. O.; Tikhonov, Yu. 
A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Torrence, E.; Torres, H.; Pastor, E. Torró; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsui, K. M.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turgeman, D.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tyndel, M.; Ucchielli, G.; Ueda, I.; Ueno, R.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valderanis, C.; Santurio, E. Valdes; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Vallecorsa, S.; Ferrer, J. A. Valls; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasquez, J. G.; Vazeille, F.; Schroeder, T. Vazquez; Veatch, J.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Boeriu, O. E. Vickey; Viehhauser, G. H. A.; Viel, S.; Vigani, L.; Vigne, R.; Villa, M.; Perez, M. Villaplana; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vlasak, M.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Milosavljevic, M. Vranjes; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, T.; Wang, W.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, M. D.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A.; White, M. J.; White, R.; Whiteson, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilk, F.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wu, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. 
M.; Xella, S.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Wong, K. H. Yau; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, L.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Nedden, M. zur; Zurzolo, G.; Zwalinski, L.
2016-12-01
The luminosity determination for the ATLAS detector at the LHC during pp collisions at √{s} = 8 TeV in 2012 is presented. The evaluation of the luminosity scale is performed using several luminometers, and comparisons between these luminosity detectors are made to assess the accuracy, consistency and long-term stability of the results. A luminosity uncertainty of δ L/L = ± 1.9% is obtained for the 22.7 fb^{-1} of pp collision data delivered to ATLAS at √{s} = 8 TeV in 2012.
Aaboud, M.; Aad, G.; Abbott, B.; ...
2016-11-28
The luminosity determination for the ATLAS detector at the LHC during pp collisions at √{s} = 8 TeV in 2012 is presented. The evaluation of the luminosity scale is performed using several luminometers, and comparisons between these luminosity detectors are made to assess the accuracy, consistency and long-term stability of the results. A luminosity uncertainty of δL/L = ± 1.9% is obtained for the 22.7 fb^{-1} of pp collision data delivered to ATLAS at √{s} = 8 TeV in 2012.
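For orientation, the luminosity scale that such calibrations anchor follows, to first order, from the measured beam parameters (as in van der Meer scans). A minimal sketch of that relation; the numbers below are generic LHC-like placeholders, not the 2012 ATLAS calibration values:

    import math

    def luminosity(f_rev, n_bunches, n1, n2, sigma_x, sigma_y):
        """Instantaneous luminosity for head-on Gaussian beams:
        L = f_rev * n_b * N1 * N2 / (4 * pi * sigma_x * sigma_y)."""
        return f_rev * n_bunches * n1 * n2 / (4.0 * math.pi * sigma_x * sigma_y)

    # Placeholder LHC-like parameters (sigmas in cm, so L comes out in cm^-2 s^-1).
    L = luminosity(f_rev=11245.0, n_bunches=1380, n1=1.6e11, n2=1.6e11,
                   sigma_x=19e-4, sigma_y=19e-4)
    print(f"L ~ {L:.2e} cm^-2 s^-1")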
Crab cavities: Past, present, and future of a challenging device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Q.
2015-05-03
In two-ring facilities operating with a crossing-angle collision scheme, luminosity can be limited due to an incomplete overlapping of the colliding bunches. Crab cavities are then introduced to restore head-on collisions by providing opposite transverse deflections to the head and tail of each bunch. An increase in luminosity was demonstrated at KEKB with global crab crossing, while the Large Hadron Collider (LHC) at CERN is currently designing local crab crossing for the Hi-Lumi upgrade. Future colliders may investigate both approaches. In this paper, we review the challenges in the technology and the implementation of crab cavities, while discussing experience in earlier colliders, ongoing R&D, and proposed implementations for future facilities, such as HiLumi-LHC, CERN's compact linear collider (CLIC), the international linear collider (ILC), and the electron-ion collider under design at BNL (eRHIC).
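The geometric loss that crab cavities recover can be quantified with the standard crossing-angle (Piwinski) reduction factor; the sketch below uses round HL-LHC-like numbers as placeholders, not design values, and ignores the hourglass effect:

    import math

    def geometric_reduction(theta_c, sigma_z, sigma_x):
        """Luminosity reduction factor for a full crossing angle theta_c and
        Gaussian bunches: R = 1 / sqrt(1 + phi^2), with Piwinski angle
        phi = (sigma_z / sigma_x) * theta_c / 2."""
        phi = (sigma_z / sigma_x) * theta_c / 2.0
        return 1.0 / math.sqrt(1.0 + phi * phi)

    # Placeholder values in metres and radians.
    R = geometric_reduction(theta_c=500e-6, sigma_z=7.5e-2, sigma_x=10e-6)
    print(f"without crabbing R = {R:.2f}; ideal crab crossing restores R -> 1")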
NASA Astrophysics Data System (ADS)
Chlebana, Frank; CMS Collaboration
2017-11-01
The challenges of the High-Luminosity LHC (HL-LHC) are driven by the large number of overlapping proton-proton collisions (pileup) in each bunch-crossing and the extreme radiation dose to detectors at high pseudorapidity. To overcome these challenges, CMS is developing an endcap electromagnetic+hadronic sampling calorimeter employing silicon sensors in the electromagnetic and front hadronic sections, comprising over 6 million channels, and highly-segmented plastic scintillators in the rear part of the hadronic section. This High-Granularity Calorimeter (HGCAL) will be the first of its kind used in a colliding beam experiment. Clustering deposits of energy over many cells and layers is a complex and challenging computational task, particularly in the high-pileup environment of HL-LHC. Baseline detector performance results are presented for electromagnetic and hadronic objects, and studies demonstrating the advantages of fine longitudinal and transverse segmentation are explored.
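The clustering task described above can be illustrated, very schematically, with a seeded nearest-neighbour pass over cell energies; this toy is purely illustrative and is not the HGCAL reconstruction algorithm:

    import numpy as np

    def toy_cluster(xy, energy, seed_thresh=5.0, radius=2.0):
        """Toy 2D clustering: cells above seed_thresh act as seeds; every other
        cell is attached to the nearest seed within `radius`.
        Returns per-cell cluster indices (-1 = unclustered)."""
        seeds = [i for i in np.argsort(energy)[::-1] if energy[i] > seed_thresh]
        labels = np.full(len(energy), -1, dtype=int)
        for k, s in enumerate(seeds):
            labels[s] = k
        for i in range(len(energy)):
            if labels[i] >= 0:
                continue
            d = np.linalg.norm(xy[seeds] - xy[i], axis=1)
            if len(d) and d.min() < radius:
                labels[i] = int(d.argmin())
        return labels

    rng = np.random.default_rng(0)
    cells = rng.uniform(0, 10, size=(50, 2))      # toy cell positions
    energies = rng.exponential(2.0, size=50)      # toy energy deposits
    print(toy_cluster(cells, energies))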
Illuminating gravitational waves: A concordant picture of photons from a neutron star merger
Kasliwal, M. M.; Nakar, E.; Singer, L. P.; ...
2017-10-16
Merging neutron stars offer an excellent laboratory for simultaneously studying strong-field gravity and matter in extreme environments. We establish the physical association of an electromagnetic counterpart (EM170817) with gravitational waves (GW170817) detected from merging neutron stars. By synthesizing a panchromatic data set, we demonstrate that merging neutron stars are a long-sought production site forging heavy elements by r-process nucleosynthesis. The weak gamma rays seen in EM170817 are dissimilar to classical short gamma-ray bursts with ultrarelativistic jets. Instead, we suggest that breakout of a wide-angle, mildly relativistic cocoon engulfing the jet explains the low-luminosity gamma rays, the high-luminosity ultraviolet-optical-infrared, and the delayed radio and x-ray emission. We posit that all neutron star mergers may lead to a wide-angle cocoon breakout, sometimes accompanied by a successful jet and sometimes by a choked jet.
Illuminating gravitational waves: A concordant picture of photons from a neutron star merger
NASA Astrophysics Data System (ADS)
Kasliwal, M. M.; Nakar, E.; Singer, L. P.; Kaplan, D. L.; Cook, D. O.; Van Sistine, A.; Lau, R. M.; Fremling, C.; Gottlieb, O.; Jencson, J. E.; Adams, S. M.; Feindt, U.; Hotokezaka, K.; Ghosh, S.; Perley, D. A.; Yu, P.-C.; Piran, T.; Allison, J. R.; Anupama, G. C.; Balasubramanian, A.; Bannister, K. W.; Bally, J.; Barnes, J.; Barway, S.; Bellm, E.; Bhalerao, V.; Bhattacharya, D.; Blagorodnova, N.; Bloom, J. S.; Brady, P. R.; Cannella, C.; Chatterjee, D.; Cenko, S. B.; Cobb, B. E.; Copperwheat, C.; Corsi, A.; De, K.; Dobie, D.; Emery, S. W. K.; Evans, P. A.; Fox, O. D.; Frail, D. A.; Frohmaier, C.; Goobar, A.; Hallinan, G.; Harrison, F.; Helou, G.; Hinderer, T.; Ho, A. Y. Q.; Horesh, A.; Ip, W.-H.; Itoh, R.; Kasen, D.; Kim, H.; Kuin, N. P. M.; Kupfer, T.; Lynch, C.; Madsen, K.; Mazzali, P. A.; Miller, A. A.; Mooley, K.; Murphy, T.; Ngeow, C.-C.; Nichols, D.; Nissanke, S.; Nugent, P.; Ofek, E. O.; Qi, H.; Quimby, R. M.; Rosswog, S.; Rusu, F.; Sadler, E. M.; Schmidt, P.; Sollerman, J.; Steele, I.; Williamson, A. R.; Xu, Y.; Yan, L.; Yatsu, Y.; Zhang, C.; Zhao, W.
2017-12-01
Merging neutron stars offer an excellent laboratory for simultaneously studying strong-field gravity and matter in extreme environments. We establish the physical association of an electromagnetic counterpart (EM170817) with gravitational waves (GW170817) detected from merging neutron stars. By synthesizing a panchromatic data set, we demonstrate that merging neutron stars are a long-sought production site forging heavy elements by r-process nucleosynthesis. The weak gamma rays seen in EM170817 are dissimilar to classical short gamma-ray bursts with ultrarelativistic jets. Instead, we suggest that breakout of a wide-angle, mildly relativistic cocoon engulfing the jet explains the low-luminosity gamma rays, the high-luminosity ultraviolet-optical-infrared, and the delayed radio and x-ray emission. We posit that all neutron star mergers may lead to a wide-angle cocoon breakout, sometimes accompanied by a successful jet and sometimes by a choked jet.
Blazar Gamma-Rays, Shock Acceleration, and the Extragalactic Background Light
NASA Technical Reports Server (NTRS)
Stecker, Floyd W.; Baring, Matthew G.; Summerlin, Errol J.
2007-01-01
The observed spectra of blazars, their intrinsic emission, and the underlying populations of radiating particles are intimately related. The use of these sources as probes of the extragalactic infrared background, a prospect propelled by recent advances in TeV-band telescopes, soon to be augmented by observations by NASA's upcoming Gamma-Ray Large Area Space Telescope (GLAST), has been a topic of great recent interest. Here, it is demonstrated that if particles in blazar jets are accelerated at relativistic shocks, then gamma-ray spectra with indices less than 1.5 can be produced. This, in turn, loosens the upper limits on the near infrared extragalactic background radiation previously proposed. We also show evidence hinting that TeV blazars with flatter spectra have higher intrinsic TeV gamma-ray luminosities and we indicate that there may be a correlation of flatness and luminosity with redshift.
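The link between the accelerated-particle index and the emitted photon index invoked here is the textbook one for uncooled synchrotron or Thomson-regime inverse-Compton emission (assumed here for illustration, not taken from the paper): a particle distribution dN/dE ∝ E^-p radiates a photon number spectrum with index Γ = (p+1)/2, so p < 2 from relativistic shock acceleration gives Γ < 1.5.

    def photon_index(p):
        """Photon number index for dN/dE ~ E^-p (uncooled synchrotron /
        Thomson-regime inverse Compton): Gamma = (p + 1) / 2."""
        return (p + 1.0) / 2.0

    for p in (2.2, 2.0, 1.5, 1.0):
        print(f"p = {p}: Gamma = {photon_index(p):.2f}")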
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Brendon J.; Foreman-Mackey, Daniel; Hogg, David W., E-mail: bj.brewer@auckland.ac.nz
We present and implement a probabilistic (Bayesian) method for producing catalogs from images of stellar fields. The method is capable of inferring the number of sources N in the image and can also handle the challenges introduced by noise, overlapping sources, and an unknown point-spread function. The luminosity function of the stars can also be inferred, even when the precise luminosity of each star is uncertain, via the use of a hierarchical Bayesian model. The computational feasibility of the method is demonstrated on two simulated images with different numbers of stars. We find that our method successfully recovers the input parameter values along with principled uncertainties even when the field is crowded. We also compare our results with those obtained from the SExtractor software. While the two approaches largely agree about the fluxes of the bright stars, the Bayesian approach provides more accurate inferences about the faint stars and the number of stars, particularly in the crowded case.
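The hierarchical idea (individual star fluxes drawn from a population-level luminosity function whose parameters are themselves inferred) can be caricatured with a one-parameter grid fit; this toy assumes a Pareto (power-law) flux distribution and ignores the point-spread function, source confusion, and trans-dimensional sampling that the paper actually handles:

    import numpy as np

    rng = np.random.default_rng(1)
    alpha_true, f_min, sigma_noise = 2.0, 1.0, 0.05

    # Simulate fluxes from a Pareto luminosity function and add Gaussian noise.
    true_flux = f_min * (1.0 - rng.uniform(size=200)) ** (-1.0 / (alpha_true - 1.0))
    obs_flux = np.clip(true_flux + rng.normal(0.0, sigma_noise, size=200), f_min, None)

    # Grid over the slope; Pareto log-likelihood (noise neglected for brevity).
    alphas = np.linspace(1.2, 3.0, 200)
    logL = [np.sum(np.log(a - 1.0) - a * np.log(obs_flux / f_min)) for a in alphas]
    print("best-fit slope ~", round(alphas[int(np.argmax(logL))], 2))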
Spectroscopic Results of Gravitational Microlenses: Are These Dark Objects or Faint Stars?
NASA Astrophysics Data System (ADS)
Joseph, C. L.; Gallagher, J.; Phillips, M.
1994-12-01
We report on the spectroscopic results obtained in October 1994 with the 4-meter telescope at Cerro Tololo Inter-American Observatory (CTIO). Spectra of two recent microlens candidates toward the Galactic bulge reported by the Optical Gravitational Lens Experiment (OGLE), as well as one caught in the early phases of brightening toward the LMC reported by the MAssive Compact Halo Object (MACHO) Project, have been obtained. The spectral coverage is from 6500 to 9800 Angstroms at a resolution of 6 Angstroms. The long-term goal of this spectroscopic study is to obtain censored statistical evidence on the luminosity of the microlenses, constraining the nature of these lenses. Several models of composite spectra of a bulge or LMC star plus a cool lensing star of different spectral types are presented to demonstrate the ranges in the product of luminosity times distance over which the faint star could be detected in a composite spectrum.
Global properties of infrared bright galaxies
NASA Technical Reports Server (NTRS)
Young, Judith S.; Xie, Shuding; Kenney, Jeffrey D. P.; Rice, Walter L.
1989-01-01
Infrared flux densities of 182 galaxies, including 50 galaxies in the Virgo cluster, were analyzed using IRAS data for 12, 25, 60, and 100 microns, and the results were compared with data listed in the Point Source Catalog (PSC, 1985). In addition, IR luminosities, L(IRs), colors, and warm dust masses were derived for these galaxies and were compared with the interstellar gas masses and optical luminosities of the galaxies. It was found that, for galaxies whose optical diameter measures between 5 and 8 arcmin, the PSC flux densities are underestimated by a factor of 2 at 60 microns, and by a factor of 1.5 at 100 microns. It was also found that, for 49 galaxies, the mass of warm dust correlated well with the H2 mass, and that L(IR) correlated with L(H-alpha), demonstrating that the L(IR) measures the rate of star formation in these galaxies.
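For reference, a commonly used prescription from the later IRAS literature (Sanders & Mirabel 1996; not necessarily the exact relation adopted in this 1989 analysis) converts the four IRAS flux densities into a total infrared luminosity:

    import math

    def l_ir_solar(f12, f25, f60, f100, dist_mpc):
        """Total 8-1000 um luminosity in solar units from IRAS flux densities (Jy):
        F_IR = 1.8e-14 * (13.48 f12 + 5.16 f25 + 2.58 f60 + f100)  [W m^-2]."""
        f_ir = 1.8e-14 * (13.48 * f12 + 5.16 * f25 + 2.58 * f60 + f100)  # W m^-2
        d_m = dist_mpc * 3.086e22                                        # Mpc -> m
        return 4.0 * math.pi * d_m ** 2 * f_ir / 3.828e26               # W -> L_sun

    # Placeholder fluxes and distance, purely illustrative.
    print(f"L_IR ~ {l_ir_solar(1.0, 2.0, 20.0, 40.0, dist_mpc=20.0):.2e} L_sun")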
NASA Astrophysics Data System (ADS)
Hong, Haibo; Yin, Yuehong; Chen, Xing
2016-11-01
Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products has yet to be fully realized, partly because of poor communication among collaborators. One challenge in human-machine integration is therefore how to establish an appropriate knowledge management (KM) model to support the integration and sharing of heterogeneous product knowledge. To address the diversity of design knowledge, this article proposes an ontology-based model that provides an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, and the corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, an ontology integration method based on similarity calculation, composed of ontology mapping and ontology merging, is introduced. A knowledge sharing method based on ontology searching is then developed. Finally, the human-machine integrated design of a large ultra-precision grinding machine is used as a case study to demonstrate the effectiveness of the method.
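The similarity calculation that drives the ontology mapping step can be sketched as a weighted blend of label and property-set similarity; the weights, measures, and example concepts below are placeholders for illustration, not those used in the article:

    from difflib import SequenceMatcher

    def concept_similarity(a, b, w_name=0.6, w_props=0.4):
        """Toy mapping score between two ontology concepts: string similarity of
        the labels plus Jaccard similarity of the property sets."""
        name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
        pa, pb = set(a["properties"]), set(b["properties"])
        prop_sim = len(pa & pb) / len(pa | pb) if (pa | pb) else 0.0
        return w_name * name_sim + w_props * prop_sim

    spindle_a = {"name": "GrindingSpindle", "properties": {"maxSpeed", "power", "bearingType"}}
    spindle_b = {"name": "Spindle", "properties": {"maxSpeed", "power", "coolant"}}
    print(f"mapping score = {concept_similarity(spindle_a, spindle_b):.2f}")  # merge above a threshold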
NASA Astrophysics Data System (ADS)
Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo
2017-06-01
The authors previously developed an industrial machining robotic system for foamed polystyrene materials. The robotic CAM system provided a simple and effective interface between operators and the machining robot without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography (STL) data is first proposed for robotic machining. The preprocessor makes it possible to control the machining robot directly from STL data without using any commercially provided CAM system. The STL format describes a curved surface geometry as a mesh of triangles. The preprocessor allows machining robots to be controlled along a zigzag or spiral path calculated directly from STL data. A smart spline interpolation method is then proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.
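The spline-smoothing step can be illustrated with a parametric smoothing spline fitted through a coarse sequence of cutter-location points and resampled densely; this uses SciPy as a generic stand-in and is not the authors' interpolation scheme:

    import numpy as np
    from scipy.interpolate import splprep, splev

    # Coarse CLS-like points along one zigzag pass (toy coordinates in mm).
    x = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
    y = np.array([0.0, 2.0, 1.5, 3.0, 2.0, 4.0])
    z = np.array([5.0, 4.8, 4.9, 4.7, 4.8, 4.6])

    # Fit a smoothing parametric spline (s > 0 smooths) and resample it densely.
    tck, _ = splprep([x, y, z], s=0.5)
    x_s, y_s, z_s = splev(np.linspace(0.0, 1.0, 200), tck)
    print(len(x_s), "smoothed path points")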
Ship localization in Santa Barbara Channel using machine learning classifiers.
Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter
2017-11-01
Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.
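The classification framing, in which source range is discretised into bins and a classifier is trained on features recorded across the array, can be sketched with scikit-learn; the features and labels below are synthetic stand-ins, not the Santa Barbara Channel recordings:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    n, n_features, n_bins = 600, 32, 10
    ranges_km = rng.uniform(0.5, 10.0, n)
    labels = np.digitize(ranges_km, np.linspace(0.5, 10.0, n_bins + 1)[1:-1])
    # Crude synthetic range dependence injected into otherwise random features.
    features = rng.normal(size=(n, n_features)) + 0.3 * labels[:, None]

    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
    print(f"test accuracy on synthetic data: {clf.score(X_te, y_te):.2f}")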
Skoraczyński, G; Dittwald, P; Miasojedow, B; Szymkuć, S; Gajewska, E P; Grzybowski, B A; Gambin, A
2017-06-15
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, Go champions, there is interest - and hope - that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited - in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
Energy: Machines, Science (Experimental): 5311.03.
ERIC Educational Resources Information Center
Castaldi, June P.
This unit of instruction was designed as an introductory course in energy involving six simple machines, electricity, magnetism, and motion. The booklet lists the relevant state-adopted texts and states the performance objectives for the unit. It provides an outline of the course content and suggests experiments, demonstrations, field trips, and…
Celebrating Successful Students
ERIC Educational Resources Information Center
Squires, Dan; Case, Pauline
2008-01-01
The Machine Tool Program at Cowley College in Arkansas City, Kansas, is preparing students to become future leaders in the machining field, and the school recognizes the importance of sharing and celebrating those stories of success with the public to demonstrate the effectiveness of career and technical education (CTE) programs. Cowley College is…
The quasar luminosity function at redshift 4 with the Hyper Suprime-Cam Wide Survey
NASA Astrophysics Data System (ADS)
Akiyama, Masayuki; He, Wanqiu; Ikeda, Hiroyuki; Niida, Mana; Nagao, Tohru; Bosch, James; Coupon, Jean; Enoki, Motohiro; Imanishi, Masatoshi; Kashikawa, Nobunari; Kawaguchi, Toshihiro; Komiyama, Yutaka; Lee, Chien-Hsiu; Matsuoka, Yoshiki; Miyazaki, Satoshi; Nishizawa, Atsushi J.; Oguri, Masamune; Ono, Yoshiaki; Onoue, Masafusa; Ouchi, Masami; Schulze, Andreas; Silverman, John D.; Tanaka, Manobu M.; Tanaka, Masayuki; Terashima, Yuichi; Toba, Yoshiki; Ueda, Yoshihiro
2018-01-01
We present the luminosity function of z ˜ 4 quasars based on the Hyper Suprime-Cam Subaru Strategic Program Wide layer imaging data in the g, r, i, z, and y bands covering 339.8 deg². From stellar objects, 1666 z ˜ 4 quasar candidates are selected via the g-dropout selection down to i = 24.0 mag. Their photometric redshifts cover the redshift range between 3.6 and 4.3, with an average of 3.9. In combination with the quasar sample from the Sloan Digital Sky Survey in the same redshift range, a quasar luminosity function covering the wide luminosity range of M1450 = -22 to -29 mag is constructed. The quasar luminosity function is well described by a double power-law model with a knee at M1450 = -25.36 ± 0.13 mag and a flat faint-end slope with a power-law index of -1.30 ± 0.05. The knee and faint-end slope show no clear evidence of redshift evolution from those seen at z ˜ 2. The flat slope implies that the UV luminosity density of the quasar population is dominated by the quasars around the knee, and does not support the steeper faint-end slope at higher redshifts reported at z > 5. If we convert the M1450 luminosity function to the hard X-ray 2-10 keV luminosity function using the relation between the UV and X-ray luminosity of quasars and its scatter, the number density of UV-selected quasars matches well with that of the X-ray-selected active galactic nuclei (AGNs) above the knee of the luminosity function. Below the knee, the UV-selected quasars show a deficiency compared to the hard X-ray luminosity function. The deficiency can be explained by the lack of obscured AGNs among the UV-selected quasars.
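The double power-law form quoted above can be written out and evaluated directly; in the helper below only the knee magnitude and faint-end slope are taken from the abstract, while the bright-end slope and normalisation are placeholder values:

    def qlf_double_power_law(M1450, phi_star, M_star, alpha, beta):
        """Double power-law luminosity function per magnitude:
        Phi(M) = phi_star / (10**(0.4*(alpha+1)*(M-M*)) + 10**(0.4*(beta+1)*(M-M*)))."""
        dm = M1450 - M_star
        return phi_star / (10 ** (0.4 * (alpha + 1) * dm) + 10 ** (0.4 * (beta + 1) * dm))

    for M in (-22.0, -25.36, -28.0):
        # phi_star and beta are illustrative placeholders, not fitted values.
        print(M, qlf_double_power_law(M, phi_star=1e-7, M_star=-25.36, alpha=-1.30, beta=-3.0))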
NASA Astrophysics Data System (ADS)
Kawamata, Ryota; Ishigaki, Masafumi; Shimasaku, Kazuhiro; Oguri, Masamune; Ouchi, Masami; Tanigawa, Shingo
2018-03-01
We construct z ∼ 6–7, 8, and 9 faint Lyman break galaxy samples (334, 61, and 37 galaxies, respectively) with accurate size measurements with the software glafic from the complete Hubble Frontier Fields (HFF) cluster and parallel fields data. These are the largest samples hitherto and reach down to the faint ends of recently obtained deep luminosity functions. At faint magnitudes, however, these samples are highly incomplete for galaxies with large sizes, implying that derivation of the luminosity function sensitively depends on the intrinsic size–luminosity relation. We thus conduct simultaneous maximum-likelihood estimation of luminosity function and size–luminosity relation parameters from the observed distribution of galaxies on the size–luminosity plane with the help of a completeness map as a function of size and luminosity. At z ∼ 6–7, we find that the intrinsic size–luminosity relation expressed as r_e ∝ L^β has a notably steeper slope of β = 0.46^{+0.08}_{-0.09} than those at lower redshifts, which in turn implies that the luminosity function has a relatively shallow faint-end slope of α = -1.86^{+0.17}_{-0.18}. This steep β can be reproduced by a simple analytical model in which smaller galaxies have lower specific angular momenta. The β and α values for the z ∼ 8 and 9 samples are consistent with those for z ∼ 6–7 but with larger errors. For all three samples, there is a large, positive covariance between β and α, implying that the simultaneous determination of these two parameters is important. We also provide new strong lens mass models of Abell S1063 and Abell 370, as well as updated mass models of Abell 2744 and MACS J0416.1-2403.
20131201-1231_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2014-01-08
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 Dec to 31 Dec 2013.
20131101-1130_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2013-12-02
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 Nov to 30 Nov 2013.
20130416_Green Machine Florida Canyon Hourly Data
Vanderhoff, Alex
2013-04-24
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 4/16/13.
20131001-1031_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2013-11-05
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 1 Oct 2013 to 31 Oct 2013.
20140201-0228_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2014-03-03
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 Feb to 28 Feb 2014.
20130801-0831_Green Machine Florida Canyon Hourly Data
Vanderhoff, Alex
2013-09-10
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 8/1/13 to 8/31/13.
20140101-0131_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2014-02-03
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 Jan to 31 Jan 2014.
20140430_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2014-05-05
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 April to 30 April 2014.
20140301-0331_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2014-04-07
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 Mar to 31 Mar 2014.
20140501-0531_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2014-06-02
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 May to 31 May 2014.
20140601-0630_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2014-06-30
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 June to 30 June 2014.
20140701-0731_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2014-07-31
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 01 July to 31 July 2014.
20130901-0930_Green Machine Florida Canyon Hourly Data
Thibedeau, Joe
2013-10-25
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 1 September 2013 to 30 September 2013.
Green Machine Florida Canyon Hourly Data 20130731
Vanderhoff, Alex
2013-08-30
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 7/1/13 to 7/31/13.
20130501-20130531_Green Machine Florida Canyon Hourly Data
Vanderhoff, Alex
2013-06-18
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from May 2013.
Green Machine Florida Canyon Hourly Data
Vanderhoff, Alex
2013-07-15
Employing innovative product developments to demonstrate financial and technical viability of producing electricity from low temperature geothermal fluids, coproduced in a mining operation, by employing ElectraTherm's modular and mobile heat-to-power "micro geothermal" power plant with output capacity expected in the 30-70kWe range. The Green Machine is an Organic Rankine Cycle power plant. The Florida Canyon machine is powered by geothermal brine with air cooled condensing. The data provided is an hourly summary from 6/1/13 to 6/30/13.
NASA Technical Reports Server (NTRS)
O'Keefe, Sean
2004-01-01
The images in this viewgraph presentation have the following captions: 1) EDU mirror after being sawed in half; 2) EDU Delivered to Axsys; 3) Be EDU Blank Received and Machining Started; 4) Loaded HIP can for flight PM segments 1 and 2; 5) Flight Blanks 1 and 2 Loaded into HIP Can at Brush-Wellman; 6) EDU in Machining at Axsys.
AN EMPIRICAL CALIBRATION TO ESTIMATE COOL DWARF FUNDAMENTAL PARAMETERS FROM H-BAND SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newton, Elisabeth R.; Charbonneau, David; Irwin, Jonathan
Interferometric radius measurements provide a direct probe of the fundamental parameters of M dwarfs. However, interferometry is within reach for only a limited sample of nearby, bright stars. We use interferometrically measured radii, bolometric luminosities, and effective temperatures to develop new empirical calibrations based on low-resolution, near-infrared spectra. We find that H-band Mg and Al spectral features are good tracers of stellar properties, and derive functions that relate effective temperature, radius, and log luminosity to these features. The standard deviations in the residuals of our best fits are, respectively, 73 K, 0.027 R_☉, and 0.049 dex (an 11% error on luminosity). Our calibrations are valid from mid K to mid M dwarf stars, roughly corresponding to temperatures between 3100 and 4800 K. We apply our H-band relationships to M dwarfs targeted by the MEarth transiting planet survey and to the cool Kepler Objects of Interest (KOIs). We present spectral measurements and estimated stellar parameters for these stars. Parallaxes are also available for many of the MEarth targets, allowing us to independently validate our calibrations by demonstrating a clear relationship between our inferred parameters and the stars' absolute K magnitudes. We identify objects with magnitudes that are too bright for their inferred luminosities as candidate multiple systems. We also use our estimated luminosities to address the applicability of near-infrared metallicity calibrations to mid and late M dwarfs. The temperatures we infer for the KOIs agree remarkably well with those from the literature; however, our stellar radii are systematically larger than those presented in previous works that derive radii from model isochrones. This results in a mean planet radius that is 15% larger than one would infer using the stellar properties from recent catalogs. Our results confirm the derived parameters from previous in-depth studies of KOIs 961 (Kepler-42), 254 (Kepler-45), and 571 (Kepler-186), the latter of which hosts a rocky planet orbiting in its star's habitable zone.
NASA Astrophysics Data System (ADS)
Eftekhari, T.; Berger, E.; Williams, P. K. G.; Blanchard, P. K.
2018-06-01
The discovery of a repeating fast radio burst (FRB) has led to the first precise localization, an association with a dwarf galaxy, and the identification of a coincident persistent radio source. However, further localizations are required to determine the nature of FRBs, the sources powering them, and the possibility of multiple populations. Here we investigate the use of associated persistent radio sources to establish FRB counterparts, taking into account the localization area and the source flux density. Due to the lower areal number density of radio sources compared to faint optical sources, robust associations can be achieved for less precise localizations as compared to direct optical host galaxy associations. For generally larger localizations that preclude robust associations, the number of candidate hosts can be reduced based on the ratio of radio-to-optical brightness. We find that confident associations with sources having a flux density of ∼0.01–1 mJy, comparable to the luminosity of the persistent source associated with FRB 121102 over the redshift range z ≈ 0.1–1, require FRB localizations of ≲20″. We demonstrate that even in the absence of a robust association, constraints can be placed on the luminosity of an associated radio source as a function of localization and dispersion measure (DM). For DM ≈ 1000 pc cm^-3, an upper limit comparable to the luminosity of the FRB 121102 persistent source can be placed if the localization is ≲10″. We apply our analysis to the case of the ASKAP FRB 170107, using optical and radio observations of the localization region. We identify two candidate hosts based on a radio-to-optical brightness ratio of ≳100. We find that if one of these is indeed associated with FRB 170107, the resulting radio luminosity (10^29 to 4 × 10^30 erg s^-1 Hz^-1, as constrained from the DM value) is comparable to the luminosity of the FRB 121102 persistent source.
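The association argument rests on how unlikely it is for an unrelated radio source to fall inside the localization region; the usual Poisson chance-coincidence estimate makes the dependence on localization size explicit. The source density below is an order-of-magnitude placeholder, not the source counts used in the paper:

    import math

    def chance_coincidence(radius_arcsec, density_per_arcsec2):
        """P(>=1 unrelated source inside a circular localization):
        P = 1 - exp(-pi * r^2 * sigma)."""
        return 1.0 - math.exp(-math.pi * radius_arcsec ** 2 * density_per_arcsec2)

    sigma = 1e-5  # placeholder areal density of mJy-level radio sources per arcsec^2
    for r in (1.0, 10.0, 20.0, 60.0):
        print(f'r = {r:4.0f}": P_chance = {chance_coincidence(r, sigma):.3f}')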
The WISSH quasars project. I. Powerful ionised outflows in hyper-luminous quasars
NASA Astrophysics Data System (ADS)
Bischetti, M.; Piconcelli, E.; Vietri, G.; Bongiorno, A.; Fiore, F.; Sani, E.; Marconi, A.; Duras, F.; Zappacosta, L.; Brusa, M.; Comastri, A.; Cresci, G.; Feruglio, C.; Giallongo, E.; La Franca, F.; Mainieri, V.; Mannucci, F.; Martocchia, S.; Ricci, F.; Schneider, R.; Testa, V.; Vignali, C.
2017-02-01
Models and observations suggest that both the power and effects of AGN feedback should be maximised in hyper-luminous (LBol > 10^47 erg s^-1) quasars, i.e. objects at the brightest end of the AGN luminosity function. In this paper, we present the first results of a multiwavelength observing programme, focusing on a sample of WISE/SDSS selected hyper-luminous (WISSH) broad-line quasars at z ≈ 1.5-5. The WISSH quasars project has been designed to reveal the most energetic AGN-driven outflows, estimate their occurrence at the peak of quasar activity, and extend the study of correlations between outflows and nuclear properties up to poorly investigated, extreme AGN luminosities, i.e. LBol ≈ 10^47-10^48 erg s^-1. We present near-infrared, long-slit LBT/LUCI1 spectroscopy of five WISSH quasars at z ≈ 2.3-3.5, showing prominent [OIII] emission lines with broad (FWHM 1200-2200 km s^-1) and skewed profiles. The luminosities of these broad [OIII] wings are the highest measured so far, with L[OIII]broad ≳ 5 × 10^44 erg s^-1, and reveal the presence of powerful ionised outflows with associated mass outflow rates Ṁ ≳ 1700 M⊙ yr^-1 and kinetic powers Ėkin ≳ 10^45 erg s^-1. Although these estimates are affected by large uncertainties because of the use of [OIII] as a tracer of ionised outflows and the very basic outflow model adopted here, these results suggest that in our hyper-luminous targets the AGN is highly efficient at pushing large amounts of ionised gas outwards. Furthermore, the mechanical outflow luminosities measured for WISSH quasars correspond to higher percentages (1-3%) of LBol than those derived for AGN with lower LBol. Our targets host very massive (MBH ≳ 2 × 10^9 M⊙) black holes that are still accreting at a high rate (i.e. a factor of 0.4-3 of the Eddington limit). These findings clearly demonstrate that WISSH quasars offer the opportunity to probe the extreme end of both the luminosity and supermassive black hole (SMBH) mass functions and to reveal powerful ionised outflows that are able to affect the evolution of their host galaxies.
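As a consistency check on the quoted numbers, the kinetic power follows from the mass outflow rate and the outflow velocity through Ė_kin = ½ Ṁ v²; the round values below are merely comparable to those in the abstract (an illustration, not the paper's outflow model):

    M_SUN_G = 1.989e33   # g
    YR_S = 3.156e7       # s

    def kinetic_power(mdot_msun_yr, v_km_s):
        """E_kin_dot = 0.5 * Mdot * v^2, in erg/s."""
        mdot = mdot_msun_yr * M_SUN_G / YR_S   # g/s
        v = v_km_s * 1e5                       # cm/s
        return 0.5 * mdot * v ** 2

    # ~1700 M_sun/yr at ~1500 km/s gives ~1e45 erg/s, matching the quoted scale.
    print(f"{kinetic_power(1700.0, 1500.0):.1e} erg/s")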
Industrial Inspection with Open Eyes: Advance with Machine Vision Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Niel, Kurt
Machine vision systems have evolved significantly with the technology advances to tackle the challenges from modern manufacturing industry. A wide range of industrial inspection applications for quality control are benefiting from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection. The combination of multiple technologies makes it possible for the inspection to achieve a better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.
An order statistics approach to the halo model for galaxies
NASA Astrophysics Data System (ADS)
Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.
2017-04-01
We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering; however, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.
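A minimal sketch of the simplest order-statistics model described above, assuming a toy power-law stand-in for the universal luminosity function p(L) rather than any fitted form; group richnesses are drawn from an arbitrary Poisson mean, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_group(n_gal, alpha=-1.2, l_star=1.0, l_min=0.01):
    """Draw n_gal luminosities from a truncated power law standing in for p(L)
    and split them by order statistics: brightest -> central, rest -> satellites."""
    # Inverse-CDF sampling of p(L) proportional to L^alpha on [l_min, l_star].
    u = rng.uniform(size=n_gal)
    a1 = alpha + 1.0
    lum = (l_min**a1 + u * (l_star**a1 - l_min**a1)) ** (1.0 / a1)
    lum.sort()
    return lum[-1], lum[:-1]          # central, satellites

# Mock "groups" with Poisson richness; the central is simply the largest draw.
centrals = [sample_group(n)[0] for n in rng.poisson(10, size=5000) + 1]
print("mean central luminosity (in units of L*):", np.mean(centrals))
```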
Toward a Unified View of Black-Hole High-Energy States
NASA Technical Reports Server (NTRS)
Nowak, Michael A.
1995-01-01
We present here a review of high-energy (greater than 1 keV) observations of seven black-hole candidates, six of which have estimated masses. In this review we focus on two parameters of interest: the ratio of 'nonthermal' to total luminosity as a function of the total luminosity divided by the Eddington luminosity, and the root-mean-square (rms) variability as a function of the nonthermal-to-total luminosity ratio. Below approx. 10% Eddington luminosity, the sources tend to be strictly nonthermal (the so-called 'off' and 'low' states). Above this luminosity the sources become mostly thermal (the 'high' state), with the nonthermal component increasing with luminosity (the 'very high' and 'flare' states). There are important exceptions to this behavior, however, and no steady - as opposed to transient - source has been observed over a wide range of parameter space. In addition, the rms variability is positively correlated with the ratio of nonthermal to total luminosity, although there may be a minimum level of variability associated with 'thermal' states. We discuss these results in light of theoretical models and find that currently no single model describes the full range of black-hole high-energy behavior. In fact, the observations are exactly opposite from what one expects based upon simple notions of accretion disk instabilities.
The remarkable infrared galaxy Arp 220 = IC 4553
NASA Technical Reports Server (NTRS)
Soifer, B. T.; Neugebauer, G.; Helou, G.; Lonsdale, C. J.; Hacking, P.; Rice, W.; Houck, J. R.; Low, F. J.; Rowan-Robinson, M.
1984-01-01
IRAS observations of the peculiar galaxy Arp 220 = IC 4553 show that it is extremely luminous in the far-infrared, with a total luminosity of 2 × 10^12 solar luminosities. The infrared-to-blue luminosity ratio of this galaxy is about 80, which is the largest value of the ratio for galaxies in the UGC catalog, and places it in the range of the 'unidentified' infrared sources recently reported by Houck et al. in the IRAS all-sky survey. Other observations of Arp 220, combined with the luminosity in the infrared, allow either a Seyfert-like or starburst origin for this luminosity.
A limit to the X-ray luminosity of nearby normal galaxies
NASA Technical Reports Server (NTRS)
Worrall, D. M.; Marshall, F. E.; Boldt, E. A.
1979-01-01
Emission is studied at luminosities lower than those for which individual discrete sources can be studied. It is shown that normal galaxies do not appear to provide the numerous low luminosity X-ray sources which could make up the 2-60 keV diffuse background. Indeed, upper limits suggest luminosities comparable with, or a little less than, that of the galaxy. This is consistent with the fact that the average optical luminosity of the sample galaxies within approximately 20 Mpc is slightly lower than that of the galaxy. An upper limit of approximately 1% of the diffuse background from such sources is derived.
The luminosity function for the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1989-01-01
The luminosity function for two complete slices of the extension of the CfA redshift survey is calculated. The nonparametric technique of Lynden-Bell (1971) and Turner (1979) is used to determine the shape for the luminosity function of the 12 deg slice of the redshift survey. The amplitude of the luminosity function is determined, taking large-scale inhomogeneities into account. The effects of the Malmquist bias on a magnitude-limited redshift survey are examined, showing that the random errors in the magnitudes for the 12 deg slice affect both the determination of the luminosity function and the spatial density contrast of large scale structures.
Correlations of the IR Luminosity and Eddington Ratio with a Hard X-ray Selected Sample of AGN
NASA Technical Reports Server (NTRS)
Mushotzky, Richard F.; Winter, Lisa M.; McIntosh, Daniel H.; Tueller, Jack
2008-01-01
We use the SWIFT Burst Alert Telescope (BAT) sample of hard X-ray selected active galactic nuclei (AGN) with a median redshift of 0.03 and the 2MASS J and K band photometry to examine the correlation of hard X-ray emission to Eddington ratio as well as the relationship of the J and K band nuclear luminosity to the hard X-ray luminosity. The BAT sample is almost unbiased by the effects of obscuration and thus offers the first large unbiased sample for the examination of correlations between different wavelength bands. We find that the near-IR nuclear J and K band luminosity is related to the BAT (14-195 keV) luminosity over a factor of 10^3 in luminosity (L_IR ≈ L_BAT^1.25) and thus is unlikely to be due to dust. We also find that the Eddington ratio is proportional to the X-ray luminosity. This new result should be a strong constraint on models of the formation of the broad band continuum.
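Since a relation of the form L_IR ∝ L_BAT^1.25 is a straight line in log-log space, the slope can be recovered with a simple least-squares fit; the sketch below uses mock luminosities with that slope built in, purely for illustration.

```python
import numpy as np

# Toy data standing in for nuclear near-IR vs. BAT 14-195 keV luminosities (log erg/s);
# the slope of 1.25 quoted above is built into the mock purely for illustration.
rng = np.random.default_rng(0)
log_lbat = rng.uniform(42.0, 45.0, 200)
log_lir = 1.25 * (log_lbat - 43.5) + 43.0 + rng.normal(0.0, 0.3, 200)

# A power law L_IR ∝ L_BAT^beta is linear in log-log space, so ordinary
# least squares on the logged quantities recovers the index beta.
beta, intercept = np.polyfit(log_lbat, log_lir, 1)
print(f"fitted slope beta = {beta:.2f}")
```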
Data Mining at NASA: From Theory to Applications
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.
2009-01-01
This slide presentation demonstrates the data mining/machine learning capabilities of NASA Ames and the Intelligent Data Understanding (IDU) group, encompassing work done recently in the group by various group members. The IDU group develops novel algorithms to detect, classify, and predict events in large data streams for scientific and engineering systems. The presentation was prepared for Knowledge Discovery and Data Mining 2009.
The Atwood machine revisited using smartphones
NASA Astrophysics Data System (ADS)
Monteiro, Martín; Stari, Cecilia; Cabeza, Cecilia; Marti, Arturo C.
2015-09-01
The Atwood machine is a simple device used for centuries to demonstrate Newton's second law. It consists of two supports containing different masses joined by a string. Here we propose an experiment in which a smartphone is fixed to one support. With the aid of the built-in accelerometer of the smartphone, the vertical acceleration is registered. By redistributing the masses of the supports, a linear relationship between the mass difference and the vertical acceleration is obtained. In this experiment, the use of a smartphone contributes to enhance a classical demonstration.
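For the ideal (massless, frictionless pulley and string) Atwood machine, a = g(m1 - m2)/(m1 + m2), so at fixed total mass the acceleration is linear in the mass difference. A minimal sketch of that analysis, using mock accelerometer readings and an assumed total mass:

```python
import numpy as np

g = 9.81            # m/s^2
m_total = 0.400     # kg; total mass kept fixed while mass is moved between supports (assumed value)

# Ideal Atwood machine: a = g * (m1 - m2) / (m1 + m2).
# At fixed total mass this is linear in the mass difference, with slope g / m_total.
delta_m = np.linspace(0.0, 0.10, 6)                 # mass differences in kg
a_ideal = g * delta_m / m_total

# Mock "smartphone accelerometer" readings with a little noise, then recover g from the slope.
a_meas = a_ideal + np.random.default_rng(1).normal(0.0, 0.02, delta_m.size)
slope = np.polyfit(delta_m, a_meas, 1)[0]
print(f"g estimated from slope: {slope * m_total:.2f} m/s^2")
```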
On transient events in the upper atmosphere generated away from thunderstorm regions
NASA Astrophysics Data System (ADS)
Morozenko, V.; Garipov, G.; Khrenov, B.; Klimov, P.; Panasyuk, M.; Sharakin, S.; Zotov, M.
2011-12-01
Experimental data on transient events in the UV and red-IR ranges obtained in the MSU missions "Universitetsky-Tatiana" (wavelengths 300-400 nm) and "Universitetsky-Tatiana-2" (wavelengths 300-400 nm and 600-800 nm), published by Garipov et al. in 2010 at the COSPAR session http://www.cospar2010.org and at the TEPA conference http://www.aragats.am/Conferences/tepa2010, and in 2011 by Sadovnichy et al., Solar System Research, 45, #1, 3-29 (2011) and Vedenkin et al., JETP, v. 140, issue 3(9), 1-11 (2011), demonstrated the existence of transients at large distances (up to thousands of km) away from cloud thunderstorm regions. Those "remote" transients are short (1-5 ms) and are less luminous than the transients above thunderstorm regions. The ratio of red-IR to UV photon numbers in those transients indicates a high altitude of origin (~70 km). Important observational facts are also: 1. a change of the exponent in the transient distribution on luminosity Q ("-1" for photon numbers Q = 10^20-10^23 to "-2" for Q > 10^23), 2. a change of the global distribution of transients with their luminosity (transients with Q > 10^23 are concentrated in the equatorial range above continents, while transients with low luminosity are distributed more uniformly), 3. a phenomenon of transient sequences in one satellite orbit which is close to the geomagnetic meridian. In the present paper the phenomenological features of transients are explained under the assumption that the observed transients have to be divided into two classes: 1. transients related to local lightning, lower in the atmosphere, at distances of not more than hundreds of km from the satellite detector field of view in the atmosphere, and 2. transients generated by far-away lightning. Local transients are luminous and presumably are the events called "transient luminous events" (TLE). In the distribution on luminosity those events have some threshold Q ~ 10^23 and their differential luminosity distribution is approximated by a power law with exponent "-2". Remote transients have to be considered separately. Their origin may be related to electromagnetic pulses (EMP) or waves (whistlers, EMW) generated by lightning. The EMP-EMW is transmitted in the ionosphere-ground channel to large distances R with low absorption. The part of the EMP-EMW "visible" in the detector aperture diminishes with distance as R^-1 due to the observation geometry. The EMP-EMW triggers an electric discharge in the upper atmosphere (lower ionosphere, ~70 km). Estimates of the resulting transient luminosities and of their correlation with the geomagnetic field are in progress.
Burst Statistics Using the Lag-Luminosity Relationship
NASA Technical Reports Server (NTRS)
Band, D. L.; Norris, J. P.; Bonnell, J. T.
2003-01-01
Using the lag-luminosity relation and various BATSE catalogs we create a large catalog of burst redshifts, peak luminosities and emitted energies. These catalogs permit us to evaluate the lag-luminosity relation, and to study the burst energy distribution. We find that this distribution can be described as a power law with an index of alpha = 1.76 +/- 0.05 (95% confidence), close to the alpha = 2 predicted by the original quasi-universal jet model.
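A minimal sketch of how a power-law index such as the quoted alpha = 1.76 can be estimated, using the standard continuous maximum-likelihood estimator on mock burst energies; the minimum energy and the mock sample below are illustrative placeholders, not the BATSE-derived catalog.

```python
import numpy as np

def powerlaw_index_mle(energies, e_min):
    """Maximum-likelihood index alpha for p(E) proportional to E^-alpha above e_min
    (the standard continuous power-law estimator)."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]
    return 1.0 + e.size / np.sum(np.log(e / e_min))

# Mock burst energies drawn from a true alpha = 1.76 power law (illustrative only).
rng = np.random.default_rng(7)
alpha_true, e_min = 1.76, 1e51
energies = e_min * (1.0 - rng.uniform(size=2000)) ** (-1.0 / (alpha_true - 1.0))
print(f"recovered alpha = {powerlaw_index_mle(energies, e_min):.2f}")
```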
Challenges in Finding AGNs in the Low Luminosity Regime
NASA Astrophysics Data System (ADS)
Satyapal, Shobita; Abel, Nick; Secrest, Nathan; Singh, Amrit; Ellison, Sara
2016-08-01
Low luminosity AGNs are an important component of the AGN population. They are often found in the lowest mass galaxies or galaxies that lack classical bulges, a demographic that places important constraints on models of supermassive black hole seed formation and merger-free models of AGN fueling. The detection of AGNs in this low luminosity regime is challenging both because star formation in the host galaxy can dominate the optical spectrum and gas and dust can obscure the central engine at both optical and X-ray wavelengths. Thus while mid-infrared color selection and X-ray observations at energies <10 keV are often powerful tools in uncovering optically unidentified AGNs at higher luminosities, this is not the case in the low luminosity regime. In this talk, I will review the effectiveness of uncovering AGNs in the low luminosity regime using multiwavelength investigations, with a focus on infrared spectroscopic signatures.
ERIC Educational Resources Information Center
Hill, Janet W.; And Others
1982-01-01
The study demonstrated the acquisition and generalization into community settings of a chronologically age-appropriate leisure skill with three severely and profoundly mentally retarded adolescents. Results indicated that participants could acquire and generalize use of an electronic pinball machine leisure skill effectively and learn to exhibit…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathew, D; Tanny, S; Parsai, E
2015-06-15
Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm^2 to 0.6×0.6 cm^2, normalized to values at 5×5 cm^2. Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm^2 fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm^2 demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific corrections may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference conditions.
Characterization of exoplanets from their formation. III. The statistics of planetary luminosities
NASA Astrophysics Data System (ADS)
Mordasini, C.; Marleau, G.-D.; Mollière, P.
2017-12-01
Context. This paper continues a series in which we predict the main observable characteristics of exoplanets based on their formation. In Paper I we described our global planet formation and evolution model that is based on the core accretion paradigm. In Paper II we studied the planetary mass-radius relationship with population syntheses. Aims: In this paper we present an extensive study of the statistics of planetary luminosities during both formation and evolution. Our results can be compared with individual directly imaged extrasolar (proto)planets and with statistical results from surveys. Methods: We calculated three populations of synthetic planets assuming different efficiencies of the accretional heating by gas and planetesimals during formation. We describe the temporal evolution of the planetary mass-luminosity relation. We investigate the relative importance of the shock and internal luminosity during formation, and predict a statistical version of the post-formation mass vs. entropy "tuning fork" diagram. Because the calculations now include deuterium burning we also update the planetary mass-radius relationship in time. Results: We find significant overlap between the high post-formation luminosities of planets forming with hot and cold gas accretion because of the core-mass effect. Variations in the individual formation histories of planets can still lead to a factor 5 to 20 spread in the post-formation luminosity at a given mass. However, if the gas accretional heating and planetesimal accretion rate during the runaway phase is unknown, the post-formation luminosity may exhibit a spread of as much as 2-3 orders of magnitude at a fixed mass. As a key result we predict a flat log-luminosity distribution for giant planets, and a steep increase towards lower luminosities due to the higher occurrence rate of low-mass (M ≲ 10-40 M⊕) planets. Future surveys may detect this upturn. Conclusions: Our results indicate that during formation an estimation of the planetary mass may be possible for cold gas accretion if the planetary gas accretion rate can be estimated. If it is unknown whether the planet still accretes gas, the spread in total luminosity (internal + accretional) at a given mass may be as large as two orders of magnitude, therefore inhibiting the mass estimation. Due to the core-mass effect even planets which underwent cold accretion can have large post-formation entropies and luminosities, such that alternative formation scenarios such as gravitational instabilities do not need to be invoked. Once the number of self-luminous exoplanets with known ages and luminosities increases, the resulting luminosity distributions may be compared with our predictions.
Mathematical defense method of networked servers with controlled remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2006-05-01
The networked server defense model is focused on reliability and availability in security respects. The (remote) backup servers are hooked up by VPN (Virtual Private Network) with a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with a main (unreliable) machine, a spare machine, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for backups; the information on the system is naturally delayed. An analog of the N-policy is applied to restrict the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture by using stochastic optimization techniques.
Aad, G.; Abajyan, T.; Abbott, B.; ...
2013-08-14
The luminosity calibration for the ATLAS detector at the LHC during pp collisions at √s = 7 TeV in 2010 and 2011 is presented. Evaluation of the luminosity scale is performed using several luminosity-sensitive detectors, and comparisons are made of the long-term stability and accuracy of this calibration applied to the pp collisions at √s = 7 TeV. A luminosity uncertainty of δL/L = ±3.5% is obtained for the 47 pb^-1 of data delivered to ATLAS in 2010, and an uncertainty of δL/L = ±1.8% is obtained for the 5.5 fb^-1 delivered in 2011.
Machinability of Al 6061 Deposited with Cold Spray Additive Manufacturing
NASA Astrophysics Data System (ADS)
Aldwell, Barry; Kelly, Elaine; Wall, Ronan; Amaldi, Andrea; O'Donnell, Garret E.; Lupoi, Rocco
2017-10-01
Additive manufacturing techniques such as cold spray are translating from research laboratories into more mainstream high-end production systems. Similar to many additive processes, finishing still depends on removal processes. This research presents the results from investigations into aspects of the machinability of aluminum 6061 tubes manufactured with cold spray. Through the analysis of cutting forces and observations on chip formation and surface morphology, the effect of cutting speed, feed rate, and heat treatment was quantified, for both cold-sprayed and bulk aluminum 6061. High-speed video of chip formation shows changes in chip form for varying material and heat treatment, which is supported by the force data and quantitative imaging of the machined surface. The results shown in this paper demonstrate that parameters involved in cold spray directly impact on machinability and therefore have implications for machining parameters and strategy.
NASA Astrophysics Data System (ADS)
Wang, L.; Norberg, P.; Gunawardhana, M. L. P.; Heinis, S.; Baldry, I. K.; Bland-Hawthorn, J.; Bourne, N.; Brough, S.; Brown, M. J. I.; Cluver, M. E.; Cooray, A.; da Cunha, E.; Driver, S. P.; Dunne, L.; Dye, S.; Eales, S.; Grootes, M. W.; Holwerda, B. W.; Hopkins, A. M.; Ibar, E.; Ivison, R.; Lacey, C.; Lara-Lopez, M. A.; Loveday, J.; Maddox, S. J.; Michałowski, M. J.; Oteo, I.; Owers, M. S.; Popescu, C. C.; Smith, D. J. B.; Taylor, E. N.; Tuffs, R. J.; van der Werf, P.
2016-09-01
We compare common star formation rate (SFR) indicators in the local Universe in the Galaxy and Mass Assembly (GAMA) equatorial fields (˜160 deg2), using ultraviolet (UV) photometry from GALEX, far-infrared and sub-millimetre (sub-mm) photometry from Herschel Astrophysical Terahertz Large Area Survey, and Hα spectroscopy from the GAMA survey. With a high-quality sample of 745 galaxies (median redshift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohmi, K.
In recent high luminosity colliders, the finite crossing angle scheme has become popular as a way to multiply the luminosity with multi-bunch or long-bunch operation. The success of KEKB showed that a finite crossing angle is no obstacle to achieving a beam-beam parameter of up to 0.05. The authors have studied the beam-beam interactions with and without crossing angle toward higher luminosity. They discuss how the crossing angle affects the beam-beam parameter and luminosity in the present KEK B factory (KEKB) using computer simulations.
Scuss u-Band Emission as a Star-Formation-Rate Indicator
NASA Astrophysics Data System (ADS)
Zhou, Zhimin; Zhou, Xu; Wu, Hong; Fan, Xiao-Hui; Fan, Zhou; Jiang, Zhao-Ji; Jing, Yi-Peng; Li, Cheng; Lesser, Michael; Jiang, Lin-Hua; Ma, Jun; Nie, Jun-Dan; Shen, Shi-Yin; Wang, Jia-Li; Wu, Zhen-Yu; Zhang, Tian-Meng; Zou, Hu
2017-01-01
We present and analyze the possibility of using optical u-band luminosities to estimate star-formation rates (SFRs) of galaxies based on the data from the South Galactic Cap u-band Sky Survey (SCUSS), which provides a deep u-band photometric survey covering about 5000 deg2 of the South Galactic Cap. Based on two samples of normal star-forming galaxies selected by the BPT diagram, we explore the correlations between u-band, Hα, and IR luminosities by combining SCUSS data with the Sloan Digital Sky Survey and Wide-field Infrared Survey Explorer (WISE). The attenuation-corrected u-band luminosities are tightly correlated with the Balmer decrement-corrected Hα luminosities with an rms scatter of ˜0.17 dex. The IR-corrected u luminosities are derived based on the correlations between the attenuation of u-band luminosities and WISE 12 (or 22) μm luminosities, and then calibrated with the Balmer-corrected Hα luminosities. The systematic residuals of these calibrations are tested against the physical properties over the ranges covered by our sample objects. We find that the best-fitting nonlinear relations are better than the linear ones and recommended to be applied in the measurement of SFRs. The systematic deviations mainly come from the pollution of old stellar population and the effect of dust extinction; therefore, a more detailed analysis is needed in future work.
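A toy sketch of the kind of hybrid estimator described above: correct the observed u-band luminosity with a WISE 12 μm term, then convert to an SFR with a power-law calibration. All coefficients below are placeholders, not the SCUSS fits.

```python
import numpy as np

def u_band_sfr(log_lu_obs, log_l12, a_ir=0.2, a=0.3, b=1.1):
    """Toy hybrid u-band SFR estimator: dust-correct the observed u-band
    luminosity with the 12 um luminosity, then apply a power-law calibration.
    a_ir, a and b are illustrative placeholders, not fitted coefficients."""
    lu_corr = 10**log_lu_obs + a_ir * 10**log_l12        # dust-corrected luminosity (erg/s)
    log_sfr = a + b * (np.log10(lu_corr) - 43.0)          # nonlinear (power-law) calibration
    return 10**log_sfr                                     # SFR in Msun/yr (toy units)

print(f"SFR ~ {u_band_sfr(43.2, 43.0):.2f} Msun/yr (toy numbers)")
```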
Machine vision for digital microfluidics
NASA Astrophysics Data System (ADS)
Shin, Yong-Jun; Lee, Jeong-Bong
2010-01-01
Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.
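A minimal sketch of a droplet-tracking step such a system might use, assuming OpenCV (version 4 or later) and a synthetic frame in place of a real camera image; the threshold and geometry are illustrative only.

```python
import cv2
import numpy as np

# Synthetic frame standing in for a camera image of a digital-microfluidics chip:
# a single bright droplet on a dark background (purely illustrative).
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(frame, (150, 120), 25, 255, thickness=-1)

# Basic machine-vision pipeline: threshold, extract contours, locate droplet centroid.
_, mask = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV >= 4 API

for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"droplet centroid at ({cx:.1f}, {cy:.1f}) px, area {m['m00']:.0f} px^2")
        # A feedback controller could now compare (cx, cy) with the target electrode
        # position and decide which electrode to actuate next.
```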
Using human brain activity to guide machine learning.
Fong, Ruth C; Scheirer, Walter J; Cox, David D
2018-03-29
Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.
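One simple way to realise the idea of infusing neural data into training is to weight each training example by a score derived from brain responses; the sketch below is a generic stand-in using scikit-learn sample weights and mock data, not the authors' pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Mock image features and labels; "neural_scores" stands in for a per-image quantity
# derived from fMRI responses (e.g. how strongly or reliably the brain responds to it).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)
neural_scores = rng.uniform(0.0, 1.0, size=len(y))

# "Neurally-weighted" training: each example's contribution to the loss is scaled
# by its neural score, so brain-salient examples dominate the fit.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, y, sample_weight=0.5 + neural_scores)

# At test time no neural data is needed - the learned weights already encode it.
print("training accuracy:", clf.score(X, y))
```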
Heat-Assisted Machining for Material Removal Improvement
NASA Astrophysics Data System (ADS)
Mohd Hadzley, A. B.; Hafiz, S. Muhammad; Azahar, W.; Izamshah, R.; Mohd Shahir, K.; Abu, A.
2015-09-01
Heat assisted machining (HAM) is a process where an intense heat source is used to locally soften the workpiece material before it is machined by a high speed cutting tool. In this paper, an HAM machine is developed by modification of a small CNC machine with the addition of a special jig to hold the heat source in front of the machine spindle. A preliminary experiment to evaluate the capability of the HAM machine to produce groove formation for a slotting process was conducted. A block of AISI D2 tool steel with 100 mm (width) × 100 mm (length) × 20 mm (height) size was cut by plasma heating with different settings of arc current, feed rate and air pressure. Their effect has been analyzed based on distance of cut (DOC). Experimental results demonstrated that the most significant factor contributing to the DOC is the arc current, followed by the feed rate and air pressure. HAM improves the slotting process of AISI D2 by increasing the distance of cut due to the initial cutting groove formed during thermal melting and the pressurized air from the heat source.
Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun
2016-04-01
Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance that exists in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance using a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure: First, a machine-learning-based data-cleaning procedure is applied to remove those marginal targets, which may potentially have a negative effect on training a model with a clear classification boundary, from the majority samples to relieve the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is further used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPIs predictors and should supplement existing PPI prediction methods.
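A generic stand-in for the clean-train-filter pipeline described above, assuming scikit-learn and mock imbalanced data; the cleaning rule and the probability thresholds are illustrative choices, not the paper's procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Imbalanced mock data: roughly 1 interaction-site residue per 9 non-site residues.
X, y = make_classification(n_samples=3000, n_features=30, weights=[0.9, 0.1], random_state=1)

# Step 1 (data cleaning): fit a preliminary model and drop "marginal" majority-class
# samples that sit near the decision boundary, to relieve the class imbalance.
prelim = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
p_pos = prelim.predict_proba(X)[:, 1]
keep = ~((y == 0) & (p_pos > 0.4))            # remove only ambiguous negatives
X_clean, y_clean = X[keep], y[keep]

# Step 2: train the final predictor on the cleaned, less imbalanced set.
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_clean, y_clean)

# Step 3 (post-filtering): require a stricter probability before calling a positive,
# which trims false positives at some cost in recall.
scores = model.predict_proba(X)[:, 1]
predictions = (scores > 0.7).astype(int)
print("positives kept after filtering:", predictions.sum())
```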
Small business initiative -- Surface inspection machine infrared (SIMIR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, G.L.; Beecroft, M.
This Cooperative Research and Development Agreement was a one year effort to make the surface inspection machine based on diffuse reflectance infrared spectroscopy (Surface Inspection Machine-Infrared, SIMIR), being developed by Surface Optics Corporation, perform to its highest potential as a practical, portable surface inspection machine. A secondary purpose was to evaluate applications that would serve both the private and the public sector. The design function of the SIMIR is to inspect sandblasted metal surfaces for cleanliness (stains). The system is also capable of evaluating graphite-resin systems for cure and heat damage, and of measuring the effects of moisture exposure on lithium hydride, corrosion on uranium metal, and the constituents of and contamination on wood, paper, and fabrics. Surface Optics Corporation supplied LMES-Y12 with a prototype SOC-400 that was evaluated by LMES-Y12 and rebuilt by Surface Optics to achieve the desired performance. LMES-Y12 subsequently evaluated the instrument against numerous applications, including determining part cleanliness at the Corpus Christi Army Depot, demonstrating the ability to detect plasticizers and other organic contaminants on metals to Pantex and LANL personnel, analyzing sandblasted metal contamination standards supplied by NASA-MSFC, and demonstrating to Lockheed Martin Tactical Aircraft, Marietta, GA, the analysis of the paint applied to the F-22 Fighter. The instrument also demonstrated the analysis of yarn, fabric, and finish on textiles.
Energy efficient quantum machines
NASA Astrophysics Data System (ADS)
Abah, Obinna; Lutz, Eric
2017-05-01
We investigate the performance of a quantum thermal machine operating in finite time based on shortcut-to-adiabaticity techniques. We compute efficiency and power for a paradigmatic harmonic quantum Otto engine by taking the energetic cost of the shortcut driving explicitly into account. We demonstrate that shortcut-to-adiabaticity machines outperform conventional ones for fast cycles. We further derive generic upper bounds on both quantities, valid for any heat engine cycle, using the notion of quantum speed limit for driven systems. We establish that these quantum bounds are tighter than those stemming from the second law of thermodynamics.
Oceanic eddy detection and lifetime forecast using machine learning methods
NASA Astrophysics Data System (ADS)
Ashkezari, Mohammad D.; Hill, Christopher N.; Follett, Christopher N.; Forget, Gaël.; Follows, Michael J.
2016-12-01
We report a novel altimetry-based machine learning approach for eddy identification and characterization. The machine learning models use daily maps of geostrophic velocity anomalies and are trained according to the phase angle between the zonal and meridional components at each grid point. The trained models are then used to identify the corresponding eddy phase patterns and to predict the lifetime of a detected eddy structure. The performance of the proposed method is examined at two dynamically different regions to demonstrate its robust behavior and region independency.
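A minimal sketch of the feature construction and classification step, assuming scikit-learn; the phase angle is computed from the velocity-anomaly components, while the patches and labels below are random placeholders for real training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def phase_angle(u_anom, v_anom):
    """Phase angle between zonal and meridional geostrophic velocity anomalies."""
    return np.arctan2(v_anom, u_anom)

# Mock training set: each sample is a small patch of phase angles flattened into a
# feature vector, labelled eddy / no-eddy (labels here are random placeholders).
rng = np.random.default_rng(3)
patches = rng.uniform(-np.pi, np.pi, size=(400, 11 * 11))
labels = rng.integers(0, 2, size=400)

clf = RandomForestClassifier(n_estimators=100, random_state=3).fit(patches, labels)

# Applying the trained model to a new patch of a daily velocity-anomaly map:
u = rng.normal(size=(11, 11))
v = rng.normal(size=(11, 11))
print("eddy probability:", clf.predict_proba(phase_angle(u, v).reshape(1, -1))[0, 1])
```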
Yang, Kamie K; Lewis, Ian H
2014-06-15
Various equipment malfunctions of anesthesia gas delivery systems have been previously reported. Our profession increasingly uses technology as a means to prevent these errors. We report a case of a near-total anesthesia circuit obstruction that went undetected before the induction of anesthesia despite the use of automated machine check technology. This case highlights that automated machine check modules can fail to detect severe equipment failure and demonstrates how, even in this era of expanding technology, manual checks still remain essential components of safe care.
Evolution of the X-ray luminosity in young HII galaxies
NASA Astrophysics Data System (ADS)
Rosa González, D.; Terlevich, E.; Jiménez Bailón, E.; Terlevich, R.; Ranalli, P.; Comastri, A.; Laird, E.; Nandra, K.
2009-10-01
In an effort to understand the correlation between X-ray emission and present star formation rate, we obtained XMM-Newton data to estimate the X-ray luminosities of a sample of actively star-forming HII galaxies. The obtained X-ray luminosities are compared to other well-known tracers of star formation activity such as the far-infrared and the ultraviolet luminosities. We also compare the obtained results with empirical laws from the literature and with recently published analyses applying synthesis models. We use the time delay between the formation of the stellar cluster and that of the first X-ray binaries in order to put limits on the age of a given stellar burst. We conclude that the generation of soft X-rays, like that of the Hα or infrared luminosities, is instantaneous. The relation between the observed radio and hard X-ray luminosities, on the other hand, points to the existence of a time delay between the formation of the stellar cluster and the explosion of the first massive stars and the consequent formation of supernova (SN) remnants and high-mass X-ray binaries, which give rise to the radio and hard X-ray fluxes, respectively. When comparing hard X-rays with a star formation indicator that traces the first million years of evolution (e.g. Hα luminosities), we found a deficit in the expected X-ray luminosity. This deficit is not found when the X-ray luminosities are compared with infrared luminosities, a star formation tracer that represents an average over the last 10^8 yr. The results support the hypothesis that hard X-rays originate in X-ray binaries which, like SN remnants, have a formation time delay of a few Myr after the star-forming burst. Partially based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
Progress with High-Field Superconducting Magnets for High-Energy Colliders
NASA Astrophysics Data System (ADS)
Apollinari, Giorgio; Prestemon, Soren; Zlobin, Alexander V.
2015-10-01
One of the possible next steps for high-energy physics research relies on a high-energy hadron or muon collider. The energy of a circular collider is limited by the strength of bending dipoles, and its maximum luminosity is determined by the strength of final focus quadrupoles. For this reason, the high-energy physics and accelerator communities have shown much interest in higher-field and higher-gradient superconducting accelerator magnets. The maximum field of NbTi magnets used in all present high-energy machines, including the LHC, is limited to ˜10 T at 1.9 K. Fields above 10 T became possible with the use of Nb3Sn superconductors. Nb3Sn accelerator magnets can provide operating fields up to ˜15 T and can significantly increase the coil temperature margin. Accelerator magnets with operating fields above 15 T require high-temperature superconductors. This review discusses the status and main results of Nb3Sn accelerator magnet research and development and work toward 20-T magnets.
The New BaBar Data Reconstruction Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceseracciu, Antonio
2003-06-02
The BaBar experiment is characterized by extremely high luminosity, a complex detector, and a huge data volume, with increasing requirements each year. To fulfill these requirements a new control system has been designed and developed for the offline data reconstruction system. The new control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is actively distributed, enforces the separation between different processing tiers by using different naming domains, and glues them together by dedicated brokers. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes this new control system, currently in use at SLAC and Padova on ∼450 CPUs organized in 12 farms.
The BaBar Data Reconstruction Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceseracciu, A
2005-04-20
The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ∼450 CPUs organized in 9 farms.
The BaBar Data Reconstruction Control System
NASA Astrophysics Data System (ADS)
Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.
2005-08-01
The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of object oriented (OO) design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ∼450 CPUs organized in nine farms.
Numerical simulations of a proposed hollow electron beam collimator for the LHC upgrade at CERN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Previtali, V.; Stancari, G.; Valishev, A.
2013-07-12
In the last years the LHC collimation system has been performing beyond expectations, providing the machine with a nearly perfect, efficient cleaning system [1]. Nonetheless, when trying to push the existing accelerators to - and over - their design limits, all the accelerator components are required to boost their performance. In particular, in view of the high luminosity frontier for the LHC, the increased intensity calls for a more efficient cleaning system. In this framework innovative collimation solutions are under evaluation [2]: one option is the usage of a hollow electron lens for beam halo cleaning. This work intends to study the applicability of a hollow electron lens for LHC collimation by evaluating the case of the existing Tevatron e-lens applied to the nominal LHC 7 TeV beam. New e-lens operation modes are proposed here to enhance the standard electron lens halo removal effect.
Longitudinal density monitor for the LHC
NASA Astrophysics Data System (ADS)
Jeff, A.; Andersen, M.; Boccardi, A.; Bozyigit, S.; Bravin, E.; Lefevre, T.; Rabiller, A.; Roncarolo, F.; Welsch, C. P.; Fisher, A. S.
2012-03-01
The longitudinal density monitor (LDM) is primarily intended for the measurement of the particle population in nominally empty rf buckets. These so-called satellite or ghost bunches can cause problems for machine protection as well as influencing the luminosity calibration of the LHC. The high dynamic range of the system allows measurement of ghost bunches with as little as 0.01% of the main bunch population at the same time as characterization of the main bunches. The LDM is a single-photon counting system using visible synchrotron light. The photon detector is a silicon avalanche photodiode operated in Geiger mode, which allows the longitudinal distribution of the LHC beams to be measured with a resolution of 90 ps. Results from the LDM are presented, including a proposed method for constructing a 3-dimensional beam density map by scanning the LDM sensor in the transverse plane. In addition, we present a scheme to improve the sensitivity of the system by using an optical switching technique.
Progress with high-field superconducting magnets for high-energy colliders
Apollinari, Giorgio; Prestemon, Soren; Zlobin, Alexander V.
2015-10-01
One of the possible next steps for high-energy physics research relies on a high-energy hadron or muon collider. The energy of a circular collider is limited by the strength of bending dipoles, and its maximum luminosity is determined by the strength of final focus quadrupoles. For this reason, the high-energy physics and accelerator communities have shown much interest in higher-field and higher-gradient superconducting accelerator magnets. The maximum field of NbTi magnets used in all present high-energy machines, including the LHC, is limited to ~10 T at 1.9 K. Fields above 10 T became possible with the use of Nb3Sn superconductors. Nb3Sn accelerator magnets can provide operating fields up to ~15 T and can significantly increase the coil temperature margin. Accelerator magnets with operating fields above 15 T require high-temperature superconductors. This review discusses the status and main results of Nb3Sn accelerator magnet research and development and work toward 20-T magnets.
NASA Astrophysics Data System (ADS)
Valentino, Gianluca; Baud, Guillaume; Bruce, Roderik; Gasior, Marek; Mereghetti, Alessio; Mirarchi, Daniele; Olexa, Jakub; Redaelli, Stefano; Salvachua, Belen; Valloni, Alessandra; Wenninger, Jorg
2017-08-01
During Long Shutdown 1, 18 Large Hadron Collider (LHC) collimators were replaced with a new design, in which beam position monitor (BPM) pick-up buttons are embedded in the collimator jaws. The BPMs provide a direct measurement of the beam orbit at the collimators, and therefore can be used to align the collimators more quickly than using the standard technique which relies on feedback from beam losses. Online orbit measurements also allow for reducing operational margins in the collimation hierarchy placed specifically to cater for unknown orbit drifts, therefore decreasing the β* and increasing the luminosity reach of the LHC. In this paper, the results from the commissioning of the embedded BPMs in the LHC are presented. The data acquisition and control software architectures are reviewed. A comparison with the standard alignment technique is provided, together with a fill-to-fill analysis of the measured orbit in different machine modes, which will also be used to determine suitable beam interlocks for a tighter collimation hierarchy.
Lamti, Hachem A; Gorce, Philippe; Ben Khelifa, Mohamed Moncef; Alimi, Adel M
2016-12-01
The goal of this study is to investigate the influence of mental fatigue on the event related potential P300 features (maximum peak, minimum amplitude, latency and period) during virtual wheelchair navigation. For this purpose, an experimental environment was set up based on customizable environmental parameters (luminosity, number of obstacles and obstacle velocities). A correlation study between P300 and fatigue ratings was conducted. Finally, the best correlated features were fed to three classification algorithms: MLP (Multi Layer Perceptron), Linear Discriminant Analysis and Support Vector Machine. The results showed that the maximum feature over visual and temporal regions, as well as the period feature over frontal, fronto-central and visual regions, were correlated with mental fatigue levels. On the other hand, the minimum amplitude and latency features did not show any correlation. Among classification techniques, MLP showed the best performance, although the differences between classification techniques are minimal. These findings can help in the design of suitable mental-fatigue-based wheelchair control.
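A rough sketch of the feature-extraction, correlation-screening and MLP-classification chain described above, assuming SciPy and scikit-learn; the feature definitions and epochs below are simplified mock stand-ins, not the study's actual preprocessing.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neural_network import MLPClassifier

def p300_features(epoch, fs=256):
    """Simple stand-ins for the four P300 features: maximum peak, minimum amplitude,
    latency of the maximum, and a peak-to-peak 'period' (all in volts / seconds)."""
    i_max, i_min = int(np.argmax(epoch)), int(np.argmin(epoch))
    return np.array([epoch[i_max], epoch[i_min], i_max / fs, abs(i_max - i_min) / fs])

# Mock single-channel epochs (~0.8 s at 256 Hz) and self-reported fatigue ratings (1-5).
rng = np.random.default_rng(5)
epochs = rng.normal(size=(120, 205))
fatigue = rng.integers(1, 6, size=120)

feats = np.array([p300_features(e) for e in epochs])

# Keep only features significantly correlated with fatigue, then classify the fatigue level.
keep = [j for j in range(feats.shape[1]) if pearsonr(feats[:, j], fatigue)[1] < 0.05]
keep = keep or list(range(feats.shape[1]))      # fall back to all features on mock data
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=5)
clf.fit(feats[:, keep], fatigue)
print("training accuracy:", clf.score(feats[:, keep], fatigue))
```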
NASA Astrophysics Data System (ADS)
Lin, Yen-Ting; Hsieh, Bau-Ching; Lin, Sheng-Chieh; Oguri, Masamune; Chen, Kai-Feng; Tanaka, Masayuki; Chiu, I.-Non; Huang, Song; Kodama, Tadayuki; Leauthaud, Alexie; More, Surhud; Nishizawa, Atsushi J.; Bundy, Kevin; Lin, Lihwai; Miyazaki, Satoshi
2017-12-01
The unprecedented depth and area surveyed by the Subaru Strategic Program with the Hyper Suprime-Cam (HSC-SSP) have enabled us to construct and publish the largest distant cluster sample out to z ∼ 1 to date. In this exploratory study of cluster galaxy evolution from z = 1 to z = 0.3, we investigate the stellar mass assembly history of brightest cluster galaxies (BCGs), the evolution of stellar mass and luminosity distributions, the stellar mass surface density profile, as well as the population of radio galaxies. Our analysis is the first high-redshift application of the top N richest cluster selection, which is shown to allow us to trace the cluster galaxy evolution faithfully. Over the 230 deg2 area of the current HSC-SSP footprint, selecting the top 100 clusters in each of the four redshift bins allows us to observe the buildup of galaxy population in descendants of clusters whose z ≈ 1 mass is about 2 × 10^14 M⊙. Our stellar mass is derived from a machine-learning algorithm, which is found to be unbiased and accurate with respect to the COSMOS data. We find very mild stellar mass growth in BCGs (about 35% between z = 1 and 0.3), and no evidence for evolution in both the total stellar mass–cluster mass correlation and the shape of the stellar mass surface density profile. We also present the first measurement of the radio luminosity distribution in clusters out to z ∼ 1, and show hints of changes in the dominant accretion mode powering the cluster radio galaxies at z ∼ 0.8.
Progress on the Development of the Nb 3Sn 11T Dipole for the High Luminosity Upgrade of LHC
Savary, Frederic; Bajko, Marta; Bordini, Bernardo; ...
2017-02-08
The high-luminosity large hadron collider (LHC) project at CERN entered into the production phase in October 2015 after the completion of the design study phase. In the meantime, the development of the 11 T dipole needed for the upgrade of the collimation system of the machine made significant progress, with very good performance of the first two-in-one magnet model of 2-m length made at CERN. The 11 T dipole, which is more powerful than the current main dipoles of LHC, can be made shorter with an equivalent integrated field. This will allow creating space for the installation of additional collimators in specific locations of the dispersion suppressor regions. Following tests carried out during heavy-ion runs of LHC at the end of 2015, and a more recent review of the project budget, the installation plan for the 11 T dipole was revised. Consequently, one 11 T dipole full assembly containing two 11 T dipoles of 5.5-m length will be installed on either side of interaction point 7. These two units shall be installed during long shutdown 2 in years 2019-2020. After a brief reminder of the design features of the magnet, this paper describes the current status of the development activities, in particular the short model programme and the construction of the first full scale prototype at CERN. Critical operations such as the reaction treatment and the coil impregnation are discussed, the quench performance test results of the two-in-one model are reviewed and, finally, the plan toward production for long shutdown 2 is described.
Readout Electronics for the ATLAS LAr Calorimeter at HL-LHC
NASA Astrophysics Data System (ADS)
Chen, Hucheng; ATLAS Liquid Argon Calorimeter Group
The ATLAS Liquid Argon (LAr) calorimeters are high precision, high sensitivity and high granularity detectors designed to provide precision measurements of electrons, photons, jets and missing transverse energy. ATLAS and its LAr calorimeters have been operating and collecting proton-proton collisions at LHC since 2009. The current front-end electronics of the LAr calorimeters need to be upgraded to sustain the higher radiation levels and data rates expected at the upgraded high luminosity LHC machine (HL-LHC), which will have 5 times more luminosity than the LHC in its ultimate configuration. The complexity of the present electronics and the obsolescence of some of the components of which it is made will not allow a partial replacement of the system. A completely new readout architecture scheme is under study and many components are being developed in various R&D programs of the LAr Calorimeter Group. The new front-end readout electronics will send data continuously at each bunch crossing through high speed radiation resistant optical links. The data will be processed in real time, with the possibility of implementing trigger algorithms for clusters and electron/photon identification at a higher granularity than that which is currently implemented. The new architecture will eliminate the intrinsic limitation presently existing on Level-1 trigger acceptance. This article is an overview of the R&D activities, which cover architectural design aspects of the new electronics as well as some detailed progress on the development of several ASICs needed, and preliminary studies with FPGAs to cover the backend functions including part of the Level-1 trigger requirements. A recently proposed staged upgrade with hybrid Tower Builder Board (TBB) is also described.
Luminosity function and jet structure of Gamma-Ray Burst
NASA Astrophysics Data System (ADS)
Pescalli, A.; Ghirlanda, G.; Salafia, O. S.; Ghisellini, G.; Nappo, F.; Salvaterra, R.
2015-02-01
The structure of gamma-ray burst (GRB) jets impacts on their prompt and afterglow emission properties. The jet of GRBs could be uniform, with constant energy per unit solid angle within the jet aperture, or it could be structured, namely with energy and velocity that depend on the angular distance from the axis of the jet. We try to get some insight about the still unknown structure of GRBs by studying their luminosity function. We show that low (10^46-10^48 erg s^-1) and high (i.e. with L ≥ 10^50 erg s^-1) luminosity GRBs can be described by a unique luminosity function, which is also consistent with current lower limits in the intermediate luminosity range (10^48-10^50 erg s^-1). We derive analytical expressions for the luminosity function of GRBs in uniform and structured jet models and compare them with the data. Uniform jets can reproduce the entire luminosity function with reasonable values of the free parameters. A structured jet can also fit adequately the current data, provided that the energy within the jet is relatively strongly structured, i.e. E ∝ θ^-k with k ≥ 4. The classical E ∝ θ^-2 structured jet model is excluded by the current data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richter, B.
In this paper I have reviewed the possibilities for new colliders that might be available in the 1990's. One or more new proton colliders should be available in the late 1990's based on plans of Europe, the US and the USSR. The two very high energy machines, LHC and SSC, are quite expensive, and their construction will be decided more by the politicians' view on the availability of resources than by the physicists' view of the need for new machines. Certainly something will be built, but the question is when. New electron colliders beyond LEP II could be available in the late 1990's as well. Most of the people who have looked at this problem believe that at a minimum three years of R&D are required before a proposal can be made, two years will be required to convince the authorities to go ahead, and five years will be required to build such a machine. Thus the earliest time a new electron collider at high energy could be available is around 1998. A strong international R&D program will be required to meet that schedule. In the field of B factories, PSI's proposal is the first serious step beyond the capabilities of CESR. There are other promising techniques but these need more R&D. The least R&D would be required for the asymmetric storage ring systems, while the most would be required for high luminosity linear colliders. For the next decade, high energy physics will be doing its work at the high energy frontier with Tevatron I and II, UNK, SLC, LEP I and II, and HERA. The opportunities for science presented by experiments at these facilities are very great, and it is to be hoped that the pressure for funding to construct the next generation facilities will not badly affect the operating budgets of the ones we now have or which will soon be turning on. 9 refs., 12 figs., 6 tabs.
Shaping-lathe headrig will stretch shrinking timber supply
J. Gengler; J.D. Saul
1975-01-01
The first commercial version of the shaping lathe headrig, designed to machine short hardwood or softwood logs into cants and flakes, was introduced to forest industry executives in September during a working demonstration at Stetson-Ross Machine Co., Seattle. Based on a concept provided by Dr. Peter Koch, chief wood scientist at the Southern Forest Experiment Station...
Comprehensive decision tree models in bioinformatics.
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
2012-01-01
Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes that are very common in bioinformatics.
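A rough analogue of the size-constrained tuning described above can be sketched with scikit-learn (which is not the environment used in the study): the tree is bounded purely by its dimensions, never by a performance measure, and accuracy is only checked afterwards. The dataset and size limits below are illustrative assumptions.

```python
# Minimal sketch: constrain a decision tree only by its dimensions, then evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Size bounds stand in for the interactive "visual" limits on the tree.
small_tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=8, random_state=0)
default_tree = DecisionTreeClassifier(random_state=0)  # unconstrained baseline

for name, clf in [("size-constrained", small_tree), ("default", default_tree)]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: depth={clf.get_depth()}, leaves={clf.get_n_leaves()}, acc={acc:.3f}")
```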
Comprehensive Decision Tree Models in Bioinformatics
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
2012-01-01
Purpose Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. Methods This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. Results The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. Conclusions The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes that are very common in bioinformatics. PMID:22479449
A Search for Water Maser Emission from Brown Dwarfs and Low-luminosity Young Stellar Objects
NASA Astrophysics Data System (ADS)
Gómez, José F.; Palau, Aina; Uscanga, Lucero; Manjarrez, Guillermo; Barrado, David
2017-05-01
We present a survey for water maser emission toward a sample of 44 low-luminosity young objects, comprising (proto-)brown dwarfs, first hydrostatic cores (FHCs), and other young stellar objects (YSOs) with bolometric luminosities lower than 0.4 L_⊙. Water maser emission is a good tracer of energetic processes, such as mass-loss and/or accretion, and is a useful tool to study these processes with very high angular resolution. This type of emission has been confirmed in objects with L_bol ≳ 1 L_⊙. Objects with lower luminosities also undergo mass-loss and accretion, and thus, are prospective sites of maser emission. Our sensitive single-dish observations provided a single detection when pointing toward the FHC L1448 IRS 2E. However, follow-up interferometric observations showed water maser emission associated with the nearby YSO L1448 IRS 2 (a Class 0 protostar of L_bol ≃ 3.6-5.3 L_⊙) and did not find any emission toward L1448 IRS 2E. The upper limits for water maser emission determined by our observations are one order of magnitude lower than expected from the correlation between water maser luminosities and bolometric luminosities found for YSOs. This suggests that this correlation does not hold at the lower end of the (sub)stellar mass spectrum. Possible reasons are that the slope of this correlation is steeper at L_bol ≤ 1 L_⊙ or that there is an absolute luminosity threshold below which water maser emission cannot be produced. Alternatively, if the correlation still stands at low luminosity, the detection rates of masers would be significantly lower than the values obtained in higher-luminosity Class 0 protostars.
Luminosity and surface brightness distribution of K-band galaxies from the UKIDSS Large Area Survey
NASA Astrophysics Data System (ADS)
Smith, Anthony J.; Loveday, Jon; Cross, Nicholas J. G.
2009-08-01
We present luminosity and surface-brightness distributions of 40111 galaxies with K-band photometry from the United Kingdom Infrared Telescope (UKIRT) Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS), Data Release 3 and optical photometry from Data Release 5 of the Sloan Digital Sky Survey (SDSS). Various features and limitations of the new UKIDSS data are examined, such as a problem affecting Petrosian magnitudes of extended sources. Selection limits in K- and r-band magnitude, K-band surface brightness and K-band radius are included explicitly in the 1/Vmax estimate of the space density and luminosity function. The bivariate brightness distribution in K-band absolute magnitude and surface brightness is presented and found to display a clear luminosity-surface brightness correlation that flattens at high luminosity and broadens at low luminosity, consistent with similar analyses at optical wavelengths. Best-fitting Schechter function parameters for the K-band luminosity function are found to be M* - 5 log h = -23.19 ± 0.04, α = -0.81 ± 0.04 and φ* = (0.0166 ± 0.0008) h^3 Mpc^-3, although the Schechter function provides a poor fit to the data at high and low luminosity, while the luminosity density in the K band is found to be j = (6.305 ± 0.067) × 10^8 L_⊙ h Mpc^-3. However, we caution that there are various known sources of incompleteness and uncertainty in our results. Using mass-to-light ratios determined from the optical colours, we estimate the stellar mass function, finding good agreement with previous results. Possible improvements are discussed that could be implemented when extending this analysis to the full LAS.
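For reference, the Schechter form quoted above is straightforward to evaluate. The sketch below uses the standard absolute-magnitude Schechter function with the best-fit K-band parameters given in the abstract (magnitudes in M - 5 log h, densities in h^3 Mpc^-3); the sample magnitudes are arbitrary evaluation points.

```python
import numpy as np

def schechter_mag(M, M_star, alpha, phi_star):
    """Schechter luminosity function per unit absolute magnitude:
    phi(M) = 0.4 ln(10) * phi* * x^(alpha+1) * exp(-x), with x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Best-fit K-band values quoted above.
M_star, alpha, phi_star = -23.19, -0.81, 0.0166
for M in (-25.0, -23.0, -21.0, -19.0):
    phi = schechter_mag(M, M_star, alpha, phi_star)
    print(f"M - 5 log h = {M:+.1f}: phi = {phi:.3e} h^3 Mpc^-3 mag^-1")
```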
The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes
NASA Technical Reports Server (NTRS)
Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark
2000-01-01
Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log10(L_[OII]/W) ≳ 35 [or radio luminosity log10(L_151/W Hz^-1 sr^-1) ≳ 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle θ_trans ≈ 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in θ_trans and/or a gradual increase in the fraction of lightly-reddened (0 ≲ A_V ≲ 5) lines-of-sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.
Machine Learning Estimates of Natural Product Conformational Energies
Rupp, Matthias; Bauer, Matthias R.; Wilcken, Rainer; Lange, Andreas; Reutlinger, Michael; Boeckler, Frank M.; Schneider, Gisbert
2014-01-01
Machine learning has been used for estimation of potential energy surfaces to speed up molecular dynamics simulations of small systems. We demonstrate that this approach is feasible for significantly larger, structurally complex molecules, taking the natural product Archazolid A, a potent inhibitor of vacuolar-type ATPase, from the myxobacterium Archangium gephyra as an example. Our model estimates energies of new conformations by exploiting information from previous calculations via Gaussian process regression. Predictive variance is used to assess whether a conformation is in the interpolation region, allowing a controlled trade-off between prediction accuracy and computational speed-up. For energies of relaxed conformations at the density functional level of theory (implicit solvent, DFT/BLYP-disp3/def2-TZVP), mean absolute errors of less than 1 kcal/mol were achieved. The study demonstrates that predictive machine learning models can be developed for structurally complex, pharmaceutically relevant compounds, potentially enabling considerable speed-ups in simulations of larger molecular structures. PMID:24453952
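A minimal sketch of the idea, using scikit-learn's Gaussian process regressor rather than the authors' actual descriptors or kernel: predictions are accepted only when the predictive standard deviation is below an assumed threshold, otherwise the conformation would be passed back to the reference DFT calculation. All data and the cutoff below are synthetic stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy stand-in for conformational descriptors -> relative energy (kcal/mol).
X_train = rng.uniform(-1, 1, size=(200, 6))
y_train = np.sin(X_train).sum(axis=1) + 0.05 * rng.normal(size=200)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                               normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform(-1.2, 1.2, size=(5, 6))
mean, std = gpr.predict(X_new, return_std=True)

threshold = 0.5  # assumed confidence cutoff on the predictive std
for m, s in zip(mean, std):
    action = "use ML estimate" if s < threshold else "fall back to DFT"
    print(f"E = {m:+.3f} +/- {s:.3f}  ->  {action}")
```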
Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.
Connecting CO intensity mapping to molecular gas and star formation in the epoch of galaxy assembly
Li, Tony Y.; Wechsler, Risa H.; Devaraj, Kiruthika; ...
2016-01-29
Intensity mapping, which images a single spectral line from unresolved galaxies across cosmological volumes, is a promising technique for probing the early universe. Here we present predictions for the intensity map and power spectrum of the CO(1–0) line from galaxies at z ∼ 2.4–2.8, based on a parameterized model for the galaxy–halo connection, and demonstrate the extent to which properties of high-redshift galaxies can be directly inferred from such observations. We find that our fiducial prediction should be detectable by a realistic experiment. Motivated by significant modeling uncertainties, we demonstrate the effect on the power spectrum of varying each parameter in our model. Using simulated observations, we infer constraints on our model parameter space with an MCMC procedure, and show corresponding constraints on the L_IR–L_CO relation and the CO luminosity function. These constraints would be complementary to current high-redshift galaxy observations, which can detect the brightest galaxies but not complete samples from the faint end of the luminosity function. Furthermore, by probing these populations in aggregate, CO intensity mapping could be a valuable tool for probing molecular gas and its relation to star formation in high-redshift galaxies.
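The halo-based parameterization of the paper is not reproduced here; as a minimal sketch of the MCMC step, a pure-NumPy Metropolis sampler can recover the amplitude of a toy power-spectrum model from mock data. The model form, noise level and proposal width are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock "observed" power spectrum: P(k) = A * k^-2 plus Gaussian noise (toy model).
k = np.logspace(-1, 0, 15)
A_true = 3.0
sigma = 0.3 * A_true * k ** -2
P_obs = A_true * k ** -2 + rng.normal(0.0, sigma)

def log_post(A):
    """Log-posterior with a flat positive prior on the amplitude A."""
    if A <= 0:
        return -np.inf
    model = A * k ** -2
    return -0.5 * np.sum(((P_obs - model) / sigma) ** 2)

# Simple Metropolis sampler for A.
chain, A = [], 1.0
lp = log_post(A)
for _ in range(20000):
    A_prop = A + rng.normal(0.0, 0.2)
    lp_prop = log_post(A_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        A, lp = A_prop, lp_prop
    chain.append(A)

chain = np.array(chain[5000:])  # discard burn-in
print(f"A = {chain.mean():.2f} +/- {chain.std():.2f} (true {A_true})")
```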
Galaxy luminosity function and Tully-Fisher relation: reconciled through rotation-curve studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaneo, Andrea; Salucci, Paolo; Papastergis, Emmanouil, E-mail: andrea.cattaneo@oamp.fr, E-mail: salucci@sissa.it, E-mail: papastergis@astro.cornell.edu
2014-03-10
The relation between galaxy luminosity L and halo virial velocity v {sub vir} required to fit the galaxy luminosity function differs from the observed Tully-Fisher relation between L and disk speed v {sub rot}. Because of this, the problem of reproducing the galaxy luminosity function and the Tully-Fisher relation simultaneously has plagued semianalytic models since their inception. Here we study the relation between v {sub rot} and v {sub vir} by fitting observational average rotation curves of disk galaxies binned in luminosity. We show that the v {sub rot}-v {sub vir} relation that we obtain in this way can fully account for this seeming inconsistency. Therefore, the reconciliation of the luminosity function with the Tully-Fisher relation rests on the complex dependence of v {sub rot} on v {sub vir}, which arises because the ratio of stellar mass to dark matter mass is a strong function of halo mass.
Aad, G.; Abbott, B.; Abdallah, J.; ...
2011-04-27
Measurements of luminosity obtained using the ATLAS detector during early running of the Large Hadron Collider (LHC) at √s = 7 TeV are presented. The luminosity is independently determined using several detectors and multiple algorithms, each having different acceptances, systematic uncertainties and sensitivity to background. The ratios of the luminosities obtained from these methods are monitored as a function of time and of μ, the average number of inelastic interactions per bunch crossing. Residual time- and μ-dependence between the methods is less than 2% for 0 < μ < 2.5. Absolute luminosity calibrations, performed using beam separation scans, have a common systematic uncertainty of ±11%, dominated by the measurement of the LHC beam currents. After calibration, the luminosities obtained from the different methods differ by at most ±2%. The visible cross sections measured using the beam scans are compared to predictions obtained with the PYTHIA and PHOJET event generators and the ATLAS detector simulation.
Padé Approximant and Minimax Rational Approximation in Standard Cosmology
NASA Astrophysics Data System (ADS)
Zaninetti, Lorenzo
2016-02-01
The luminosity distance in the standard cosmology as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift z = 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% at redshift z = 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is done adopting a statistical point of view.
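The Padé construction itself is not reproduced here, but the motivation for it can be illustrated by comparing the exact flat-ΛCDM luminosity distance (a numerical integral) with its low-order Taylor expansion, which degrades quickly with redshift. The cosmological parameter values below are assumed fiducial choices, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import quad

c, H0 = 299792.458, 70.0       # km/s and km/s/Mpc (assumed fiducial values)
Om, OL = 0.3, 0.7              # flat LCDM
q0 = 0.5 * Om - OL             # deceleration parameter

def d_L_exact(z):
    """Luminosity distance in Mpc from the exact comoving-distance integral."""
    E = lambda zp: np.sqrt(Om * (1.0 + zp) ** 3 + OL)
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1.0 + z) * (c / H0) * integral

def d_L_taylor(z):
    """Second-order kinematic Taylor expansion of d_L around z = 0."""
    return (c / H0) * (z + 0.5 * (1.0 - q0) * z ** 2)

for z in (0.1, 0.7, 2.0, 10.0):
    exact, approx = d_L_exact(z), d_L_taylor(z)
    print(f"z = {z:4.1f}: exact = {exact:9.1f} Mpc, Taylor = {approx:9.1f} Mpc, "
          f"rel. error = {abs(approx - exact) / exact:.1%}")
```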
Evolution of the Blue and Far-Infrared Galaxy Luminosity Functions
NASA Technical Reports Server (NTRS)
Lonsdale, Carol J.; Chokshi, Arati
1993-01-01
The space density of blue-selected galaxies at moderate redshifts is determined here directly by deriving the luminosity function. Evidence is found for density evolution for moderate luminosity galaxies at a rate of (1+z)^δ, with a best fit of δ = 4 ± 2, between the current epoch and z greater than about 0.1. At M_b < -22, evidence is found for about 0.5-1.5 mag of luminosity evolution in addition to the density evolution, corresponding to an evolutionary rate of about (1+z)^γ, with γ = 0.5-2.5, by a redshift of about 0.4. Assuming a steeper faint end slope of α = -1.3, similar to that observed in the Virgo cluster, could explain the data with a luminosity evolution rate of γ = 1-2, without need for any density evolution. Acceptable fits are found by comparing composite density and luminosity evolution models to faint IRAS 60 micron source counts, implying that the blue and far-IR evolutionary rates may be similar.
The influence of maintenance quality of hemodialysis machines on hemodialysis efficiency.
Azar, Ahmad Taher
2009-01-01
Several studies suggest that there is a correlation between the dose of dialysis and machine maintenance. However, in spite of the current practice, there are conflicting reports regarding the relationship between the dose of dialysis or patient outcome and machine maintenance. In order to evaluate the impact of hemodialysis machine maintenance on dialysis adequacy Kt/V and session performance, data were processed on 134 patients on 3-times-per-week dialysis regimens by dividing the patients into four groups and also dividing the hemodialysis machines into four groups according to their year of installation. The equilibrated dialysis dose eq Kt/V, urea reduction ratio (URR) and the overall equipment effectiveness (OEE) were calculated in each group to show the effect of hemodialysis machine efficiency on the overall session performance. The average working time per machine per month was 270 hours. The cumulative number of hours according to the year of installation was: 26,122 hours for machines installed in 1998; 21,596 hours for machines installed in 1999, 8362 hours for those installed in 2003 and 2486 hours for those installed in 2005. The mean time between failures (MTBF) was 1.8, 2.1, 4.2 and 6 months between failures for machines installed in 1999, 1998, 2003 and 2005, respectively. Statistical analysis demonstrated that the dialysis dose eq Kt/V and URR increased as the overall equipment effectiveness (OEE) increased with regular maintenance procedures. Maintenance has become one of the most expedient approaches to guarantee high machine dependability. The efficiency of the dialysis machine is relevant in assuring proper dialysis adequacy.
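The two maintenance metrics used above have standard definitions that are easy to compute; the sketch below uses illustrative numbers, not the study's data, for the failure count and the OEE factors.

```python
def mtbf(operating_hours, n_failures):
    """Mean time between failures, in the same unit as operating_hours."""
    return operating_hours / max(n_failures, 1)

def oee(availability, performance, quality):
    """Overall equipment effectiveness as the product of its three standard factors."""
    return availability * performance * quality

# Hypothetical machine group (numbers are illustrative only).
hours_per_month = 270
cumulative_hours = 21596
failures = 12

hours_between_failures = mtbf(cumulative_hours, failures)
print(f"MTBF = {hours_between_failures:.0f} h "
      f"(= {hours_between_failures / hours_per_month:.1f} machine-months)")
print(f"OEE  = {oee(availability=0.92, performance=0.88, quality=0.97):.2f}")
```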
Gamma-Ray Burst Host Galaxies Have "Normal" Luminosities.
Schaefer
2000-04-10
The galactic environment of gamma-ray bursts can provide good evidence about the nature of the progenitor system, with two old arguments implying that the burst host galaxies are significantly subluminous. New data and new analysis have now reversed this picture: (1) Even though the first two known host galaxies are indeed greatly subluminous, the next eight hosts have absolute magnitudes typical for a population of field galaxies. A detailed analysis of the 16 known hosts (10 with redshifts) shows them to be consistent with a Schechter luminosity function with R* = -21.8 ± 1.0, as expected for normal galaxies. (2) Bright bursts from the Interplanetary Network are typically 18 times brighter than the faint bursts with redshifts; however, the bright bursts do not have galaxies inside their error boxes to limits deeper than expected based on the luminosities for the two samples being identical. A new solution to this dilemma is that a broad burst luminosity function along with a burst number density varying as the star formation rate will require the average luminosity of the bright sample (> 6 × 10^58 photons s^-1 or > 1.7 × 10^52 erg s^-1) to be much greater than the average luminosity of the faint sample (approximately 10^58 photons s^-1 or approximately 3 × 10^51 erg s^-1). This places the bright bursts at distances for which host galaxies with a normal luminosity will not violate the observed limits. In conclusion, all current evidence points to gamma-ray burst host galaxies being normal in luminosity.
Construction in space - Toward a fresh definition of the man/machine relation
NASA Technical Reports Server (NTRS)
Watters, H. H.; Stokes, J. W.
1979-01-01
The EVA (extravehicular activity) project forming part of the space construction process is reviewed. The manual EVA construction, demonstrated by the crew of Skylab 3 by assembling a modest space structure in the form of the twin-pole sunshade, is considered, indicating that the experiment dispelled many doubts about man's ability to execute routine and contingency EVA operations. Tests demonstrating the feasibility of remote teleoperator rendezvous, station keeping, and docking operations, using hand controllers for direct input and television for feedback, are noted. Future plans for designing space construction machines are mentioned.
Yamaura, Hiroshi; Matsushita, Kojiro; Kato, Ryu; Yokoi, Hiroshi
2009-01-01
We have developed a hand rehabilitation system for patients suffering from paralysis or contracture. It consists of two components: a hand rehabilitation machine, which moves human finger joints with motors, and a data glove, which provides control of the movement of finger joints attached to the rehabilitation machine. The machine is based on the arm structure type of hand rehabilitation machine; a motor indirectly moves a finger joint via a closed four-link mechanism. We employ a wire-driven mechanism and develop a compact design that can control all three joints (i.e., PIP, DIP and MP) of a finger and that offers a wider range of joint motion than conventional systems. Furthermore, we demonstrate the hand rehabilitation process, in which the finger joints of the left hand attached to the machine are controlled by the finger joints of the right hand wearing the data glove.
Powering the programmed nanostructure and function of gold nanoparticles with catenated DNA machines
NASA Astrophysics Data System (ADS)
Elbaz, Johann; Cecconello, Alessandro; Fan, Zhiyuan; Govorov, Alexander O.; Willner, Itamar
2013-06-01
DNA nanotechnology is a rapidly developing research area in nanoscience. It includes the development of DNA machines, tailoring of DNA nanostructures, application of DNA nanostructures for computing, and more. Different DNA machines were reported in the past and DNA-guided assembly of nanoparticles represents an active research effort in DNA nanotechnology. Several DNA-dictated nanoparticle structures were reported, including a tetrahedron, a triangle or linear nanoengineered nanoparticle structures; however, the programmed, dynamic reversible switching of nanoparticle structures and, particularly, the dictated switchable functions emerging from the nanostructures, are missing elements in DNA nanotechnology. Here we introduce DNA catenane systems (interlocked DNA rings) as molecular DNA machines for the programmed, reversible and switchable arrangement of different-sized gold nanoparticles. We further demonstrate that the machine-powered gold nanoparticle structures reveal unique emerging switchable spectroscopic features, such as plasmonic coupling or surface-enhanced fluorescence.
An automatic taxonomy of galaxy morphology using unsupervised machine learning
NASA Astrophysics Data System (ADS)
Hocking, Alex; Geach, James E.; Sun, Yi; Davey, Neil
2018-01-01
We present an unsupervised machine learning technique that automatically segments and labels galaxies in astronomical imaging surveys using only pixel data. Distinct from previous unsupervised machine learning approaches used in astronomy we use no pre-selection or pre-filtering of target galaxy type to identify galaxies that are similar. We demonstrate the technique on the Hubble Space Telescope (HST) Frontier Fields. By training the algorithm using galaxies from one field (Abell 2744) and applying the result to another (MACS 0416.1-2403), we show how the algorithm can cleanly separate early and late type galaxies without any form of pre-directed training for what an 'early' or 'late' type galaxy is. We then apply the technique to the HST Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) fields, creating a catalogue of approximately 60 000 classifications. We show how the automatic classification groups galaxies of similar morphological (and photometric) type and make the classifications public via a catalogue, a visual catalogue and galaxy similarity search. We compare the CANDELS machine-based classifications to human-classifications from the Galaxy Zoo: CANDELS project. Although there is not a direct mapping between Galaxy Zoo and our hierarchical labelling, we demonstrate a good level of concordance between human and machine classifications. Finally, we show how the technique can be used to identify rarer objects and present lensed galaxy candidates from the CANDELS imaging.
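The published pipeline is not reproduced here; as a loose, minimal sketch of unsupervised morphological grouping from pixel data alone, one can cluster simple per-patch features with k-means. The patches, features and cluster count below are synthetic placeholders, not the technique or data used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for an imaging survey: flattened 8x8 pixel patches (random here;
# in practice cut from calibrated survey images around detected sources).
patches = rng.normal(size=(5000, 64))

# Simple per-patch features; real pipelines use richer multi-band / radial features.
features = np.column_stack([
    patches.mean(axis=1),
    patches.std(axis=1),
    np.abs(np.fft.rfft(patches, axis=1))[:, 1:4].mean(axis=1),
])

labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features))
print("patches per cluster:", np.bincount(labels))
```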
Efficient machining of ultra precise steel moulds with freeform surfaces
NASA Astrophysics Data System (ADS)
Bulla, B.; Robertson, D. J.; Dambon, O.; Klocke, F.
2013-09-01
Ultra precision diamond turning of hardened steel to produce optical quality surfaces can be realized by applying an ultrasonic assisted process. With this technology optical moulds used typically for injection moulding can be machined directly from steel without the requirement to overcoat the mould with a diamond machinable material such as Nickel Phosphor. This has both the advantage of increasing the mould tool lifetime and also reducing manufacture costs by dispensing with the relatively expensive plating process. This publication will present results we have obtained for generating free form moulds in hardened steel by means of ultrasonic assisted diamond turning with a vibration frequency of 80 kHz. To provide a baseline with which to characterize the system performance we perform plane cutting experiments on different steel alloys with different compositions. The baseline machining results provide information on the surface roughness and on tool wear caused during machining, and we relate these to material composition. Moving on to freeform surfaces, we present a theoretical background to define the machine program parameters for generating free forms by applying slow slide servo machining techniques. A solution for optimal part generation is introduced which forms the basis for the freeform machining experiments. The entire process chain, from the raw material through to ultra precision machining, is presented, with emphasis on maintaining surface alignment when moving a component from CNC pre-machining to final machining using ultrasonic assisted diamond turning. The free form moulds are qualified on the basis of surface roughness measurements and a form error map comparing the machined surface with the originally defined surface. These experiments demonstrate the feasibility of efficient free form machining applying ultrasonic assisted diamond turning of hardened steel.
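As a rough, hypothetical illustration of how slow slide servo machine program points can be generated for a freeform surface (the surface function, spiral pitch and sampling density below are invented, not taken from the publication), one can sample a constant-pitch spiral and command the Z axis from the surface sag at each (C, X) position.

```python
import numpy as np

def surface_sag(x_mm, y_mm):
    """Hypothetical freeform sag (mm): a gentle astigmatic shape for illustration."""
    return 0.002 * x_mm ** 2 + 0.001 * y_mm ** 2 + 0.0005 * x_mm * y_mm

def spiral_toolpath(r_max_mm=10.0, pitch_mm=0.01, pts_per_rev=360):
    """Sample (C, X, Z) points for a slow-slide-servo spiral from edge to centre."""
    n_rev = int(r_max_mm / pitch_mm)
    theta = np.linspace(0.0, 2.0 * np.pi * n_rev, n_rev * pts_per_rev)
    r = r_max_mm - pitch_mm * theta / (2.0 * np.pi)   # radius shrinks each revolution
    x, y = r * np.cos(theta), r * np.sin(theta)
    z = surface_sag(x, y)                              # Z follows the freeform surface
    return np.degrees(theta) % 360.0, r, z

C, X, Z = spiral_toolpath()
print(f"{len(C)} toolpath points, Z range {Z.min():.4f} .. {Z.max():.4f} mm")
```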
Detector Developments for the High Luminosity LHC Era (1/4)
Straessner, Arno
2018-04-27
Calorimetry and Muon Spectrometers - Part I : In the first part of the lecture series, the motivation for a high luminosity upgrade of the LHC will be quickly reviewed together with the challenges for the LHC detectors. In particular, the plans and ongoing research for new calorimeter detectors will be explained. The main issues in the high-luminosity era are an improved radiation tolerance, natural ageing of detector components and challenging trigger and physics requirements. The new technological solutions for calorimetry at a high-luminosity LHC will be reviewed.
The line continuum luminosity ratio in AGN: Or on the Baldwin Effect
NASA Technical Reports Server (NTRS)
Mushotzky, R.; Ferland, F. J.
1983-01-01
The luminosity dependence of the equivalent width of CIV in active galaxies, the "Baldwin" effect, is shown to be a consequence of a luminosity dependent ionization parameter. This law also agrees with the lack of a "Baldwin" effect in Ly alpha or other hydrogen lines. A fit to the available data gives a weak indication that the mean covering factor decreases with increasing luminosity, consistent with the inference from X-ray observations. The effects of continuum shape and density on various line ratios of interest are discussed.
Unified treatment of the luminosity distance in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jaiyul; Scaccabarozzi, Fulvio, E-mail: jyoo@physik.uzh.ch, E-mail: fulvio@physik.uzh.ch
Comparing the luminosity distance measurements to its theoretical predictions is one of the cornerstones in establishing the modern cosmology. However, as shown in Biern and Yoo, its theoretical predictions in literature are often plagued with infrared divergences and gauge-dependences. This trend calls into question the sanity of the methods used to derive the luminosity distance. Here we critically investigate four different methods—the geometric approach, the Sachs approach, the Jacobi mapping approach, and the geodesic light cone (GLC) approach to modeling the luminosity distance, and we present a unified treatment of such methods, facilitating the comparison among the methods and checking their sanity. All of these four methods, if exercised properly, can be used to reproduce the correct description of the luminosity distance.
Compton scattering of the microwave background by quasar-blown bubbles
NASA Technical Reports Server (NTRS)
Voit, G. Mark
1994-01-01
At least 10% of quasars drive rapid outflows from the central regions of their host galaxies. The mass and energy flow rates in these winds are difficult to measure, but their kinetic luminosities probably exceed 10^45 erg/s. This kind of outflow easily sunders the interstellar medium of the host and blows a bubble in the intergalactic medium. After the quasar shuts off, the hot bubble continues to shock intergalactic gas until its leading edge merges with the Hubble flow. The interior hot gas Compton scatters microwave background photons, potentially providing a way to detect these bubbles. Assuming that quasar kinetic luminosities scale with their blue luminosities, we integrate over the quasar luminosity function to find the total distortion (y) of the microwave background produced by the entire population of quasar wind bubbles. This calculation of y distortion is remarkably insensitive to the properties of the intergalactic medium (IGM), quasar lifetimes, and cosmological parameters. Current Cosmic Background Explorer (COBE) limits on y constrain the kinetic luminosities of quasars to be less than several times their bolometric radiative luminosities. Within this constraint, quasars can still expel enough kinetic luminosity to shock the entire IGM by z = 0, but cannot heat and ionize the IGM by z = 4 unless Ω_IGM ≪ 10^-2.
Effects of variability of X-ray binaries on the X-ray luminosity functions of Milky Way
NASA Astrophysics Data System (ADS)
Islam, Nazma; Paul, Biswajit
2016-08-01
The X-ray luminosity functions of galaxies have become a useful tool for population studies of the X-ray binaries in them. The availability of long term light-curves of X-ray binaries with the All Sky X-ray Monitors opens up the possibility of constructing X-ray luminosity functions that also include the intensity variation effects of the galactic X-ray binaries. We have constructed multiple realizations of the X-ray luminosity functions (XLFs) of the Milky Way, using the long term light-curves of sources obtained in the 2-10 keV energy band with the RXTE-ASM. The observed spread in the value of the slope of both the HMXB and LMXB XLFs is due to the inclusion of variable luminosities of X-ray binaries in the construction of these XLFs as well as finite sample effects. The XLF constructed for galactic HMXBs in the luminosity range 10^36-10^39 erg/s is described by a power-law model with a mean power-law index of -0.48 and a spread due to the variability of HMXBs of 0.19. The XLF constructed for galactic LMXBs in the luminosity range 10^36-10^39 erg/s has the shape of a cut-off power-law with a mean power-law index of -0.31 and a spread due to the variability of LMXBs of 0.07.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asai, K.; Matsuoka, M.; Mihara, T.
2013-08-20
We present the luminosity dwell-time distributions during the hard states of two low-mass X-ray binaries containing a neutron star (NS), 4U 1608-52 and Aql X-1, observed with MAXI/GSC. The luminosity distributions show a steep cutoff on the low-luminosity side at ≈ 1.0 × 10^36 erg s^-1 in both sources. The cutoff implies a rapid luminosity decrease in their outburst decay phases and this decrease can be interpreted as being due to the propeller effect. We estimate the surface magnetic field of 4U 1608-52 to be (0.5-1.6) × 10^8 G and Aql X-1 to be (0.6-1.9) × 10^8 G from the cutoff luminosity and apply the same propeller mechanism to the similar rapid luminosity decrease observed in the transient Z source, XTE J1701-462, with RXTE/ASM. Assuming that the spin period of the NS is on the order of milliseconds, the observed cutoff luminosity implies a surface magnetic field on the order of 10^9 G.
Accounting for the dispersion in the x ray properties of early-type galaxies
NASA Technical Reports Server (NTRS)
White, Raymond E., III; Sarazin, Craig L.
1990-01-01
The X-ray luminosities of early-type galaxies are correlated with their optical (e.g., blue) luminosities (L_X ∝ L_B^1.6), but the X-ray luminosities exhibit considerable scatter for a given optical luminosity L_B. This dispersion in X-ray luminosity is much greater than the dispersion of other properties of early-type galaxies (for a given L_B), such as luminosity scale-length, velocity dispersion, color, and metallicity. Here, researchers consider several possible sources for the dispersion in X-ray luminosity. Some of the scatter in X-ray luminosity may result from stellar population variations between galaxies with similar L_B. Since the X-ray emitting gas is from accumulated stellar mass loss, the L_X dispersion may be due to variations in integrated stellar mass loss rates. Another possible cause of the L_X dispersion may be variations in the amount of cool material in the galaxies; cool gas may act as an energy sink for the hot gas. Infrared emission may be used to trace such cool material, so researchers look for a correlation between the infrared emission and the X-ray emission of early-type galaxies at fixed L_B. Velocity dispersion variations between galaxies of similar L_B may also contribute to the L_X dispersion. The most likely a priori source of the dispersion in L_X is probably the varying amount of ram-pressure stripping in a range of galaxy environments. The hot gaseous halos of early-type galaxies can be stripped in encounters with other galaxies or with ambient cluster gas if the intracluster gas is sufficiently dense. Researchers find that the most likely cause of dispersion in the X-ray properties of early-type galaxies is probably the ram-pressure stripping of gaseous halos from galaxies. For a sample of 81 early-type galaxies with X-ray luminosities or upper limits derived from Einstein Observatory observations (CFT), researchers calculated the cumulative distribution of angular distances between the X-ray sample members and bright galaxies from the Revised Shapley-Ames catalog. Collectively, galaxies with low X-ray luminosities (for a given L_B) tend to be in denser environments than galaxies with higher X-ray luminosities.
ERIC Educational Resources Information Center
Hancock, Thomas E.; And Others
1995-01-01
In machine-mediated learning environments, there is a need for more reliable methods of calculating the probability that a learner's response will be correct in future trials. A combination of domain-independent response-state measures of cognition with two instructional variables for maximum predictive ability is demonstrated. (Author/LRW)
Code of Federal Regulations, 2010 CFR
2010-07-01
... compliance date specified in § 63.3883. For magnet wire coating operations you may, with approval, conduct a performance test of one representative magnet wire coating machine for each group of identical or very similar magnet wire coating machines. (2) You must develop and begin implementing the work practice plan required...
Exploiting the Dynamics of Soft Materials for Machine Learning
Hauser, Helmut; Li, Tao; Pfeifer, Rolf
2018-01-01
Soft materials are increasingly utilized for various purposes in many engineering applications. These materials have been shown to perform a number of functions that were previously difficult to implement using rigid materials. Here, we argue that the diverse dynamics generated by actuating soft materials can be effectively used for machine learning purposes. This is demonstrated using a soft silicone arm through a technique of multiplexing, which enables the rich transient dynamics of the soft materials to be fully exploited as a computational resource. The computational performance of the soft silicone arm is examined through two standard benchmark tasks. Results show that the soft arm compares well to or even outperforms conventional machine learning techniques under multiple conditions. We then demonstrate that this system can be used for the sensory time series prediction problem for the soft arm itself, which suggests its immediate applicability to a real-world machine learning problem. Our approach, on the one hand, represents a radical departure from traditional computational methods, whereas on the other hand, it fits nicely into a more general perspective of computation by way of exploiting the properties of physical materials in the real world. PMID:29708857
Exploiting the Dynamics of Soft Materials for Machine Learning.
Nakajima, Kohei; Hauser, Helmut; Li, Tao; Pfeifer, Rolf
2018-06-01
Soft materials are increasingly utilized for various purposes in many engineering applications. These materials have been shown to perform a number of functions that were previously difficult to implement using rigid materials. Here, we argue that the diverse dynamics generated by actuating soft materials can be effectively used for machine learning purposes. This is demonstrated using a soft silicone arm through a technique of multiplexing, which enables the rich transient dynamics of the soft materials to be fully exploited as a computational resource. The computational performance of the soft silicone arm is examined through two standard benchmark tasks. Results show that the soft arm compares well to or even outperforms conventional machine learning techniques under multiple conditions. We then demonstrate that this system can be used for the sensory time series prediction problem for the soft arm itself, which suggests its immediate applicability to a real-world machine learning problem. Our approach, on the one hand, represents a radical departure from traditional computational methods, whereas on the other hand, it fits nicely into a more general perspective of computation by way of exploiting the properties of physical materials in the real world.
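The physical soft arm cannot be reproduced in a few lines, but the readout stage described above (a simple linear map trained on rich transient states) can be sketched with a conventional echo-state reservoir standing in for the arm. All parameters and the input signal below are illustrative assumptions, not the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reservoir standing in for the soft arm's transient dynamics.
n_res, T = 200, 3000
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(0.0, 1.0, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

u = np.sin(0.2 * np.arange(T)) + 0.1 * rng.normal(size=T)   # input signal
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train a ridge-regression readout to predict the input one step ahead.
washout, lam = 200, 1e-3
X, y = states[washout:-1], u[washout + 1:]
w_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
pred = X @ w_out
print(f"one-step NRMSE = {np.sqrt(np.mean((pred - y) ** 2)) / np.std(y):.3f}")
```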
Fall classification by machine learning using mobile phones.
Albert, Mark V; Kording, Konrad; Herrmann, Megan; Jayaraman, Arun
2012-01-01
Fall prevention is a critical component of health care; falls are a common source of injury in the elderly and are associated with significant levels of mortality and morbidity. Automatically detecting falls can allow rapid response to potential emergencies; in addition, knowing the cause or manner of a fall can be beneficial for prevention studies or a more tailored emergency response. The purpose of this study is to demonstrate techniques to not only reliably detect a fall but also to automatically classify the type. We asked 15 subjects to simulate four different types of falls (left and right lateral, forward trips, and backward slips) while wearing mobile phones and previously validated, dedicated accelerometers. Nine subjects also wore the devices for ten days, to provide data for comparison with the simulated falls. We applied five machine learning classifiers to a large time-series feature set to detect falls. Support vector machines and regularized logistic regression were able to identify a fall with 98% accuracy and classify the type of fall with 99% accuracy. This work demonstrates how current machine learning approaches can simplify data collection for prevention in fall-related research as well as improve rapid response to potential injuries due to falls.
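A minimal sketch of the classification stage only (the feature extraction from accelerometer time series is omitted), assuming precomputed features and using a scikit-learn SVM; the synthetic data below merely stand in for the four simulated fall types.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for accelerometer-derived features
# (e.g. peak magnitude, energy, tilt change); four fall types as in the study.
n_per_class, n_features = 60, 20
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"fall-type classification accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```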
Obscured Active Galactic Nuclei in Luminous Infrared Galaxies
NASA Astrophysics Data System (ADS)
Shier, L. M.; Rieke, M. J.; Rieke, G. H.
1996-10-01
We examine the nature of the central power source in very luminous infrared galaxies. The infrared properties of the galaxies, including their far-infrared and 2.2 micron fluxes, CO indices, and Brackett line fluxes are compared to models of starburst stellar populations. Among seven galaxies we found two dominated by emission from young stars, two dominated by emission from an AGN, and three transition cases. Our results are consistent with evidence for active nuclei in the same galaxies at other wavelengths. Nuclear mass measurements obtained for the galaxies indicate an initial mass function biased toward high-mass stars in two galaxies. After demonstrating our methods in well-studied galaxies, we define complete samples of high luminosity and ultraluminous galaxies. We find that the space density of embedded and unembedded quasars in the local universe is similar for objects of similar luminosity. If quasars evolve from embedded sources to optically prominent objects, it appears that the lifetime of a quasar is no more than about 10^8 yr.
Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades
NASA Astrophysics Data System (ADS)
Kenyon, Scott J.; Bromley, Benjamin C.
2017-04-01
We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, r_max, the size of the largest object, changes with time as r_max ∝ t^-γ, with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^-(γ+1) and L_d ∝ t^-(γ/2+1). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.
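The quoted scalings are easy to evaluate directly; the short sketch below (with an assumed γ = 0.15 and an arbitrary reference time t0) shows how much faster M_d and L_d decline than r_max.

```python
import numpy as np

def debris_decay(t, t0=1.0, gamma=0.15):
    """Relative decline of r_max, M_d and L_d for t >= t0, following the quoted
    scalings: r_max ~ t^-gamma, M_d ~ t^-(gamma+1), L_d ~ t^-(gamma/2+1)."""
    s = t / t0
    return s ** -gamma, s ** -(gamma + 1.0), s ** -(gamma / 2.0 + 1.0)

for t in (1, 10, 100, 1000):   # time in units of the reference time t0
    r, m, l = debris_decay(t)
    print(f"t = {t:5d} t0:  r_max x {r:.3f},  M_d x {m:.5f},  L_d x {l:.5f}")
```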
Will there be energy frontier colliders after LHC?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiltsev, Vladimir
2016-09-15
High energy particle colliders have been at the forefront of particle physics for more than three decades. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). The future of the world-wide HEP community critically depends on the feasibility of possible post-LHC colliders. The concept of feasibility is complex and includes at least three factors: feasibility of energy, feasibility of luminosity and feasibility of cost. Here we overview all current options for post-LHC colliders from this perspective (ILC, CLIC, Muon Collider, plasma colliders, CEPC, FCC, HE-LHC) and discuss the major challenges and accelerator R&D required to demonstrate the feasibility of an energy frontier accelerator facility following the LHC. We conclude by taking a look at ultimate energy reach accelerators based on plasmas and crystals, and a discussion of the perspectives for the far future of accelerator-based particle physics.
Generalized image contrast enhancement technique based on Heinemann contrast discrimination model
NASA Astrophysics Data System (ADS)
Liu, Hong; Nodine, Calvin F.
1994-03-01
This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.
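The Heinemann-model gray-scale mappings themselves are not reproduced here, but the baseline the method is compared against, classical histogram equalization, is easy to sketch; the minimal NumPy version below uses an invented low-contrast test image.

```python
import numpy as np

def histogram_equalize(img_u8):
    """Classical 8-bit histogram equalization: map each gray level through the
    normalized cumulative histogram so output brightness is spread over [0, 255]."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    cdf = (cdf - cdf_min) / (cdf[-1] - cdf_min)
    lut = np.clip(np.round(255.0 * cdf), 0, 255).astype(np.uint8)
    return lut[img_u8]

# Example: a synthetic low-contrast image confined to gray levels 100-150.
rng = np.random.default_rng(0)
img = rng.integers(100, 151, size=(64, 64), dtype=np.uint8)
out = histogram_equalize(img)
print("input range:", img.min(), img.max(), " output range:", out.min(), out.max())
```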
Boosting Higgs pair production in the bb̄bb̄ final state with multivariate techniques.
Behr, J Katharina; Bortoletto, Daniela; Frost, James A; Hartland, Nathan P; Issever, Cigdem; Rojo, Juan
2016-01-01
The measurement of Higgs pair production will be a cornerstone of the LHC program in the coming years. Double Higgs production provides a crucial window upon the mechanism of electroweak symmetry breaking and has a unique sensitivity to the Higgs trilinear coupling. We study the feasibility of a measurement of Higgs pair production in the bb̄bb̄ final state at the LHC. Our analysis is based on a combination of traditional cut-based methods with state-of-the-art multivariate techniques. We account for all relevant backgrounds, including the contributions from light and charm jet mis-identification, which are ultimately comparable in size to the irreducible 4b QCD background. We demonstrate the robustness of our analysis strategy in a high pileup environment. For an integrated luminosity of [Formula: see text] ab^-1, a signal significance of [Formula: see text] is obtained, indicating that the bb̄bb̄ final state alone could allow for the observation of double Higgs production at the High Luminosity LHC.
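The multivariate stage can be illustrated, very loosely, with a boosted decision tree trained on toy kinematic features; the variables, sample sizes and separation below are invented stand-ins for the reconstructed di-Higgs observables used in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy stand-in for reconstructed kinematics (e.g. Higgs-candidate masses, pT,
# angular separations); signal and 4b-like background overlap substantially.
n = 20000
bkg = rng.normal(0.0, 1.0, size=(n, 8))
sig = rng.normal(0.4, 1.0, size=(n, 8))
X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_tr, y_tr)
score = bdt.predict_proba(X_te)[:, 1]
print(f"ROC AUC on toy signal/background = {roc_auc_score(y_te, score):.3f}")
```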
Illuminating gravitational waves: A concordant picture of photons from a neutron star merger.
Kasliwal, M M; Nakar, E; Singer, L P; Kaplan, D L; Cook, D O; Van Sistine, A; Lau, R M; Fremling, C; Gottlieb, O; Jencson, J E; Adams, S M; Feindt, U; Hotokezaka, K; Ghosh, S; Perley, D A; Yu, P-C; Piran, T; Allison, J R; Anupama, G C; Balasubramanian, A; Bannister, K W; Bally, J; Barnes, J; Barway, S; Bellm, E; Bhalerao, V; Bhattacharya, D; Blagorodnova, N; Bloom, J S; Brady, P R; Cannella, C; Chatterjee, D; Cenko, S B; Cobb, B E; Copperwheat, C; Corsi, A; De, K; Dobie, D; Emery, S W K; Evans, P A; Fox, O D; Frail, D A; Frohmaier, C; Goobar, A; Hallinan, G; Harrison, F; Helou, G; Hinderer, T; Ho, A Y Q; Horesh, A; Ip, W-H; Itoh, R; Kasen, D; Kim, H; Kuin, N P M; Kupfer, T; Lynch, C; Madsen, K; Mazzali, P A; Miller, A A; Mooley, K; Murphy, T; Ngeow, C-C; Nichols, D; Nissanke, S; Nugent, P; Ofek, E O; Qi, H; Quimby, R M; Rosswog, S; Rusu, F; Sadler, E M; Schmidt, P; Sollerman, J; Steele, I; Williamson, A R; Xu, Y; Yan, L; Yatsu, Y; Zhang, C; Zhao, W
2017-12-22
Merging neutron stars offer an excellent laboratory for simultaneously studying strong-field gravity and matter in extreme environments. We establish the physical association of an electromagnetic counterpart (EM170817) with gravitational waves (GW170817) detected from merging neutron stars. By synthesizing a panchromatic data set, we demonstrate that merging neutron stars are a long-sought production site forging heavy elements by r-process nucleosynthesis. The weak gamma rays seen in EM170817 are dissimilar to classical short gamma-ray bursts with ultrarelativistic jets. Instead, we suggest that breakout of a wide-angle, mildly relativistic cocoon engulfing the jet explains the low-luminosity gamma rays, the high-luminosity ultraviolet-optical-infrared, and the delayed radio and x-ray emission. We posit that all neutron star mergers may lead to a wide-angle cocoon breakout, sometimes accompanied by a successful jet and sometimes by a choked jet. Copyright © 2017, American Association for the Advancement of Science.
Hybrid accretion disks in active galactic nuclei. I - Structure and spectra
NASA Technical Reports Server (NTRS)
Wandel, Amri; Liang, Edison P.
1991-01-01
A unified treatment is presented of the two distinct states of vertically thin AGN accretion disks: a cool (about 10^6 K) optically thick solution, and a hot (about 10^9 K) optically thin solution. A generalized formalism and a new radiative cooling equation valid in both regimes are introduced. A new luminosity limit is found at which the hot and cool alpha solutions merge into a single solution of intermediate optical depth. Analytic solutions for the disk structure are given, and output spectra are computed numerically. This is used to demonstrate the prospect of fitting AGN broadband spectra containing both the UV bump as well as the hard X-ray and gamma-ray tail, using a single accretion disk model. Such models are found to make definite predictions about the observed spectrum, such as the relation between the hard X-ray spectral index, the UV-to-X-ray luminosity ratio, and a feature at about 1 MeV.
Managing virtual machines with Vac and Vcycle
NASA Astrophysics Data System (ADS)
McNab, A.; Love, P.; MacMahon, E.
2015-12-01
We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines to achieve the desired target shares. Both systems allow unused shares for one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
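As a toy illustration of the target-share idea described above (this is not Vac's or Vcycle's actual algorithm, API, or configuration), a scheduler can start the next virtual machine for the experiment furthest below its target, renormalizing targets over experiments that currently have work so idle shares are taken up.

```python
def next_experiment(targets, running, has_work):
    """Pick the experiment whose running share is furthest below its target,
    considering only experiments that currently have work to run."""
    candidates = [e for e in targets if has_work.get(e)]
    if not candidates:
        return None
    total = sum(running.values()) or 1
    # Renormalize targets over experiments with work, so idle shares are reused.
    norm = sum(targets[e] for e in candidates)
    return min(candidates, key=lambda e: running.get(e, 0) / total - targets[e] / norm)

# Hypothetical site state (names match the experiments above; numbers are invented).
targets = {"ATLAS": 0.4, "CMS": 0.3, "LHCb": 0.2, "GridPP": 0.1}
running = {"ATLAS": 10, "CMS": 5, "LHCb": 4, "GridPP": 1}
has_work = {"ATLAS": True, "CMS": True, "LHCb": False, "GridPP": True}
print("start next VM for:", next_experiment(targets, running, has_work))
```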
THE EXTREME SMALL SCALES: DO SATELLITE GALAXIES TRACE DARK MATTER?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, Douglas F.; Berlind, Andreas A.; McBride, Cameron K.
2012-04-10
We investigate the radial distribution of galaxies within their host dark matter halos as measured in the Sloan Digital Sky Survey by modeling their small-scale clustering. Specifically, we model the Jiang et al. measurements of the galaxy two-point correlation function down to very small projected separations (10 h{sup -1} kpc {<=} r {<=} 400 h{sup -1} kpc), in a wide range of luminosity threshold samples (absolute r-band magnitudes of -18 up to -23). We use a halo occupation distribution framework with free parameters that specify both the number and spatial distribution of galaxies within their host dark matter halos. We assume one galaxy resides in the halo center and additional galaxies are considered satellites that follow a radial density profile similar to the dark matter Navarro-Frenk-White (NFW) profile, except that the concentration and inner slope are allowed to vary. We find that in low luminosity samples (M{sub r} < -19.5 and lower), satellite galaxies have radial profiles that are consistent with NFW. M{sub r} < -20 and brighter satellite galaxies have radial profiles with significantly steeper inner slopes than NFW (we find inner logarithmic slopes ranging from -1.6 to -2.1, as opposed to -1 for NFW). We define a useful metric of concentration, M{sub 1/10}, which is the fraction of satellite galaxies (or mass) that are enclosed within one-tenth of the virial radius of a halo. We find that M{sub 1/10} for low-luminosity satellite galaxies agrees with NFW, whereas for luminous galaxies it is 2.5-4 times higher, demonstrating that these galaxies are substantially more centrally concentrated within their dark matter halos than the dark matter itself. Our results therefore suggest that the processes that govern the spatial distribution of galaxies, once they have merged into larger halos, must be luminosity dependent, such that luminous galaxies become poor tracers of the underlying dark matter.
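A minimal sketch of the M{sub 1/10} metric defined above: the fraction of satellites (or mass) inside one-tenth of the virial radius for a generalized NFW profile ρ(r) ∝ (r/r_s)^-γ (1 + r/r_s)^-(3-γ), obtained by numerically integrating ρ(r) r². The concentration value is an assumed example, not taken from the fits.

```python
import numpy as np
from scipy.integrate import quad

def m_one_tenth(concentration, gamma):
    """Fraction of satellites (or mass) inside r_vir/10 for a generalized NFW
    profile rho(x) ~ x^-gamma * (1 + x)^-(3 - gamma), with x = r / r_s."""
    rho = lambda x: x ** -gamma * (1.0 + x) ** -(3.0 - gamma)
    mass = lambda x_max: quad(lambda x: rho(x) * x ** 2, 0.0, x_max)[0]
    c = concentration                       # r_vir = c * r_s
    return mass(0.1 * c) / mass(c)

for gamma in (1.0, 1.6, 2.1):               # NFW inner slope vs. the steeper fitted slopes
    frac = m_one_tenth(concentration=10.0, gamma=gamma)
    print(f"gamma = {gamma:.1f}: M_1/10 = {frac:.3f}")
```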
NASA Astrophysics Data System (ADS)
Zhang, H.; Yu, W.
2015-08-01
Episodic jets are usually observed in the intermediate state of black hole transients during their X-ray outbursts. Here we report the discovery of a strong positive correlation between the peak radio power of the episodic jet P_jet and the corresponding peak X-ray luminosity L_x of the soft state (in Eddington units) in a complete sample of the outbursts of black hole transients observed during the RXTE era for which data are available, which follows the relation log P_jet = (2.2 ± 0.3) + (1.6 ± 0.2) × log L_x. The transient ultraluminous X-ray source in M31 and HLX-1 in ESO 243-49 fall on the relation if they contain a stellar-mass black hole and either a stellar-mass or an intermediate-mass black hole, respectively. In addition, a significant correlation is found between the peak power of the episodic jet and the rate of increase of the X-ray luminosity dLx/dt during the rising phase of those outbursts, following log P_jet = (2.0 ± 0.4) + (0.7 ± 0.2) × log dLx/dt. In GX 339-4 and H 1743-322, for which data for two outbursts are available, measurements of the peak radio power of the episodic jet and the X-ray peak luminosity (and its rate of change) show similar positive correlations between outbursts, which demonstrates the dominant role of accretion over black hole spin in generating episodic jet power. On the other hand, no significant difference is seen among the systems with different measured black hole spin in the current sample. This implies that the power of the episodic jet is strongly affected by non-stationary accretion instead of black hole spin, characterized primarily by the rate of change of the mass accretion rate.
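As a minimal illustration of fitting a relation of the quoted form, one can regress log P_jet on log L_x with a straight line; the data below are mock values generated around the published coefficients, not the actual measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock sample: log peak X-ray luminosity (Eddington units) and log peak radio power
# of the episodic jet, scattered around the quoted relation log P_jet = 2.2 + 1.6 log L_x.
log_Lx = rng.uniform(-2.0, 0.0, size=30)
log_Pjet = 2.2 + 1.6 * log_Lx + rng.normal(0.0, 0.3, size=30)

slope, intercept = np.polyfit(log_Lx, log_Pjet, deg=1)
print(f"fitted: log P_jet = ({intercept:.2f}) + ({slope:.2f}) x log L_x")
```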
Weak homology of elliptical galaxies.
NASA Astrophysics Data System (ADS)
Bertin, G.; Ciotti, L.; Del Principe, M.
2002-04-01
Studies of the Fundamental Plane of early-type galaxies, from small to intermediate redshifts, are generally carried out under the guiding principle that the Fundamental Plane reflects the existence of an underlying mass-luminosity relation for such galaxies, in a scenario where galaxies are homologous systems in dynamical equilibrium. In this paper we re-examine the question of whether a systematic non-homology could be partly responsible for the correlations that define the Fundamental Plane. We start by studying a small set of objects characterized by photometric profiles that have been pointed out to deviate significantly from the standard R^{1/4} law. For these objects we confirm that a generic R^{1/n} law, with n a free parameter, can provide superior fits (the best-fit value of n can be lower than 2.5 or higher than 10), better than those obtained with a pure R^{1/4} law, with an R^{1/4} + exponential model, or with other dynamically justified self-consistent models. Therefore, strictly speaking, elliptical galaxies should not be considered homologous dynamical systems. Still, a case for weak homology, useful for the interpretation of the Fundamental Plane, could be made if the best-fit parameter n, as often reported, correlates with galaxy luminosity L, provided the underlying dynamical structure also follows a systematic trend with luminosity. We demonstrate that this statement may be true even in the presence of significant scatter in the correlation n(L). Preliminary indications provided by a set of "data points" associated with a sample of 14 galaxies suggest that neither strict homology nor the constant stellar mass-to-light solution is a satisfactory explanation of the observed Fundamental Plane. These conclusions await further extensions and clarifications, because the class of low-luminosity early-type galaxies, which contribute significantly to the Fundamental Plane, falls outside the simple dynamical framework considered here, and because dynamical considerations should be supplemented with other important constraints derived from the evolution of stellar populations.
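For reference, the R^{1/n} law mentioned here is the Sersic profile, I(R) = I_e exp{-b_n [(R/R_e)^{1/n} - 1]}. The sketch below fits n to a synthetic surface-brightness profile; the data, the noise level, and the simple approximation b_n ≈ 2n - 1/3 are assumptions for illustration, not the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(R, Ie, Re, n):
    """Sersic surface-brightness law; n = 4 gives the de Vaucouleurs R^(1/4) law."""
    b_n = 2.0 * n - 1.0 / 3.0              # common approximation, adequate for a sketch
    return Ie * np.exp(-b_n * ((R / Re) ** (1.0 / n) - 1.0))

# Purely synthetic profile, for illustration only
rng = np.random.default_rng(0)
R = np.linspace(0.5, 30.0, 40)             # radius, arbitrary units
I_obs = sersic(R, Ie=100.0, Re=5.0, n=6.0) * (1.0 + 0.02 * rng.standard_normal(R.size))

popt, _ = curve_fit(sersic, R, I_obs, p0=(50.0, 3.0, 4.0))
print("best-fit Sersic index n =", popt[2])
```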
Electron-capture supernovae of super-asymptotic giant branch stars and the Crab supernova 1054
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomoto, Ken'ichi; Tominaga, Nozomu; Blinnikov, Sergei I.
2014-05-02
An electron-capture supernova (ECSN) is a core-collapse supernova explosion of a super-asymptotic giant branch (SAGB) star with a main-sequence mass M_MS ∼ 7-9.5 M⊙. The explosion takes place via core bounce and subsequent neutrino heating and is a unique example successfully produced by first-principles simulations. This allows us to derive the first self-consistent multicolor light curves of a core-collapse supernova. Adopting the explosion properties derived by the first-principles simulation, i.e., the low explosion energy of 1.5 × 10^50 erg and the small ^56Ni mass of 2.5 × 10^-3 M⊙, we perform a multigroup radiation hydrodynamics calculation of ECSNe and present multicolor light curves of ECSNe of SAGB stars with various envelope masses and hydrogen abundances. We demonstrate that the shock breakout has a peak luminosity of L ∼ 2 × 10^44 erg s^-1 and can evaporate circumstellar dust out to R ∼ 10^17 cm for the case of carbon dust, that the plateau luminosity and plateau duration of ECSNe are L ∼ 10^42 erg s^-1 and t ∼ 60-100 days, respectively, and that the plateau is followed by a tail with a luminosity drop of ∼4 mag. The ECSN shows a bright and short plateau that is as bright as those of typical Type II plateau supernovae, and a faint tail that might be influenced by the spin-down luminosity of a newborn pulsar. Furthermore, the theoretical models are compared with the ECSN candidates SN 1054 and SN 2008S. We find that SN 1054 shares the characteristics of the ECSNe. For SN 2008S, we find that its faint plateau requires an ECSN model with a significantly low explosion energy of E ∼ 10^48 erg.
Hettige, Nuwan C; Nguyen, Thai Binh; Yuan, Chen; Rajakulendran, Thanara; Baddour, Jermeen; Bhagwat, Nikhil; Bani-Fatemi, Ali; Voineskos, Aristotle N; Mallar Chakravarty, M; De Luca, Vincenzo
2017-07-01
Suicide is a major concern for those afflicted by schizophrenia. Identifying patients at the highest risk for future suicide attempts remains a complex problem for psychiatric interventions. Machine learning models allow for the integration of many risk factors in order to build an algorithm that predicts which patients are likely to attempt suicide. Currently it is unclear how to integrate previously identified risk factors into a clinically relevant predictive tool to estimate the probability that a patient with schizophrenia will attempt suicide. We conducted a cross-sectional assessment on a sample of 345 participants diagnosed with schizophrenia spectrum disorders. Suicide attempters and non-attempters were clearly identified using the Columbia Suicide Severity Rating Scale (C-SSRS) and the Beck Suicide Ideation Scale (BSS). We developed four classification algorithms using regularized regression, random forest, elastic net and support vector machine models, with sociocultural and clinical variables as features to train the models. All classification models performed similarly in identifying suicide attempters and non-attempters. Our regularized logistic regression model demonstrated an accuracy of 67% and an area under the curve (AUC) of 0.71, while the random forest model demonstrated 66% accuracy and an AUC of 0.67. The support vector classifier (SVC) model demonstrated an accuracy of 67% and an AUC of 0.70, and the elastic net model demonstrated an accuracy of 65% and an AUC of 0.71. Machine learning algorithms offer a relatively successful method for incorporating many clinical features to predict individuals at risk for future suicide attempts. Increased performance of these models using clinically relevant variables offers the potential to facilitate early treatment and intervention to prevent future suicide attempts. Copyright © 2017 Elsevier Inc. All rights reserved.
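A minimal sketch of the kind of pipeline described above (regularized logistic regression scored by accuracy and ROC AUC), using scikit-learn. The synthetic feature matrix stands in for the study's sociocultural and clinical variables, which are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(345, 10))                 # 345 participants, 10 placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=345) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```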
Simulations of dust in interacting galaxies
NASA Astrophysics Data System (ADS)
Jonsson, Patrik
This dissertation studies the effects of dust in N-body simulations of interacting galaxies. A new Monte Carlo radiative-transfer code, Sunrise, is used in conjunction with hydrodynamic simulations. Results from radiative-transfer calculations in over 20 SPH simulations of disk-galaxy major mergers (Cox, 2004) are presented. Dust has a profound effect on the appearance of these simulations. At peak luminosities, 90% of the bolometric luminosity is absorbed by dust. The dust obscuration increases with luminosity in such a way that the brightness at UV/visual wavelengths remains roughly constant. A general relationship between the fraction of energy absorbed and the ratio of bolometric luminosity to baryonic mass is found to hold in galaxies with metallicities >0.7 Z⊙ over a factor of 50 in mass. The accuracy to which the simulations describe observed starburst galaxies is evaluated by comparing them to observations by Meurer et al. (1999) and Heckman et al. (1998). The simulations are found to follow a relation similar to the IRX-β relation found by Meurer et al. (1999) when similar-luminosity objects are considered. The highest-luminosity simulated galaxies depart from this relation and occupy the region where local LIRGs/ULIRGs are found. Comparing to the Heckman et al. (1998) sample, the simulations are found to obey the same relations between UV luminosity, UV color, IR luminosity, absolute blue magnitude and metallicity as the observations. This agreement is contingent on the presence of a realistic mass-metallicity relation and Milky-Way-like dust; SMC-like dust results in far too red a UV continuum slope. On the whole, the agreement between the simulated and observed galaxies is impressive considering that the simulations have not been tuned to agree with the observations, and we conclude that the simulations provide a realistic replication of the real universe. The simulations are used to study the performance of star-formation indicators in the presence of dust. The far-infrared luminosity is found to be reliable. In contrast, the Hα and far-ultraviolet luminosities suffer severely from dust attenuation, and dust corrections can only partially remedy the situation.
Sample preparation of metal alloys by electric discharge machining
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1976-01-01
Electric discharge machining was investigated as a noncontaminating method of comminuting alloys for subsequent chemical analysis. Particulate dispersions in water were produced from bulk alloys at a rate of about 5 mg/min by using a commercially available machining instrument. The utility of this approach was demonstrated by results obtained when acidified dispersions were substituted for true acid solutions in an established spectrochemical method. The analysis results were not significantly different for the two sample forms. Particle size measurements and preliminary results from other spectrochemical methods which require direct aspiration of liquid into flame or plasma sources are reported.
NASA Astrophysics Data System (ADS)
Shivaei, Irene; Reddy, Naveen A.; Shapley, Alice E.; Siana, Brian; Kriek, Mariska; Mobasher, Bahram; Coil, Alison L.; Freeman, William R.; Sanders, Ryan L.; Price, Sedona H.; Azadi, Mojegan; Zick, Tom
2017-03-01
We present results on the variation of 7.7 μm polycyclic aromatic hydrocarbon (PAH) emission in galaxies spanning a wide range in metallicity at z ∼ 2. For this analysis, we use rest-frame optical spectra of 476 galaxies at 1.37 ≤ z ≤ 2.61 from the MOSFIRE Deep Evolution Field (MOSDEF) survey to infer metallicities and ionization states. Spitzer/MIPS 24 μm and Herschel/PACS 100 and 160 μm observations are used to derive rest-frame 7.7 μm luminosities (L_7.7) and total IR luminosities (L_IR), respectively. We find significant trends between the ratio of L_7.7 to L_IR (and to dust-corrected star formation rate [SFR]) and both metallicity and the [O III]/[O II] (O32) emission-line ratio. The latter is an empirical proxy for the ionization parameter. These trends indicate a paucity of PAH emission in low-metallicity environments with harder and more intense radiation fields. Additionally, L_7.7/L_IR is significantly lower in the youngest quartile of our sample (ages of ≲500 Myr) compared to older galaxies, which may be a result of the delayed production of PAHs by AGB stars. The relative strength of L_7.7 to L_IR is also lower by a factor of ∼2 for galaxies with masses M* < 10^10 M⊙, compared to the more massive ones. We demonstrate that commonly used conversions of L_7.7 (or 24 μm flux density, f_24) to L_IR underestimate the IR luminosity by more than a factor of 2 at M* ∼ 10^9.6-10.0 M⊙. We adopt a mass-dependent conversion of L_7.7 to L_IR, with L_7.7/L_IR = 0.09 and 0.22 for M* ≤ 10^10 and > 10^10 M⊙, respectively. Based on the new scaling, the SFR-M* relation has a shallower slope than previously derived. Our results also suggest a higher IR luminosity density at z ∼ 2 than previously measured, corresponding to a ∼30% increase in the SFR density.
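The mass-dependent conversion quoted at the end of the abstract can be written as a two-branch function. The sketch below applies the stated ratios (0.09 and 0.22); the input units and example values are assumptions for illustration.

```python
import numpy as np

def lir_from_l77(l77, log_mstar):
    """L_IR estimated from L_7.7 using the mass-dependent ratios quoted above:
    L_7.7 / L_IR = 0.09 for log10(M*/Msun) <= 10 and 0.22 above that."""
    ratio = np.where(np.asarray(log_mstar) <= 10.0, 0.09, 0.22)
    return np.asarray(l77, dtype=float) / ratio

# The same 7.7 micron luminosity implies a ~2.4x larger L_IR for a
# low-mass galaxy than for a high-mass one (0.22 / 0.09 ~ 2.4).
print(lir_from_l77(1.0e10, 9.6) / lir_from_l77(1.0e10, 10.5))
```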
A 16 deg² survey of emission-line galaxies at z < 1.5 in HSC-SSP Public Data Release 1
NASA Astrophysics Data System (ADS)
Hayashi, Masao; Tanaka, Masayuki; Shimakawa, Rhythm; Furusawa, Hisanori; Momose, Rieko; Koyama, Yusei; Silverman, John D.; Kodama, Tadayuki; Komiyama, Yutaka; Leauthaud, Alexie; Lin, Yen-Ting; Miyazaki, Satoshi; Nagao, Tohru; Nishizawa, Atsushi J.; Ouchi, Masami; Shibuya, Takatoshi; Tadaki, Ken-ichi; Yabe, Kiyoto
2018-01-01
We present initial results from the Subaru Strategic Program (SSP) with Hyper Suprime-Cam (HSC) on a comprehensive survey of emission-line galaxies at z < 1.5 based on narrowband imaging. The first Public Data Release provides us with data from two narrowband filters, NB816 and NB921, over 5.7 deg² and 16.2 deg², respectively. The 5σ limiting magnitudes are 25.2 mag (UltraDeep layer, 1.4 deg²) and 24.8 mag (Deep layer, 4.3 deg²) for NB816, and 25.1 mag (UltraDeep, 2.9 deg²) and 24.6-24.8 mag (Deep, 13.3 deg²) for NB921. The wide-field imaging allows us to construct unprecedentedly large samples of 8054 Hα emitters at z ≈ 0.25 and 0.40, 8656 [O III] emitters at z ≈ 0.63 and 0.84, and 16877 [O II] emitters at z ≈ 1.19 and 1.47. We map the cosmic web on scales out to about 50 comoving Mpc, including galaxy clusters, identified by red-sequence galaxies, located at the intersections of filamentary structures of star-forming galaxies. The luminosity functions of emission-line galaxies are measured with precision and are consistent with published studies. The wide-field coverage of the data enables us to measure the luminosity functions up to brighter luminosities than previous studies. The comparison of the luminosity functions between the different HSC-SSP fields suggests that a survey volume of >5 × 10^5 Mpc³ is essential to overcome cosmic variance. Since the current data have not reached the full depth expected for the HSC-SSP, the color cut in i - NB816 or z - NB921 induces a bias towards star-forming galaxies with large equivalent widths, primarily seen in the stellar mass functions for the Hα emitters at z ≈ 0.25-0.40. Even so, the emission-line galaxies clearly cover a wide range of luminosity, stellar mass, and environment, thus demonstrating the usefulness of the narrowband data from the HSC-SSP for investigating star-forming galaxies at z < 1.5.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
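A rough sketch of the composite-kernel idea behind QWMK-ELM: a weighted sum of base kernels plugged into the standard kernel-ELM solution. The weights, kernel parameters and regularization constant below are fixed guesses (in the paper they are tuned by QPSO), and the toy data are not an e-nose dataset.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def polynomial_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def composite_kernel(X, Y, weights=(0.7, 0.3), sigma=1.0, degree=2):
    """Weighted sum of two base kernels (Gaussian + polynomial)."""
    w1, w2 = weights
    return w1 * gaussian_kernel(X, Y, sigma) + w2 * polynomial_kernel(X, Y, degree)

def kelm_train(X, y, C=10.0, **kw):
    """Kernel ELM: output weights beta = (K + I/C)^-1 y (ridge-style solution)."""
    K = composite_kernel(X, X, **kw)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_new, X_train, beta, **kw):
    return composite_kernel(X_new, X_train, **kw) @ beta

# Toy usage with synthetic features standing in for e-nose measurements
rng = np.random.default_rng(1)
X_tr = rng.normal(size=(50, 6))
y_tr = (X_tr[:, 0] > 0).astype(float)
beta = kelm_train(X_tr, y_tr)
print(kelm_predict(X_tr[:3], X_tr, beta))
```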
ERIC Educational Resources Information Center
Adney, Kenneth J.
1991-01-01
An activity in which students compare the sun's brightness with that of a light bulb of known luminosity (in watts) to determine the luminosity of the sun is presented. As an extension, the luminosity value that the student obtains for the sun can also be used to estimate the sun's surface temperature. (KR)
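The activity rests on the inverse-square law: if a bulb of known wattage at distance d_bulb appears as bright as the Sun at distance d_sun, then L_sun = L_bulb (d_sun/d_bulb)^2, and the Stefan-Boltzmann law then yields a temperature estimate. The numbers below (bulb wattage and matching distance) are invented classroom-style values, not data from the article.

```python
import math

L_bulb = 100.0          # watts (assumed bulb rating)
d_bulb = 0.08           # metres at which the bulb appears as bright as the Sun (assumed)
d_sun = 1.496e11        # one astronomical unit in metres

# Inverse-square law: equal apparent brightness implies L_sun = L_bulb * (d_sun/d_bulb)^2
L_sun = L_bulb * (d_sun / d_bulb) ** 2
print(f"estimated solar luminosity: {L_sun:.2e} W")     # accepted value ~3.8e26 W

# Extension: Stefan-Boltzmann law, T = (L / (4 pi R^2 sigma))^(1/4)
R_sun = 6.96e8          # solar radius in metres
sigma = 5.67e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
T = (L_sun / (4.0 * math.pi * R_sun ** 2 * sigma)) ** 0.25
print(f"estimated surface temperature: {T:.0f} K")      # accepted value ~5800 K
```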
Gamma-Ray Bursts and Cosmology
NASA Technical Reports Server (NTRS)
Norris, Jay P.
2003-01-01
The unrivalled, extreme luminosities of gamma-ray bursts (GRBs) make them the favored beacons for sampling the high-redshift Universe. To employ GRBs to study the cosmic terrain -- e.g., star and galaxy formation history -- GRB luminosities must be calibrated, and the luminosity function versus redshift must be measured or inferred. Several nascent relationships between gamma-ray temporal or spectral indicators and luminosity or total energy have been reported. These measures promise to further our understanding of GRBs once the connections between the luminosity indicators and GRB jets and emission mechanisms are better elucidated. The current distribution of 33 redshifts determined from host galaxies and afterglows peaks near z ∼ 1, whereas for the full BATSE sample of long bursts, the lag-luminosity relation predicts a broad peak at z ∼ 1-4 with a tail to z ∼ 20, in rough agreement with theoretical models based on star formation considerations. For some GRB subclasses and apparently related phenomena -- short bursts, long-lag bursts, and X-ray flashes -- the present information on their redshift distributions is sparse or entirely lacking, and progress is expected in the Swift era when prompt alerts become numerous.
Polycrystalline CdTe detectors: A luminosity monitor for the LHC
NASA Astrophysics Data System (ADS)
Gschwendtner, E.; Placidi, M.; Schmickler, H.
2003-09-01
The luminosity at the four interaction points of the Large Hadron Collider must be continuously monitored in order to provide an adequate tool for the control and optimization of the collision parameters and the beam optics. At both sides of the interaction points, absorbers are installed to protect the superconducting accelerator elements from quenches caused by the deposited energy of collision products. The luminosity detectors will be installed in the copper core of these absorbers to measure the electromagnetic and hadronic showers caused by neutral particles produced in the proton-proton collisions at the interaction points. The detectors have to withstand extreme radiation levels (10^8 Gy/yr at the design luminosity) and their long-term operation has to be assured without requiring human intervention. In addition, the demand for bunch-by-bunch luminosity measurements, i.e. 40 MHz detection speed, puts severe constraints on the detectors. Polycrystalline CdTe detectors have a high potential to fulfill these requirements and are considered as LHC luminosity monitors. In this paper the interaction region is described and the characteristics of the CdTe detectors are presented.
The Evolution of Globular Cluster Systems In Early-Type Galaxies
NASA Astrophysics Data System (ADS)
Grillmair, Carl
1999-07-01
We will measure structural parameters (core radii and concentrations) of globular clusters in three early-type galaxies using deep, four-point dithered observations. We have chosen globular cluster systems which have young, medium-age and old cluster populations, as indicated by cluster colors and luminosities. Our primary goal is to test the hypothesis that globular cluster luminosity functions evolve towards a "universal" form. Previous observations have shown that young cluster systems have exponential luminosity functions rather than the characteristic log-normal luminosity function of old cluster systems. We will test whether such young systems exhibit a wider range of structural parameters than old systems, and whether and at what rate plausible disruption mechanisms will cause the luminosity function to evolve towards a log-normal form. A simple observational comparison of structural parameters between cluster populations of different ages and between different sub-populations within the same galaxy will also provide clues concerning both the formation and destruction mechanisms of star clusters, the distinction between open and globular clusters, and the advisability of using globular cluster luminosity functions as distance indicators.
Beam-dynamic effects at the CMS BRIL van der Meer scans
NASA Astrophysics Data System (ADS)
Babaev, A.
2018-03-01
The CMS Beam Radiation Instrumentation and Luminosity Project (BRIL) is responsible for the simulation and measurement of luminosity, beam conditions and radiation fields in the CMS experiment. The project is engaged in operating and developing new detectors (luminometers) adequate for the experimental conditions associated with the high instantaneous luminosities delivered by the CERN LHC. BRIL operates several detectors based on different physical principles and technologies. Precise and accurate measurements of the delivered luminosity are of paramount importance for the CMS physics program. The absolute calibration of the luminosity is achieved by the van der Meer method, which is carried out under specially tailored conditions. This paper presents models used to simulate beam-dynamic effects arising from the electromagnetic interaction of colliding bunches. These effects include beam-beam deflection and the dynamic-β effect. Both effects are important to luminosity measurements and influence the calibration constants at the level of 1-2%. The simulations are based on 2016 CMS van der Meer scan data for proton-proton collisions at a center-of-mass energy of 13 TeV.
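For context, once a van der Meer scan has delivered the convolved beam widths Σx and Σy, the absolute luminosity for head-on, Gaussian-like bunches follows from L = f_rev n_b N1 N2 / (2π Σx Σy); the 1-2% beam-beam and dynamic-β corrections discussed above enter through these measured quantities. The beam parameters in the sketch are invented, LHC-like numbers, not the 2016 scan conditions.

```python
import math

def vdm_luminosity(f_rev, n_b, N1, N2, sigma_x, sigma_y):
    """Instantaneous luminosity from a van der Meer calibration:
    L = f_rev * n_b * N1 * N2 / (2 * pi * Sigma_x * Sigma_y),
    with Sigma_x, Sigma_y the convolved beam widths measured in the scan (cm)."""
    return f_rev * n_b * N1 * N2 / (2.0 * math.pi * sigma_x * sigma_y)

# Illustrative, LHC-like numbers for a low-luminosity calibration fill
L = vdm_luminosity(f_rev=11245.0, n_b=30, N1=9e10, N2=9e10,
                   sigma_x=0.013, sigma_y=0.013)
print(f"L ~ {L:.2e} cm^-2 s^-1")
```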
What powers hyperluminous infrared galaxies at z ∼ 1-2?
NASA Astrophysics Data System (ADS)
Symeonidis, M.; Page, M. J.
2018-06-01
We investigate what powers hyperluminous infrared galaxies (HyLIRGs; L_IR,8-1000μm > 10^13 L⊙) at z ∼ 1-2, by examining the behaviour of the infrared AGN luminosity function in relation to the infrared galaxy luminosity function. The former corresponds to emission from AGN-heated dust only, whereas the latter includes emission from dust heated by stars and AGN. Our results show that the two luminosity functions are substantially different below 10^13 L⊙ but converge in the HyLIRG regime. We find that the fraction of AGN-dominated sources increases with total infrared luminosity, and at L_IR > 10^13.5 L⊙ AGN can account for the entire infrared emission. We conclude that the bright end of the 1 < z < 2 infrared galaxy luminosity function is shaped by AGN rather than star-forming galaxies.
A Search for Water Maser Emission from Brown Dwarfs and Low-luminosity Young Stellar Objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gómez, José F.; Manjarrez, Guillermo; Palau, Aina
We present a survey for water maser emission toward a sample of 44 low-luminosity young objects, comprising (proto-)brown dwarfs, first hydrostatic cores (FHCs), and other young stellar objects (YSOs) with bolometric luminosities lower than 0.4 L⊙. Water maser emission is a good tracer of energetic processes, such as mass loss and/or accretion, and is a useful tool to study these processes with very high angular resolution. This type of emission has been confirmed in objects with L_bol ≳ 1 L⊙. Objects with lower luminosities also undergo mass loss and accretion, and thus are prospective sites of maser emission. Our sensitive single-dish observations provided a single detection when pointing toward the FHC L1448 IRS 2E. However, follow-up interferometric observations showed water maser emission associated with the nearby YSO L1448 IRS 2 (a Class 0 protostar of L_bol ≃ 3.6-5.3 L⊙) and did not find any emission toward L1448 IRS 2E. The upper limits for water maser emission determined by our observations are one order of magnitude lower than expected from the correlation between water maser luminosities and bolometric luminosities found for YSOs. This suggests that this correlation does not hold at the lower end of the (sub)stellar mass spectrum. Possible reasons are that the slope of this correlation is steeper at L_bol ≤ 1 L⊙, or that there is an absolute luminosity threshold below which water maser emission cannot be produced. Alternatively, if the correlation still stands at low luminosity, the detection rates of masers would be significantly lower than the values obtained in higher-luminosity Class 0 protostars.
Planar electroluminescent panel techniques
NASA Technical Reports Server (NTRS)
Kerr, C.; Kell, R. E.
1973-01-01
Investigations of planar electroluminescent multipurpose displays with latch-in memory are described. An 18 x 24 in. flat, thin address panel with element spacing of 0.100 in. was constructed, which demonstrated essentially uniform luminosity of 3-5 foot-lamberts for each of its 43,200 EL cells. A working model of a 4-bit EL-PC (electroluminescent-photoconductive) electrooptical decoder was made which demonstrated the feasibility of this concept. A single-diagram electroluminescent display device with photoconductive-electroluminescent latch-in memory was constructed which demonstrated the conceptual soundness of this principle. Attempts to combine these principles in a single PEL multipurpose display with latch-in memory were unsuccessful and were judged to exceed the state of the art for close-packed (0.10 in. centers) photoconductor-electroluminescent cell assembly.
Reliability Analysis of Uniaxially Ground Brittle Materials
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.
1995-01-01
The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
Responsive materials: A novel design for enhanced machine-augmented composites
Bafekrpour, Ehsan; Molotnikov, Andrey; Weaver, James C.; Brechet, Yves; Estrin, Yuri
2014-01-01
The concept of novel responsive materials with a displacement conversion capability was further developed through the design of new machine-augmented composites (MACs). Embedded converter machines and MACs with improved geometry were designed and fabricated by multi-material 3D printing. This technique proved to be very effective in fabricating these novel composites with tuneable elastic moduli of the matrix and the embedded machines and excellent bonding between them. Substantial improvement in the displacement conversion efficiency of the new MACs over the existing ones was demonstrated. Also, the new design trebled the energy absorption of the MACs. Applications in energy absorbers as well as mechanical sensors and actuators are thus envisaged. A further type of MACs with conversion ability, viz. conversion of compressive displacements to torsional ones, was also proposed. PMID:24445490
NASA Astrophysics Data System (ADS)
Chen, Xiaodian; Deng, Licai; de Grijs, Richard; Wang, Shu; Feng, Yuting
2018-06-01
W Ursae Majoris (W UMa)-type contact binary systems (CBs) are useful statistical distance indicators because of their large numbers. Here, we establish (orbital) period–luminosity relations (PLRs) in 12 optical to mid-infrared bands (GBVRIJHKsW1W2W3W4) based on 183 nearby W UMa-type CBs with accurate Tycho–Gaia parallaxes. The 1σ dispersion of the PLRs decreases from optical to near- and mid-infrared wavelengths. The minimum scatter, 0.16 mag, implies that W UMa-type CBs can be used to recover distances to 7% precision. Applying our newly determined PLRs to 19 open clusters containing W UMa-type CBs demonstrates that the PLR and open cluster CB distance scales are mutually consistent to within 1%. Adopting our PLRs as secondary distance indicators, we compiled a catalog of 55,603 CB candidates, of which 80% have distance estimates based on a combination of optical, near-infrared, and mid-infrared photometry. Using Fourier decomposition, 27,318 high-probability W UMa-type CBs were selected. The resulting 8% distance accuracy implies that our sample encompasses the largest number of objects with accurate distances within a local volume with a radius of 3 kpc available to date. The distribution of W UMa-type CBs in the Galaxy suggests that in different environments the CB luminosity function may be different: larger numbers of brighter (longer-period) W UMa-type CBs are found in younger environments.
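The quoted 7% distance precision follows from propagating the 0.16 mag PLR scatter through the distance modulus, σ_d/d = (ln 10 / 5) σ_μ ≈ 0.46 σ_μ. The sketch below checks this and shows the generic PLR-to-distance step; the example magnitudes are invented, not values from the catalog.

```python
import math

sigma_mu = 0.16                                   # PLR scatter in magnitudes
print("fractional distance error:", math.log(10.0) / 5.0 * sigma_mu)   # ~0.074, i.e. ~7%

def plr_distance_pc(m_apparent, M_plr):
    """Distance from the distance modulus mu = m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((m_apparent - M_plr + 5.0) / 5.0)

# Invented example: a CB with apparent magnitude 12.3 whose PLR-predicted
# absolute magnitude is 4.1 lies at roughly 440 pc.
print(plr_distance_pc(12.3, 4.1), "pc")
```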
EXors and the stellar birthline
NASA Astrophysics Data System (ADS)
Moody, Mackenzie S. L.; Stahler, Steven W.
2017-04-01
We assess the evolutionary status of EXors. These low-mass, pre-main-sequence stars repeatedly undergo sharp luminosity increases, each a year or so in duration. We place into the HR diagram all EXors that have documented quiescent luminosities and effective temperatures, and thus determine their masses and ages. Two alternate sets of pre-main-sequence tracks are used, and yield similar results. Roughly half of EXors are embedded objects, i.e., they appear observationally as Class I or flat-spectrum infrared sources. We find that these are relatively young and are located close to the stellar birthline in the HR diagram. Optically visible EXors, on the other hand, are situated well below the birthline. They have ages of several Myr, typical of classical T Tauri stars. Judging from the limited data at hand, we find no evidence that binary companions trigger EXor eruptions; this issue merits further investigation. We draw several general conclusions. First, repetitive luminosity outbursts do not occur in all pre-main-sequence stars, and are not in themselves a sign of extreme youth. They persist, along with other signs of activity, in a relatively small subset of these objects. Second, the very existence of embedded EXors demonstrates that at least some Class I infrared sources are not true protostars, but very young pre-main-sequence objects still enshrouded in dusty gas. Finally, we believe that the embedded pre-main-sequence phase is of observational and theoretical significance, and should be included in a more complete account of early stellar evolution.
NASA Astrophysics Data System (ADS)
Kawaguchi, Toshihiro; Saito, Yuriko; Miki, Yohei; Mori, Masao
2014-07-01
Galaxies and massive black holes (BHs) presumably grow via galactic merging events and subsequent BH coalescence. As a case study, we investigate the merging event between the Andromeda galaxy (M31) and a satellite galaxy. We compute the expected observational appearance of the massive BH that was at the center of the satellite galaxy prior to the merger and is currently wandering in the M31 halo. We demonstrate that a radiatively inefficient accretion flow with a bolometric luminosity of a few tens of solar luminosities develops when Hoyle-Lyttleton accretion onto the BH is assumed. We compute the associated broadband spectrum and show that the radio band (observable with EVLA, ALMA, and the Square Kilometre Array) is the best frequency range in which to detect the emission. We also evaluate the mass and the luminosity of the stars bound by the wandering BH and find that such a star cluster is sufficiently luminous that it could correspond to one of the star clusters found by the PAndAS survey. The discovery of a relic massive BH wandering in a galactic halo will provide a direct means of investigating in detail the coevolution of galaxies and BHs. It also means a new population of BHs (off-center massive BHs) and offers targets for clean BH imaging that avoid strong interstellar scattering in the centers of galaxies.
Recovering the Physical Properties of Molecular Gas in Galaxies from CO SLED Modeling
NASA Astrophysics Data System (ADS)
Kamenetzky, J.; Privon, G. C.; Narayanan, D.
2018-05-01
Modeling of the spectral line energy distribution (SLED) of the CO molecule can reveal the physical conditions (temperature and density) of molecular gas in Galactic clouds and other galaxies. Recently, the Herschel Space Observatory and ALMA have offered, for the first time, a comprehensive view of the rotational J = 4‑3 through J = 13‑12 lines, which arise from a complex, diverse range of physical conditions that must be simplified to one, two, or three components when modeled. Here we investigate the recoverability of physical conditions from SLEDs produced by galaxy evolution simulations containing a large dynamical range in physical properties. These simulated SLEDs were generally fit well by one component of gas whose properties largely resemble or slightly underestimate the luminosity-weighted properties of the simulations when clumping due to nonthermal velocity dispersion is taken into account. If only modeling the first three rotational lines, the median values of the marginalized parameter distributions better represent the luminosity-weighted properties of the simulations, but the uncertainties in the fitted parameters are nearly an order of magnitude, compared to approximately 0.2 dex in the “best-case” scenario of a fully sampled SLED through J = 10‑9. This study demonstrates that while common CO SLED modeling techniques cannot reveal the underlying complexities of the molecular gas, they can distinguish bulk luminosity-weighted properties that vary with star formation surface densities and galaxy evolution, if a sufficient number of lines are detected and modeled.
Upgrade of the ATLAS Tile Calorimeter Electronics
NASA Astrophysics Data System (ADS)
Moreno, Pablo; ATLAS Tile Calorimeter System
2016-04-01
The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. The TileCal readout consists of 9852 channels. The bulk of its upgrade will occur for the High Luminosity LHC phase (Phase II), where the peak luminosity will increase 5× compared to the design luminosity (10^34 cm^-2 s^-1) at a center-of-mass energy of 14 TeV. The TileCal upgrade aims at replacing the majority of the on- and off-detector electronics to the extent that all calorimeter signals will be digitized and sent to the off-detector electronics in the counting room. To achieve the required reliability, redundancy has been introduced at different levels. Three different options are presently being investigated for the front-end electronics upgrade. Extensive test beam studies will determine which option will be selected. 10.24 Gbps optical links are used to read out all digitized data to the counting room, while 4.8 Gbps down-links are used for synchronization, configuration and detector control. For the off-detector electronics a pre-processor (sROD) is being developed, which takes care of the initial trigger processing while temporarily storing the main data flow in pipeline and de-randomizer memories. Field Programmable Gate Arrays are extensively used for the logic functions off- and on-detector. One demonstrator prototype module with the new calorimeter module electronics, but still compatible with the present system, is planned to be inserted in ATLAS at the end of 2015.
Cosmos Redshift 7 is an Active Black Hole
Smidt, Joseph Michael; Wiggins, Brandon Kerry; Johnson, Jarrett L.
2016-09-14
We present the first ab initio cosmological simulations of a CR7-like object which approximately reproduce the observed line widths and strengths. In our model, CR7 is powered by a massive (3.23 × 10^7 M⊙) black hole (BH) whose accretion rate varies between ≃0.25 and ≃0.9 times the Eddington rate on timescales as short as 10^3 yr. Our model takes into account multi-dimensional effects, X-ray feedback, secondary ionizations and primordial chemistry. We estimate Lyα line widths by post-processing simulation output with Monte Carlo radiative transfer and calculate emissivity contributions from radiative recombination and collisional excitation. We find the luminosities in the Lyman-α and He II 1640 angstrom lines to be 5.0 × 10^44 and 2.4 × 10^43 erg s^-1, respectively, in agreement with the observed values of >8.3 × 10^43 and 2.0 × 10^43 erg s^-1. We also find that the black hole heats the halo and renders it unable to produce stars, as required to keep the halo metal free. These results demonstrate the viability of the BH hypothesis for CR7 in a cosmological context. Assuming the BH mass and accretion rate that we find, we estimate the synchrotron luminosity of CR7 to be P ≃ 10^40-10^41 erg s^-1, which is sufficiently luminous to be observed in Jy observations and would discriminate this scenario from one where the luminosity is driven by Population III stars.
A luminous X-ray outburst from an intermediate-mass black hole in an off-centre star cluster
NASA Astrophysics Data System (ADS)
Lin, Dacheng; Strader, Jay; Carrasco, Eleazar R.; Page, Dany; Romanowsky, Aaron J.; Homan, Jeroen; Irwin, Jimmy A.; Remillard, Ronald A.; Godet, Olivier; Webb, Natalie A.; Baumgardt, Holger; Wijnands, Rudy; Barret, Didier; Duc, Pierre-Alain; Brodie, Jean P.; Gwyn, Stephen D. J.
2018-06-01
A unique signature for the presence of massive black holes in very dense stellar regions is occasional giant-amplitude outbursts of multi-wavelength radiation from tidal disruption and subsequent accretion of stars that make a close approach to the black holes [1]. Previous strong tidal disruption event (TDE) candidates were all associated with the centres of largely isolated galaxies [2-6]. Here, we report the discovery of a luminous X-ray outburst from a massive star cluster at a projected distance of 12.5 kpc from the centre of a large lenticular galaxy. The luminosity peaked at 10^43 erg s^-1 and decayed systematically over 10 years, approximately following a trend that supports the identification of the event as a TDE. The X-ray spectra were all very soft, with emission confined to ≲3.0 keV, and could be described with a standard thermal disk. The disk cooled significantly as the luminosity decreased, a key thermal-state signature often observed in accreting stellar-mass black holes. This thermal-state signature, coupled with very high luminosities, ultrasoft X-ray spectra and the characteristic power-law evolution of the light curve, provides strong evidence that the source contains an intermediate-mass black hole with a mass tens of thousands of times that of the Sun. This event demonstrates that one of the most effective means of detecting intermediate-mass black holes is through X-ray flares from TDEs in star clusters.
A mass of less than 15 solar masses for the black hole in an ultraluminous X-ray source.
Motch, C; Pakull, M W; Soria, R; Grisé, F; Pietrzyński, G
2014-10-09
Most ultraluminous X-ray sources have a typical set of properties not seen in Galactic stellar-mass black holes. They have luminosities of more than 3 × 10^39 ergs per second, unusually soft X-ray components (with a typical temperature of less than about 0.3 kiloelectronvolts) and a characteristic downturn in their spectra above about 5 kiloelectronvolts. Such puzzling properties have been interpreted either as evidence of intermediate-mass black holes or as emission from stellar-mass black holes accreting above their Eddington limit, analogous to some Galactic black holes at peak luminosity. Recently, a very soft X-ray spectrum was observed in a rare and transient stellar-mass black hole. Here we report that the X-ray source P13 in the galaxy NGC 7793 is in a binary system with a period of about 64 days and exhibits all three canonical properties of ultraluminous sources. By modelling the strong optical and ultraviolet modulations arising from X-ray heating of the B9Ia donor star, we constrain the black hole mass to be less than 15 solar masses. Our results demonstrate that in P13, soft thermal emission and spectral curvature are indeed signatures of supercritical accretion. By analogy, ultraluminous X-ray sources with similar X-ray spectra and luminosities of up to a few times 10^40 ergs per second can be explained by supercritical accretion onto massive stellar-mass black holes.
Process Monitoring Evaluation and Implementation for the Wood Abrasive Machining Process
Saloni, Daniel E.; Lemaster, Richard L.; Jackson, Steven D.
2010-01-01
Wood processing industries have continuously developed and improved technologies and processes to transform wood to obtain better final product quality and thus increase profits. Abrasive machining is one of the most important of these processes and therefore merits special attention and study. The objective of this work was to evaluate and demonstrate a process monitoring system for use in the abrasive machining of wood and wood based products. The system developed increases the life of the belt by detecting (using process monitoring sensors) and removing (by cleaning) the abrasive loading during the machining process. This study focused on abrasive belt machining processes and included substantial background work, which provided a solid base for understanding the behavior of the abrasive, and the different ways that the abrasive machining process can be monitored. In addition, the background research showed that abrasive belts can effectively be cleaned by the appropriate cleaning technique. The process monitoring system developed included acoustic emission sensors which tended to be sensitive to belt wear, as well as platen vibration, but not loading, and optical sensors which were sensitive to abrasive loading. PMID:22163477
Mechanical design of walking machines.
Arikawa, Keisuke; Hirose, Shigeo
2007-01-15
The performance of existing actuators, such as electric motors, is very limited, be it in power-to-weight ratio or energy efficiency. In this paper, we discuss a method to design a practical walking machine under this severe constraint, with a focus on two concepts: the gravitationally decoupled actuation (GDA) and the coupled drive. The GDA decouples the driving system from the gravitational field to suppress the generation of negative power and improve energy efficiency. The coupled drive, on the other hand, couples the driving system to distribute the output power equally among actuators and maximize the utilization of installed actuator power. First, we describe the GDA and coupled drive in detail. Then, we present actual machines, TITAN-III and VIII, quadruped walking machines designed on the basis of the GDA, and NINJA-I and II, quadruped wall-walking machines designed on the basis of the coupled drive. Finally, we discuss walking machines that travel on three-dimensional terrain (3D terrain), which includes the ground, walls and ceiling. We then demonstrate with computer simulation that we can selectively leverage the GDA and coupled drive through walking posture control.
A micro-machined source transducer for a parametric array in air.
Lee, Haksue; Kang, Daesil; Moon, Wonkyu
2009-04-01
Parametric array applications in air, such as highly directional parametric loudspeaker systems, usually rely on large radiators to generate the high-intensity primary beams required for nonlinear interactions. However, a conventional transducer, as a primary wave projector, requires a great deal of electrical power because its electroacoustic efficiency is very low due to the large characteristic mechanical impedance in air. The feasibility of a micro-machined ultrasonic transducer as an efficient finite-amplitude wave projector was studied. A piezoelectric micro-machined ultrasonic transducer array consisting of lead zirconate titanate uni-morph elements was designed and fabricated for this purpose. Theoretical and experimental evaluations showed that a micro-machined ultrasonic transducer array can be used as an efficient source transducer for a parametric array in air. The beam patterns and propagation curves of the difference frequency wave and the primary wave generated by the micro-machined ultrasonic transducer array were measured. Although the theoretical results were based on ideal parametric array models, the theoretical data explained the experimental results reasonably well. These experiments demonstrated the potential of micro-machined primary wave projector.
NASA Technical Reports Server (NTRS)
Alvarez, R.; Mennessier, M.-O.; Barthes, D.; Luri, X.; Mattei, J. A.
1997-01-01
Hipparcos astrometric and kinematical data of oxygen-rich Mira variables are used to calibrate absolute near-infrared magnitudes and kinematic parameters. Three distinct classes of stars with different kinematics and scale heights were identified. The two most significant groups present characteristics close to those usually assigned to extended/thick disk-halo populations and old disk populations, respectively, and thus they may differ by their metallicity abundance. Two parallel period-luminosity relations are found, one for each population. The shift between these relations is interpreted as the consequence of the effects of metallicity abundance on the luminosity.
Masses, luminosities and dynamics of galactic molecular clouds
NASA Technical Reports Server (NTRS)
Solomon, P. M.; Rivolo, A. R.; Mooney, T. J.; Barrett, J. W.; Sage, L. J.
1987-01-01
Star formation in galaxies takes place in molecular clouds, and the Milky Way is the only galaxy in which it is possible to resolve and study the physical properties and star formation activity of individual clouds. The masses, luminosities, dynamics, and distribution of molecular clouds, primarily giant molecular clouds in the Milky Way, are described and analyzed. The observational data sets are the Massachusetts-Stony Brook CO Galactic Plane Survey and the IRAS far-IR images. The molecular masses and infrared luminosities of galactic clouds are then compared with the molecular masses and infrared luminosities of external galaxies.
Observations of jets from low-luminosity stars - DG Tauri B
NASA Technical Reports Server (NTRS)
Jones, B. F.; Cohen, Martin
1986-01-01
Low spectral resolution studies of DG Tau B, the faint system of knots south of the T Tauri star DG Tau, are described. The observations show this object to be bipolar, with the blueshifted lobe having extraordinarily low excitation. Infrared observations of the exciting star show it to be of very low luminosity, with a bolometric luminosity of 0.88 solar luminosity. The visual extinction indicates a highly nonspherical distribution of circumstellar dust around the exciting star. In spite of this lack of embedding within an obvious dark cloud, the system is identified as a young one.
Optical and X-ray luminosities of expanding nebulae around ultraluminous X-ray sources
NASA Astrophysics Data System (ADS)
Siwek, Magdalena; Sądowski, Aleksander; Narayan, Ramesh; Roberts, Timothy P.; Soria, Roberto
2017-09-01
We have performed a set of simulations of expanding, spherically symmetric nebulae inflated by winds from accreting black holes in ultraluminous X-ray sources (ULXs). We implemented a realistic cooling function to account for free-free and bound-free cooling. For all model parameters we considered, the forward shock in the interstellar medium becomes radiative at a radius ˜100 pc. The emission is primarily in optical and UV, and the radiative luminosity is about 50 per cent of the total kinetic luminosity of the wind. In contrast, the reverse shock in the wind is adiabatic so long as the terminal outflow velocity of the wind vw ≳ 0.003c. The shocked wind in these models radiates in X-rays, but with a luminosity of only ˜1035 erg s-1. For wind velocities vw ≲ 0.001c, the shocked wind becomes radiative, but it is no longer hot enough to produce X-rays. Instead it emits in optical and UV, and the radiative luminosity is comparable to 100 per cent of the wind kinetic luminosity. We suggest that measuring the optical luminosities and putting limits on the X-ray and radio emission from shock-ionized ULX bubbles may help in estimating the mass outflow rate of the central accretion disc and the velocity of the outflow.
SCUSS u-BAND EMISSION AS A STAR-FORMATION-RATE INDICATOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Zhimin; Zhou, Xu; Wu, Hong
2017-01-20
We present and analyze the possibility of using optical u-band luminosities to estimate star-formation rates (SFRs) of galaxies based on data from the South Galactic Cap u-band Sky Survey (SCUSS), which provides a deep u-band photometric survey covering about 5000 deg² of the South Galactic Cap. Based on two samples of normal star-forming galaxies selected by the BPT diagram, we explore the correlations between u-band, Hα, and IR luminosities by combining SCUSS data with the Sloan Digital Sky Survey and the Wide-field Infrared Survey Explorer (WISE). The attenuation-corrected u-band luminosities are tightly correlated with the Balmer decrement-corrected Hα luminosities, with an rms scatter of ∼0.17 dex. The IR-corrected u-band luminosities are derived based on the correlations between the attenuation of u-band luminosities and WISE 12 (or 22) μm luminosities, and then calibrated against the Balmer-corrected Hα luminosities. The systematic residuals of these calibrations are tested against the physical properties over the ranges covered by our sample objects. We find that the best-fitting nonlinear relations are better than the linear ones and recommend that they be applied in the measurement of SFRs. The systematic deviations mainly come from the contamination by old stellar populations and the effect of dust extinction; therefore, a more detailed analysis is needed in future work.
A Faint Flux-limited Lyα Emitter Sample at z ∼ 0.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wold, Isak G. B.; Finkelstein, Steven L.; Barger, Amy J.
2017-10-20
We present a flux-limited sample of z ∼ 0.3 Lyα emitters (LAEs) from Galaxy Evolution Explorer (GALEX) grism spectroscopic data. The published GALEX z ∼ 0.3 LAE sample is pre-selected from continuum-bright objects and thus is biased against high equivalent width (EW) LAEs. We remove this continuum pre-selection and compute the EW distribution and the luminosity function of the Lyα emission line directly from our sample. We examine the evolution of these quantities from z ∼ 0.3 to 2.2 and find that the EW distribution shows little evidence for evolution over this redshift range. As shown by previous studies, the Lyα luminosity density from star-forming (SF) galaxies declines rapidly with declining redshift. However, we find that the decline in Lyα luminosity density from z = 2.2 to z = 0.3 may simply mirror the decline seen in the Hα luminosity density from z = 2.2 to z = 0.4, implying little change in the volumetric Lyα escape fraction. Finally, we show that the observed Lyα luminosity density from AGNs is comparable to the observed Lyα luminosity density from SF galaxies at z = 0.3. We suggest that this significant contribution from AGNs to the total observed Lyα luminosity density persists out to z ∼ 2.2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krolewski, Alex G.; Eisenstein, Daniel J., E-mail: akrolewski@college.harvard.edu
2015-04-10
We study the dependence of quasar clustering on quasar luminosity and black hole mass by measuring the angular overdensity of photometrically selected galaxies imaged by the Wide-field Infrared Survey Explorer (WISE) about z ∼ 0.8 quasars from SDSS. By measuring the quasar-galaxy cross-correlation function and using photometrically selected galaxies, we achieve a higher density of tracer objects and a more sensitive detection of clustering than measurements of the quasar autocorrelation function. We test models of quasar formation and evolution by measuring the luminosity dependence of clustering amplitude. We find a significant overdensity of WISE galaxies about z ∼ 0.8 quasars at 0.2-6.4 h^-1 Mpc in projected comoving separation. We find no appreciable increase in clustering amplitude with quasar luminosity across a decade in luminosity, and a power-law fit between luminosity and clustering amplitude gives an exponent of -0.01 ± 0.06 (1σ error). We also fail to find a significant relationship between clustering amplitude and black hole mass, although our dynamic range in true mass is suppressed due to the large uncertainties in virial black hole mass estimates. Our results indicate that a small range in host dark matter halo mass maps to a large range in quasar luminosity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehoff, Ryan R.; List, III, Frederick Alyious; Carver, Keith
ORNL Manufacturing Demonstration Facility worked with ECM Technologies LLC to investigate the use of precision electro-chemical machining technology to polish the surface of parts created by Arcam electron beam melting. The goals for phase one of this project have been met. The project goal was to determine whether electro-chemical machining is a viable method to improve the surface finish of Inconel 718 parts fabricated using the Arcam EBM method. The project partner (ECM) demonstrated viability for parts of both simple and complex geometry. During the course of the project, detailed process knowledge was generated. This project has resulted in the expansion of United States operations for ECM Technologies.
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
Optimization of Support Vector Machine (SVM) for Object Classification
NASA Technical Reports Server (NTRS)
Scholten, Matthew; Dhingra, Neil; Lu, Thomas T.; Chao, Tien-Hsin
2012-01-01
The Support Vector Machine (SVM) is a powerful algorithm, useful in classifying data into species. The SVMs implemented in this research were used as classifiers for the final stage in a Multistage Automatic Target Recognition (ATR) system. A single kernel SVM known as SVMlight, and a modified version known as a SVM with K-Means Clustering were used. These SVM algorithms were tested as classifiers under varying conditions. Image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential as classifiers. Results demonstrate the reliability of SVM as a method for classification. From trial to trial, SVM produces consistent results.
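The preceding abstract describes testing SVM classifiers under varying image noise. As a minimal illustrative sketch (not the SVMlight/K-means pipeline used in that work), the Python fragment below trains an RBF-kernel SVM on synthetic feature vectors and reports accuracy as additive noise grows; the feature dimension, class rule, and noise levels are hypothetical.

    # Minimal sketch: SVM classification accuracy versus additive noise.
    # Synthetic stand-in for ATR feature vectors; not the paper's SVMlight setup.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 32))               # hypothetical 32-dim target features
    y = (X[:, :4].sum(axis=1) > 0).astype(int)   # two target classes

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

    for sigma in (0.0, 0.5, 1.0, 2.0):           # increasing noise as a proxy for image degradation
        noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
        print(f"noise sigma={sigma:.1f}  accuracy={clf.score(noisy, y_test):.3f}")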
Training and generalization of laundry skills: a multiple probe evaluation with handicapped persons.
Thompson, T J; Braam, S J; Fuqua, R W
1982-01-01
An instructional procedure composed of a graded sequence of prompts and token reinforcement was used to train a complex chain of behaviors which included sorting, washing, and drying clothes. A multiple probe design with sequential instruction across seven major components of the laundering routine was used to demonstrate experimental control. Students were taught to launder clothing using machines located in their school and generalization was assessed later on machines located in the public laundromat. A comparison of students' laundry skills with those of normal peers indicated similar levels of proficiency. Follow-up probes demonstrated maintenance of laundry skills over a 10-month period. PMID:7096228
LDRD Report: Topological Design Optimization of Convolutes in Next Generation Pulsed Power Devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cyr, Eric C.; von Winckel, Gregory John; Kouri, Drew Philip
This LDRD project was developed around the ambitious goal of applying PDE-constrained optimization approaches to design Z-machine components whose performance is governed by electromagnetic and plasma models. This report documents the results of this LDRD project. Our differentiating approach was to use topology optimization methods developed for structural design and extend them for application to electromagnetic systems pertinent to the Z-machine. To achieve this objective a suite of optimization algorithms was implemented in the ROL library, part of the Trilinos framework. These methods were applied to standalone demonstration problems and the Drekar multi-physics research application. Out of this exploration a new augmented Lagrangian approach to structural design problems was developed. We demonstrate that this approach has favorable mesh-independent performance. Both the final design and the algorithmic performance were independent of the size of the mesh. In addition, topology optimization formulations for the design of conducting networks were developed and demonstrated. Of note, this formulation was used to develop a design for the inner magnetically insulated transmission line on the Z-machine. The resulting electromagnetic device is compared with theoretically postulated designs.
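The report's key algorithmic ingredient is an augmented Lagrangian treatment of constrained design problems. The fragment below is a generic sketch of that optimization pattern on a toy equality-constrained problem; it is not the ROL/Drekar implementation, and the objective, constraint, and penalty parameter are illustrative assumptions.

    # Toy augmented-Lagrangian loop: minimize f(x) subject to c(x) = 0.
    # Illustrates the general method named in the abstract, not the Z-machine design problem.
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2   # toy objective
    c = lambda x: x[0] + x[1] - 1.0                       # single equality constraint

    lam, rho = 0.0, 10.0                                  # multiplier and penalty weight
    x = np.zeros(2)
    for _ in range(20):
        aug = lambda x: f(x) + lam * c(x) + 0.5 * rho * c(x) ** 2
        x = minimize(aug, x).x                            # inner unconstrained solve
        lam += rho * c(x)                                 # multiplier update
    print("solution:", x, "constraint violation:", c(x))

The expected minimizer of this toy problem is x = (0, 1); the same outer loop structure underlies augmented Lagrangian methods for much larger PDE-constrained designs.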
Wang, Yuanjia; Chen, Tianle; Zeng, Donglin
2016-01-01
Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for different at-risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating the covariate-specific hazard function from the population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and standard conventional approaches. Finally, we analyze data from two real-world biomedical studies in which we use clinical markers and neuroimaging biomarkers to predict age-at-onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk versus low-risk subjects.
A New Mathematical Framework for Design Under Uncertainty
2016-05-05
blending multiple information sources via auto-regressive stochastic modeling. A computationally efficient machine learning framework is developed based on...sion and machine learning approaches; see Fig. 1. This will lead to a comprehensive description of system performance with less uncertainty than in the...Bayesian optimization of super-cavitating hydrofoils. The goal of this study is to demonstrate the capabilities of statistical learning and
Bio-Inspired Human-Level Machine Learning
2015-10-25
extensions to high-level cognitive functions such as the anagram-solving problem. We expect that the bio-inspired human-level machine learning combined with...numbers of 10^11 neurons and 10^14 synaptic connections in the human brain. In previous work, we experimentally demonstrated the feasibility of cognitive
Another look at Atwood's machine
NASA Astrophysics Data System (ADS)
LoPresto, Michael C.
1999-02-01
Atwood's machine is a standard experimental apparatus that is likely to get pushed out of the laboratory portion of the general physics course due to the ever-increasing use of microcomputers. To avoid this, I now use the apparatus for an experiment during the work and energy portion of the course, which not only allows us to demonstrate those principles but also to compare them with Newton's laws of motion.
NASA Technical Reports Server (NTRS)
Bradford, C. M.; Bock, J. J.; Dragovan, M.; Earle, L.; Glenn, J.; Naylor, B.; Nguyen, H.; Zmuidzinas, J.
2004-01-01
The discovery of galaxies beyond z ≈ 1 which emit the bulk of their luminosity at long wavelengths has demonstrated the need for high sensitivity, broadband spectroscopy in the far-IR/submm/mm bands. Because many of these sources are not detectable in the optical, long wavelength spectroscopy is key to measuring their redshifts and ISM conditions. The continuum source list will increase in the next decade with new ground-based instruments (SCUBA2, Bolocam, MAMBO) and the surveys of HSO and SIRTF. Yet the planned spectroscopic capabilities lag behind, primarily due to the difficulty in scaling existing IR spectrograph designs to longer wavelengths. To overcome these limitations, we are developing WaFIRS, a novel concept for long-wavelength spectroscopy which utilizes a parallel-plate waveguide and a curved diffraction grating. WaFIRS provides the large (approximately 60%) instantaneous bandwidth and high throughput of a conventional grating system, but offers a dramatic reduction in volume and mass. WaFIRS requires no space overheads for extra optical elements beyond the diffraction grating itself, and is two-dimensional because the propagation is confined between two parallel plates. Thus several modules could be stacked to multiplex either spatially or in different frequency bands. The size and mass savings provide opportunities for spectroscopy from space-borne observatories which would be impractical with conventional spectrographs. With background-limited detectors and a cooled 3.5 m telescope, the line sensitivity would be better than that of ALMA, with instantaneous broad-band coverage. We have built and tested a WaFIRS prototype for 1-1.6 mm, and are currently constructing Z-Spec, a 100 mK model to be used as a ground-based λ/Δλ ≈ 350 submillimeter galaxy redshift machine.
Development of a Next Generation Concurrent Framework for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Calafiura, P.; Lampl, W.; Leggett, C.; Malon, D.; Stewart, G.; Wynne, B.
2015-12-01
The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from early 2000 and the software and the physics code has been written using a single threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only through taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. With current memory consumption for 64 bit ATLAS reconstruction in a high luminosity environment approaching 4GB, it will become impossible to fully occupy all cores in a machine without exhausting available memory. However, since maximizing performance per watt will be a key metric, a mechanism must be found to use all cores as efficiently as possible. In this paper we report on our progress with a practical demonstration of the use of multithreading in the ATLAS reconstruction software, using the GaudiHive framework. We have expanded support to Calorimeter, Inner Detector, and Tracking code, discussing what changes were necessary in order to allow the serially designed ATLAS code to run, both to the framework and to the tools and algorithms used. We report on both the performance gains, and what general lessons were learned about the code patterns that had been employed in the software and which patterns were identified as particularly problematic for multi-threading. We also present our findings on implementing a hybrid multi-threaded / multi-process framework, to take advantage of the strengths of each type of concurrency, while avoiding some of their corresponding limitations.
The role of submillimetre galaxies in galaxy evolution
NASA Astrophysics Data System (ADS)
Pope, Erin Alexandra
2007-08-01
This thesis presents a comprehensive study of high redshift submillimetre galaxies (SMGs) using the deepest multi-wavelength observations. The submm sample consists of galaxies detected at 850 μm with the Submillimetre Common User Bolometer Array (SCUBA) in the Great Observatories Origins Deep Survey-North region. Using the deep Spitzer Space Telescope images and new data and reductions of the Very Large Array radio data, I find statistically secure counterparts for 60% of the submm sample, and identify tentative counterparts for most of the remaining objects. This is the largest sample of submm galaxies with statistically secure counterparts detected in the radio and with Spitzer. This thesis presents spectral energy distributions (SEDs), Spitzer colours, and infrared (IR) luminosities for the SMGs. A composite rest-frame SED shows that the submm sources peak at longer wavelengths than those of local ultraluminous IR galaxies (ULIRGs), i.e. they appear to be cooler than local ULIRGs of the same luminosity. This demonstrates the strong selection effects, both locally and at high redshift, which may lead to an incomplete census of the ULIRG population. The SEDs of submm galaxies are also different from those of their high redshift neighbours, the near-IR selected BzK galaxies, whose mid-IR to radio SEDs are more like those of local ULIRGs. I fit templates that span the mid-IR through radio to derive the integrated IR luminosities of the submm galaxies and find a median value of L_IR (8-1000 μm) = 6.0 × 10^12 L_⊙. I also find that submm flux densities by themselves systematically overpredict L_IR when using templates which obey the local ULIRG temperature-luminosity relation. The SED fits show that SMGs are consistent with the correlation between radio and IR luminosity observed in local galaxies. Because the shorter Spitzer wavelengths sample the stellar bump at the redshifts of the submm sources, one can obtain a model independent estimate of the redshift, with σ(Δz/(1 + z)) = 0.07. The median redshift of the secure submm counterparts is 2.0. Using X-ray and mid-IR imaging data, only 5% of the secure counterparts show strong evidence for an active galactic nucleus (AGN) dominating the IR luminosity. This thesis also presents deep Spitzer mid-IR spectroscopy of 13 of these SMGs in order to determine the contribution from AGN and starburst emission to the IR luminosity. I find strong polycyclic aromatic hydrocarbon (PAH) emission features in all of the targets, while only 2/13 SMGs have a significant mid-IR rising power-law component which would indicate an AGN. In the high signal-to-noise ratio composite spectrum of the SMGs I find that the AGN component contributes at most 30% of the mid-IR luminosity, implying that the total L_IR in SMGs is dominated by star formation and not AGN emission. I also find that the SMGs lie on the relation between the luminosity of the main PAH features and L_IR established for local starburst galaxies, confirming that the PAH luminosity can be used as a proxy for the star formation rate. Interestingly, local ULIRGs, which are often thought to be the low redshift analogues of SMGs, lie off these relations, as they appear deficient in PAH luminosity for a given L_IR.
In terms of an evolutionary scenario for IR luminous galaxies, SMGs are consistent with being an earlier phase in the massive merger (compared with other local or high redshift ULIRGs) in which the AGN has not yet become strong enough to heat the dust and dilute the PAH emission. I further investigate the overlap between high redshift infrared and submm populations using a statistical stacking analysis to measure the contribution of near- and mid-IR galaxy populations to the 850 μm submm background. For the first time, it is found that the 850 μm background can be completely resolved into individual galaxies and the bulk of these galaxies lie at z [Special characters omitted.] 3. Additionally I present a detailed study of the most distant SMG discovered to date, which I call GN20. This unusually bright source led to the discovery of a high redshift galaxy cluster, which is likely to be lensing the SMG. I discuss the potential for using bright SMGs in future submm surveys to identify high redshift clusters. Finally, for this complete sample of SMGs, I present the cumulative flux distribution at X-ray, optical, IR and radio wavelengths and I determine the depths at which one can expect to detect the majority of submm galaxies in future mm/submm surveys, such as with SCUBA-2, the successor to SCUBA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaumont, Christopher N.; Williams, Jonathan P.; Goodman, Alyssa A.
We present Brut, an algorithm to identify bubbles in infrared images of the Galactic midplane. Brut is based on the Random Forest algorithm, and uses bubbles identified by >35,000 citizen scientists from the Milky Way Project to discover the identifying characteristics of bubbles in images from the Spitzer Space Telescope. We demonstrate that Brut's ability to identify bubbles is comparable to expert astronomers. We use Brut to re-assess the bubbles in the Milky Way Project catalog, and find that 10%-30% of the objects in this catalog are non-bubble interlopers. Relative to these interlopers, high-reliability bubbles are more confined to the mid-plane, and display a stronger excess of young stellar objects along and within bubble rims. Furthermore, Brut is able to discover bubbles missed by previous searches, particularly bubbles near bright sources which have low contrast relative to their surroundings. Brut demonstrates the synergies that exist between citizen scientists, professional scientists, and machine learning techniques. In cases where 'untrained' citizens can identify patterns that machines cannot detect without training, machine learning algorithms like Brut can use the output of citizen science projects as input training sets, offering tremendous opportunities to speed the pace of scientific discovery. A hybrid model of machine learning combined with crowdsourced training data from citizen scientists can not only classify large quantities of data, but also address the weakness of each approach if deployed alone.
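Brut's core is a Random Forest trained on citizen-science labels. A minimal sketch of that training pattern with scikit-learn follows; the feature vectors and labels are synthetic placeholders standing in for the Milky Way Project inputs, not the actual Brut features.

    # Sketch: train a random forest on crowd-sourced labels, then score new candidates.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    features = rng.normal(size=(5000, 20))      # hypothetical per-candidate image statistics
    labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)  # bubble / non-bubble

    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(forest, features, labels, cv=5).mean())

    forest.fit(features, labels)
    new_candidates = rng.normal(size=(10, 20))
    print("bubble probabilities:", forest.predict_proba(new_candidates)[:, 1])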
Chip formation and surface integrity in high-speed machining of hardened steel
NASA Astrophysics Data System (ADS)
Kishawy, Hossam Eldeen A.
Increasing demands for high production rates as well as cost reduction have emphasized the potential for the industrial application of hard turning technology during the past few years. Machining instead of grinding hardened steel components reduces the machining sequence, the machining time, and the specific cutting energy. Hard turning is characterized by the generation of high temperatures, the formation of saw toothed chips, and the high ratio of thrust to tangential cutting force components. Although a large volume of literature exists on hard turning, the change in machined surface physical properties represents a major challenge. Thus, a better understanding of the cutting mechanism in hard turning is still required. In particular, the chip formation process and the surface integrity of the machined surface are important issues which require further research. In this thesis, a mechanistic model for saw toothed chip formation is presented. This model is based on the concept of crack initiation on the free surface of the workpiece. The model presented explains the mechanism of chip formation. In addition, experimental investigation is conducted in order to study the chip morphology. The effect of process parameters, including edge preparation and tool wear on the chip morphology, is studied using Scanning Electron Microscopy (SEM). The dynamics of chip formation are also investigated. The surface integrity of the machined parts is also investigated. This investigation focuses on residual stresses as well as surface and sub-surface deformation. A three dimensional thermo-elasto-plastic finite element model is developed to predict the machining residual stresses. The effect of flank wear is introduced during the analysis. Although residual stresses have complicated origins and are introduced by many factors, in this model only the thermal and mechanical factors are considered. The finite element analysis demonstrates the significant effect of the heat generated during cutting on the residual stresses. The machined specimens are also examined using the x-ray diffraction technique to clarify the effect of different speeds, feeds and depths of cut as well as different edge preparations on the residual stress distribution beneath the machined surface. A reasonable agreement between the predicted and measured residual stress is obtained. The results obtained demonstrate the possibility of eliminating the existence of high tensile residual stresses in the workpiece surface by selecting the proper cutting conditions. The machined surfaces are examined using SEM to study the effect of different process parameters and edge preparations on the quality of the machined surface. The phenomenon of material side flow is investigated to clarify the mechanism of this phenomenon. The effect of process parameters and edge preparations on sub-surface deformation is also investigated.
Luminosity of serendipitous x-ray QSOs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margon, B.; Chanan, G.A.; Downes, R.A.
1982-02-01
We have identified the optical counterparts of 47 serendipitously discovered Einstein Observatory X-ray sources with previously unreported quasi-stellar objects. The mean ratio of X-ray to optical luminosity of this sample agrees reasonably well with that derived from X-ray observations of previously known QSOs. However, despite the fact that our limiting magnitude V = 18.5 should permit detection of typical QSOs (i.e., M_c = -26) to z = 0.9, the mean redshift of our sample is only z = 0.42. Thus the mean luminosity of these objects, M_c = -24, differs significantly from that of previous QSO surveys with similar optical thresholds. The existence of large numbers of these lower luminosity QSOs, which are difficult to discover by previous selection techniques, provides observational confirmation of the steep luminosity function inferred indirectly from optical counts. However, possible explanations for the lack of higher luminosity QSOs in our sample prove even more interesting. If one accepts the global value of the X-ray to optical luminosity ratio proposed by Zamorani et al. and by Ku, Helfand, and Lucy, then reconciliation of this ratio with our observations severely constrains the QSO space density and luminosity functions. Alternatively, the 'typical' QSO (a radio-quiet, high-redshift (z > 1), optically luminous but not superluminous (M_c ≥ -27) object) may not be a strong X-ray source. This inference is not in conflict with existing results from Einstein X-ray surveys of preselected QSOs, which also fail to detect such objects. The contribution of QSOs to the diffuse X-ray background radiation is therefore highly uncertain, but may be quite small. Current X-ray data probably do not place significant constraints on the optical number counts of faint QSOs.
NASA Astrophysics Data System (ADS)
Gruppioni, C.; Berta, S.; Spinoglio, L.; Pereira-Santaella, M.; Pozzi, F.; Andreani, P.; Bonato, M.; De Zotti, G.; Malkan, M.; Negrello, M.; Vallini, L.; Vignali, C.
2016-06-01
We present new estimates of AGN accretion and star formation (SF) luminosity in galaxies obtained for the local 12 μm sample of Seyfert galaxies (12MGS), by performing a detailed broad-band spectral energy distribution (SED) decomposition including the emission of stars, dust heated by SF and a possible AGN dusty torus. Thanks to the availability of data from the X-rays to the sub-millimetre, we constrain and test the contribution of the stellar, AGN and SF components to the SEDs. The availability of Spitzer-InfraRed Spectrograph (IRS) low-resolution mid-infrared (mid-IR) spectra is crucial to constrain the dusty torus component at its peak wavelengths. The results of SED fitting are also tested against the available information in other bands: the reconstructed AGN bolometric luminosity is compared to those derived from X-rays and from the high excitation IR lines tracing AGN activity like [Ne V] and [O IV]. The IR luminosity due to SF and the intrinsic AGN bolometric luminosity are shown to be strongly related to the IR line luminosity. Variations of these relations with different AGN fractions are investigated, showing that the relation dispersions are mainly due to different AGN relative contribution within the galaxy. Extrapolating these local relations between line and SF or AGN luminosities to higher redshifts, by means of recent Herschel galaxy evolution results, we then obtain mid- and far-IR line luminosity functions useful to estimate how many star-forming galaxies and AGN we expect to detect in the different lines at different redshifts and luminosities with future IR facilities (e.g. JWST, SPICA).
NASA Astrophysics Data System (ADS)
Buat, V.; Takeuchi, T. T.; Iglesias-Páramo, J.; Xu, C. K.; Burgarella, D.; Boselli, A.; Barlow, T.; Bianchi, L.; Donas, J.; Forster, K.; Friedman, P. G.; Heckman, T. M.; Lee, Y.-W.; Madore, B. F.; Martin, D. C.; Milliard, B.; Morissey, P.; Neff, S.; Rich, M.; Schiminovich, D.; Seibert, M.; Small, T.; Szalay, A. S.; Welsh, B.; Wyder, T.; Yi, S. K.
2007-12-01
We select far-infrared (FIR: 60 μm) and far-ultraviolet (FUV: 1530 Å) samples of nearby galaxies in order to discuss the biases encountered by monochromatic surveys (FIR or FUV). Very different volumes are sampled by each selection, and much care is taken to apply volume corrections to all the analyses. The distributions of the bolometric luminosity of young stars are compared for both samples: they are found to be consistent with each other for galaxies of intermediate luminosities, but some differences are found for high (>5×10^10 L_⊙) luminosities. The shallowness of the IRAS survey prevents us from securing a comparison at low luminosities (<2×10^9 L_⊙). The ratio of the total infrared (TIR) luminosity to the FUV luminosity is found to increase with the bolometric luminosity in a similar way for both samples up to 5×10^10 L_⊙. Brighter galaxies are found to have a different behavior according to their selection: the L_TIR/L_FUV ratio of the FUV-selected galaxies brighter than 5×10^10 L_⊙ reaches a plateau, whereas L_TIR/L_FUV continues to increase with the luminosity of bright galaxies selected in FIR. The volume-averaged specific star formation rate (SFR per unit galaxy stellar mass, SSFR) is found to decrease toward massive galaxies within each selection. The mean values of the SSFR are found to be larger than those measured for optical and NIR-selected samples over the whole mass range for the FIR selection, and for masses larger than 10^10 M_⊙ for the FUV selection. Luminous and massive galaxies selected in FIR appear as active as galaxies with similar characteristics detected at z~0.7.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lusso, E.; Hennawi, J. F.; Richards, G. T.
2013-11-10
The fraction of active galactic nucleus (AGN) luminosity obscured by dust and re-emitted in the mid-IR is critical for understanding AGN evolution, unification, and parsec-scale AGN physics. For unobscured (Type 1) AGNs, where we have a direct view of the accretion disk, the dust covering factor can be measured by computing the ratio of re-processed mid-IR emission to intrinsic nuclear bolometric luminosity. We use this technique to estimate the obscured AGN fraction as a function of luminosity and redshift for 513 Type 1 AGNs from the XMM-COSMOS survey. The re-processed and intrinsic luminosities are computed by fitting the 18-band COSMOS photometry with a custom spectral energy distribution fitting code, which jointly models emission from hot dust in the AGN torus, from the accretion disk, and from the host galaxy. We find a relatively shallow decrease of the luminosity ratio as a function of L_bol, which we interpret as a corresponding decrease in the obscured fraction. In the context of the receding torus model, where dust sublimation reduces the covering factor of more luminous AGNs, our measurements require a torus height that increases with luminosity as h ∝ L_bol^(0.3-0.4). Our obscured-fraction-luminosity relation agrees with determinations from Sloan Digital Sky Survey censuses of Type 1 and Type 2 quasars and favors a torus optically thin to mid-IR radiation. We find a much weaker dependence of the obscured fraction on 2-10 keV luminosity than previous determinations from X-ray surveys and argue that X-ray surveys miss a significant population of highly obscured Compton-thick AGNs. Our analysis shows no clear evidence for evolution of the obscured fraction with redshift.
CONSTRAINTS ON THE FAINT END OF THE QUASAR LUMINOSITY FUNCTION AT z ∼ 5 IN THE COSMOS FIELD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikeda, H.; Matsuoka, K.; Kajisawa, M.
2012-09-10
We present the result of our low-luminosity quasar survey in the redshift range of 4.5 ≲ z ≲ 5.5 in the COSMOS field. Using the COSMOS photometric catalog, we selected 15 quasar candidates with 22 < i' < 24 at z ∼ 5 that are ∼3 mag fainter than the Sloan Digital Sky Survey quasars in the same redshift range. We obtained optical spectra for 14 of the 15 candidates using FOCAS on the Subaru Telescope and did not identify any low-luminosity type-1 quasars at z ∼ 5, while a low-luminosity type-2 quasar at z ≈ 5.07 was discovered. In order to constrain the faint end of the quasar luminosity function at z ∼ 5, we calculated the 1σ confidence upper limits of the space density of type-1 quasars. As a result, the 1σ confidence upper limits on the quasar space density are Φ < 1.33 × 10^-7 Mpc^-3 mag^-1 for -24.52 < M_1450 < -23.52 and Φ < 2.88 × 10^-7 Mpc^-3 mag^-1 for -23.52 < M_1450 < -22.52. The inferred 1σ confidence upper limits of the space density are then used to provide constraints on the faint-end slope and the break absolute magnitude of the quasar luminosity function at z ∼ 5. We find that the quasar space density decreases gradually as a function of redshift at low luminosity (M_1450 ∼ -23), being similar to the trend found for quasars with high luminosity (M_1450 < -26). This result is consistent with the so-called downsizing evolution of quasars seen at lower redshifts.
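The quoted limits follow from Poisson statistics for a survey with zero type-1 detections in a given luminosity bin. The snippet below is a generic sketch of how such a 1σ upper limit on a space density can be evaluated; the effective survey volume and magnitude bin width are placeholders, not the COSMOS values.

    # Sketch: 1-sigma (84.1%) Poisson upper limit on a space density with zero detections.
    from scipy.stats import chi2

    def poisson_upper_limit(n_obs, cl=0.8413):
        """Upper limit on the Poisson mean given n_obs observed events (via the chi-square relation)."""
        return 0.5 * chi2.ppf(cl, 2 * (n_obs + 1))

    n_up = poisson_upper_limit(0)      # about 1.84 events for zero detections
    volume_mpc3 = 1.0e7                # hypothetical effective comoving volume [Mpc^3]
    dmag = 1.0                         # magnitude bin width [mag]
    print(f"Phi < {n_up / (volume_mpc3 * dmag):.2e} Mpc^-3 mag^-1")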
Efficient and Scalable Cross-Matching of (Very) Large Catalogs
NASA Astrophysics Data System (ADS)
Pineau, F.-X.; Boch, T.; Derriere, S.
2011-07-01
Whether it be for building multi-wavelength datasets from independent surveys, studying changes in objects' luminosities, or detecting moving objects (stellar proper motions, asteroids), cross-catalog matching is a technique widely used in astronomy. The need for efficient, reliable and scalable cross-catalog matching is becoming even more pressing with forthcoming projects which will produce huge catalogs in which astronomers will dig for rare objects, perform statistical analysis and classification, or carry out real-time transient detection. We have developed a formalism and the corresponding technical framework to address the challenge of fast cross-catalog matching. Our formalism supports more than simple nearest-neighbor search, and handles elliptical positional errors. Scalability is improved by partitioning the sky using the HEALPix scheme, and processing each sky cell independently. The use of multi-threaded two-dimensional kd-trees adapted to managing equatorial coordinates enables efficient neighbor search. The whole process can run on a single computer, but could also use clusters of machines to cross-match future very large surveys such as GAIA or LSST in reasonable times. We already achieve performances where the 2MASS (˜470M sources) and SDSS DR7 (˜350M sources) can be matched on a single machine in less than 10 minutes. We aim at providing astronomers with a catalog cross-matching service, available on-line and leveraging the catalogs present in the VizieR database. This service will allow users both to access pre-computed cross-matches across some very large catalogs, and to run customized cross-matching operations. It will also support VO protocols for synchronous or asynchronous queries.
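A minimal sketch of the nearest-neighbour step of such a cross-match, using astropy's built-in sky matching, follows; the random coordinates stand in for real catalogues, and the HEALPix partitioning and elliptical-error handling described above are not reproduced.

    # Sketch: nearest-neighbour cross-match of two catalogues on the sky (kd-tree based).
    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    rng = np.random.default_rng(2)
    cat1 = SkyCoord(ra=rng.uniform(0, 360, 10000) * u.deg,
                    dec=rng.uniform(-30, 30, 10000) * u.deg)
    cat2 = SkyCoord(ra=rng.uniform(0, 360, 20000) * u.deg,
                    dec=rng.uniform(-30, 30, 20000) * u.deg)

    idx, sep2d, _ = cat1.match_to_catalog_sky(cat2)   # nearest neighbour in cat2 for each cat1 source
    matched = sep2d < 2.0 * u.arcsec                  # hypothetical match radius
    print(f"{matched.sum()} matches within 2 arcsec")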
Discovery and spectroscopy of the young jovian planet 51 Eri b with the Gemini Planet Imager.
Macintosh, B; Graham, J R; Barman, T; De Rosa, R J; Konopacky, Q; Marley, M S; Marois, C; Nielsen, E L; Pueyo, L; Rajan, A; Rameau, J; Saumon, D; Wang, J J; Patience, J; Ammons, M; Arriaga, P; Artigau, E; Beckwith, S; Brewster, J; Bruzzone, S; Bulger, J; Burningham, B; Burrows, A S; Chen, C; Chiang, E; Chilcote, J K; Dawson, R I; Dong, R; Doyon, R; Draper, Z H; Duchêne, G; Esposito, T M; Fabrycky, D; Fitzgerald, M P; Follette, K B; Fortney, J J; Gerard, B; Goodsell, S; Greenbaum, A Z; Hibon, P; Hinkley, S; Cotten, T H; Hung, L-W; Ingraham, P; Johnson-Groh, M; Kalas, P; Lafreniere, D; Larkin, J E; Lee, J; Line, M; Long, D; Maire, J; Marchis, F; Matthews, B C; Max, C E; Metchev, S; Millar-Blanchaer, M A; Mittal, T; Morley, C V; Morzinski, K M; Murray-Clay, R; Oppenheimer, R; Palmer, D W; Patel, R; Perrin, M D; Poyneer, L A; Rafikov, R R; Rantakyrö, F T; Rice, E L; Rojo, P; Rudy, A R; Ruffio, J-B; Ruiz, M T; Sadakuni, N; Saddlemyer, L; Salama, M; Savransky, D; Schneider, A C; Sivaramakrishnan, A; Song, I; Soummer, R; Thomas, S; Vasisht, G; Wallace, J K; Ward-Duong, K; Wiktorowicz, S J; Wolff, S G; Zuckerman, B
2015-10-02
Directly detecting thermal emission from young extrasolar planets allows measurement of their atmospheric compositions and luminosities, which are influenced by their formation mechanisms. Using the Gemini Planet Imager, we discovered a planet orbiting the ~20-million-year-old star 51 Eridani at a projected separation of 13 astronomical units. Near-infrared observations show a spectrum with strong methane and water-vapor absorption. Modeling of the spectra and photometry yields a luminosity (normalized by the luminosity of the Sun) of 1.6 to 4.0 × 10^-6 and an effective temperature of 600 to 750 kelvin. For this age and luminosity, "hot-start" formation models indicate a mass twice that of Jupiter. This planet also has a sufficiently low luminosity to be consistent with the "cold-start" core-accretion process that may have formed Jupiter. Copyright © 2015, American Association for the Advancement of Science.
Discovery and spectroscopy of the young Jovian planet 51 Eri b with the Gemini Planet Imager
Macintosh, B.; Graham, J. R.; Barman, T.; ...
2015-10-02
Directly detecting thermal emission from young extrasolar planets allows measurement of their atmospheric compositions and luminosities, which are influenced by their formation mechanisms. Using the Gemini Planet Imager, we discovered a planet orbiting the ~20-million-year-old star 51 Eridani at a projected separation of 13 astronomical units. Near-infrared observations show a spectrum with strong methane and water-vapor absorption. Modeling of the spectra and photometry yields a luminosity (normalized by the luminosity of the Sun) of 1.6 to 4.0 × 10^-6 and an effective temperature of 600 to 750 kelvin. For this age and luminosity, “hot-start” formation models indicate a mass twice that of Jupiter. As a result, this planet also has a sufficiently low luminosity to be consistent with the “cold-start” core-accretion process that may have formed Jupiter.
Systematic study of magnetar outbursts
NASA Astrophysics Data System (ADS)
Coti Zelati, F.; Rea, N.; Pons, J. A.; Campana, S.; Esposito, P.
2017-12-01
We present the results of a systematic study of all magnetar outbursts observed to date through a reanalysis of data acquired in about 1100 X-ray observations. We track the temporal evolution of the luminosity for all these events, model their decays empirically, and estimate the characteristic decay time-scales and the energy involved. We study the link between different parameters (maximum luminosity increase, outburst peak luminosities, quiescent X-ray and bolometric luminosities, energetics, decay time-scales, magnetic field, spin-down luminosity and age), and reveal several correlations between different quantities. We discuss our results in the framework of the models proposed to explain the triggering mechanism and evolution of magnetar outbursts. The study is complemented by the Magnetar Outburst Online Catalog (http://www.magnetars.ice.csic.es), an interactive database where the user can plot any combination of the parameters derived in this work and download all reduced data.
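The decays are modelled empirically; as a minimal sketch of one such fit (a synthetic light curve and a single-exponential decay law, not data or models from the catalogue):

    # Sketch: fit an outburst decay with an exponential model; luminosities in units of 1e33 erg/s.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, L_quiescent, L_outburst, tau):
        return L_quiescent + L_outburst * np.exp(-t / tau)

    rng = np.random.default_rng(3)
    t_days = np.linspace(0, 300, 40)
    L_obs = decay(t_days, 1.0, 500.0, 60.0) * rng.normal(1.0, 0.05, t_days.size)  # synthetic data

    popt, _ = curve_fit(decay, t_days, L_obs, p0=(1.0, 100.0, 30.0))
    print("quiescent L, outburst amplitude, decay timescale [d]:", popt)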
Optical Variability of Two High-Luminosity Radio-Quiet Quasars, PDS 456 and PHL 1811
NASA Astrophysics Data System (ADS)
Gaskell, C. M.; Benker, A. J.; Campbell, J. S.; Crowley, K. A.; George, T. A.; Hedrick, C. H.; Hiller, M. E.; Klimek, E. S.; Leonard, J. P.; Peterson, B. W.; Sanders, K. M.
2003-12-01
PDS 456 and PHL 1811 are two of the highest luminosity low-redshift quasars. Both have optical luminosities comparable to 3C 273, but they have low radio luminosities. PDS 456 is a broad line object but PHL 1811 could be classified as a high-luminosity Narrow-Line Seyfert 1 (NLS1) object. We present the results of optical (V-band) continuum monitoring of PDS 456 and PHL 1811. We compare the variability properties of these two very different AGNs with those of the radio-loud AGN 3C 273, and we discuss the implications for the origin of the optical continuum variability in AGNs. This research has been supported in part by the Howard Hughes Foundation, Nebraska EPSCoR, the University of Nebraska Layman Fund, the University of Nebraska Undergraduate Creative Activities and Research Experiences, Pepsi-Cola, and the National Science Foundation through grant AST 03-07912.
Discovery and spectroscopy of the young jovian planet 51 Eri b with the Gemini Planet Imager
NASA Astrophysics Data System (ADS)
Macintosh, B.; Graham, J. R.; Barman, T.; De Rosa, R. J.; Konopacky, Q.; Marley, M. S.; Marois, C.; Nielsen, E. L.; Pueyo, L.; Rajan, A.; Rameau, J.; Saumon, D.; Wang, J. J.; Patience, J.; Ammons, M.; Arriaga, P.; Artigau, E.; Beckwith, S.; Brewster, J.; Bruzzone, S.; Bulger, J.; Burningham, B.; Burrows, A. S.; Chen, C.; Chiang, E.; Chilcote, J. K.; Dawson, R. I.; Dong, R.; Doyon, R.; Draper, Z. H.; Duchêne, G.; Esposito, T. M.; Fabrycky, D.; Fitzgerald, M. P.; Follette, K. B.; Fortney, J. J.; Gerard, B.; Goodsell, S.; Greenbaum, A. Z.; Hibon, P.; Hinkley, S.; Cotten, T. H.; Hung, L.-W.; Ingraham, P.; Johnson-Groh, M.; Kalas, P.; Lafreniere, D.; Larkin, J. E.; Lee, J.; Line, M.; Long, D.; Maire, J.; Marchis, F.; Matthews, B. C.; Max, C. E.; Metchev, S.; Millar-Blanchaer, M. A.; Mittal, T.; Morley, C. V.; Morzinski, K. M.; Murray-Clay, R.; Oppenheimer, R.; Palmer, D. W.; Patel, R.; Perrin, M. D.; Poyneer, L. A.; Rafikov, R. R.; Rantakyrö, F. T.; Rice, E. L.; Rojo, P.; Rudy, A. R.; Ruffio, J.-B.; Ruiz, M. T.; Sadakuni, N.; Saddlemyer, L.; Salama, M.; Savransky, D.; Schneider, A. C.; Sivaramakrishnan, A.; Song, I.; Soummer, R.; Thomas, S.; Vasisht, G.; Wallace, J. K.; Ward-Duong, K.; Wiktorowicz, S. J.; Wolff, S. G.; Zuckerman, B.
2015-10-01
Directly detecting thermal emission from young extrasolar planets allows measurement of their atmospheric compositions and luminosities, which are influenced by their formation mechanisms. Using the Gemini Planet Imager, we discovered a planet orbiting the ~20-million-year-old star 51 Eridani at a projected separation of 13 astronomical units. Near-infrared observations show a spectrum with strong methane and water-vapor absorption. Modeling of the spectra and photometry yields a luminosity (normalized by the luminosity of the Sun) of 1.6 to 4.0 × 10^-6 and an effective temperature of 600 to 750 kelvin. For this age and luminosity, “hot-start” formation models indicate a mass twice that of Jupiter. This planet also has a sufficiently low luminosity to be consistent with the “cold-start” core-accretion process that may have formed Jupiter.
Araki, Tadashi; Ikeda, Nobutaka; Shukla, Devarshi; Jain, Pankaj K; Londhe, Narendra D; Shrivastava, Vimal K; Banchhor, Sumit K; Saba, Luca; Nicolaides, Andrew; Shafique, Shoaib; Laird, John R; Suri, Jasjit S
2016-05-01
Percutaneous coronary interventional procedures need advance planning prior to stenting or an endarterectomy. Cardiologists use intravascular ultrasound (IVUS) for screening, risk assessment and stratification of coronary artery disease (CAD). We hypothesize that plaque components are vulnerable to rupture due to plaque progression. Currently, there are no standard grayscale IVUS tools for risk assessment of plaque rupture. This paper presents a novel strategy for risk stratification based on plaque morphology embedded with principal component analysis (PCA) for plaque feature dimensionality reduction and dominant feature selection technique. The risk assessment utilizes 56 grayscale coronary features in a machine learning framework while linking information from carotid and coronary plaque burdens due to their common genetic makeup. This system consists of a machine learning paradigm which uses a support vector machine (SVM) combined with PCA for optimal and dominant coronary artery morphological feature extraction. Carotid artery proven intima-media thickness (cIMT) biomarker is adapted as a gold standard during the training phase of the machine learning system. For the performance evaluation, K-fold cross validation protocol is adapted with 20 trials per fold. For choosing the dominant features out of the 56 grayscale features, a polling strategy of PCA is adapted where the original value of the features is unaltered. Different protocols are designed for establishing the stability and reliability criteria of the coronary risk assessment system (cRAS). Using the PCA-based machine learning paradigm and cross-validation protocol, a classification accuracy of 98.43% (AUC 0.98) with K=10 folds using an SVM radial basis function (RBF) kernel was achieved. A reliability index of 97.32% and machine learning stability criteria of 5% were met for the cRAS. This is the first Computer aided design (CADx) system of its kind that is able to demonstrate the ability of coronary risk assessment and stratification while demonstrating a successful design of the machine learning system based on our assumptions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
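A minimal sketch of the PCA-plus-SVM pattern described above (synthetic stand-ins for the 56 grayscale IVUS features and the risk labels; not the cRAS implementation or its cIMT gold standard):

    # Sketch: PCA for feature reduction feeding an RBF-kernel SVM, scored with K-fold cross-validation.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score, KFold

    rng = np.random.default_rng(4)
    X = rng.normal(size=(400, 56))                 # 56 hypothetical grayscale features per patient
    y = (X[:, :6].mean(axis=1) > 0).astype(int)    # placeholder high-risk / low-risk label

    model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    scores = cross_val_score(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
    print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")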
NASA Astrophysics Data System (ADS)
Hannel, Mark D.; Abdulali, Aidan; O'Brien, Michael; Grier, David G.
2018-06-01
Holograms of colloidal particles can be analyzed with the Lorenz-Mie theory of light scattering to measure individual particles' three-dimensional positions with nanometer precision while simultaneously estimating their sizes and refractive indexes. Extracting this wealth of information begins by detecting and localizing features of interest within individual holograms. Conventionally approached with heuristic algorithms, this image analysis problem can be solved faster and more generally with machine-learning techniques. We demonstrate that two popular machine-learning algorithms, cascade classifiers and deep convolutional neural networks (CNN), can solve the feature-localization problem orders of magnitude faster than current state-of-the-art techniques. Our CNN implementation localizes holographic features precisely enough to bootstrap more detailed analyses based on the Lorenz-Mie theory of light scattering. The wavelet-based Haar cascade proves to be less precise, but is so computationally efficient that it creates new opportunities for applications that emphasize speed and low cost. We demonstrate its use as a real-time targeting system for holographic optical trapping.
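As an illustrative sketch of the CNN approach (an untrained toy network that regresses a single feature's (x, y) position from a small hologram crop; the architecture and crop size are assumptions, not the network described above):

    # Sketch: tiny CNN mapping a 64x64 hologram crop to an (x, y) feature position (PyTorch).
    import torch
    import torch.nn as nn

    class Localizer(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            )
            self.head = nn.Linear(64 * 8 * 8, 2)   # predicted (x, y) in pixels

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    net = Localizer()
    crops = torch.randn(4, 1, 64, 64)              # batch of synthetic crops
    print(net(crops).shape)                        # torch.Size([4, 2])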
NASA Astrophysics Data System (ADS)
Rajabifar, Bahram; Kim, Sanha; Slinker, Keith; Ehlert, Gregory J.; Hart, A. John; Maschmann, Matthew R.
2015-10-01
We demonstrate that vertically aligned carbon nanotubes (CNTs) can be precisely machined in a low pressure water vapor ambient using the electron beam of an environmental scanning electron microscope. The electron beam locally damages the irradiated regions of the CNT forest and also dissociates the water vapor molecules into reactive species including hydroxyl radicals. These species then locally oxidize the damaged region of the CNTs. The technique offers material removal capabilities ranging from selected CNTs to hundreds of cubic microns. We study how the material removal rate is influenced by the acceleration voltage, beam current, dwell time, operating pressure, and CNT orientation. Milled cuts with depths between 0 and 100 microns are generated, corresponding to a material removal rate of up to 20.1 μm³/min. The technique produces little carbon residue and does not disturb the native morphology of the CNT network. Finally, we demonstrate direct machining of pyramidal surfaces and re-entrant cuts to create freestanding geometries.
Ogawa, Takeshi; Hirayama, Jun-Ichiro; Gupta, Pankaj; Moriya, Hiroki; Yamaguchi, Shumpei; Ishikawa, Akihiro; Inoue, Yoshihiro; Kawanabe, Motoaki; Ishii, Shin
2015-08-01
Smart houses for elderly or physically challenged people need a method to understand residents' intentions during their daily-living behaviors. To explore a new possibility, we here developed a novel brain-machine interface (BMI) system integrated with an experimental smart house, based on a prototype of a wearable near-infrared spectroscopy (NIRS) device, and verified the system in a specific task of controlling the house's equipment with the BMI. We recorded NIRS signals of three participants during typical daily-living actions (DLAs), and classified them with a linear support vector machine. In our off-line analysis, four DLAs were classified at about 70% mean accuracy, significantly above the chance level of 25%, in every participant. In an online demonstration in the real smart house, one participant successfully controlled three target appliances with the BMI at 81.3% accuracy. Thus we successfully demonstrated the feasibility of using NIRS-BMI in real smart houses, which will possibly enhance new assistive smart-home technologies.
Influence of the direction of selective laser sintering on machinability of parts from 316L steel
NASA Astrophysics Data System (ADS)
Alexeev, V. P.; Balyakin, A. V.; Khaimovich, A. I.
2017-02-01
This work presents the results of research on the impact of layer-by-layer growth of 316L steel workpieces on their machinability. Residual stress determinations and hardness measurements for the grown workpieces are reported. A series of experimental studies has been performed in order to determine the cutting force which occurs in the process of machining. The microstructure of the grown workpieces has been examined. It has been shown that workpieces produced using Selective Laser Melting technology have a microstructure consisting of a totality of 'microwelded seams', which have a significant influence on the behavior of deformation processes during machining. The studies have shown that, in lateral milling of the horizontally grown workpiece, the codirectional microwelded borders prevent significant deformation of the misalignment, which increases the cutting force by up to 10% as compared with milling of the vertically grown workpiece.
Automation of energy demand forecasting
NASA Astrophysics Data System (ADS)
Siddique, Sanzad
Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model searching process for econometric models. Further improvements in the accuracy of the energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity. Algorithms for learning domain knowledge from time series data using machine learning methods are also presented. The novel search-based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model searching technique is capable of finding an appropriate forecasting model. Further experimental results demonstrate an improved forecasting accuracy achieved by using the novel machine learning techniques introduced in this thesis. This thesis presents an analysis of how the machine learning techniques learn domain knowledge. The learned domain knowledge is used to improve the forecast accuracy.
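As a minimal sketch of the automated search idea (score a small candidate model set on a held-out portion of a demand series and keep the best performer; the synthetic series and candidate set are placeholders for the thesis's econometric model space):

    # Sketch: pick a forecasting model automatically by held-out error on a demand time series.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(5)
    t = np.arange(500)
    demand = 100 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

    def lagged(series, n_lags=24):
        """Build a lag matrix so each row predicts the next value from the previous n_lags values."""
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        return X, series[n_lags:]

    X, y = lagged(demand)
    split = int(0.8 * len(y))
    candidates = {"linear": LinearRegression(),
                  "boosted_trees": GradientBoostingRegressor(random_state=0)}
    errors = {name: mean_absolute_error(y[split:], m.fit(X[:split], y[:split]).predict(X[split:]))
              for name, m in candidates.items()}
    print("held-out MAE per candidate:", errors, "-> best:", min(errors, key=errors.get))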
Reversibility in Quantum Models of Stochastic Processes
NASA Astrophysics Data System (ADS)
Gier, David; Crutchfield, James; Mahoney, John; James, Ryan
Natural phenomena such as time series of neural firing, orientation of layers in crystal stacking and successive measurements in spin-systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However ɛ-machines are often not ideal because their statistical complexity (C_μ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (C_q) obeys the relation C_μ ≥ C_q ≥ E. q-machines can be constructed to consider longer lengths of strings, resulting in greater compression. With code-words of sufficiently long length, the statistical complexity becomes time-symmetric, a feature apparently novel to this quantum representation. This result has ramifications for compression of classical information in quantum computing and quantum communication technology.
Experimental Machine Learning of Quantum States
NASA Astrophysics Data System (ADS)
Gao, Jun; Qiao, Lu-Feng; Jiao, Zhi-Qiang; Ma, Yue-Chi; Hu, Cheng-Qiu; Ren, Ruo-Jing; Yang, Ai-Lin; Tang, Hao; Yung, Man-Hong; Jin, Xian-Min
2018-06-01
Quantum information technologies provide promising applications in communication and computation, while machine learning has become a powerful technique for extracting meaningful structures in "big data." A crossover between quantum information and machine learning represents a new interdisciplinary area stimulating progress in both fields. Traditionally, a quantum state is characterized by quantum-state tomography, which is a resource-consuming process when scaled up. Here we experimentally demonstrate a machine-learning approach to construct a quantum-state classifier for identifying the separability of quantum states. We show that it is possible to experimentally train an artificial neural network to efficiently learn and classify quantum states, without the need of obtaining the full information of the states. We also show how adding a hidden layer of neurons to the neural network can significantly boost the performance of the state classifier. These results shed new light on how classification of quantum states can be achieved with limited resources, and represent a step towards machine-learning-based applications in quantum information processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tal, J.; Lopez, A.; Edwards, J.M.
1995-04-01
In this paper, an alternative solution to the traditional CNC machine tool controller has been introduced. Software and hardware modules have been described and their incorporation in a CNC control system has been outlined. This type of CNC machine tool controller demonstrates that the technology is accessible and can be readily implemented into an open architecture machine tool controller. The benefit to the user is greater controller flexibility, while being economically achievable. PC-based motion as well as non-motion features will provide flexibility through a Windows environment. Upgrading this type of controller system through software revisions will keep the machine tool in a competitive state with minimal effort. Software and hardware modules are mass produced, permitting competitive procurement and incorporation. Open architecture CNC systems provide diagnostics, thus enhancing maintainability and machine tool up-time. A major concern of traditional CNC systems has been operator training time. Training time can be greatly minimized by making use of Windows environment features.
Local Luminosity Function at 15 μm and Galaxy Evolution Seen by ISOCAM 15 μm Surveys
NASA Technical Reports Server (NTRS)
Xu, C.
2000-01-01
A local luminosity function at 15 μm is derived using the bivariate (15 μm vs. 60 μm luminosities) method, based on the newly published ISOCAM LW3-band (15 μm) survey of the very deep IRAS 60 μm sample in the north ecliptic pole region (NEPR).
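Schematically, the bivariate method obtains the 15 μm luminosity function by combining the 60 μm luminosity function with the conditional distribution of 15 μm luminosity at fixed 60 μm luminosity; a sketch of that relation in LaTeX (notation assumed, not reproduced from the paper):

    \Phi(L_{15}) = \int \Phi(L_{60}) \, P(L_{15} \mid L_{60}) \, dL_{60}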
VY Canis Majoris: The Astrophysical Basis of Its Luminosity
NASA Astrophysics Data System (ADS)
Gehrz, Robert D.; Humphreys, R. M.; Jones, T. J.
2006-12-01
The luminosity of the famous red supergiant VY CMa (L = 4-5 × 10^5 L_⊙) is well-determined from its spectral energy distribution and distance, and places it near the empirical upper luminosity limit for cool hypergiants. In contrast, its surface temperature is fundamentally ill-defined. Implications for its location on the HR Diagram and its apparent size are discussed.
A reevaluation of the infrared-radio correlation for spiral galaxies
NASA Technical Reports Server (NTRS)
Devereux, Nicholas A.; Eales, Stephen A.
1989-01-01
The infrared-radio correlation has been reexamined for a sample of 237 optically bright spiral galaxies which range from 10^8 to 10^11 solar luminosities in far-infrared luminosity. The slope of the correlation is not unity. A simple model in which dust heating by both star formation and the interstellar radiation field contribute to the far-infrared luminosity can account for the nonunity slope. The model differs from previous two component models, however, in that the relative contribution of the two components is independent of far-infrared color temperature, but is dependent on the far-infrared luminosity.
Using luminosity data as a proxy for economic statistics
Chen, Xi
2011-01-01
A pervasive issue in social and environmental research has been how to improve the quality of socioeconomic data in developing countries. Given the shortcomings of standard sources, the present study examines luminosity (measures of nighttime lights visible from space) as a proxy for standard measures of output (gross domestic product). We compare output and luminosity at the country level and at the 1° latitude × 1° longitude grid-cell level for the period 1992–2008. We find that luminosity has informational value for countries with low-quality statistical systems, particularly for those countries with no recent population or economic censuses. PMID:21576474
Einstein X-ray observations of Herbig Ae/Be stars
NASA Technical Reports Server (NTRS)
Damiani, F.; Micela, G.; Sciortino, S.; Harnden, F. R., Jr.
1994-01-01
We have investigated the X-ray emission from Herbig Ae/Be stars, using the full set of Einstein Imaging Proportional Counter (IPC) observations. Of a total of 31 observed Herbig stars, 11 are confidently identified with X-ray sources, with four additional dubious identifications. We have used maximum likelihood luminosity functions to study the distribution of X-ray luminosity, and we find that Be stars are significantly brighter in X-rays than Ae stars and that their X-ray luminosity is independent of projected rotational velocity v sin i. The X-ray emission is instead correlated with stellar bolometric luminosity and with effective temperature, and also with the kinetic luminosity of the stellar wind. These results seem to exclude a solar-like origin for the X-ray emission, a possibility suggested by the most recent models of Herbig stars' structure, and suggest an analogy with the X-ray emission of O (and early B) stars. We also observe correlations between X-ray luminosity and the emission at 2.2 microns (K band) and 25 microns, which strengthen the case for X-ray emission of Herbig stars originating in their circumstellar envelopes.
Star formation in AGNs at the hundred parsec scale using MIR high-resolution images
NASA Astrophysics Data System (ADS)
Ruschel-Dutra, Daniel; Rodríguez Espinosa, José Miguel; González Martín, Omaira; Pastoriza, Miriani; Riffel, Rogério
2017-04-01
It has been well established in the past decades that the central black hole masses of galaxies correlate with dynamical properties of their harbouring bulges. This notion begs the question of whether there are causal connections between the active galactic nucleus (AGN) and its immediate vicinity in the host galaxy. In this paper, we analyse the presence of circumnuclear star formation in a sample of 15 AGN using mid-infrared observations. The data consist of a set of 11.3 μm polycyclic aromatic hydrocarbon emission and reference continuum images, taken with ground-based telescopes, with sub-arcsecond resolution. By comparing our star formation estimates with AGN accretion rates, derived from X-ray luminosities, we investigate the validity of theoretical predictions for the AGN-starburst connection. Our main results are: (I) circumnuclear star formation is found, at distances as low as tens of parsecs from the nucleus, in nearly half of our sample (7/15); (II) star formation luminosities are correlated with the bolometric luminosity of the AGN (L_AGN) only for objects with L_AGN ≥ 10^42 erg s^-1; (III) low-luminosity AGNs (L_AGN < 10^42 erg s^-1) seem to have starburst luminosities far greater than their bolometric luminosities.
The effect of accretion environment at large radius on hot accretion flows
NASA Astrophysics Data System (ADS)
Yang, Xiao-Hong; Bu, De-Fu
2018-05-01
We study the effects of the accretion environment (gas density, temperature, and angular momentum) at large radii (~10 pc) on the luminosity of hot accretion flows. The radiative feedback effects from the accretion flow on the accretion environment are also self-consistently taken into account. We find that the slowly rotating flows at large radii can significantly deviate from Bondi accretion when radiation heating and cooling are considered. We further find that when the temperature of the environment gas is low (e.g. T = 2 × 10^7 K), the luminosity of hot accretion flows is high. When the temperature of the gas is high (e.g. T ≥ 4 × 10^7 K), the luminosity of the hot accretion flow significantly decreases. The environment gas density can also significantly influence the luminosity of accretion flows. When the density is higher than ~4 × 10^-22 g cm^-3 and the temperature is lower than 2 × 10^7 K, a hot accretion flow with luminosity lower than 2 per cent L_Edd is not present. Therefore, the parsec-scale environment density and temperature are two important parameters determining the luminosity. The results are also useful for the subgrid models adopted by cosmological simulations.
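For orientation (an addition to the abstract, using only the standard textbook relation), the spherical Bondi rate against which these rotating, radiatively heated flows are compared scales with the ambient density and temperature as

```latex
\dot{M}_{\rm Bondi} \;=\; 4\pi \lambda \,\frac{(G M_{\rm BH})^{2}\,\rho_{\infty}}{c_{s,\infty}^{3}},
\qquad
c_{s,\infty} \;=\; \sqrt{\frac{\gamma k_{\rm B} T_{\infty}}{\mu m_{\rm p}}},
```

with λ a dimensionless factor of order unity set by the adiabatic index γ. This makes explicit why the parsec-scale density and temperature are the controlling parameters of the resulting accretion luminosity once a radiative efficiency is assumed.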
Studies of hydrodynamic events in stellar evolution. 3: Ejection of planetary nebulae
NASA Technical Reports Server (NTRS)
Sparks, W. M.; Kutter, G. S.
1973-01-01
The dynamic behavior of the H-rich envelope (0.101 solar mass) of an evolved star (1.1 solar mass) is followed as the luminosity rises to 19000 solar luminosities during the second ascent of the red giant branch. For luminosities in the range 3100 to 19000 solar luminosities the H-rich envelope pulsates like a long-period variable (LPV) with periods of the order of a year. As L reaches 19000 solar luminosities, the entire H-rich envelope is ejected as a shell with speeds of a few tens of km/s. The ejection occurs on a timescale of a few LPV pulsation periods. This ejection is associated with the formation of a planetary nebula. The computations are based on an implicit hydrodynamic computer code. Temperature- and density-dependent opacities and excitation and ionization energies are included. As the H-rich envelope is accelerated off the stellar core, the gap between envelope and core is approximated by a vacuum filled with radiation. Across the vacuum, the luminosity is conserved, and the anisotropy of the radiation is considered as well as the solid angle subtended by the remnant star at the inner surface of the H-rich envelope. Spherical symmetry and the diffusion approximation are assumed.
The power of relativistic jets is larger than the luminosity of their accretion disks.
Ghisellini, G; Tavecchio, F; Maraschi, L; Celotti, A; Sbarrato, T
2014-11-20
Theoretical models for the production of relativistic jets from active galactic nuclei predict that jet power arises from the spin and mass of the central supermassive black hole, as well as from the magnetic field near the event horizon. The physical mechanism underlying the contribution from the magnetic field is the torque exerted on the rotating black hole by the field amplified by the accreting material. If the squared magnetic field is proportional to the accretion rate, then there will be a correlation between jet power and accretion luminosity. There is evidence for such a correlation, but inadequate knowledge of the accretion luminosity of the limited and inhomogeneous samples used prevented a firm conclusion. Here we report an analysis of archival observations of a sample of blazars (quasars whose jets point towards Earth) that overcomes previous limitations. We find a clear correlation between jet power, as measured through the γ-ray luminosity, and accretion luminosity, as measured by the broad emission lines, with the jet power dominating the disk luminosity, in agreement with numerical simulations. This implies that the magnetic field threading the black hole horizon reaches the maximum value sustainable by the accreting matter.
Narrow vs. Broad line Seyfert 1 galaxies: X-ray, optical and mid-infrared AGN characteristics
NASA Astrophysics Data System (ADS)
Lakićević, Maša; Popović, Luka Č.; Kovačević-Dojčinović, Jelena
2018-05-01
We investigated narrow line Seyfert 1 galaxies (NLS1s) at optical, mid-infrared (MIR) and X-ray wavelengths, comparing them to the broad line active galactic nuclei (BLAGNs). We found that black hole mass, coronal line luminosities, X-ray hardness ratio and X-ray, optical and MIR luminosities are higher for the BLAGNs than for NLS1s, while the polycyclic aromatic hydrocarbon (PAH) contribution and the accretion rates are higher for the NLS1s. Furthermore, we found some trends among spectral parameters that NLS1s have and BLAGNs do not have. The evolution of FWHM(Hβ) with the luminosities of MIR and coronal lines, continuum luminosities, PAH contribution, Hβ broad-line luminosity, FWHM([O III]) and EW(HβNLR) is among the important trends found for NLS1s. That may contribute to the insight that NLS1s are developing AGNs, growing their black holes, while their luminosities and FWHM(Hβ) consequently grow, and that BLAGNs are mature, larger objects of slower and/or different evolution. Black hole mass is related to PAH contribution only for NLS1s, which may suggest that PAHs are more efficiently destroyed in NLS1s.
Potential for luminosity improvement for low-energy RHIC operation with long bunches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedotov, A.; Blaskiewicz, M.
Electron cooling was proposed to increase luminosity of the RHIC collider for heavy ion beams at low energies. Luminosity decreases as the square of bunch intensity due to the beam loss from the RF bucket as a result of the longitudinal intra beam scattering (IBS), as well as due to the transverse emittance growth because of the transverse IBS. Both transverse and longitudinal IBS can be counteracted with electron cooling. This would allow one to keep the initial peak luminosity close to constant throughout the store essentially without the beam loss. In addition, the phase-space density of the hadron beams can be further increased by providing stronger electron cooling. Unfortunately, the defining limitation for low energies in RHIC is expected to be the space charge. Here we explore an idea of additional improvement in luminosity, on top of the one coming from just IBS compensation and longer stores, which may be expected if one can operate with longer bunches at the space-charge limit in a collider. This approach together with electron cooling may result in about 10-fold improvement in total luminosity for the low-energy RHIC program.
Chemically intuited, large-scale screening of MOFs by machine learning techniques
NASA Astrophysics Data System (ADS)
Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.
2017-10-01
A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising to apply not only to gas storage in MOFs but in many other material science projects.
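A hedged sketch of this kind of screening workflow, using placeholder descriptors and a synthetic target rather than the authors' dataset or model: a tree-ensemble regressor trained on simple chemical descriptors, with accuracy reported as a function of training-set size to echo the observation that predictions improve with sample size.

```python
# Illustrative MOF-style screening: predict a storage property from a few
# descriptors and watch the test-set R^2 grow with the number of training MOFs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 3000
X = rng.uniform(size=(n, 5))                       # e.g. pore volume, surface area, density (placeholders)
y = 3 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)   # synthetic uptake

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
for n_train in (100, 500, len(X_tr)):
    model = RandomForestRegressor(n_estimators=200, random_state=1)
    model.fit(X_tr[:n_train], y_tr[:n_train])
    print(n_train, round(r2_score(y_te, model.predict(X_te)), 3))
```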
Descartes' pineal neuropsychology.
Smith, C U
1998-02-01
The year 1996 marked the quattrocentenary of Descartes' birth. This paper reviews his pineal neuropsychology. It demonstrates that Descartes understood the true anatomical position of the pineal. His intraventricular pineal (or glande H) was a theoretical construct which allowed him to describe the operations of his man-like "earthen machine." In the Treatise of Man he shows how all the behaviors of such machines could then be accounted for without the presence of self-consciousness. Infrahuman animals are "conscious automata." In Passions of the Soul he adds, but only for humans, self-consciousness to the machine. In a modern formulation, only humans not only know but know that they know. Copyright 1998 Academic Press.
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.
1989-01-01
The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine, is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
Differential spatial activity patterns of acupuncture by a machine learning based analysis
NASA Astrophysics Data System (ADS)
You, Youbo; Bai, Lijun; Xue, Ting; Zhong, Chongguang; Liu, Zhenyu; Tian, Jie
2011-03-01
Acupoint specificity, lying at the core of Traditional Chinese Medicine, underlies the theoretical basis of acupuncture application. However, recent studies have reported that acupuncture stimulation at a nonacupoint and at an acupoint can both evoke similar signal intensity decreases in multiple regions, and these regions spatially overlap. We used a machine learning based Support Vector Machine (SVM) approach to elucidate the specific neural response pattern induced by acupuncture stimulation. Group analysis demonstrated that stimulation at two different acupoints (belonging to the same nerve segment but different meridians) could elicit distinct neural response patterns. Our findings may provide evidence for acupoint specificity.
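For illustration only, a minimal sketch of the classification step with a linear support vector machine and cross-validation; the feature matrix here is synthetic, standing in for voxel- or region-level response features, and the injected pattern is an assumption of the example rather than anything derived from the study.

```python
# Toy SVM decoding of two stimulation conditions from synthetic response features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 500))          # 60 scans x 500 voxel features (placeholder)
y = np.repeat([0, 1], 30)               # acupoint A vs acupoint B
X[y == 1, :20] += 0.5                   # inject a weak, spatially localized pattern

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```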
Precision Timing Calorimeter for High Energy Physics
Anderson, Dustin; Apresyan, Artur; Bornheim, Adolf; ...
2016-04-01
Here, we present studies on the performance and characterization of the time resolution of LYSO-based calorimeters. Results for an LYSO sampling calorimeter and an LYSO-tungsten Shashlik calorimeter are presented. We also demonstrate that a time resolution of 30 ps is achievable for the LYSO sampling calorimeter. Timing calorimetry is described as a tool for mitigating the effects due to the large number of simultaneous interactions in the high luminosity environment foreseen for the Large Hadron Collider.
ROSAT survey of emission from Be stars
NASA Technical Reports Server (NTRS)
Grady, Carol
1993-01-01
ROSAT pointed observations of bright, classical Be stars have demonstrated that detection of soft x-rays at a level expected for normal B stars of comparable T(sub eff) and luminosity is anti-correlated with the presence of episodes of enhanced mass ejection and formation of a dense, moderately ionized equatorial circumstellar disk. At epochs of lower than average disk column density, x-ray flaring has been detected in 2 Be stars, lambda Eri and pi Aqr.
Searching for propeller-phase ULXs in the XMM-Newton Serendipitous Source Catalogue
NASA Astrophysics Data System (ADS)
Earnshaw, H. P.; Roberts, T. P.; Sathyaprakash, R.
2018-05-01
We search for transient sources in a sample of ultraluminous X-ray sources (ULXs) from the 3XMM-DR4 release of the XMM-Newton Serendipitous Source Catalogue in order to find candidate neutron star ULXs alternating between an accreting state and the propeller regime, in which the luminosity drops dramatically. By examining their fluxes and flux upper limits, we identify five ULXs that demonstrate long-term variability of over an order of magnitude. Using Chandra and Swift data to further characterize their light curves, we find that two of these sources are detected only once and could be X-ray binaries in outburst that only briefly reach ULX luminosities. Two others are consistent with being super-Eddington accreting sources with high levels of inter-observation variability. One source, M51 ULX-4, demonstrates apparent bimodal flux behaviour that could indicate the propeller regime. It has a hard X-ray spectrum, but no significant pulsations in its timing data, although with an upper limit of 10 per cent of the signal pulsed at ~1.5 Hz, a pulsating ULX cannot be excluded, particularly if the pulsations are transient. By simulating XMM-Newton observations of a population of pulsating ULXs, we predict that there could be approximately 200 other bimodal ULXs that have not been observed sufficiently well by XMM-Newton to be identified as transient.
Demonstration of Cataloging Support Services and Marc II Conversion. Final Report.
ERIC Educational Resources Information Center
Buckland, Lawrence F.; And Others
Beginning in December, 1967, the New England Library Information Network (NELINET) was demonstrated in actual operation using Machine-Readable Cataloging (MARC I) bibliographic data. Section 1 of this report is an introduction and summary of the project. Section 2 describes the library processing functions demonstrated, which included catalog card…
Project A+ Elementary Technology Demonstration Schools 1990-91. The First Year.
ERIC Educational Resources Information Center
Marable, Paula; Frazer, Linda
Project A+ Elementary Technology Demonstration Schools is a program made possible through grants from IBM (International Business Machines Corporation) and Apple, Inc. The primary purpose of the program is to demonstrate the educational effectiveness of technology in accelerating the learning of low achieving at-risk students and enhancing the…
Measuring the usefulness of hidden units in Boltzmann machines with mutual information.
Berglund, Mathias; Raiko, Tapani; Cho, Kyunghyun
2015-04-01
Restricted Boltzmann machines (RBMs) and deep Boltzmann machines (DBMs) are important models in deep learning, but it is often difficult to measure their performance in general, or to measure the importance of individual hidden units in particular. We propose to use mutual information to measure the usefulness of individual hidden units in Boltzmann machines. The measure is fast to compute, and serves as an upper bound for the information the neuron can pass on, enabling detection of a particular kind of poor training result. We confirm experimentally that the proposed measure indicates how much the performance of the model drops when some of the units of an RBM are pruned away. We demonstrate the usefulness of the measure for early detection of poor training in DBMs. Copyright © 2014 Elsevier Ltd. All rights reserved.
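A simplified sketch of one ingredient of such a measure (our simplification, not the paper's exact estimator): the marginal activation entropy of each binary hidden unit, which upper-bounds the information the unit can pass on. The RBM parameters below are random stand-ins for a trained model.

```python
# For each hidden unit of a (pre-trained) Bernoulli RBM, compute the mean
# activation probability over data and its binary entropy in bits. Units with
# entropy near zero are always on or always off and are candidates for pruning.
import numpy as np

def hidden_activations(V, W, c):
    """P(h_j = 1 | v) for visible data V, weights W (n_vis x n_hid), hidden biases c."""
    return 1.0 / (1.0 + np.exp(-(V @ W + c)))

def unit_entropies(V, W, c):
    p = hidden_activations(V, W, c).mean(axis=0)          # mean activation per unit
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # bits per unit

# Toy example with random parameters standing in for a trained RBM.
rng = np.random.default_rng(3)
V = rng.integers(0, 2, size=(1000, 20)).astype(float)
W = rng.normal(scale=0.5, size=(20, 8))
c = rng.normal(size=8)
print(unit_entropies(V, W, c))
```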
Enhanced networked server management with random remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2003-08-01
In this paper, the model is focused on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure and hooks up long-distance servers within a single network infrastructure. The servers can be represented as "machines"; the system then deals with an unreliable main machine and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of auxiliary machines changes for each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
Machine vision inspection of lace using a neural network
NASA Astrophysics Data System (ADS)
Sanby, Christopher; Norton-Wayne, Leonard
1995-03-01
Lace is particularly difficult to inspect using machine vision since it comprises a fine and complex pattern of threads which must be verified, on line and in real time. Small distortions in the pattern are unavoidable. This paper describes instrumentation for inspecting lace actually on the knitting machine. A CCD linescan camera synchronized to machine motions grabs an image of the lace. Differences between this lace image and a perfect prototype image are detected by comparison methods, thresholding techniques, and finally a neural network (to distinguish real defects from false alarms). Though produced originally in a laboratory on SUN Sparc workstations, the processing has subsequently been implemented on a 50 MHz 486 PC look-alike. Successful operation has been demonstrated in a factory, but over a restricted width. Full-width coverage awaits provision of faster processing.
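A rough sketch of the pipeline described above, with synthetic images standing in for line-scan camera data: differencing against a golden template, thresholding, crude per-image features, and a small neural network to separate real defects from false alarms. The feature set and defect model are assumptions of the example, not the published system.

```python
# Toy template-comparison inspection: threshold the difference image, summarize
# it with a few features, and let a small MLP decide defect vs. no defect.
import numpy as np
from sklearn.neural_network import MLPClassifier

def candidate_features(image, template, thresh=0.2):
    diff = np.abs(image - template)
    mask = diff > thresh
    return np.array([mask.mean(), diff.max(), diff[mask].sum() if mask.any() else 0.0])

rng = np.random.default_rng(4)
template = rng.random((64, 64))
X, y = [], []
for i in range(400):
    img = template + rng.normal(scale=0.05, size=template.shape)   # normal distortion
    defect = (i % 2 == 0)
    if defect:
        img[20:24, 20:24] += 0.8                                    # synthetic defect
    X.append(candidate_features(img, template))
    y.append(int(defect))

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```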
Axions and the luminosity function of white dwarfs. The thin and thick disks, and the halo
NASA Astrophysics Data System (ADS)
Isern, J.; García-Berro, E.; Torres, S.; Cojocaru, R.; Catalán, S.
2018-05-01
The evolution of white dwarfs is a simple gravothermal process of cooling. Since the shape of their luminosity function is sensitive to the characteristic cooling time, it is possible to use its slope to test the existence of additional sources or sinks of energy, such as those predicted by alternative physical theories. The aim of this paper is to study whether the changes in the slope of the white dwarf luminosity function around bolometric magnitudes ranging from 8 to 10, previously attributed to axion emission, are effectively a consequence of the existence of axions and not an artifact introduced by the star formation rate. We compute theoretical luminosity functions of the thin and thick disk, and of the stellar halo, including axion emission, and we compare them with the existing observed luminosity functions. Since these stellar populations have different star formation histories, the slope change should be present in all of them at the same place if it is due to axions or any other intrinsic cooling mechanism. The signature of an unexpected cooling seems to be present in the luminosity functions of the thin and thick disks, as well as in the halo luminosity function. This additional cooling is compatible with axion emission, thus lending support to the idea that DFSZ axions, with a mass in the range of 4 to 10 meV, could exist. If this were the case, these axions could be detected by the future solar axioscope IAXO.
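The logic of the test can be summarized with a standard relation, added here for orientation rather than quoted from the paper: for a roughly constant star formation rate ψ, the luminosity function tracks the cooling time, so an additional energy sink such as axion emission depresses it wherever that sink dominates.

```latex
n(M_{\rm bol}) \;\propto\; \psi \,\frac{\mathrm{d}t_{\rm cool}}{\mathrm{d}M_{\rm bol}},
\qquad
\frac{\mathrm{d}t_{\rm cool}}{\mathrm{d}M_{\rm bol}} \;\propto\;
\frac{1}{L_{\gamma} + L_{\nu} + L_{\rm axion}}\,
\left|\frac{\mathrm{d}B}{\mathrm{d}M_{\rm bol}}\right|,
```

where B is the thermal energy content of the star; adding L_axion shortens the cooling time and lowers the expected number counts in the magnitude range where axion emission dominates over photon and neutrino losses.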
Magnetic flux-load current interactions in ferrous conductors
NASA Astrophysics Data System (ADS)
Cannell, Michael J.; McConnell, Richard A.
1992-06-01
A modeling technique has been developed to account for interactions between load current and magnetic flux in an iron conductor. Such a conductor would be used in the active region of a normally conducting homopolar machine. This approach has been experimentally verified and its application to a real machine demonstrated. Additionally, measurements of the resistivity of steel under the combined effects of magnetic field and current have been conducted.
23. NORTHEAST TO CIRCA 1875 POWER SHEAR, PUNCH, AND RIVETING ...
23. NORTHEAST TO CIRCA 1875 POWER SHEAR, PUNCH, AND RIVETING MACHINE SET UP TO DEMONSTRATE USE IN RIVETING COMPONENTS OF WHEEL ARMS FOR ELI WINDMILLS. HISTORIC DEBRIS FROM PUNCHING WORK IS VISIBLE BENEATH THE MACHINE IN THE OPERATOR'S PIT. ON THE LEFT IS A U-SHAPED LOVEJOY FIELD PUNCH FOR USE IN INSTALLING STEEL WINDMILL/TOWER COMPONENTS. - Kregel Windmill Company Factory, 1416 Central Avenue, Nebraska City, Otoe County, NE
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
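A hedged sketch of the weighted composite-kernel idea behind this kind of model: the kernel weights and hyperparameters are fixed by hand here, whereas in the paper they are what QPSO optimizes together with the regularizer, and the data are placeholders rather than e-nose measurements.

```python
# Kernel ELM with a weighted composite kernel: beta = (I/C + K)^(-1) T.
import numpy as np

def rbf(X, Y, gamma):                      # Gaussian base kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2, coef0=1.0):       # polynomial base kernel
    return (X @ Y.T + coef0) ** degree

def composite_kernel(X, Y, mu=(0.6, 0.4), gamma=0.5):
    return mu[0] * rbf(X, Y, gamma) + mu[1] * poly(X, Y)

def kelm_fit(X, T, C=10.0, **kw):
    K = composite_kernel(X, X, **kw)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)   # output weights beta

def kelm_predict(Xnew, X, beta, **kw):
    return composite_kernel(Xnew, X, **kw) @ beta

# Toy two-class data in 4 "sensor" features, one-hot targets.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(1.5, 1, (50, 4))])
T = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))]).astype(float)
beta = kelm_fit(X, T)
pred = kelm_predict(X, X, beta).argmax(axis=1)
print("training accuracy:", (pred == T.argmax(axis=1)).mean())
```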
Origin of acoustic emission produced during single point machining
NASA Astrophysics Data System (ADS)
Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.
1991-05-01
Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent.
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper deals with a variance parameterization method. Variance or dimensional parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the built tree. In this research the authors consider a parameterization method for machine tooling used for manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method makes it possible to significantly reduce tooling design time when a part's geometric parameters are changed. The method can also reduce the time needed for design and engineering preproduction, in particular for developing control programs for CNC equipment and for control and measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the design of parts as well as of machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
Weighted K-means support vector machine for cancer prediction.
Kim, SungHwan
2016-01-01
To date, the support vector machine (SVM) has been widely applied to diverse bio-medical fields to address disease subtype identification and pathogenicity of genetic variants. In this paper, I propose the weighted K-means support vector machine (wKM-SVM) and weighted support vector machine (wSVM), in which the SVM is allowed to impose weights on the loss term. Besides, I demonstrate the numerical relations between the objective function of the SVM and the weights. Motivated by general ensemble techniques, which are known to improve accuracy, I directly adopt the boosting algorithm for the newly proposed weighted KM-SVM (and wSVM). For predictive performance, a range of simulation studies demonstrate that the weighted KM-SVM (and wSVM) with boosting outperforms the standard KM-SVM (and SVM), including many popular classification rules. I applied the proposed methods to simulated data and two large-scale real applications in the TCGA pan-cancer methylation data of breast and kidney cancer. In conclusion, the weighted KM-SVM (and wSVM) increases the accuracy of the classification model, and will facilitate disease diagnosis and clinical treatment decisions to benefit patients. A software package (wSVM) is publicly available at the R-project webpage (https://www.r-project.org).
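A minimal sketch of the sample-weighting idea (an illustrative stand-in, not the paper's wKM-SVM or its boosting step): per-sample weights derived here from K-means cluster sizes, passed to a standard SVM through scikit-learn's sample_weight argument.

```python
# Weighted SVM: the per-sample weights scale each sample's contribution to the loss.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
cluster_sizes = np.bincount(km.labels_)
weights = 1.0 / cluster_sizes[km.labels_]           # up-weight sparsely populated regions
weights *= len(X) / weights.sum()                   # normalize to mean weight of 1

plain = SVC(kernel="rbf").fit(X, y)
weighted = SVC(kernel="rbf").fit(X, y, sample_weight=weights)
print("plain:", plain.score(X, y), "weighted:", weighted.score(X, y))
```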
Measuring the X-ray luminosities of SDSS DR7 clusters from ROSAT All Sky Survey
NASA Astrophysics Data System (ADS)
Wang, Lei; Yang, Xiaohu; Shen, Shiyin; Mo, H. J.; van den Bosch, Frank C.; Luo, Wentao; Wang, Yu; Lau, Erwin T.; Wang, Q. D.; Kang, Xi; Li, Ran
2014-03-01
We use ROSAT All Sky Survey broad-band X-ray images and the optical clusters identified from Sloan Digital Sky Survey Data Release 7 to estimate the X-ray luminosities around ~65 000 candidate clusters with masses ≳ 10^13 h^-1 M_⊙, based on an optical to X-ray (OTX) code we develop. We obtain a catalogue with an X-ray luminosity for each cluster. This catalogue contains 817 clusters (473 at redshift z ≤ 0.12) with signal-to-noise ratio >3 in X-ray detection. We find that about 65 per cent of these X-ray clusters have their most massive member located near the X-ray flux peak; for the remaining 35 per cent, the most massive galaxy is separated from the X-ray peak, with the separation following a distribution expected from a Navarro-Frenk-White profile. We investigate a number of correlations between the optical and X-ray properties of these X-ray clusters, and find that the cluster X-ray luminosity is correlated with the stellar mass (luminosity) of the clusters, as well as with the stellar mass (luminosity) of the central galaxy and the mass of the halo, but the scatter in these correlations is large. Comparing the properties of X-ray clusters of similar halo masses but different X-ray luminosities, we find that massive haloes with masses ≳ 10^14 h^-1 M_⊙ contain a larger fraction of red satellite galaxies when they are brighter in X-rays. An opposite trend is found for central galaxies in relatively low-mass haloes with masses ≲ 10^14 h^-1 M_⊙, where X-ray brighter clusters have a smaller fraction of red central galaxies. Clusters with masses ≳ 10^14 h^-1 M_⊙ that are strong X-ray emitters contain many more low-mass satellite galaxies than weak X-ray emitters. These results are also confirmed by checking X-ray clusters of similar X-ray luminosities but different characteristic stellar masses. A cluster catalogue containing the optical properties of member galaxies and the X-ray luminosity is available at http://gax.shao.ac.cn/data/Group.html.
The 5-10 keV AGN luminosity function at 0.01 < z < 4.0
NASA Astrophysics Data System (ADS)
Fotopoulou, S.; Buchner, J.; Georgantopoulos, I.; Hasinger, G.; Salvato, M.; Georgakakis, A.; Cappelluti, N.; Ranalli, P.; Hsu, L. T.; Brusa, M.; Comastri, A.; Miyaji, T.; Nandra, K.; Aird, J.; Paltani, S.
2016-03-01
The active galactic nuclei (AGN) X-ray luminosity function traces actively accreting supermassive black holes and is essential for the study of the properties of the AGN population, black hole evolution, and galaxy-black hole coevolution. Up to now, the AGN luminosity function has been estimated several times in soft (0.5-2 keV) and hard X-rays (2-10 keV). AGN selection in these energy ranges often suffers from identification and redshift incompleteness and, at the same time, photoelectric absorption can obscure a significant amount of the X-ray radiation. We estimate the evolution of the luminosity function in the 5-10 keV band, where we effectively avoid the absorbed part of the spectrum, rendering absorption corrections unnecessary up to N_H ~ 10^23 cm^-2. Our dataset is a compilation of six wide and deep fields: MAXI, HBSS, XMM-COSMOS, Lockman Hole, XMM-CDFS, AEGIS-XD, Chandra-COSMOS, and Chandra-CDFS. This extensive sample of ~1110 AGN (0.01 < z < 4.0, 41 < log L_X < 46) is 98% redshift complete with 68% spectroscopic redshifts. For sources lacking a spectroscopic redshift estimation we use the probability distribution function of photometric redshift estimation specifically tuned for AGN, and a flat probability distribution function for sources with no redshift information. We use Bayesian analysis to select the best parametric model from simple pure luminosity and pure density evolution to more complicated luminosity and density evolution and luminosity-dependent density evolution (LDDE). We estimate the model parameters that best describe our dataset separately for each survey and for the combined sample. We show that, according to Bayesian model selection, the preferred model for our dataset is the LDDE. Our estimation of the AGN luminosity function does not require any assumption on the AGN absorption and is in good agreement with previous works in the 2-10 keV energy band based on X-ray hardness ratios to model the absorption in AGN up to redshift three. Our sample does not show evidence of a rapid decline of the AGN luminosity function up to redshift four.
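For orientation, a commonly used LDDE parameterization (in the style of Ueda et al.; the exact functional form adopted in the paper may differ) factorizes the luminosity function into a local double power law times a luminosity-dependent evolution factor:

```latex
\frac{\mathrm{d}\Phi(L_X,z)}{\mathrm{d}\log L_X}
= A\left[\left(\frac{L_X}{L_*}\right)^{\gamma_1}+\left(\frac{L_X}{L_*}\right)^{\gamma_2}\right]^{-1} e(z,L_X),
\qquad
e(z,L_X)=
\begin{cases}
(1+z)^{p_1}, & z \le z_c(L_X),\\[4pt]
(1+z_c)^{p_1}\left(\dfrac{1+z}{1+z_c}\right)^{p_2}, & z > z_c(L_X),
\end{cases}
```

where the cutoff redshift z_c increases with L_X; it is this luminosity dependence of the evolution term that distinguishes LDDE from pure luminosity or pure density evolution in the model comparison.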
NASA Astrophysics Data System (ADS)
Sawicki, Marcin; Thompson, David
2006-09-01
We use our very deep UnGRI catalog of z~4, 3, and 2 UV-selected star-forming galaxies to study the cosmological evolution of the rest-frame 1700 Å luminosity density. The ability to reliably constrain the contribution of faint galaxies is critical here, and our data do so by reaching deep into the galaxy population, to M*_LBG + 2 at z~4 and deeper still at lower redshifts (M*_LBG = -21.0, and L*_LBG is the corresponding luminosity). We find that the luminosity density at z ≳ 2 is dominated by the hitherto poorly studied galaxies fainter than L*_LBG, and, indeed, the bulk of the UV light at these epochs comes from galaxies in the rather narrow luminosity range L = (0.1-1) L*_LBG. Overall, there is a gradual rise in total luminosity density starting at z ≳ 4 (we find twice as much UV light at z~3 as at z~4), followed by a shallow peak or plateau within z~3-1, finally followed by the well-known plunge to z~0. Within this total picture, luminosity density in sub-L*_LBG galaxies at z ≳ 2 evolves more rapidly than that in more luminous objects; this trend is reversed at lower redshifts, z ≲ 1, a reversal that is reminiscent of galaxy downsizing. We find that within the context of commonly used models there seemingly are not enough faint or bright LBGs to maintain ionization of intergalactic gas even as recently as z~4, and the problem becomes worse at higher redshifts: apparently the universe must be easier to reionize than some recent studies have assumed. Nevertheless, sub-L*_LBG galaxies do dominate the total UV luminosity density at z ≳ 2, and this dominance highlights the need for follow-up studies that will teach us more about these very numerous but thus far largely unexplored systems. Based on data obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA and was made possible by the generous financial support of the W. M. Keck Foundation.
Keck Deep Fields. II. The Ultraviolet Galaxy Luminosity Function at z ~ 4, 3, and 2
NASA Astrophysics Data System (ADS)
Sawicki, Marcin; Thompson, David
2006-05-01
We use very deep UnGRI multifield imaging obtained at the Keck telescope to study the evolution of the rest-frame 1700 Å galaxy luminosity function as the universe doubles its age from z~4 to ~2. We use exactly the same filters and color-color selection as those used by the Steidel team but probe significantly fainter limits, well below L*. The depth of our imaging allows us to constrain the faint end of the luminosity function, reaching M_1700 ~ -18.5 at z~3 (equivalent to ~1 M_⊙ yr^-1), accounting for both N^1/2 uncertainty in the number of galaxies and cosmic variance. We carefully examine many potential sources of systematic bias in our LF measurements before drawing the following conclusions. We find that the luminosity function of Lyman break galaxies evolves with time and that this evolution is differential with luminosity. The result is best constrained between the epochs at z~4 and ~3, where we find that the number density of sub-L* galaxies increases with time by at least a factor of 2.3 (11 σ statistical confidence); while the faint end of the LF evolves, the bright end appears to remain virtually unchanged, indicating that there may be differential, luminosity-dependent evolution (98.5% statistical probability). Potential systematic biases restrict our ability to draw strong conclusions about continued evolution of the luminosity function to lower redshifts, z~2.2 and ~1.7, but, nevertheless, it appears certain that the number density of z~2.2 galaxies at all luminosities we studied, -22 > M_1700 > -18, is at least as high as that of their counterparts at z~3. While it is not yet clear what mechanism underlies the observed evolution, the fact that this evolution is differential with luminosity opens up new avenues of improving our understanding of how galaxies form and evolve at high redshift. Based on data obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA and was made possible by the generous financial support of the W. M. Keck Foundation.
NASA Astrophysics Data System (ADS)
Cai, Zhen-Yi; Lapi, Andrea; Bressan, Alessandro; De Zotti, Gianfranco; Negrello, Mattia; Danese, Luigi
2014-04-01
We present a physical model for the evolution of the ultraviolet (UV) luminosity function of high-redshift galaxies, taking into account in a self-consistent way their chemical evolution and the associated evolution of dust extinction. Dust extinction is found to increase fast with halo mass. A strong correlation between dust attenuation and halo/stellar mass for UV-selected high-z galaxies is thus predicted. The model yields good fits of the UV and Lyman-α (Lyα) line luminosity functions at all redshifts at which they have been measured. The weak observed evolution of both luminosity functions between z = 2 and z = 6 is explained as the combined effect of the negative evolution of the halo mass function; of the increase with redshift of the star formation efficiency due to the faster gas cooling; and of dust extinction, differential with halo mass. The slope of the faint end of the UV luminosity function is found to steepen with increasing redshift, implying that low-luminosity galaxies increasingly dominate the contribution to the UV background at higher and higher redshifts. The observed range of the UV luminosities at high z implies a minimum halo mass capable of hosting active star formation M_crit ≲ 10^9.8 M_⊙, which is consistent with the constraints from hydrodynamical simulations. From fits of Lyα line luminosity functions, plus data on the luminosity dependence of extinction, and from the measured ratios of non-ionizing UV to Lyman-continuum flux density for samples of z ≃ 3 Lyman break galaxies and Lyα emitters, we derive a simple relationship between the escape fraction of ionizing photons and the star formation rate. It implies that the escape fraction is larger for low-mass galaxies, which are almost dust-free and have lower gas column densities. Galaxies already represented in the UV luminosity function (M_UV ≲ -18) can keep the universe fully ionized up to z ≃ 6. This is consistent with (uncertain) data pointing to a rapid drop of the ionization degree above z ≃ 6, such as indications of a decrease of the comoving emission rate of ionizing photons at z ≃ 6, a decrease of the sizes of quasar near zones, and a possible decline of the Lyα transmission through the intergalactic medium at z > 6. On the other hand, the electron scattering optical depth, τ_es, inferred from cosmic microwave background (CMB) experiments favors an ionization degree close to unity up to z ≃ 9-10. Consistency with CMB data can be achieved if M_crit ≃ 10^8.5 M_⊙, implying that the UV luminosity functions extend to M_UV ≃ -13, although the corresponding τ_es is still on the low side of CMB-based estimates.
Taber-Doughty, Teresa
2005-01-01
Three secondary-age students with moderate intellectual disabilities learned to use the system of least prompts, a self-operated picture prompting system, and a self-operated auditory prompting system to use a copy machine and a debit machine. Both the effectiveness and the efficiency of these prompting systems were compared. Additionally, student preference of instructional method was examined. The results demonstrated that each prompting system was effective and efficient, with the best-performing system varying across students, when skill acquisition and duration of task performance were measured. All students demonstrated increased independence in completing both tasks. The study also found that the prompting systems students preferred were more effective for them in terms of both skill acquisition and the time needed to complete tasks.
Luminosity function of faint galaxies with ultraviolet continuum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stepanyan, D.A.
1985-05-01
The spatial density of faint galaxies with ultraviolet continuum in the Second Survey of the Byurakan Astrophysical Observatory is determined. The luminosity function of galaxies with ultraviolet continuum can be extended to objects fainter by 1-1.5 magnitudes. The spatial density of such galaxies in the interval of luminosities -16.5 to -21.5 mag is on average 0.08 of the total density of field galaxies in the same interval of absolute magnitudes. The spatial density of low-luminosity galaxies with ultraviolet continuum is very high. In the interval from -12.5 to -15.5 mag it is 0.23 Mpc^-3.
X-ray studies of quasars with the Einstein Observatory. II
NASA Technical Reports Server (NTRS)
Zamorani, G.; Maccacaro, T.; Henry, J. P.; Tananbaum, H.; Soltan, A.; Liebert, J.; Stocke, J.; Strittmatter, P. A.; Weymann, R. J.; Smith, M. G.
1981-01-01
X-ray observations of 107 quasars have been carried out with the Einstein Observatory, and 79 have been detected. A correlation between optical emission and X-ray emission is found; and for radio-loud quasars, the data show a correlation between radio emission and X-ray emission. For a given optical luminosity, the average X-ray emission of radio-loud quasars is about three times higher than that of radio-quiet quasars. The data also suggest that the ratio of X-ray to optical luminosity is decreasing with increasing redshift and/or optical luminosity. The data support the picture in which luminosity evolution, rather than pure density evolution, describes the quasar behavior as a function of redshift.
Rosen's (M,R) system as an X-machine.
Palmer, Michael L; Williams, Richard A; Gatherer, Derek
2016-11-07
Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly both irreducible to sub-models of its component states and non-computable on a Turing machine. (M,R) stands as an obstacle to both reductionist and mechanistic presentations of systems biology, principally due to its self-referential structure. If (M,R) has the properties claimed for it, computational systems biology will not be possible, or at best will be a science of approximate simulations rather than accurate models. Several attempts have been made, at both empirical and theoretical levels, to disprove this assertion by instantiating (M,R) in software architectures. So far, these efforts have been inconclusive. In this paper, we attempt to demonstrate why - by showing how both finite state machine and stream X-machine formal architectures fail to capture the self-referential requirements of (M,R). We then show that a solution may be found in communicating X-machines, which remove self-reference using parallel computation, and then synthesise such machine architectures with object-orientation to create a formal basis for future software instantiations of (M,R) systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hybrid Power Management for Office Equipment
NASA Astrophysics Data System (ADS)
Gingade, Ganesh P.
Office machines (such as printers, scanners, fax machines, and copiers) can consume significant amounts of power. Few studies have been devoted to power management of office equipment. Most office machines have sleep modes to save power. Power management of these machines is usually timeout-based: a machine sleeps after being idle long enough. Setting the timeout duration can be difficult: if it is too long, the machine wastes power during idleness; if it is too short, the machine sleeps too soon and too often, and the wakeup delay can significantly degrade productivity. Thus, power management is a tradeoff between saving energy and keeping response times short. Many power management policies have been published, and one policy may outperform another in some scenarios; there is no definite conclusion about which policy is always better. This thesis describes two methods for office equipment power management. The first method adaptively reduces power subject to a constraint on the wakeup delay. The second method is a hybrid with multiple candidate policies that selects the most appropriate power management policy. Using six months of request traces from 18 different offices, we demonstrate that the hybrid policy outperforms the individual policies. We also find that power management based on business hours does not produce consistent energy savings.
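A minimal sketch of how a timeout policy can be evaluated on an idle-period trace; the power numbers, wakeup costs, and the trace itself are made-up placeholders, not measurements from the offices mentioned above.

```python
# Evaluate a timeout policy on a trace of idle periods: energy spent idling
# before the timeout fires versus the number of wakeups (each wakeup costs
# extra energy and adds delay before the next request is served).
import numpy as np

P_IDLE, P_SLEEP = 30.0, 2.0        # watts (assumed)
E_WAKE, T_WAKE = 50.0, 10.0        # joules and seconds per wakeup (assumed)

def evaluate(idle_periods, timeout):
    energy = delay = wakeups = 0.0
    for t in idle_periods:
        if t <= timeout:                       # request arrives before the timeout fires
            energy += P_IDLE * t
        else:                                  # idle, then sleep, then wake on the next request
            energy += P_IDLE * timeout + P_SLEEP * (t - timeout) + E_WAKE
            delay += T_WAKE
            wakeups += 1
    return energy, delay, wakeups

rng = np.random.default_rng(6)
idle_periods = rng.exponential(scale=600.0, size=1000)   # seconds between requests (synthetic)
for timeout in (60, 300, 900):
    print(timeout, evaluate(idle_periods, timeout))
```

A hybrid policy of the kind described above would run several such candidate policies in simulation over the recent trace and switch to whichever currently gives the best energy/delay tradeoff.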
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chuyu
2012-12-31
Beam diagnostics is an essential constituent of any accelerator, so much so that it has been called the "organs of sense" or "eyes of the accelerator." Beam diagnostics is a rich field, and a great variety of physical effects and principles are put to use in it. Some devices are based on the electromagnetic influence of moving charges, such as Faraday cups, beam transformers, and pick-ups; some rely on the Coulomb interaction of charged particles with matter, such as scintillators, viewing screens, and ionization chambers; nuclear or elementary-particle interactions occur in other devices, like beam loss monitors, polarimeters, and luminosity monitors; some measure photons emitted by moving charges, such as transition radiation, synchrotron radiation monitors, and diffraction radiation, which is the topic of the first part of this thesis; and some make use of the interaction of particles with photons, such as laser wires and Compton polarimeters, which is the subject of the second part of this thesis. Diagnostics let us perceive what properties a beam has and how it behaves in a machine, give us guidelines for commissioning and controlling the machine, and provide parameters vital to physics experiments. In the next two decades, the research highlights will be colliders (TESLA, CLIC, JLC) and fourth-generation light sources (TESLA FEL, LCLS, SPring-8 FEL) based on linear accelerators. These machines require a new generation of accelerators with smaller beams, better stability, and greater efficiency. Compared with existing linear accelerators, the performance of next-generation linear accelerators will be improved in all aspects, for example a 10 times smaller horizontal beam size, a more than 10 times smaller vertical beam size, and several times higher peak power. Furthermore, some special locations in the accelerator have even more stringent requirements, such as the interaction point of colliders and the wigglers of free electron lasers. The higher performance of these accelerators increases the difficulty of diagnostics. In most cases, intercepting measurements are no longer acceptable, and nonintercepting methods like synchrotron radiation monitors cannot be applied to linear accelerators. The development of accelerator technology calls for simultaneous innovation in diagnostics, expanding the performance of diagnostic tools to meet the requirements of the next generation of accelerators. Diffraction radiation and inverse Compton scattering are two of the most promising techniques; their nonintercepting nature avoids perturbing the beam and damaging the instrumentation. This thesis is divided into two parts: beam size measurement by optical diffraction radiation, and the laser system for a Compton polarimeter. Diffraction radiation, produced by the interaction between the electric field of charged particles and the target, is related to transition radiation. Even though the theory of diffraction radiation has been discussed since the 1960s, there have been only a few experimental studies, mostly in recent years.
The successful beam size measurement by optical diffraction radiation at the CEBAF machine is a milestone: first, we have successfully demonstrated diffraction radiation as an effective nonintercepting diagnostic; second, the simple linear relationship between the diffraction radiation image size and the actual beam size improves the reliability of ODR measurements; and third, we measured the polarized components of diffraction radiation for the first time, and I analyzed the contribution from edge radiation to diffraction radiation.
Gravitational-Wave Luminosity of Binary Neutron Stars Mergers
NASA Astrophysics Data System (ADS)
Zappa, Francesco; Bernuzzi, Sebastiano; Radice, David; Perego, Albino; Dietrich, Tim
2018-03-01
We study the gravitational-wave peak luminosity and radiated energy of quasicircular neutron star mergers using a large sample of numerical relativity simulations with different binary parameters and input physics. The peak luminosity for all the binaries can be described in terms of the mass ratio and of the leading-order post-Newtonian tidal parameter solely. The mergers resulting in a prompt collapse to black hole have the largest peak luminosities. However, the largest amount of energy per unit mass is radiated by mergers that produce a hypermassive neutron star or a massive neutron star remnant. We quantify the gravitational-wave luminosity of binary neutron star merger events, and set upper limits on the radiated energy and the remnant angular momentum from these events. We find that there is an empirical universal relation connecting the total gravitational radiation and the angular momentum of the remnant. Our results constrain the final spin of the remnant black hole and also indicate that stable neutron star remnant forms with super-Keplerian angular momentum.
The joint fit of the BHMF and ERDF for the BAT AGN Sample
NASA Astrophysics Data System (ADS)
Weigel, Anna K.; Koss, Michael; Ricci, Claudio; Trakhtenbrot, Benny; Oh, Kyuseok; Schawinski, Kevin; Lamperti, Isabella
2018-01-01
A natural product of an AGN survey is the AGN luminosity function. This statistical measure describes the distribution of directly measurable AGN luminosities. Intrinsically, the shape of the luminosity function depends on the distribution of black hole masses and Eddington ratios. To constrain these fundamental AGN properties, the luminosity function thus has to be disentangled into the black hole mass and Eddington ratio distribution function. The BASS survey is unique as it allows such a joint fit for a large number of local AGN, is unbiased in terms of obscuration in the X-rays and provides black hole masses for type-1 and type-2 AGN. The black hole mass function at z ~ 0 represents an essential baseline for simulations and black hole growth models. The normalization of the Eddington ratio distribution function directly constrains the AGN fraction. Together, the BASS AGN luminosity, black hole mass and Eddington ratio distribution functions thus provide a complete picture of the local black hole population.
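Schematically (our paraphrase, neglecting bolometric corrections and measurement scatter), the joint fit rests on the fact that the luminosity function is the convolution of the black hole mass function with the Eddington ratio distribution function:

```latex
\Phi_L(\log L) \;=\; \iint \Phi_{\rm BH}\!\left(\log M_{\rm BH}\right)\,
\xi\!\left(\log \lambda\right)\,
\delta\!\left(\log L - \log\lambda - \log L_{\rm Edd}(M_{\rm BH})\right)
\,\mathrm{d}\log M_{\rm BH}\,\mathrm{d}\log\lambda,
\qquad
L_{\rm Edd} \simeq 1.26\times10^{38}\,\frac{M_{\rm BH}}{M_{\odot}}\ \mathrm{erg\,s^{-1}},
```

i.e., the observed luminosity function alone cannot separate the black hole mass function Φ_BH from the Eddington ratio distribution ξ, which is why the independent black hole masses provided by BASS for type-1 and type-2 AGN make the joint fit possible.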
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ploeg, Harrison; Gordon, Chris; Crocker, Roland
Fermi Large Area Telescope data reveal an excess of GeV gamma rays from the direction of the Galactic Center and bulge. Several explanations have been proposed for this excess including an unresolved population of millisecond pulsars (MSPs) and self-annihilating dark matter. It has been claimed that a key discriminant for or against the MSP explanation can be extracted from the properties of the luminosity function describing this source population. Specifically, is the luminosity function of the putative MSPs in the Galactic Center consistent with that characterizing the resolved MSPs in the Galactic disk? To investigate this we have used a Bayesian Markov Chain Monte Carlo to evaluate the posterior distribution of the parameters of the MSP luminosity function describing both resolved MSPs and the Galactic Center excess. At variance with some other claims, our analysis reveals that, within current uncertainties, both data sets can be well fit with the same luminosity function.
On the nature of the symbiotic binary AX Persei
NASA Technical Reports Server (NTRS)
Mikolajewska, Joanna; Kenyon, Scott J.
1992-01-01
Photometric and spectroscopic observations of the symbiotic binary AX Persei are presented. This system contains a red giant that fills its tidal lobe and transfers material into an accretion disk surrounding a low-mass main-sequence star. The stellar masses - 1 solar mass for the red giant and about 0.4 solar mass for the companion - suggest AX Per is poised to enter a common envelope phase of evolution. The disk luminosity increases from about 100 solar luminosities in quiescence to about 5700 solar luminosities in outburst for a distance of d = 2.5 kpc. Except for visual maximum, high-ionization permitted emission lines - such as He II - imply an EUV luminosity comparable to the disk luminosity. High-energy photons emitted by a hot boundary layer between the disk and central star ionize a surrounding nebula to produce this permitted line emission. High-ionization forbidden lines form in an extended, shock-excited region well out of the binary's orbital plane and may be associated with mass loss from the disk.
Cosmic evolution of AGN with moderate-to-high radiative luminosity in the COSMOS field
NASA Astrophysics Data System (ADS)
Ceraj, L.; Smolčić, V.; Delvecchio, I.; Delhaize, J.; Novak, M.
2018-05-01
We study the moderate-to-high radiative luminosity active galactic nuclei (HLAGN) within the VLA-COSMOS 3 GHz Large Project. The survey covers 2.6 square degrees centered on the COSMOS field with a 1σ sensitivity of 2.3 μJy/beam across the field. This is simultaneously the largest and deepest radio continuum survey available to date, with exquisite multi-wavelength coverage. The survey yields 10,830 radio sources with signal-to-noise ratios ≥5. A subsample of 1,604 HLAGN is analyzed here. These were selected via a combination of X-ray luminosity and mid-infrared colors. We derive luminosity functions for these AGN and constrain their cosmic evolution out to a redshift of z ~ 6, for the first time decomposing the star formation and AGN contributions to the radio continuum emission of the AGN. We study the evolution of the number density and luminosity density, finding a peak at z ~ 1.5 followed by a decrease out to redshift z ~ 6.
NASA Technical Reports Server (NTRS)
Muszynska, A.
1985-01-01
The operation of rotor rigs used to demonstrate various instability phenomena occurring in rotating machines is described. The instability phenomena demonstrated included oil whirl/whip, antiswirl, rub, loose rotating parts, water-lubricated bearing instabilities, and cracked shafts. The rotor rigs were also used to show corrective measures for preventing instabilities. Vibrational response data from the rigs were taken with modern, computerized instrumentation. The rotor nonsynchronous perturbation rig demonstrated modal identification techniques for rotor/bearing systems. Computer-aided data acquisition and presentation, using the dynamic stiffness method, makes it possible to identify rotor and bearing parameters for low modes. The shaft mode demonstrator presented the amplified modal shape line of the shaft excited by inertia forces of unbalance (synchronous perturbation). The first three bending modes of the shaft can be demonstrated. The user-friendly software, Orbits, presented a simulation of rotor precessional motion that is characteristic of various instability phenomena. The data presentation demonstration used data measured on a turbine-driven compressor train as an example of how computer-aided data acquisition and presentation assist in identifying rotating machine malfunctions.
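The dynamic stiffness method mentioned above identifies rotor/bearing parameters by treating the ratio of the applied perturbation force to the measured response as a complex, frequency-dependent stiffness. A minimal sketch, assuming the simple single-mode model K - M*omega^2 + i*D*omega (an illustrative choice, not necessarily the report's exact formulation):

# Sketch of dynamic-stiffness parameter identification on synthetic data,
# using the illustrative model: dynamic stiffness = K - M*w^2 + i*D*w.
import numpy as np

omega = np.linspace(10.0, 300.0, 50)           # perturbation frequencies, rad/s
K_true, M_true, D_true = 2.0e6, 15.0, 400.0    # synthetic "true" parameters
response = 1.0 / (K_true - M_true * omega**2 + 1j * D_true * omega)  # per unit force

dynamic_stiffness = 1.0 / response             # force / response
# Real part is K - M*w^2: a linear fit in w^2 recovers K (intercept) and M (-slope).
slope, intercept = np.polyfit(omega**2, dynamic_stiffness.real, 1)
K_fit, M_fit = intercept, -slope
# Imaginary part is D*w: a least-squares fit through the origin recovers D.
D_fit = np.sum(dynamic_stiffness.imag * omega) / np.sum(omega**2)
print(K_fit, M_fit, D_fit)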
The effect of abrading and cutting instruments on machinability of dental ceramics.
Sakoda, Satoshi; Nakao, Noriko; Watanabe, Ikuya
2018-03-16
The aim was to investigate the effect of machining instruments on the machinability of dental ceramics. Four dental ceramics, including two zirconia ceramics, were machined with three types of wheels (SiC, diamond vitrified, and diamond sintered) driven by a hand-piece engine and two types of burs (diamond and carbide) driven by a high-speed air turbine. The machining conditions were abrading speeds of 10,000 and 15,000 r.p.m. with an abrading force of 100 gf for the hand-piece engine, and a pressure of 200 kPa with a cutting force of 80 gf for the air-turbine hand-piece. Machining efficiency was evaluated by the volume loss after machining each ceramic. The higher abrading speed produced higher abrading efficiency (greater volume loss) than the lower abrading speed for all abrading instruments used. The diamond vitrified wheels produced higher volume losses for the two zirconia ceramics than the SiC and diamond sintered wheels. When the high-speed air-turbine instruments were used, the diamond points showed higher volume losses than the carbide burs for one ceramic and the two zirconia ceramics with high mechanical properties. The results of this study indicate that the machinability of dental ceramics depends on the mechanical and physical properties of both the dental ceramics and the machining instruments. The abrading wheels show an autogenous action of the abrasive grains: worn abrasive grains drop out of the binder during abrasion, the binder then wears away, and new abrasive grains are exposed on the instrument surface, increasing the grinding amount (volume loss) of the ground material.
Sengupta, Partho P.; Huang, Yen-Min; Bansal, Manish; Ashrafi, Ali; Fisher, Matt; Shameer, Khader; Gall, Walt; Dudley, Joel T
2016-01-01
Background Associating a patient’s profile with the memories of prototypical patients built through previous repeat clinical experience is a key process in clinical judgment. We hypothesized that a similar process using a cognitive computing tool would be well suited for learning and recalling multidimensional attributes of speckle tracking echocardiography (STE) data sets derived from patients with known constrictive pericarditis (CP) and restrictive cardiomyopathy (RCM). Methods and Results Clinical and echocardiographic data of 50 patients with CP and 44 with RCM were used for developing an associative memory classifier (AMC)-based machine learning algorithm. The STE data were normalized in reference to 47 controls with no structural heart disease, and the diagnostic area under the receiver operating characteristic curve (AUC) of the AMC was evaluated for differentiating CP from RCM. Using only STE variables, the AMC achieved a diagnostic AUC of 89.2%, which improved to 96.2% with the addition of 4 echocardiographic variables. In comparison, the AUCs of early diastolic mitral annular velocity and left ventricular longitudinal strain were 82.1% and 63.7%, respectively. Furthermore, the AMC demonstrated greater accuracy and shorter learning curves than other machine learning approaches, with accuracy asymptotically approaching 90% after a training fraction of 0.3 and remaining flat at higher training fractions. Conclusions This study demonstrates the feasibility of a cognitive machine learning approach for learning and recalling patterns observed during echocardiographic evaluations. Incorporation of machine learning algorithms in cardiac imaging may aid standardized assessments and support the quality of interpretations, particularly for novice readers with limited experience. PMID:27266599
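The associative memory classifier itself is a proprietary cognitive-computing tool, so the sketch below uses a generic probabilistic classifier only as a stand-in to illustrate the cross-validated AUC evaluation described; the feature matrix, labels, and model choice are placeholders, not the study's data or method.

# Sketch of the evaluation described (AUC of a probabilistic classifier on
# STE-derived features); the classifier is a generic stand-in, not the AMC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(94, 8))        # placeholder: 94 patients, 8 STE features
y = rng.integers(0, 2, size=94)     # placeholder labels: 1 = CP, 0 = RCM

probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, probs))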
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richards, Von L.
2012-09-19
The objective of this task was to determine whether ductile iron and compacted graphite iron exhibit age strengthening to a statistically significant extent. Further, this effort identified the mechanism by which gray iron age strengthens and the mechanism by which age-strengthening improves the machinability of gray cast iron. These results were then used to determine whether age strengthening improves the machinability of ductile iron and compacted graphite iron alloys in order to develop a predictive model of alloy factor effects on age strengthening. The results of this work will lead to reduced section sizes, and corresponding weight and energy savings.more » Improved machinability will reduce scrap and enhance casting marketability. Technical Conclusions: Age strengthening was demonstrated to occur in gray iron ductile iron and compacted graphite iron. Machinability was demonstrated to be improved by age strengthening when free ferrite was present in the microstructure, but not in a fully pearlitic microstructure. Age strengthening only occurs when there is residual nitrogen in solid solution in the Ferrite, whether the ferrite is free ferrite or the ferrite lamellae within pearlite. Age strengthening can be accelerated by Mn at about 0.5% in excess of the Mn/S balance Estimated energy savings over ten years is 13.05 trillion BTU, based primarily on yield improvement and size reduction of castings for equivalent service. Also it is estimated that the heavy truck end use of lighter castings for equivalent service requirement will result in a diesel fuel energy savings of 131 trillion BTU over ten years.« less
NASA Astrophysics Data System (ADS)
Mølgaard, Lasse L.; Buus, Ole T.; Larsen, Jan; Babamoradi, Hamid; Thygesen, Ida L.; Laustsen, Milan; Munk, Jens Kristian; Dossi, Eleftheria; O'Keeffe, Caroline; Lässig, Lina; Tatlow, Sol; Sandström, Lars; Jakobsen, Mogens H.
2017-05-01
We present a data-driven machine learning approach to detecting drug and explosives precursors using colorimetric sensor technology for air sampling. The sensing technology has been developed in the context of the CRIM-TRACK project. At present, a fully integrated portable prototype for air sampling with disposable sensing chips and automated data acquisition has been developed. The prototype allows for fast, user-friendly sampling, which has made it possible to produce large datasets of colorimetric data for different target analytes in laboratory and simulated real-world application scenarios. To make use of the highly multivariate data produced by the colorimetric chip, a number of machine learning techniques are employed to provide reliable classification of target analytes against confounders found in the air streams. We demonstrate that a data-driven machine learning method using dimensionality reduction in combination with a probabilistic classifier makes it possible to produce informative features and a high detection rate for analytes. Furthermore, the probabilistic machine learning approach provides a means of automatically identifying unreliable measurements that could produce false predictions. The robustness of the colorimetric sensor has been evaluated in a series of experiments focusing on the amphetamine precursor phenylacetone as well as the improvised-explosives precursor hydrogen peroxide. The analysis demonstrates that the system is able to detect analytes both in clean air and mixed with substances that occur naturally in real-world sampling scenarios. The technology under development in CRIM-TRACK thus has the potential to become an effective tool for controlling trafficking of illegal drugs, for explosives detection, and for other law enforcement applications.
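The pipeline outlined above, dimensionality reduction followed by a probabilistic classifier with rejection of unreliable measurements, can be sketched as follows; PCA, logistic regression, the data, and the confidence threshold are illustrative assumptions, not the CRIM-TRACK implementation.

# Sketch of the described pipeline: dimensionality reduction, a probabilistic
# classifier, and rejection of low-confidence measurements. All values are
# placeholders standing in for colorimetric chip readouts.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 300))     # placeholder chip readouts (200 samples)
y_train = rng.integers(0, 2, size=200)    # 1 = target analyte, 0 = confounder

model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

X_new = rng.normal(size=(5, 300))
probs = model.predict_proba(X_new)[:, 1]
# Flag measurements whose class probability is too close to 0.5 as unreliable.
for p in probs:
    label = "unreliable" if 0.4 < p < 0.6 else ("analyte" if p >= 0.6 else "confounder")
    print(f"p(analyte) = {p:.2f} -> {label}")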
Mass Accretion Rate of Very Low Luminosity Objects
NASA Astrophysics Data System (ADS)
Sung, Ren-Shiang; Lai, Shih-Ping; Hsieh, Tien-Hao
2013-08-01
We propose to measure the mass accretion rate of six Very Low Luminosity Objects (VeLLOs) using the Near-infrared Integral Field Spectrometer (NIFS). The extremely low luminosity of VeLLOs, L_int ≤ 0.1 L_⊙, was previously thought not to exist in nature, because the typical accretion rate implies a much larger accretion luminosity even for the lowest-mass stars (the ``luminosity problem''). The commonly accepted solution is that the accretion rate is not constant but episodic; VeLLOs could thus be interpreted as protostars in the quiescent phase of their accretion activity. However, there are no observational data directly measuring the mass accretion rate of VeLLOs. The main goal of this proposal is to test this theory and directly measure the mass accretion rate of VeLLOs for the first time. We propose to measure the blue continuum excess (veiling) of the stellar spectrum, which is the most reliable method for measuring the accretion rate. The measurements have to be made in the infrared because of the very high extinction toward these highly embedded protostars. Our proposal provides a first opportunity to address the long-standing ``luminosity problem'' from an observational perspective, and Gemini is the only facility that can provide accurate, high-sensitivity infrared spectroscopic measurements within a reasonably short time.
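The connection between internal luminosity and accretion rate invoked in the luminosity problem is the accretion luminosity relation L_acc ≈ G M* Ṁ / R*, which can be inverted for Ṁ. A small sketch with illustrative (assumed) protostellar mass and radius, not values from the proposal:

# Sketch: inverting L_acc ~ G * M * Mdot / R to estimate the accretion rate;
# the stellar mass and radius below are illustrative assumptions.
G = 6.674e-8            # cm^3 g^-1 s^-2
L_sun = 3.828e33        # erg s^-1
M_sun = 1.989e33        # g
R_sun = 6.957e10        # cm
YEAR = 3.156e7          # s

L_acc = 0.1 * L_sun     # upper end of the VeLLO internal luminosity range
M_star = 0.1 * M_sun    # assumed protostellar mass
R_star = 2.0 * R_sun    # assumed protostellar radius

Mdot = L_acc * R_star / (G * M_star)   # g s^-1
print(Mdot * YEAR / M_sun, "solar masses per year")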