Sample records for energy physics machine

  1. High Energy Colliders

    NASA Astrophysics Data System (ADS)

    Palmer, R. B.; Gallardo, J. C.

    INTRODUCTION; PHYSICS CONSIDERATIONS: GENERAL, REQUIRED LUMINOSITY FOR LEPTON COLLIDERS, THE EFFECTIVE PHYSICS ENERGIES OF HADRON COLLIDERS; HADRON-HADRON MACHINES: LUMINOSITY, SIZE AND COST; CIRCULAR e^{+}e^- MACHINES: LUMINOSITY, SIZE AND COST; e^{+}e^- LINEAR COLLIDERS: LUMINOSITY, CONVENTIONAL RF, SUPERCONDUCTING RF, AT HIGHER ENERGIES; γ-γ COLLIDERS; μ^{+}μ^- COLLIDERS: ADVANTAGES AND DISADVANTAGES, DESIGN STUDIES, STATUS AND REQUIRED R AND D; COMPARISON OF MACHINES; CONCLUSIONS; DISCUSSION

  2. Towards a generalized energy prediction model for machine tools

    PubMed Central

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan

    2017-01-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual energy-consumption data, in this paper we discuss a data-driven approach to developing an energy prediction model of a machine tool. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model, with a method to assess uncertainty intervals, to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy efficiency of a machining process. PMID:28652687

  3. Towards a generalized energy prediction model for machine tools.

    PubMed

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual energy-consumption data, in this paper we discuss a data-driven approach to developing an energy prediction model of a machine tool. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model, with a method to assess uncertainty intervals, to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy efficiency of a machining process.
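The GP regression step the abstract describes can be sketched from scratch with a squared-exponential kernel; the toy process parameters, energy values, and hyperparameters below are invented for illustration and are not the paper's data or model.

```python
# Minimal GP-regression sketch (assumed RBF kernel, invented toy data):
# posterior mean and standard deviation at test inputs.
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """GP posterior mean and per-point standard deviation at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy data: [spindle speed (krpm), feed rate (scaled)] -> energy (kJ)
X = np.array([[1.0, 0.1], [2.0, 0.2], [3.0, 0.4], [4.0, 0.8]])
y = np.array([5.0, 7.0, 11.0, 18.0])
mean, std = gp_predict(X, y, np.array([[2.5, 0.3]]))
```

The predictive standard deviation is what enables the uncertainty intervals mentioned in the abstract: wide intervals flag operating regions far from the training data.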

  4. The Physics and Physical Chemistry of Molecular Machines.

    PubMed

    Astumian, R Dean; Mukherjee, Shayantani; Warshel, Arieh

    2016-06-17

    The concept of a "power stroke"-a free-energy releasing conformational change-appears in almost every textbook that deals with the molecular details of muscle, the flagellar rotor, and many other biomolecular machines. Here, it is shown by using the constraints of microscopic reversibility that the power stroke model is incorrect as an explanation of how chemical energy is used by a molecular machine to do mechanical work. Instead, chemically driven molecular machines operating under thermodynamic constraints imposed by the reactant and product concentrations in the bulk function as information ratchets in which the directionality and stopping torque or stopping force are controlled entirely by the gating of the chemical reaction that provides the fuel for the machine. The gating of the chemical free energy occurs through chemical state dependent conformational changes of the molecular machine that, in turn, are capable of generating directional mechanical motions. In strong contrast to this general conclusion for molecular machines driven by catalysis of a chemical reaction, a power stroke may be (and often is) an essential component for a molecular machine driven by external modulation of pH or redox potential or by light. This difference between optical and chemical driving properties arises from the fundamental symmetry difference between the physics of optical processes, governed by the Bose-Einstein relations, and the constraints of microscopic reversibility for thermally activated processes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Comparative evaluation of features and techniques for identifying activity type and estimating energy cost from accelerometer data

    PubMed Central

    Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.

    2016-01-01

    Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use this data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects performing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. Choice of machine learning technique was also found to be important. However, on the energy cost estimation task, the choices of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
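The statistics-based features the abstract mentions can be illustrated with a small per-window extractor; the specific statistics chosen below are a plausible sketch, not the paper's exact feature set.

```python
# Illustrative per-window feature extraction for triaxial accelerometer
# data (invented feature choices: per-axis moments and percentiles plus
# mean vector magnitude, a common energy-cost predictor).
import numpy as np

def window_features(window):
    """window: (n_samples, 3) array of x/y/z accelerations.
    Returns a flat feature vector for one time window."""
    feats = []
    for axis in range(window.shape[1]):
        s = window[:, axis]
        feats += [s.mean(), s.std(),
                  np.percentile(s, 10), np.percentile(s, 90)]
    magnitude = np.linalg.norm(window, axis=1)
    feats.append(magnitude.mean())
    return np.array(feats)

rng = np.random.default_rng(0)
walk = rng.normal(0.0, 1.0, size=(100, 3))  # jittery signal ~ activity
rest = np.zeros((100, 3))                   # perfectly still sensor
```

Vectors like these would then feed whatever classifier or regressor is being compared, so the feature step is independent of the learning technique.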

  6. Resource Letter AFHEP-1: Accelerators for the Future of High-Energy Physics

    NASA Astrophysics Data System (ADS)

    Barletta, William A.

    2012-02-01

    This Resource Letter provides a guide to literature concerning the development of accelerators for the future of high-energy physics. Research articles, books, and Internet resources are cited for the following topics: motivation for future accelerators, present accelerators for high-energy physics, possible future machines, and laboratory and collaboration websites.

  7. PARTICLE PHYSICS: CERN Gives Higgs Hunters Extra Month to Collect Data.

    PubMed

    Morton, O

    2000-09-22

    After 11 years of banging electrons and positrons together at higher energies than any other machine in the world, CERN, the European laboratory for particle physics, had decided to shut down the Large Electron-Positron collider (LEP) and install a new machine, the Large Hadron Collider (LHC), in its 27-kilometer tunnel. In 2005, the LHC will start bashing protons together at even higher energies. But tantalizing hints of a long-sought fundamental particle have forced CERN managers to grant LEP a month's reprieve.

  8. Tribology and energy efficiency: from molecules to lubricated contacts to complete machines.

    PubMed

    Taylor, Robert Ian

    2012-01-01

    The impact of lubricants on energy efficiency is considered. Molecular details of base oils used in lubricants can have a great impact on the lubricant's physical properties which will affect the energy efficiency performance of a lubricant. In addition, molecular details of lubricant additives can result in significant differences in measured friction coefficients for machine elements operating in the mixed/boundary lubrication regime. In single machine elements, these differences will result in lower friction losses, and for complete systems (such as cars, trucks, hydraulic circuits, industrial gearboxes etc.) lower fuel consumption or lower electricity consumption can result.

  9. Prediction of the far field noise from wind energy farms

    NASA Technical Reports Server (NTRS)

    Shepherd, K. P.; Hubbard, H. H.

    1986-01-01

    The basic physical factors involved in making predictions of wind turbine noise and an approach which allows for differences in the machines, the wind energy farm configurations and propagation conditions are reviewed. Example calculations to illustrate the sensitivity of the radiated noise to such variables as machine size, spacing and numbers, and such atmosphere variables as absorption and wind direction are presented. It is found that calculated far field distances to particular sound level contours are greater for lower values of atmospheric absorption, for a larger total number of machines, for additional rows of machines and for more powerful machines. At short and intermediate distances, higher sound pressure levels are calculated for closer machine spacings, for more powerful machines, for longer row lengths and for closer row spacings.
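The kind of far-field sensitivity calculation the abstract describes can be sketched with a spherical-spreading-plus-absorption model summed incoherently over machines; the source level and absorption coefficients below are illustrative assumptions, not the report's inputs.

```python
# Back-of-envelope wind-farm noise sketch (all numbers invented):
# spherical spreading, linear atmospheric absorption, energy summation.
import math

def level_at(distance_m, source_lw_db=105.0, absorption_db_per_km=2.0):
    """Sound pressure level (dB) from one machine at a given distance."""
    spreading = 20.0 * math.log10(distance_m) + 11.0  # spherical spreading
    absorption = absorption_db_per_km * distance_m / 1000.0
    return source_lw_db - spreading - absorption

def farm_level(distances_m, **kw):
    """Incoherent (energy) sum over the individual machines."""
    total = sum(10.0 ** (level_at(d, **kw) / 10.0) for d in distances_m)
    return 10.0 * math.log10(total)
```

Even this toy model reproduces the qualitative findings quoted above: lower absorption and more machines push a given sound-level contour farther out.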

  10. Learning molecular energies using localized graph kernels.

    PubMed

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-21

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
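The random walk graph kernel at the heart of GRAPE can be illustrated in a few lines: walks on the direct-product graph of two adjacency matrices count matching walks in both graphs. The damping parameter and step cutoff below are illustrative, not the paper's settings.

```python
# Minimal random-walk graph kernel sketch (invented parameters):
# similarity of two adjacency matrices via their Kronecker product.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1, max_steps=6):
    """Counts matching walks up to max_steps, geometrically damped by lam.
    A1, A2: symmetric 0/1 adjacency matrices (sizes may differ)."""
    W = np.kron(A1, A2)        # adjacency of the direct-product graph
    ones = np.ones(W.shape[0])
    k, walk = 0.0, ones.copy()
    for step in range(max_steps + 1):
        k += (lam ** step) * (ones @ walk)  # walks of length `step`
        walk = W @ walk
    return k

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
```

In the paper's setting each adjacency matrix would encode a local atomic environment, so the kernel compares neighbourhoods independently of atom ordering, which is how permutation symmetry enters.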

  11. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  12. Machining Specific Fourier Power Spectrum Profiles into Plastics for High Energy Density Physics Experiments [Machining Specific Fourier Power Spectrum Profiles into Plastics for HEDP Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Derek William; Cardenas, Tana; Doss, Forrest W.

    The High Energy Density Physics program at Los Alamos National Laboratory (LANL) has had a multiyear campaign to verify the predictive capability of the interface evolution of shock propagation through different profiles machined into the face of a plastic package with an iodine-doped plastic center region. These experiments varied the machined surface from a simple sine wave to a double sine wave and finally to a multitude of different profiles with power spectrum ranges and shapes to verify LANL’s simulation capability. The MultiMode-A profiles had a band-pass flat region of the power spectrum, while the MultiMode-B profile had two band-pass flat regions. Another profile of interest was the 1-Peak profile, a band-pass concept with a spike to one side of the power spectrum. All these profiles were machined in flat and tilted orientations of 30 and 60 deg. Tailor-made machining profiles, supplied by experimental physicists, were compared to actual machined surfaces, and Fourier power spectra were compared to assess the reproducibility of the machining process over the frequency ranges that physicists require.
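A "band-pass flat" target profile like the MultiMode-A concept can be illustrated numerically: give every Fourier component inside the band unit power with random phase, zero everything outside, and inverse-transform. The band limits, sample count, and spacing below are invented, not LANL's specifications.

```python
# Synthesize a 1-D surface with a band-pass flat Fourier power spectrum
# (invented band and sampling), then verify the spectrum of the target.
import numpy as np

n, dx = 1024, 1.0                       # samples, sample spacing
freqs = np.fft.rfftfreq(n, d=dx)
band = (freqs >= 0.02) & (freqs <= 0.08)  # flat region of the spectrum

rng = np.random.default_rng(1)
phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
spectrum = np.where(band, np.exp(1j * phases), 0.0)
profile = np.fft.irfft(spectrum, n=n)   # the surface to be machined

power = np.abs(np.fft.rfft(profile)) ** 2
in_band = power[band].sum()
out_band = power[~band].sum()
```

Comparing the power spectrum of the machined surface against the target spectrum, as the abstract describes, is then a like-for-like comparison of `power` arrays over the frequencies of interest.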

  13. Machining Specific Fourier Power Spectrum Profiles into Plastics for High Energy Density Physics Experiments [Machining Specific Fourier Power Spectrum Profiles into Plastics for HEDP Experiments

    DOE PAGES

    Schmidt, Derek William; Cardenas, Tana; Doss, Forrest W.; ...

    2018-01-15

    The High Energy Density Physics program at Los Alamos National Laboratory (LANL) has had a multiyear campaign to verify the predictive capability of the interface evolution of shock propagation through different profiles machined into the face of a plastic package with an iodine-doped plastic center region. These experiments varied the machined surface from a simple sine wave to a double sine wave and finally to a multitude of different profiles with power spectrum ranges and shapes to verify LANL’s simulation capability. The MultiMode-A profiles had a band-pass flat region of the power spectrum, while the MultiMode-B profile had two band-pass flat regions. Another profile of interest was the 1-Peak profile, a band-pass concept with a spike to one side of the power spectrum. All these profiles were machined in flat and tilted orientations of 30 and 60 deg. Tailor-made machining profiles, supplied by experimental physicists, were compared to actual machined surfaces, and Fourier power spectra were compared to assess the reproducibility of the machining process over the frequency ranges that physicists require.

  14. Methods for Probing New Physics at High Energies

    NASA Astrophysics Data System (ADS)

    Denton, Peter B.

    This dissertation covers two broad topics; the title, "Methods for Probing New Physics at High Energies," hopefully encompasses both of them. The first topic, in part I of this work, concerns integral dispersion relations, a technique to probe for new physics at energy scales near the machine energy of a collider. For example, a hadron collider taking data at a given energy is typically only sensitive to new physics occurring at energy scales about a factor of five to ten beneath the actual machine energy, owing to parton distribution functions. This technique is sensitive to physics happening directly beneath the machine energy, in addition to the even more interesting case: directly above. Precisely where this technique is sensitive is one of the main topics of this area of research. The other topic, in part II, concerns cosmic ray anisotropy at the highest energies. The unanswered questions about cosmic rays at the highest energies are numerous and interconnected in complicated ways. What may be the first piece of the puzzle to fall into place is determining their sources. This work looks to determine if and when the use of spherical harmonics becomes sensitive enough to determine these sources. The completed papers for this work can be found online. For part I, on integral dispersion relations, see the paper published in Physical Review D. For part II, on cosmic ray anisotropy, there are conference proceedings published in the Journal of Physics: Conference Series; the analysis of how an experiment's location affects anisotropy reconstruction, and the comparison of different experiments' abilities to reconstruct anisotropies, are published in The Astrophysical Journal and the Journal of High Energy Astrophysics, respectively. While this dissertation is focused on three papers completed with Tom Weiler at Vanderbilt University, other papers were completed at the same time.
    The first was with Nicusor Arsene, Lauretiu Caramete, and Octavian Micu in Romania on the detectability of quantum black holes in extensive air showers. The next was with Luis Anchordoqui, Haim Goldberg, Thomas Paul, Luiz da Silva, Brian Vlcek, and Tom Weiler on placing limits on Weinberg's Higgs portal, originally proposed to explain anomalous N_eff values, from direct detection and collider experiments, which was published in Physical Review D. The final paper was completed at Fermilab with Stephen Parke and Hisakazu Minakata on a perturbative description of neutrino oscillations in matter, which was published in the Journal of High Energy Physics; the code behind this paper is publicly available.

  15. School beverage environment and children's energy expenditure associated with physical education class: an agent-based model simulation.

    PubMed

    Chen, H-J; Xue, H; Kumanyika, S; Wang, Y

    2017-06-01

    Physical activity contributes to children's energy expenditure and prevents excess weight gain, but fluid replacement with sugar-sweetened beverages (SSBs) may diminish this benefit. The aim of this study was to explore the net energy expenditure (EE) after physical education (PE) class given the competition between water and SSB consumption for rehydration and explore environmental factors that may influence the net EE, e.g. PE duration, affordability of SSB and students' SSB preference. We built an agent-based model that simulates the behaviour of 13-year-old children in a PE class with nearby water fountains and SSB vending machines available. A longer PE class contributed to greater prevalence of dehydration and required more time for rehydration. The energy cost of a PE class with activity intensity equivalent to 45 min of jogging is about 300 kcal on average, i.e. 10-15% of average 13-year-old children's total daily EE. Adding an SSB vending machine could offset PE energy expenditure by as much as 90 kcal per child, which was associated with PE duration, students' pocket money and SSB preference. Sugar-sweetened beverage vending machines in school may offset some of the EE in PE classes. This could be avoided if water is the only readily available source for children's fluid replacement after class. © 2016 World Obesity Federation.
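The agent-based logic the abstract describes can be reduced to a toy Monte Carlo: each child burns the PE energy cost, then rehydrates with water or an SSB according to a preference probability. All rates and preferences below are invented for illustration; the paper's model is far richer (pocket money, PE duration, dehydration dynamics).

```python
# Toy agent-based sketch of net energy expenditure (EE) after PE class
# (invented parameters: 300 kcal PE cost, 90 kcal per SSB).
import random

def simulate_class(n_children=30, ssb_preference=0.5, ssb_kcal=90,
                   pe_kcal=300, seed=0):
    """Return mean net EE (kcal) per child for one simulated PE class."""
    rng = random.Random(seed)
    net_total = 0.0
    for _ in range(n_children):
        drinks_ssb = rng.random() < ssb_preference  # agent's choice
        net_total += pe_kcal - (ssb_kcal if drinks_ssb else 0)
    return net_total / n_children

water_only = simulate_class(ssb_preference=0.0)  # fountain is only option
half_ssb = simulate_class(ssb_preference=0.5)    # vending machine nearby
```

Sweeping `ssb_preference` (or making it depend on pocket money) reproduces the headline result: the vending machine can offset up to `ssb_kcal` of the PE energy cost per child.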

  16. Energy landscapes for machine learning

    NASA Astrophysics Data System (ADS)

    Ballard, Andrew J.; Das, Ritankar; Martiniani, Stefano; Mehta, Dhagash; Sagun, Levent; Stevenson, Jacob D.; Wales, David J.

    Machine learning techniques are being increasingly used as flexible non-linear fitting and prediction tools in the physical sciences. Fitting functions that exhibit multiple solutions as local minima can be analysed in terms of the corresponding machine learning landscape. Methods to explore and visualise molecular potential energy landscapes can be applied to these machine learning landscapes to gain new insight into the solution space involved in training and the nature of the corresponding predictions. In particular, we can define quantities analogous to molecular structure, thermodynamics, and kinetics, and relate these emergent properties to the structure of the underlying landscape. This Perspective aims to describe these analogies with examples from recent applications, and suggest avenues for new interdisciplinary research.
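The "machine learning landscape" picture above rests on loss functions having multiple local minima that minimisation can land in. A one-dimensional double well stands in for a real training loss in this deliberately tiny sketch; the learning rate and starting points are arbitrary choices.

```python
# Explore a multi-minimum "landscape" by gradient descent from several
# starts (double well stands in for a training loss; parameters invented).

def loss(x):
    """Double well with minima at x = -1 and x = +1."""
    return (x**2 - 1.0) ** 2

def grad(x):
    return 4.0 * x * (x**2 - 1.0)

def descend(x0, lr=0.05, steps=500):
    """Plain gradient descent; returns the minimum it converges to."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

starts = [-2.0, -1.5, -0.5, 0.5, 1.5, 2.0]
minima = sorted({round(descend(x0), 3) for x0 in starts})
```

Cataloguing which starts reach which minimum is the simplest analogue of the basin-of-attraction analysis that the landscape methods in this Perspective carry out on real training losses.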

  17. Learning molecular energies using localized graph kernels

    DOE PAGES

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    2017-03-21

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  18. Learning molecular energies using localized graph kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  19. Physics Accomplishments and Future Prospects of the BES Experiments at the Beijing Electron-Positron Collider

    NASA Astrophysics Data System (ADS)

    Briere, Roy A.; Harris, Frederick A.; Mitchell, Ryan E.

    2016-10-01

    The cornerstone of the Chinese experimental particle physics program is a series of experiments performed in the τ-charm energy region. China began building e+e- colliders at the Institute for High Energy Physics in Beijing more than three decades ago. Beijing Electron Spectrometer (BES) is the common root name for the particle physics detectors operated at these machines. We summarize the development of the BES program and highlight the physics results across several topical areas.

  20. Physics with e{sup +}e{sup -} Linear Colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barklow, Timothy L

    2003-05-05

    We describe the physics potential of e{sup +}e{sup -} linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e{sup +}e{sup -} linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics programme complementary to hadron machines.

  1. The secondary supernova machine: Gravitational compression, stored Coulomb energy, and SNII displays

    NASA Astrophysics Data System (ADS)

    Clayton, Donald D.; Meyer, Bradley S.

    2016-04-01

    Radioactive power for several delayed optical displays of core-collapse supernovae is commonly described as having been provided by decays of ^{56}Ni nuclei. This review analyses the provenance of that energy more deeply: the form in which that energy is stored; what mechanical work causes its storage; what conservation laws demand that it be stored; and why its release is fortuitously delayed for about 10^6 s into a greatly expanded supernova envelope. We call the unifying picture of those energy transfers the secondary supernova machine owing to its machine-like properties; namely, mechanical work forces storage of large increases of nuclear Coulomb energy, a positive energy component within new nuclei synthesized by the secondary machine. That positive-energy increase occurs despite the fusion decreasing negative total energy within nuclei. The excess of the Coulomb energy can later be radiated, accounting for the intense radioactivity in supernovae. Detailed familiarity with this machine is the focus of this review. The stored positive-energy component created by the machine will not be reduced until roughly 10^6 s later by radioactive emissions (EC and β^+) owing to the slowness of weak decays. The delayed energy provided by the secondary supernova machine is a few × 10^49 erg, much smaller than the one percent of the 10^53 erg collapse energy that causes the prompt ejection of matter; however, that relatively small stored energy is vital for activation of the late displays. The conceptual basis of the secondary supernova machine provides a new framework for understanding the energy source for late SNII displays. We demonstrate the nuclear dynamics with nuclear network abundance calculations, with a model of sudden compression and reexpansion of the nuclear gas, and with nuclear energy decompositions of a nuclear-mass law.
    If the value of the fundamental charge e were smaller, SNII would not be so profoundly radioactive. Excess Coulomb energy has been carried within nuclei radially for roughly 10^9 km before being radiated into greatly expanded supernova remnants. The Coulomb force claims heretofore unacknowledged significance for supernova physics.
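The Coulomb-energy decomposition the review invokes can be illustrated with the Coulomb term of a standard semi-empirical nuclear-mass law (only this one positive-energy term is shown; the ~0.714 MeV coefficient is the common textbook value, used here illustratively).

```python
# Coulomb term of a semi-empirical mass formula: the positive-energy
# component stored in a nucleus with Z protons and mass number A.

A_C = 0.714  # MeV, textbook Coulomb coefficient

def coulomb_energy(Z, A):
    """Positive Coulomb energy E_C = a_C * Z(Z-1) / A^(1/3), in MeV."""
    return A_C * Z * (Z - 1) / A ** (1.0 / 3.0)

# 56Ni (Z=28) stores more Coulomb energy than 56Fe (Z=26) at the same A;
# part of that excess is what EC and beta+ decays later release.
excess = coulomb_energy(28, 56) - coulomb_energy(26, 56)
```

Note the excess Coulomb energy (tens of MeV per nucleus here) exceeds the actual decay Q-values, since symmetry and pairing terms partly offset it; the sketch shows only the sign and Z-dependence of the stored component.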

  2. HIGH ENERGY PHYSICS: CERN Link Breathes Life Into Russian Physics.

    PubMed

    Stone, R

    2000-10-13

    Without fanfare, 600 Russian scientists here at CERN, the European particle physics laboratory, are playing key roles in building the Large Hadron Collider (LHC), a machine that will explore fundamental questions such as why particles have mass, as well as search for exotic new particles whose existence would confirm supersymmetry, a popular theory that aims to unify the four forces of nature. In fact, even though Russia is not one of CERN's 20 member states, most top high-energy physicists in Russia are working on the LHC. Some say their work could prove the salvation of high-energy physics back home.

  3. Possible limits of plasma linear colliders

    NASA Astrophysics Data System (ADS)

    Zimmermann, F.

    2017-07-01

    Plasma linear colliders have been proposed as next or next-next generation energy-frontier machines for high-energy physics. I investigate possible fundamental limits on energy and luminosity of such type of colliders, considering acceleration, multiple scattering off plasma ions, intrabeam scattering, bremsstrahlung, and betatron radiation. The question of energy efficiency is also addressed.

  4. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults

    PubMed Central

    Wullems, Jorgen A.; Verschueren, Sabine M. P.; Degens, Hans; Morse, Christopher I.; Onambélé, Gladys L.

    2017-01-01

    Accurate monitoring of sedentary behaviour and physical activity is key to investigating their exact role in healthy ageing. To date, accelerometers using cut-off point models are the most widely preferred tools for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. In a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that all four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models, being robust to each individual’s physiological and non-physiological characteristics and showing acceptable performance over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry. PMID:29155839

  5. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults.

    PubMed

    Wullems, Jorgen A; Verschueren, Sabine M P; Degens, Hans; Morse, Christopher I; Onambélé, Gladys L

    2017-01-01

    Accurate monitoring of sedentary behaviour and physical activity is key to investigating their exact role in healthy ageing. To date, accelerometers using cut-off point models are the most preferred for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. In a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry while participants wore triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that all four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models, being robust to all individuals' physiological and non-physiological characteristics and showing performance of an acceptable level over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry.
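
    The cut-off point approach compared in the study above can be sketched minimally: an epoch's acceleration summary is tested against fixed intensity thresholds. The thresholds and units below are hypothetical placeholders, not the study's validated cut-off points.

```python
import numpy as np

# Hypothetical intensity thresholds in arbitrary acceleration units
# (illustrative placeholders, not the study's validated cut-off points).
SED_CUTOFF = 0.1   # below this: sedentary behaviour
MVPA_CUTOFF = 0.6  # at or above this: moderate-to-vigorous physical activity

def cutoff_classify(mean_amplitude):
    """Classify one thigh-accelerometer epoch with fixed cut-off points."""
    if mean_amplitude < SED_CUTOFF:
        return "sedentary"
    if mean_amplitude < MVPA_CUTOFF:
        return "light"
    return "MVPA"

epochs = np.array([0.05, 0.30, 0.90])
print([cutoff_classify(a) for a in epochs])  # ['sedentary', 'light', 'MVPA']
```

    A machine-learning model such as the study's Random Forest replaces these fixed thresholds with a classifier trained on many features per epoch, which is one reason it can adapt to individual characteristics.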

  6. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    NASA Astrophysics Data System (ADS)

    Tumac, Deniz

    2014-03-01

    Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited. Also, rather limited research has been carried out on predicting the performance of chain saw machines. This study differs from previous investigations in that Shore hardness values (SH1, SH2, and deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and the areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for the prediction of the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for only cutting the stone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, the normal force has a weak or moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted from Shore hardness values.

  7. Smart material screening machines using smart materials and controls

    NASA Astrophysics Data System (ADS)

    Allaei, Daryoush; Corradi, Gary; Waigand, Al

    2002-07-01

    The objective of this product is to address the specific need for improvements in the efficiency and effectiveness of physical separation technologies in the screening areas. Currently, the mining industry uses approximately 33 billion kW-hr per year of electrical energy for physical separations, costing 1.65 billion dollars at $0.05 per kW-hr. Even though screening and size separation are not the single most energy-intensive process in the mining industry, they are often the major bottleneck in the whole process. Improvements in this area offer tremendous potential for both energy savings and production improvements. Additionally, the vibrating screens used in mineral processing plants are the most costly areas from maintenance and worker health and safety points of view. The goal of this product is to reduce energy use in the screening and total processing areas. This goal is accomplished by developing an innovative screening machine based on smart materials and smart actuators, namely a Smart Screen, which uses an advanced sensory system to continuously monitor the screening process and make appropriate adjustments to improve production. The Smart Screen technology builds on two key technologies, smart actuators and smart Energy Flow Control™ (EFC™) strategies, developed initially for military applications. Smart Screen technology controls the flow of vibration energy and confines it to the screen rather than shaking much of the mass that makes up a conventional vibratory screening machine. Consequently, Smart Screens eliminate or downsize many of the structural components associated with conventional vibratory screening machines. As a result, the surface area of the screen increases for a given envelope. This increase in usable screening surface area extends the life of the screens, reduces required maintenance by reducing the frequency of screen change-outs, and improves throughput and productivity.

  8. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of each individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
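
    The blending idea reported above, combining multiple forecasts while feeding in atmospheric-state parameters, can be illustrated with a toy blender; here ordinary least squares stands in for the paper's machine-learning model, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic truth and two imperfect forecasts whose errors depend on a
# weather-situation feature (cloud cover in [0, 1]); all numbers are made up.
cloud = rng.uniform(0.0, 1.0, n)
truth = 1000.0 * (1.0 - 0.7 * cloud)                             # irradiance, W/m^2
model_a = truth + 80.0 * cloud * rng.standard_normal(n)          # worse when cloudy
model_b = truth + 80.0 * (1.0 - cloud) * rng.standard_normal(n)  # worse when clear

# Blend: regress the truth on both forecasts plus the situation feature,
# so the combination can re-weight the models by weather situation.
X = np.column_stack([model_a, model_b, cloud, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
blended = X @ coef

def rmse(pred):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

print(rmse(model_a), rmse(model_b), rmse(blended))
```

    Because each individual forecast is itself one column of the regression, the fitted blend can never do worse than the best single model on the fitting data; the situation feature is what lets it exceed simple weighted averaging.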

  9. Kinesin Motor Enzymology: Chemistry, Structure, and Physics of Nanoscale Molecular Machines.

    PubMed

    Cochran, J C

    2015-09-01

    Molecular motors are enzymes that convert chemical potential energy into controlled kinetic energy for mechanical work inside cells. Understanding the biophysics of these motors is essential for appreciating life as well as apprehending diseases that arise from motor malfunction. This review focuses on kinesin motor enzymology with special emphasis on the literature that reports the chemistry, structure and physics of several different kinesin superfamily members.

  10. Weakly supervised classification in high energy physics

    DOE PAGES

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...

    2017-05-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach, called weakly supervised classification, in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics, quark versus gluon tagging, we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
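
    The core idea above, training on class proportions alone, can be sketched with a toy proportion-matching objective: fit a score so that its average over each mixed sample matches that sample's known class fraction, with no per-event labels. This is a simplified stand-in for the paper's method, using a made-up 1-D feature rather than jet observables.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two mixed samples with known class proportions but no per-event labels,
# mimicking quark- and gluon-enriched samples (toy 1-D feature, made up).
def make_sample(n, frac_signal):
    is_signal = rng.random(n) < frac_signal
    return np.where(is_signal, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

f1, f2 = 0.8, 0.2
x1, x2 = make_sample(5000, f1), make_sample(5000, f2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a logistic score so that the *average* prediction on each sample
# matches its known class proportion -- no individual labels are used.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, f in ((x1, f1), (x2, f2)):
        p = sigmoid(w * x + b)
        err = p.mean() - f          # proportion mismatch for this sample
        grad = p * (1.0 - p)        # dp/dz, elementwise
        w -= lr * err * np.mean(grad * x)
        b -= lr * err * np.mean(grad)

# A positive weight means the score ranks signal-like events (x > 0) higher,
# even though it never saw an individual label.
print(w > 0)  # True
```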

  11. Weakly supervised classification in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach, called weakly supervised classification, in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics, quark versus gluon tagging, we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  12. Learning Activity Package, Physical Science. LAP Numbers 8, 9, 10, and 11.

    ERIC Educational Resources Information Center

    Williams, G. J.

    These four units of the Learning Activity Packages (LAPs) for individualized instruction in physical science cover nuclear reactions, alpha and beta particles, atomic radiation, medical use of nuclear energy, fission, fusion, simple machines, Newton's laws of motion, electricity, currents, electromagnetism, Oersted's experiment, sound, light,…

  13. Another look at Atwood's machine

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    1999-02-01

    Atwood's machine is a standard experimental apparatus that is likely to get pushed out of the laboratory portion of the general physics course due to the ever-increasing use of microcomputers. To avoid this, I now use the apparatus for an experiment during the work and energy portion of the course, which not only allows us to demonstrate those principles but also to compare them with Newton's laws of motion.
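
    The comparison the experiment makes, Newton's laws versus work and energy, reduces to a short calculation. For hanging masses m1 > m2, Newton's second law gives a = (m1 - m2)g/(m1 + m2), and the speed after the heavier mass descends a height h from rest must satisfy energy conservation. The mass values below are illustrative, not from the article:

```python
g = 9.81  # m/s^2, local gravitational acceleration

m1, m2 = 0.55, 0.45  # kg, hanging masses (illustrative values)
h = 1.0              # m, distance the heavier mass descends from rest

# Newton's second law for the coupled system (massless string, frictionless pulley):
a = (m1 - m2) * g / (m1 + m2)

# Kinematics: speed of the masses after descending h from rest.
v = (2.0 * a * h) ** 0.5

# Work-energy check: net loss of potential energy equals the kinetic energy gained.
delta_PE = (m1 - m2) * g * h
KE = 0.5 * (m1 + m2) * v ** 2
print(a, v, abs(delta_PE - KE) < 1e-9)
```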

  14. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan

    Synchronous machines have traditionally acted as the foundation of large-scale electrical infrastructures, and their physical properties have formed the cornerstone of system operations. However, with the increased integration of distributed renewable resources and energy-storage technologies, there is a need to systematically acknowledge the dynamics of power-electronics inverters - the primary energy-conversion interface in such systems - in all aspects of modeling, analysis, and control of the bulk power network. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprising a synchronous generator, a three-phase inverter, and a load. The inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the inverter current controller and the machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
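
    The linearize-and-inspect-eigenvalues step described above can be illustrated generically: small-signal stability requires every eigenvalue of the linearized state matrix to lie strictly in the left half-plane. The matrices below are toy examples, not the paper's machine-inverter model.

```python
import numpy as np

# Toy linearized small-signal models dx/dt = A x (numbers are illustrative,
# not the paper's machine-inverter system).
A_stable = np.array([[-1.0,  5.0],
                     [-5.0, -0.5]])
A_unstable = np.array([[ 1.0,  5.0],
                       [-5.0, -0.5]])

def is_stable(A):
    """Small-signal stability: all eigenvalues strictly in the left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0.0))

print(is_stable(A_stable), is_stable(A_unstable))  # True False
```

    In a study like the one above, entries of A would vary with the machine-to-inverter power ratio and the controller gains, and the stability boundary is traced by watching where eigenvalues cross into the right half-plane.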

  15. Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing.

    PubMed

    Park, Ki-Woong; Lee, Younho; Baek, Sung Hoon

    2017-08-08

    In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber-physical systems. The objects include mobile platforms such as smartphones, and Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on these objects, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board and present evaluation results from our testbed. Comparing our system against alternative approaches that implement this functionality using only cellular or only Wi-Fi (but not both), we show that in typical usage T-Wing consumes up to 15× less energy and is faster by an order of magnitude.

  16. A journey from nuclear criticality methods to high energy density radflow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbatsch, Todd James

    Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high-energy-density physics program in which we design and execute experiments to study radiation-hydrodynamics phenomena and improve the predictive capability of our large-scale multi-physics software codes on our big-iron computers. The Radflow project's main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacity platform on Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to get results in early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges, such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school; radiative transfer methods research and software development for his first 16 years at LANL; and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy, but they sure are fun.

  17. Future hadron colliders: From physics perspectives to technology R&D

    NASA Astrophysics Data System (ADS)

    Barletta, William; Battaglia, Marco; Klute, Markus; Mangano, Michelangelo; Prestemon, Soren; Rossi, Lucio; Skands, Peter

    2014-11-01

    High energy hadron colliders have been instrumental to discoveries in particle physics at the energy frontier, and their role as discovery machines will remain unchallenged for the foreseeable future. The full exploitation of the LHC is now the highest priority of the energy-frontier collider program. This includes the high-luminosity LHC project, which is made possible by a successful technology-readiness program for Nb3Sn superconductor and magnet engineering based on long-term high-field magnet R&D programs. These programs open the path towards collisions with luminosity of 5×10³⁴ cm⁻² s⁻¹ and represent the foundation for considering future proton colliders of higher energies. This paper discusses physics requirements, experimental conditions, technological aspects and design challenges for the development of proton colliders of increasing energy and luminosity.

  18. Teardrop chunker performance.

    Treesearch

    Joseph B. Sturos

    1989-01-01

    Describes a new machine designed to reduce small-diameter logs into small chunks or blocks. The chunks can be used to manufacture flakeboard and composite wood products as well as for energy wood. Presents data on the physical character of chunkwood produced; production rates; and torque, power, and energy requirements for two species and two nominal chunk lengths....

  19. Machine Learning to Improve Energy Expenditure Estimation in Children With Disabilities: A Pilot Study in Duchenne Muscular Dystrophy.

    PubMed

    Pande, Amit; Mohapatra, Prasant; Nicorici, Alina; Han, Jay J

    2016-07-19

    Children with physical impairments are at a greater risk for obesity and decreased physical activity. A better understanding of physical activity patterns and energy expenditure (EE) would lead to a more targeted approach to intervention. This study focuses on the use of machine-learning algorithms for EE estimation in children with disabilities. A pilot study was conducted on children with Duchenne muscular dystrophy (DMD) to identify important factors for determining EE and develop a novel algorithm to accurately estimate EE from wearable-sensor-collected data. Seven boys with DMD, 6 healthy control boys, and 22 control adults were recruited. Data were collected using smartphone accelerometers and chest-worn heart rate sensors. The gold standard EE values were obtained from the COSMED K4b2 portable cardiopulmonary metabolic unit worn by boys (aged 6-10 years) with DMD and controls. Data from this sensor setup were collected simultaneously during a series of concurrent activities. Linear regression and nonlinear machine-learning-based approaches were used to analyze the relationship between accelerometer and heart rate readings and COSMED values. Existing calorimetry equations using linear regression and nonlinear machine-learning-based models, developed for healthy adults and young children, correlate poorly with actual EE values in children with disabilities (14%-40%). The proposed model for boys with DMD uses ensemble machine learning techniques and gives a 91% correlation with actual measured EE values (root mean square error of 0.017). Our results confirm that the methods developed to determine EE using accelerometer and heart rate sensor values in normal adults are not appropriate for children with disabilities and should not be used. A much more accurate model is obtained using machine-learning-based nonlinear regression specifically developed for this target population. ©Amit Pande, Prasant Mohapatra, Alina Nicorici, Jay J Han.
Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 19.07.2016.

  20. Parallel Computing: Some Activities in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  1. Molecular machines operating on the nanoscale: from classical to quantum

    PubMed Central

    2016-01-01

    The main physical features and operating principles of isothermal nanomachines in the microworld, common to both classical and quantum machines, are reviewed. Special attention is paid to the dual, constructive role of dissipation and thermal fluctuations, the fluctuation–dissipation theorem, heat losses and free energy transduction, thermodynamic efficiency, and thermodynamic efficiency at maximum power. Several basic models are considered and discussed to highlight generic physical features. This work examines some common fallacies that continue to plague the literature. In particular, the erroneous beliefs that one should minimize friction and lower the temperature for high performance of Brownian machines, and that the thermodynamic efficiency at maximum power cannot exceed one-half are discussed. The emerging topic of anomalous molecular motors operating subdiffusively but very efficiently in the viscoelastic environment of living cells is also discussed. PMID:27335728

  2. Machine learning-based dual-energy CT parametric mapping

    NASA Astrophysics Data System (ADS)

    Su, Kuan-Hao; Kuo, Jung-Wen; Jordan, David W.; Van Hedent, Steven; Klahr, Paul; Wei, Zhouping; Helo, Rose Al; Liang, Fan; Qian, Pengjiang; Pereira, Gisele C.; Rassouli, Negin; Gilkeson, Robert C.; Traughber, Bryan J.; Cheng, Chee-Wai; Muzic, Raymond F., Jr.

    2018-06-01

    The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Zeff), relative electron density (ρe), mean excitation energy (Ix), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically-used dual-energy, physics-based method which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 s. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.
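
    Of the three methods, the historical-centroid idea admits the simplest sketch: assign each voxel's dual-energy measurement to the nearest known tissue centroid and look up its parametric values. The centroids and values below are illustrative assumptions, not calibrated data, and the paper's HC implementation may differ in detail.

```python
import numpy as np

# Hypothetical tissue centroids: mean CT numbers (HU) at low/high kVp for a few
# tissue substitutes, paired with known parametric values (illustrative only).
centroids = np.array([[-100.0, -80.0],   # adipose-like
                      [  40.0,  35.0],   # muscle-like
                      [ 700.0, 450.0]])  # bone-like
rel_electron_density = np.array([0.95, 1.04, 1.45])

def predict_rho_e(hu_low, hu_high):
    """Nearest-centroid lookup of relative electron density for one voxel."""
    d = np.linalg.norm(centroids - np.array([hu_low, hu_high]), axis=1)
    return float(rel_electron_density[np.argmin(d)])

print(predict_rho_e(35.0, 30.0))  # 1.04 (nearest the muscle-like centroid)
```

    RF and ANN replace this hard nearest-neighbour lookup with learned regressions, which is what gives them their smoother behaviour at low exposure (high noise).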

  3. Machine learning-based dual-energy CT parametric mapping.

    PubMed

    Su, Kuan-Hao; Kuo, Jung-Wen; Jordan, David W; Van Hedent, Steven; Klahr, Paul; Wei, Zhouping; Al Helo, Rose; Liang, Fan; Qian, Pengjiang; Pereira, Gisele C; Rassouli, Negin; Gilkeson, Robert C; Traughber, Bryan J; Cheng, Chee-Wai; Muzic, Raymond F

    2018-06-08

    The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Zeff), relative electron density (ρe), mean excitation energy (Ix), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically-used dual-energy, physics-based method which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 s. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.

  4. CLIC CDR - physics and detectors: CLIC conceptual design report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, E.; Demarteau, M.; Repond, J.

    This report forms part of the Conceptual Design Report (CDR) of the Compact LInear Collider (CLIC). The CLIC accelerator complex is described in a separate CDR volume. A third document, to appear later, will assess strategic scenarios for building and operating CLIC in successive center-of-mass energy stages. It is anticipated that CLIC will commence operation at a few hundred GeV, giving access to precision standard-model physics such as Higgs and top-quark physics. Then, depending on the physics landscape, CLIC operation would be staged in a few steps, ultimately reaching the maximum 3 TeV center-of-mass energy. Such a scenario would maximize the physics potential of CLIC, providing new-physics discovery potential over a wide range of energies and the ability to make precision measurements of possible new states previously discovered at the Large Hadron Collider (LHC). The main purpose of this document is to address the physics potential of a future multi-TeV e⁺e⁻ collider based on CLIC technology and to describe the essential features of a detector required to deliver the full physics potential of this machine. The experimental conditions at CLIC are significantly more challenging than those at previous electron-positron colliders due to the much higher levels of beam-induced backgrounds and the 0.5 ns bunch spacing. Consequently, a large part of this report is devoted to understanding the impact of the machine environment on the detector, with the aim of demonstrating, with the example of realistic detector concepts, that high-precision physics measurements can be made at CLIC. Since the impact of background increases with energy, this document concentrates on the detector requirements and physics measurements at the highest CLIC center-of-mass energy of 3 TeV. One essential output of this report is the clear demonstration that a wide range of high-precision physics measurements can be made at CLIC with detectors which are challenging, but considered feasible following a realistic future R&D program.

  5. Industrial femtosecond lasers for machining of heat-sensitive polymers (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hendricks, Frank; Bernard, Benjamin; Matylitsky, Victor V.

    2017-03-01

    Heat-sensitive materials, such as polymers, are used increasingly in various industrial sectors such as medical device manufacturing and organic electronics. Medical applications include implantable devices like stents, catheters and wires, which need to be structured and cut with minimum heat damage. The flat panel display market is also moving from LCD displays to organic LED (OLED) solutions, which utilize heat-sensitive polymer substrates. In both areas, the substrates often consist of multilayer stacks of different types of materials, such as metals, dielectric layers and polymers, with different physical characteristics. The different thermal behavior and laser absorption properties of the materials used make these stacks difficult to machine using conventional laser sources. Femtosecond lasers are an enabling technology for micromachining of these materials, since it is possible to machine ultrafine structures with minimum thermal impact and very precise control over the material removed. An industrial femtosecond Spirit HE laser system from Spectra-Physics with pulse duration <400 fs, pulse energies of >120 μJ and average output powers of >16 W is an ideal tool for industrial micromachining of a wide range of materials with the highest quality and efficiency. The laser offers process flexibility with programmable pulse energy, repetition rate, and pulse width. In this paper, we provide an overview of machining heat-sensitive materials using the Spirit HE laser. In particular, we show how the laser parameters (e.g. laser wavelength, pulse duration, applied energy and repetition rate) and the processing strategy (gas-assisted single-pass cut vs. multi-scan process) influence the efficiency and quality of laser processing.

  6. Uniting Cheminformatics and Chemical Theory To Predict the Intrinsic Aqueous Solubility of Crystalline Druglike Molecules

    PubMed Central

    2014-01-01

    We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264
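
    The combined-input experiment above, concatenating theoretical energy terms with cheminformatics descriptors and cross-validating the result, can be sketched with synthetic data and a cross-validated linear model standing in for the paper's machine-learning models; all names and numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120

# Synthetic stand-ins for the two descriptor sources (all values made up):
cdk_desc = rng.standard_normal((n, 4))      # "cheminformatics" descriptors
energy_terms = rng.standard_normal((n, 2))  # "theoretical" energy terms
log_s = (cdk_desc @ np.array([0.5, -0.3, 0.2, 0.1])
         + energy_terms @ np.array([0.4, -0.2])
         + 0.3 * rng.standard_normal(n))    # solubility depends on both blocks

def cv_rmse(X, y, k=10):
    """k-fold cross-validated RMSE of an ordinary least-squares model."""
    idx = np.arange(len(y))
    sq_errs = []
    for fold in range(k):
        test = idx % k == fold
        Xtr = np.column_stack([X[~test], np.ones((~test).sum())])
        Xte = np.column_stack([X[test], np.ones(test.sum())])
        coef, *_ = np.linalg.lstsq(Xtr, y[~test], rcond=None)
        sq_errs.append((Xte @ coef - y[test]) ** 2)
    return float(np.sqrt(np.concatenate(sq_errs).mean()))

combined = np.hstack([cdk_desc, energy_terms])
print(cv_rmse(cdk_desc, log_s), cv_rmse(combined, log_s))
```

    Here the synthetic target genuinely depends on both blocks, so combining them lowers the cross-validated RMSE; the paper's finding was that, for real molecules, the two real descriptor sets overlap enough that the gain is only modest.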

  7. Intelligent Vehicle Power Management Using Machine Learning and Fuzzy Logic

    DTIC Science & Technology

    2008-06-01

    batteries of similar physical size. An ultracapacitor can receive regenerative energy and give power during peak periods. Moreno et al. proposed to use an ultracapacitor as an auxiliary energy system in combination with a primary source that is unable to accept energy from regenerative braking [22]. There are other power sources that are being considered in HEV research [20-22], and future vehicle systems may use combinations of

  8. Physical Analytics: An emerging field with real-world applications and impact

    NASA Astrophysics Data System (ADS)

    Hamann, Hendrik

    2015-03-01

    In the past, most information on the internet originated from humans or computers. With the emergence of cyber-physical systems, however, vast amounts of data are now being created by sensors on devices, machines, and other elements of the digitized physical world. While cyber-physical systems are the subject of active research around the world, the vast amount of actual data generated from the physical world has so far attracted little attention from the engineering and physics communities. In this presentation we use examples to highlight the opportunities that this new subject of ``Physical Analytics'' offers for highly interdisciplinary research (spanning physics, engineering and computer science), which aims at understanding real-world physical systems by leveraging cyber-physical technologies. More specifically, the convergence of the physical world with the digital domain allows physical principles to be applied to everyday problems far more effectively and in a better-informed way than was possible in the past. Much as traditional applied physics and engineering made enormous advances and changed our lives by making detailed measurements to understand the physics of an engineered device, we can now apply the same rigor and principles to understand large-scale physical systems. In the talk we first present a set of ``configurable'' enabling technologies for Physical Analytics, including ultralow-power sensing and communication, physical big-data management, numerical modeling of physical systems, machine-learning-based physical model blending, and physical-analytics-based automation and control. We then discuss in detail several concrete applications of Physical Analytics, ranging from energy management in buildings and data centers, environmental sensing and controls, and precision agriculture to renewable energy forecasting and management.

  9. Using Phun to Study ``Perpetual Motion'' Machines

    NASA Astrophysics Data System (ADS)

    Koreš, Jaroslav

    2012-05-01

    The concept of "perpetual motion" has a long history. The Indian astronomer and mathematician Bhaskara II (12th century) was the first person to describe a perpetual motion (PM) machine. An example of a 13th-century PM machine is shown in Fig. 1. Although the law of conservation of energy clearly implies the impossibility of PM construction, over the centuries numerous proposals for PM have been made, involving ever more elements of modern science in their construction. It is possible to test a variety of PM machines in the classroom using a program called Phun or its commercial version, Algodoo. The programs are designed to simulate physical processes, and we can easily simulate mechanical machines using them. They provide an intuitive graphical environment controlled with a mouse; no programming language is needed. This paper describes simulations of four different (supposed) PM machines.

  10. A journey from nuclear criticality methods to high energy density radflow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbatsch, Todd James

    Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high-energy-density physics program in which we design and execute experiments to study radiation-hydrodynamics phenomena and improve the predictive capability of our large-scale multi-physics software codes on our big-iron computers. The Radflow project's main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacity platform on Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to produce results in early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges, such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school, then radiative transfer methods research and software development for his first 16 years at LANL, and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy, and they are as saturated with politics as a presidential election, but they sure are fun.

  11. Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing

    PubMed Central

    Baek, Sung Hoon

    2017-01-01

    In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber-physical systems. The objects include mobile platforms such as smartphones, as well as Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on these objects, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of an object to be turned on purely on demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board, and present evaluation results from our testbed. We compare our system against alternative approaches that implement this functionality using only the cellular or Wi-Fi interface (but not both), and show that in typical usage T-Wing consumes up to 15× less energy and is faster by an order of magnitude. PMID:28786942

  12. The machine body metaphor: From science and technology to physical education and sport, in France (1825-1935).

    PubMed

    Gleyse, J

    2013-12-01

    The long history of the conception of physical exercise in France may be viewed as a function of a series of changes in understanding the body. Scientific concepts were used to present the body in official texts by authors specializing in the subject, or to describe them, as did Michel Foucault, as epistemic changes. A departure occurred during the 19th century that is clearly demonstrated in the writings of Gustave Adolphe Hirn. This breakthrough concerned the idea of considering the organism as an energy-generating machine. This metaphor was employed in describing the body during physical exercise from the 17th to the 19th centuries, when the body was thought of as mechanical. Such metaphors were used by the most relevant figures writing at the end of the 19th century in the rationale that is examined in this paper. It shows how Hirn, Marey, Lagrange, Demenij, Hebert, and Tissié saw the body and how they employed machine metaphors when referring to it. These machine metaphors are analyzed from the time of their scientific and technological origins up to their current use in physical and sports education. This analysis will contribute to the understanding of how a scientific metaphor comes to be in common use and may lead to particular exercise practices. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Rotating electrical machines: Poynting flow

    NASA Astrophysics Data System (ADS)

    Donaghy-Spargo, C.

    2017-09-01

    This paper presents a complementary approach to the traditional Lorentz and Faraday approaches that are typically adopted in the classroom when teaching the fundamentals of electrical machines—motors and generators. The approach adopted is based upon the Poynting vector, which illustrates the ‘flow’ of electromagnetic energy. It is shown through simple vector analysis that this energy-flux-density approach can provide insight into the operation of electrical machines, and that the results agree with conventional Maxwell stress-based theory. The advantage of this approach is that it completes the physical picture of the electromechanical energy-conversion process; it also serves to maintain student interest in the subject and offers an unconventional application of the Poynting vector during the normal study of electromagnetism.

  14. UFMulti: A new parallel processing software system for HEP

    NASA Astrophysics Data System (ADS)

    Avery, Paul; White, Andrew

    1989-12-01

    UFMulti is a multiprocessing software package designed for general-purpose high energy physics applications, including physics and detector simulation, data reduction and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines, with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future.

  15. Proceedings of the workshop on B physics at hadron accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McBride, P.; Mishra, C.S.

    1993-12-31

    This report contains papers on the following topics: Measurement of Angle {alpha}; Measurement of Angle {beta}; Measurement of Angle {gamma}; Other B Physics; Theory of Heavy Flavors; Charged Particle Tracking and Vertexing; e and {gamma} Detection; Muon Detection; Hadron ID; Electronics, DAQ, and Computing; and Machine Detector Interface. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  16. The Effects of Operational Parameters on a Mono-wire Cutting System: Efficiency in Marble Processing

    NASA Astrophysics Data System (ADS)

    Yilmazkaya, Emre; Ozcelik, Yilmaz

    2016-02-01

    Mono-wire block cutting machines, which cut with a diamond wire, can be used for squaring natural stone blocks and for the slab-cutting process. Efficient use of these machines reduces operating costs by ensuring less diamond wire wear and longer wire life at high speeds. Because the machines require a high capital investment, using them efficiently is essential to reducing production costs and increasing plant efficiency. There is therefore a need to investigate the cutting performance of mono-wire cutting machines in terms of rock properties and operating parameters. This study investigates the effects of the wire rotational speed (peripheral speed) and wire descending speed (cutting speed), the operating parameters of a mono-wire cutting machine, on unit wear and unit energy, the performance parameters in mono-wire cutting. Using the obtained results, cuttability charts for each natural stone were created on the basis of unit wear and unit energy values, cutting optimizations were performed, and the relationships between some physical and mechanical properties of the rocks and the optimum cutting parameters obtained from the optimization were investigated.

  17. Orthogonal cutting modeling of hybrid CFRP/Ti toward specific cutting energy and induced damage analyses

    NASA Astrophysics Data System (ADS)

    Xu, Jinyang; El Mansori, Mohamed

    2016-10-01

    This paper studies the machinability of hybrid CFRP/Ti stacks via a numerical approach. To this aim, an original FE model consisting of three fundamental physical constituents, i.e., a CFRP phase, an interface and a Ti phase, was established in the Abaqus/Explicit code to capture the machining behavior of the composite-to-metal alliance. The CFRP phase was modeled as an equivalent homogeneous material (EHM) by considering its anisotropic behavior relative to the fiber orientation (θ), while the Ti alloy phase was assumed to exhibit isotropic, elastic-plastic behavior. The "interface" linking the CFRP-to-Ti contact boundary was physically modeled as an intermediate transition region through the concept of a cohesive zone (CZ). Different constitutive laws and damage criteria were implemented to simulate the chip separation process of the bi-material system. The key cutting responses, including specific cutting energy consumption, induced subsurface damage, and interface delamination, were precisely addressed via comprehensive FE analyses, and several key conclusions were drawn from this study.

  18. Status of the Future Circular Collider Study

    NASA Astrophysics Data System (ADS)

    Benedikt, Michael

    2016-03-01

    Following the 2013 update of the European Strategy for Particle Physics, the international Future Circular Collider (FCC) Study has been launched by CERN as host institute, to design an energy frontier hadron collider (FCC-hh) in a new 80-100 km tunnel with a centre-of-mass energy of about 100 TeV, an order of magnitude beyond the LHC's, as a long-term goal. The FCC study also includes the design of a 90-350 GeV high-luminosity lepton collider (FCC-ee) installed in the same tunnel, serving as Higgs, top and Z factory, as a potential intermediate step, as well as an electron-proton collider option (FCC-he). The physics cases for such machines will be assessed and concepts for experiments will be developed in time for the next update of the European Strategy for Particle Physics by the end of 2018. The presentation will summarize the status of machine designs and parameters and discuss the essential technical components to be developed in the frame of the FCC study. Key elements are superconducting accelerator-dipole magnets with a field of 16 T for the hadron collider and high-power, high-efficiency RF systems for the lepton collider. In addition the unprecedented beam power presents special challenges for the hadron collider for all aspects of beam handling and machine protection. First conclusions of geological investigations and implementation studies will be presented. The status of the FCC collaboration and the further planning for the study will be outlined.

  19. National Synchrotron Light Source annual report 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulbert, S.L.; Lazarz, N.M.

    1992-04-01

    This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and NSLS computer system.

  20. A plea for "variational neuroethology". Comment on "Answering Schrödinger's question: A free-energy formulation" by M.J. Desormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Daunizeau, Jean

    2018-03-01

    What is life? According to Erwin Schrödinger [13], the living cell departs from other physical systems in that it - apparently - resists the second law of thermodynamics by restricting the dynamical repertoire (minimizing the entropy) of its physiological states. This is a physical rephrasing of Claude Bernard's biological notion of homeostasis, namely: the capacity of living systems to self-organize in order to maintain the stability of their internal milieu despite uninterrupted exchanges with an ever-altering external environment [2]. The important point here is that physical systems can neither identify nor prevent a state of high entropy. The Free Energy Principle or FEP was originally proposed as a mathematical description of how the brain actually solves this issue [4]. In line with the Bayesian brain hypothesis, the FEP views the brain as a hierarchical statistical learning machine, endowed with the imperative of minimizing Free Energy, i.e. prediction error. Action prescription under the FEP, however, does not follow standard Bayesian decision theory. Rather, action is assumed to further minimize Free Energy, which makes the active brain a self-fulfilling prophecy machine [6]. This is adaptive, under the assumption that evolution has equipped the brain with innate priors centered on homeostatic set points. In turn, avoiding (surprising) violations of such prior predictions implements homeostatic regulation [10], which becomes increasingly anticipatory as learning unfolds over the course of ontological development [5].

  1. A Machine Learning Framework to Forecast Wave Conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; James, S. C.; O'Donncha, F.

    2017-12-01

    Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1, 2013 and May 31, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds.
This solution has obvious applications to wave-energy generation as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing" where a device could forecast its own 48-hour energy production.
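    The "mapping matrix" surrogate idea, replacing a full SWAN run with a matrix multiplication, can be sketched as follows: a linear mapping from forcing inputs to wave heights is fitted by least squares, after which a forecast costs only a matrix-vector product. The two forcing inputs, three output grid points, and the `fit_mapping`/`predict` helpers are illustrative inventions, not the authors' code.

```python
import random

def fit_mapping(U, Y):
    """Fit Y ≈ U M^T by per-output least squares for two input features,
    mimicking the mapping-matrix surrogate for the SWAN model (sketch)."""
    # Closed-form normal equations for a two-feature design.
    s00 = sum(u[0] * u[0] for u in U)
    s01 = sum(u[0] * u[1] for u in U)
    s11 = sum(u[1] * u[1] for u in U)
    det = s00 * s11 - s01 * s01
    M = []
    for j in range(len(Y[0])):                 # one row per output grid point
        b0 = sum(u[0] * y[j] for u, y in zip(U, Y))
        b1 = sum(u[1] * y[j] for u, y in zip(U, Y))
        M.append([(s11 * b0 - s01 * b1) / det,
                  (s00 * b1 - s01 * b0) / det])
    return M

def predict(M, u):
    # A forecast is just a cheap matrix-vector product ("edge computing").
    return [row[0] * u[0] + row[1] * u[1] for row in M]

rng = random.Random(0)
true_M = [[0.8, 0.1], [0.5, 0.3], [0.2, 0.6]]  # 3 grid points, 2 forcings
U = [[rng.uniform(0, 5), rng.uniform(0, 5)] for _ in range(200)]
Y = [[sum(m * x for m, x in zip(row, u)) + rng.gauss(0, 0.01)
      for row in true_M] for u in U]
M = fit_mapping(U, Y)
h = predict(M, [2.0, 1.0])  # wave heights for wind=2, boundary wave=1
```

    The real framework maps thousands of grid points from many forcing inputs, but the forecast step is the same matrix multiplication, which is why it runs orders of magnitude faster than SWAN.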

  2. Resident Space Object Characterization and Behavior Understanding via Machine Learning and Ontology-based Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.

    2016-09-01

    In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-Art machine learning architectures (e.g. Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the features space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge-base (KB) and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy so that it enables a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.

  3. 10 CFR 431.292 - Definitions concerning refrigerated bottled or canned beverage vending machines.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... beverage vending machines. 431.292 Section 431.292 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY... Vending Machines § 431.292 Definitions concerning refrigerated bottled or canned beverage vending machines. Basic model means, with respect to refrigerated bottled or canned beverage vending machines, all units...

  4. Study of energy parameters of machine parts of water-ice jet cleaning applications

    NASA Astrophysics Data System (ADS)

    Prezhbilov, A. N.; Burnashov, M. A.

    2018-03-01

    This paper examines the cleaning of machine elements, i.e., the removal of contaminants by means of a cryogenic water-ice jet whose particles are prepared beforehand. It presents a classification of the most common contaminants appearing on the surfaces of machine elements after long-term service. The conceptual contribution of the paper is a thermo-physical model of contaminant removal by a water-ice jet. The study establishes the dependencies between the friction force of an ice particle against an obstacle (the contamination), the dimensional change of an ice particle during the cleaning process, and the quantity of heat transmitted to the ice particle.

  5. National Synchrotron Light Source annual report 1991. Volume 1, October 1, 1990--September 30, 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulbert, S.L.; Lazarz, N.M.

    1992-04-01

    This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and NSLS computer system.

  6. Dynamic VMs placement for energy efficiency by PSO in cloud computing

    NASA Astrophysics Data System (ADS)

    Dashti, Seyed Ebrahim; Rahmani, Amir Masoud

    2016-03-01

    Recently, cloud computing has been growing fast and is helping to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements for these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the provider's perspective, the mismatch between the specifications of physical machines and user requests in the cloud leads to problems such as the energy-performance trade-off and large power consumption, which decrease profits. To guarantee the quality of service of users' tasks and to improve energy efficiency, we modify Particle Swarm Optimisation to reallocate migrated virtual machines on overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim demonstrate that, under conditions close to a real environment, our method saves as much as 14% more energy while significantly reducing the number of migrations and the simulation time compared with previous works.
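    A minimal sketch of the Particle Swarm Optimisation at the core of the proposed method is shown below. The toy "energy" cost (squared deviation from an ideal host utilisation) and all parameter values are assumptions for illustration, not the paper's modified PSO or its VM placement model.

```python
import random

def pso(cost, dim, n_particles=20, iters=100, seed=0):
    """Minimal continuous PSO minimising an arbitrary cost (sketch)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                  # per-particle best positions
    pcost = [cost(x) for x in X]
    g = pbest[min(range(n_particles), key=lambda i: pcost[i])][:]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            c = cost(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i][:], c
                if c < cost(g):
                    g = X[i][:]
    return g

# Toy energy model: penalise deviation from an ideal CPU utilisation of 0.7
# on each of three hosts (hypothetical numbers, not from the paper).
energy = lambda u: sum((x - 0.7) ** 2 for x in u)
best = pso(energy, dim=3)
```

    In the paper's setting the decision variables encode VM-to-host assignments and the cost combines power consumption with QoS penalties; the swarm update rule itself is the part sketched here.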

  7. Comparison of measured electron energy spectra for six matched, radiotherapy accelerators.

    PubMed

    McLaughlin, David J; Hogstrom, Kenneth R; Neck, Daniel W; Gibbons, John P

    2018-05-01

    This study compares energy spectra of the multiple electron beams of individual radiotherapy machines, as well as the sets of spectra across multiple matched machines. Also, energy spectrum metrics are compared with central-axis percent depth-dose (PDD) metrics. A lightweight, permanent magnet spectrometer was used to measure energy spectra for seven electron beams (7-20 MeV) on six matched Elekta Infinity accelerators with the MLCi2 treatment head. PDD measurements in the distal falloff region provided R50 and R80-20 metrics in Plastic Water®, which correlated with the energy spectrum metrics, peak mean energy (PME) and full-width at half maximum (FWHM). Visual inspection of energy spectra and their metrics showed whether beams on single machines were properly tuned, i.e., FWHM is expected to increase and peak height to decrease monotonically with increased PME. Also, PME spacings are expected to be approximately equal for the 7-13 MeV beams (0.5-cm R90 spacing) and for the 13-16 MeV beams (1.0-cm R90 spacing). Most machines failed these expectations, presumably due to tolerances for initial beam matching (0.05 cm in R90; 0.10 cm in R80-20) and ongoing quality assurance (0.2 cm in R50). Also, comparison of energy spectra or metrics for a single beam energy (six machines) showed outlying spectra. These variations in energy spectra provided ample data spread for correlating PME and FWHM with PDD metrics. Least-squares fits showed that R50 and R80-20 varied linearly and supralinearly with PME, respectively; however, both suggested a secondary dependence on FWHM. Hence, PME and FWHM could serve as surrogates for R50 and R80-20 for beam tuning by the accelerator engineer, possibly being more sensitive (e.g., 0.1 cm in R80-20 corresponded to 2.0 MeV in FWHM).
Results of this study suggest a lightweight, permanent magnet spectrometer could be a useful beam-tuning instrument for the accelerator engineer to (a) match electron beams prior to beam commissioning, (b) tune electron beams for the duration of their clinical use, and (c) provide estimates of PDD metrics following machine maintenance. However, a real-time version of the spectrometer is needed to be practical. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
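    The supralinear dependence of R80-20 on PME reported above can be modelled by a power-law fit in log-log space. The sketch below does this on invented (PME, R80-20) pairs; the data and the exponent are illustrative, not the paper's measurements.

```python
import math

def power_fit(x, y):
    """Fit y = a * x**p by least squares in log-log space, one way to
    model a supralinear metric-vs-energy relationship (sketch)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    # Slope of the log-log regression line is the power-law exponent p.
    p = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
    a = math.exp(my - p * mx)
    return a, p

# Hypothetical (PME in MeV, R80-20 in cm) pairs, not the paper's data.
pme = [7.0, 9.0, 10.0, 11.0, 13.0, 16.0, 20.0]
r8020 = [0.9 * (e / 10.0) ** 1.3 for e in pme]
a, p = power_fit(pme, r8020)   # recovers the generating exponent 1.3
```

    An exponent p > 1 from such a fit is what "varied supralinearly with PME" means quantitatively.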

  8. Design study for a staged Very Large Hadron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peter J. Limon et al.

    Advancing accelerator designs and technology to achieve the highest energies has enabled remarkable discoveries in particle physics. This report presents the results of a design study for a new collider at Fermilab that will create exceptional opportunities for particle physics--a two-stage very large hadron collider. In its first stage, the machine provides a facility for energy-frontier particle physics research, at an affordable cost and on a reasonable time scale. In a second-stage upgrade in the same tunnel, the VLHC offers the possibility of reaching 100 times the collision energy of the Tevatron. The existing Fermilab accelerator complex serves as the injector, and the collision halls are on the Fermilab site. The Stage-1 VLHC reaches a collision energy of 40 TeV and a luminosity comparable to that of the LHC, using robust superferric magnets of elegant simplicity housed in a large-circumference tunnel. The Stage-2 VLHC, constructed after the scientific potential of the first stage has been fully realized, reaches a collision energy of at least 175 TeV with the installation of high-field magnets in the same tunnel. It makes optimal use of the infrastructure developed for the Stage-1 machine, using the Stage-1 accelerator itself as the injector. The goals of this study, commissioned by the Fermilab Director in November 2000, are: to create reasonable designs for the Stage-1 and Stage-2 VLHC in the same tunnel; to discover the technical challenges and potential impediments to building such a facility at Fermilab; to determine the approximate costs of the major elements of the Stage-1 VLHC; and to identify areas requiring significant R and D to establish the basis for the design.

  9. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    NASA Astrophysics Data System (ADS)

    Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr

    2017-10-01

    Most modern analyses in high energy physics use signal-versus-background classification techniques based on machine learning methods, and on neural networks in particular. Deep-learning neural networks are the most promising modern technique for separating signal from background and can nowadays be widely and successfully implemented as part of a physics analysis. In this article we compare deep-learning and Bayesian neural networks as classifiers in an instance of top-quark analysis.
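    As a minimal stand-in for the signal-versus-background classifiers discussed here, the sketch below trains a tiny logistic regression on one invented kinematic feature; in practice a deep or Bayesian network would replace `train_logistic`, and the feature, data, and names are all assumptions for illustration.

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=200):
    """Tiny logistic-regression classifier trained by stochastic
    gradient descent on the cross-entropy loss (sketch)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))    # predicted P(signal)
            g = p - t                          # gradient of cross-entropy
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def classify(model, x):
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0                   # 1 = signal, 0 = background

# Toy events: one "kinematic" feature separates signal from background.
rng = random.Random(0)
sig = [[rng.gauss(2.0, 0.5)] for _ in range(100)]
bkg = [[rng.gauss(-2.0, 0.5)] for _ in range(100)]
X = sig + bkg
y = [1] * 100 + [0] * 100
model = train_logistic(X, y)
acc = sum(classify(model, x) == t for x, t in zip(X, y)) / len(y)
```

    Deep networks extend this picture with many such units stacked in layers, while a Bayesian network places a distribution over the weights instead of point estimates.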

  10. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    2012-12-01

    The use of virtual machine images, as for example with cloud services such as Amazon's Elastic Compute Cloud, is attractive for users because it guarantees an execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks that preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images, which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.

  11. From Emergence to Eruption: The Physics and Diagnostics of Solar Active Regions

    NASA Astrophysics Data System (ADS)

    Cheung, Mark

    2017-08-01

    The solar photosphere is continuously seeded by the emergence of magnetic fields from the solar interior. In turn, photospheric evolution shapes the magnetic terrain in the overlying corona. Magnetic fields in the corona store the energy needed to power coronal mass ejections (CMEs) and solar flares. In this talk, we recount a physics-based narrative of solar eruptive events from cradle to grave, from emergence to eruption, from evaporation to condensation. We review the physical processes which are understood to transport magnetic flux from the interior to the surface, inject free energy and twist into the corona, disentangle the coronal field to permit explosive energy release, and subsequently convert the released energy into observable signatures. Along the way, we review observational diagnostics used to constrain theories of active region evolution and eruption. Finally, we discuss the opportunities and challenges enabled by the large existing repository of solar observations. We argue that the synthesis of physics and diagnostics embodied in (1) data-driven modeling and (2) machine learning efforts will be an accelerating agent for scientific discovery.

  12. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In either case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. Since version 3, the CernVM virtual machine is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on upcoming developments, including support for Scientific Linux 7, the use of container virtualization such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  13. Perspective: Machine learning potentials for atomistic simulations

    NASA Astrophysics Data System (ADS)

    Behler, Jörg

    2016-11-01

    Nowadays, computer simulations have become a standard tool in essentially all fields of chemistry, condensed matter physics, and materials science. In order to keep up with state-of-the-art experiments and the ever growing complexity of the investigated problems, there is a constantly increasing need for simulations of more realistic, i.e., larger, model systems with improved accuracy. In many cases, the availability of sufficiently efficient interatomic potentials providing reliable energies and forces has become a serious bottleneck for performing these simulations. To address this problem, a paradigm change is currently taking place in the development of interatomic potentials. Since the early days of computer simulations, simplified potentials have been derived using physical approximations whenever the direct application of electronic structure methods has been too demanding. Recent advances in machine learning (ML) now offer an alternative approach for the representation of potential-energy surfaces by fitting large data sets from electronic structure calculations. In this perspective, the central ideas underlying these ML potentials, solved problems, and remaining challenges are reviewed, along with a discussion of their current applicability and limitations.
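The fitting idea behind ML potentials (regress energies from reference electronic-structure data, then predict at new configurations) can be illustrated on a one-dimensional toy problem. Here a Lennard-Jones curve stands in for the reference data, and a plain polynomial least-squares fit stands in for the descriptor-plus-regressor pipeline; real ML potentials use symmetry-function descriptors with neural networks or Gaussian processes:

```python
import numpy as np

# Reference data: a 1D Lennard-Jones curve stands in for
# electronic-structure energies (epsilon = sigma = 1).
def lj(r):
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

rng = np.random.default_rng(0)
r_train = rng.uniform(0.95, 2.5, 200)

# Descriptor + regressor, reduced to its simplest form: a polynomial
# least-squares fit in the inverse distance x = 1/r.
coef = np.polyfit(1.0 / r_train, lj(r_train), deg=10)

def predict(r):
    return np.polyval(coef, 1.0 / np.atleast_1d(r))

# Evaluate on configurations not seen during training.
r_test = np.linspace(1.0, 2.4, 50)
rmse = float(np.sqrt(np.mean((predict(r_test) - lj(r_test)) ** 2)))
print(f"test RMSE: {rmse:.2e}")
```

The same train-on-reference / predict-elsewhere structure carries over to real ML potentials, where the main difficulty is choosing descriptors that respect the symmetries of the atomic environment.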

  14. Physics of Self-Field-Dominated Plasmas.

    DTIC Science & Technology

    1995-03-31

    plasma focus machines (APF) for different optimal levels of discharge feeding energy W, in particular for APF-200 (W ≤ 200 kJ) and APF-50 (W ≤ 50 kJ). The function of these APF systems was to determine, along with data from smaller machines, the scaling laws of the emission (fluence) of ion and ion-cluster beams ejected from the self-field-dominated plasma of the APF pinch as a function of W. Typical ion spectra from a Thomson (parabola) spectrometer in the 80° direction from the electrode/pinch axis are also included

  15. Exploring cluster Monte Carlo updates with Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applied back to physics, Boltzmann machines are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of Boltzmann machines can even give rise to different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
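The latent-variable mechanism can be illustrated with a tiny restricted Boltzmann machine: block Gibbs sampling alternates between hidden and visible units, with the hidden layer mediating all visible-visible correlations, loosely analogous to the auxiliary cluster variables discussed above. The weights below are arbitrary rather than trained, and this is only a sketch, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny restricted Boltzmann machine: visible units v in {0,1}^6,
# hidden units h in {0,1}^3. All visible-visible correlations are
# mediated by the hidden layer (there are no direct v-v couplings).
W = rng.normal(0.0, 0.5, size=(6, 3))  # illustrative couplings, not trained
a = np.zeros(6)                        # visible biases
b = np.zeros(3)                        # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    # Sample hidden given visible, then visible given hidden:
    # one block Gibbs update, analogous to assigning auxiliary
    # variables and then updating the spins they couple.
    h = (rng.random(3) < sigmoid(b + v @ W)).astype(float)
    v = (rng.random(6) < sigmoid(a + W @ h)).astype(float)
    return v

v = (rng.random(6) < 0.5).astype(float)
samples = []
for _ in range(1000):
    v = gibbs_step(v)
    samples.append(v.copy())
mean_v = np.mean(samples, axis=0)
print("mean visible activations:", np.round(mean_v, 2))
```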

  16. Engineering of the 'PCAST machine'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinnis, J.; Brooks, A.; Brown, T.

    The President's Committee of Advisors on Science and Technology (PCAST) has suggested that a device with a mission of ignition and moderate burn time could address the physics of burning plasmas at a lesser cost than ITER, with its more comprehensive physics and technology mission. The Department of Energy commissioned a study to explore this PCAST suggestion. This paper describes the results of the engineering portion of the study of this 'PCAST Machine'; the physics is covered in a companion paper by G.H. Neilson et al., and the costs in a companion paper by R.T. Simmons et al. Both are published in the proceedings of this conference. The study was undertaken by a team under the direction of Bruce Montgomery that included representatives from MIT, PPPL, ORNL, LLNL, GA, Northrop Grumman, and Stone and Webster. The performance requirements for the PCAST machine are to form and sustain a burning plasma for three helium accumulation times. The philosophy adopted for this design was to achieve the required performance at lower cost by decreasing the major radius to five meters, increasing the toroidal field to 7 tesla, and using stronger shaping. The major device parameters are given. 4 refs., 4 figs., 1 tab.

  17. Obtaining the Thermal Efficiency of a Steam Railroad Machine Toy According to Dale's Cone of Learning

    NASA Astrophysics Data System (ADS)

    Bautista-Hernandez, Omar Tomas; Ruiz-Chavarria, Gregorio

    2011-03-01

    Physics is crucial to understanding the world around us, the world inside us, and the world beyond us. It is the most basic and fundamental science; hence our interest in developing innovative strategies, supported by imagination and knowledge, to make the learning process fun, attractive, and interesting, and so help to change the general idea that physics is an abstract and complicated science. We all know this instinctively; however, turn-of-the-century educationist Edgar Dale illustrated it with research when he developed the Cone of Learning, which states that after two weeks we remember only 10% of what we read, but we remember 90% of what we do. Based on that theory, we obtain the thermal efficiency of a steam railroad machine (a toy train that can be bought at any department store) and show the large percentage of energy lost when moving this railroad machine, just as in real life. In doing this practice we do not focus on the results themselves; instead, we try to demonstrate that physics is fun and not difficult to learn. We must stress that this practice was done with pre-university and university students, but it can be shown to the community in general.
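The efficiency measurement described reduces to η = W_out / Q_in. The numbers below are hypothetical, since the abstract does not report the measured values; they only illustrate why the measured efficiency of such a toy is tiny:

```python
# Thermal efficiency of the toy steam engine: eta = W_out / Q_in.
# All numbers below are hypothetical, for illustration only.
fuel_mass_g = 10.0            # solid fuel tablet burned (assumed)
fuel_heat_J_per_g = 30_000.0  # heating value of the fuel (assumed)
q_in = fuel_mass_g * fuel_heat_J_per_g  # heat supplied: 300 kJ

distance_m = 50.0             # distance travelled (assumed)
friction_force_N = 2.0        # rolling + mechanism friction (assumed)
w_out = friction_force_N * distance_m   # useful work against friction: 100 J

eta = w_out / q_in
print(f"thermal efficiency: {eta:.4%}")  # a fraction of a percent: most energy is lost
```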

  18. Energy management that generates terrain following versus apex-preserving hopping in man and machine.

    PubMed

    Kalveram, Karl Theodor; Haeufle, Daniel F B; Seyfarth, André; Grimmer, Sten

    2012-01-01

    While hopping, 12 subjects experienced a sudden step down of 5 or 10 cm. Results revealed that the hopping style was "terrain following": the subjects sought to keep the distance between maximum hopping height (apex) and the ground profile constant. The spring-loaded inverted pendulum (SLIP) model, however, which is currently considered a template for stable legged locomotion, would predict apex-preserving hopping, in which the absolute maximal hopping height is kept constant regardless of changes in the ground level. To gain more insight into the physics of hopping, we outlined two concepts of energy management: "constant energy supply", in which the same amount of mechanical energy is injected in each bounce regardless of perturbations, and "lost energy supply", in which the mechanical energy that is going to be dissipated in the current cycle is assessed and replenished. When tested in simulations and on a robot testbed capable of hopping, constant energy supply generated stable and robust terrain-following hopping, whereas lost energy supply led to something like apex-preserving hopping, which, however, lacks both stability and robustness. Comparing simulated and machine hopping with human hopping suggests that constant energy supply has a good chance of being used by humans to generate hopping.
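The "constant energy supply" concept can be sketched with a one-dimensional energy-balance model of hopping: each stance dissipates a fixed fraction of the mechanical energy and injects a fixed amount, so after a step down the apex height above the new ground returns to its steady-state value, i.e. terrain following. All parameters are assumed; this is not the authors' SLIP simulation:

```python
# One-dimensional energy-balance sketch of hopping. Each stance
# dissipates a fixed fraction d of the mechanical energy;
# "constant energy supply" injects the same amount dE every bounce.
m, g = 1.0, 9.81
d = 0.2      # fraction of energy dissipated per stance (assumed)
dE = 0.4     # energy injected per bounce, in joules (assumed)

ground = 0.0             # current ground level, m
E = dE / d               # start at the steady-state apex energy above ground
apex_above_ground = []
for bounce in range(20):
    if bounce == 10:
        ground -= 0.05     # sudden 5 cm step down
        E += m * g * 0.05  # the extra drop adds to the landing energy
    E = (1.0 - d) * E + dE  # stance: dissipate, then inject constant supply
    apex_above_ground.append(E / (m * g))  # apex height above the new ground

print(f"apex before step: {apex_above_ground[9]:.3f} m")
print(f"apex 10 bounces after step: {apex_above_ground[-1]:.3f} m")
```

The transient right after the step raises the apex above ground briefly, but the fixed injection and proportional dissipation pull it back to the same height above the new ground, which is the terrain-following behaviour observed in the subjects.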

  19. Improving Energy Efficiency in CNC Machining

    NASA Astrophysics Data System (ADS)

    Pavanaskar, Sushrut S.

    We present our work on analyzing and improving the energy efficiency of the multi-axis CNC milling process. Due to differences in energy consumption behavior, we treat 3- and 5-axis CNC machines separately in our work. For 3-axis CNC machines, we first propose an energy model that estimates the energy requirement for machining a component on a specified 3-axis CNC milling machine. Our model makes machine-specific predictions of energy requirements while also considering the geometric aspects of the machining toolpath. Our model, and the associated software tool, facilitate direct comparison of alternative toolpath strategies based on their energy-consumption performance. Further, we identify key factors in toolpath planning that affect energy consumption in CNC machining. We then use this knowledge to propose and demonstrate a novel toolpath planning strategy, inspired by research on digital micrography (a form of computational art), that may be used to generate new toolpaths that are inherently energy-efficient. For 5-axis CNC machines, the process planning problem consists of several sub-problems that researchers have traditionally solved separately to obtain an approximate solution. After illustrating the need to solve all sub-problems simultaneously for a truly optimal solution, we propose a unified formulation based on configuration space theory. We apply our formulation to solve a problem variant that retains key characteristics of the full problem but has lower dimensionality, allowing visualization in 2D. Given the complexity of the full 5-axis toolpath planning problem, our unified formulation represents an important step towards obtaining a truly optimal solution. With this work on the two types of CNC machines, we demonstrate that, without changing the current infrastructure or business practices, machine-specific, geometry-based, customized toolpath planning can save energy in CNC machining.
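A toolpath-dependent energy estimate of the kind described (power × time summed over segments) can be sketched as follows. The power model and all coefficients are hypothetical placeholders, not the calibrated machine-specific model from this work:

```python
# Sketch of a toolpath energy estimate: E = sum over segments of
# P(segment) * t(segment). The linear power model and its
# coefficients are hypothetical, for illustration only.
P_IDLE = 800.0  # W, machine baseline (spindle, controller) -- assumed
K_CUT = 25.0    # W per mm^3/s of material removal rate -- assumed

def segment_energy(length_mm, feed_mm_per_s, mrr_mm3_per_s):
    t = length_mm / feed_mm_per_s
    power = P_IDLE + K_CUT * mrr_mm3_per_s
    return power * t  # joules

# Two alternative toolpaths removing the same material:
zigzag = [(120.0, 10.0, 4.0)] * 8  # (length, feed, MRR) per segment
spiral = [(100.0, 12.0, 4.0)] * 8

for name, path in [("zigzag", zigzag), ("spiral", spiral)]:
    e = sum(segment_energy(*seg) for seg in path)
    print(f"{name}: {e / 1000.0:.1f} kJ")
```

Because the baseline term dominates, the shorter, faster path wins even at the same removal rate, which is one way toolpath geometry drives energy consumption.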

  20. Physical Activity and Childhood Obesity: Strategies and Solutions for Schools and Parents

    ERIC Educational Resources Information Center

    Green, Gregory; Riley, Clarence; Hargrove, Brenda

    2012-01-01

    One of the reasons American children and adolescents gain weight over the generations is that children expend significantly less energy on a daily basis than their parents and grandparents did at their age. Today's youth spend many hours participating in sedentary activities. Additionally, we eat more fast food and vending machine food than we…

  1. COURSE OUTLINE FOR SECOND SIX WEEKS OF SCIENCE-LEVEL III, TALENT PRESERVATION CLASSES.

    ERIC Educational Resources Information Center

    Houston Independent School District, TX.

    EACH UNIT IS OF APPROXIMATELY 6 WEEKS' DURATION. UNITS ARE ON ENERGY AND THE HUMAN BODY, HEAT, ELECTRICITY AND MACHINES, CONSUMER SCIENCE FROM A COMMUNICATION AND PHYSICAL SCIENCE APPROACH, AND CONSUMER SCIENCE FROM BIOLOGICAL AND EARTH APPROACH. IN ALL UNITS, AS MANY CONCEPTS AS POSSIBLE SHOULD BE RELATED TO THE STUDENTS' EXPERIENCES. IN…

  2. The Lhc Collider:. Status and Outlook to Operation

    NASA Astrophysics Data System (ADS)

    Schmidt, Rüdiger

    2006-04-01

    For the LHC to provide particle physics with proton-proton collisions at a centre-of-mass energy of 14 TeV and a luminosity of 10^34 cm^-2 s^-1, the machine will operate with high-field dipole magnets using NbTi superconductors cooled to below the lambda point of helium. In order to reach design performance, the LHC requires both the use of existing technologies pushed to their limits and the application of novel technologies. The construction follows a decade of intensive R&D and technical validation of major collider sub-systems. This paper will focus on the required LHC performance and on the implications for the technologies used. The consequences of the unprecedented quantity of energy stored in both magnets and beams will be discussed. A brief outlook on operation and its consequences for machine protection will be given.
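The "unprecedented quantity of energy stored in the beams" can be put in numbers with a back-of-the-envelope estimate using the nominal LHC beam parameters (2808 bunches of 1.15×10^11 protons per beam at 7 TeV):

```python
# Rough estimate of the energy stored in one nominal LHC proton beam.
bunches = 2808
protons_per_bunch = 1.15e11
energy_per_proton_J = 7e12 * 1.602e-19  # 7 TeV converted to joules

beam_energy_J = bunches * protons_per_bunch * energy_per_proton_J
print(f"stored energy per beam: {beam_energy_J / 1e6:.0f} MJ")
```

This comes to roughly 360 MJ per beam, which is why machine protection is treated as a central design issue.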

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belley, M; Schmidt, M; Knutson, N

    Purpose: Physics second-checks for external beam radiation therapy are performed, in part, to verify that the machine parameters in the Record-and-Verify (R&V) system that will ultimately be sent to the LINAC exactly match the values initially calculated by the Treatment Planning System (TPS). While performing the second-check, a large portion of the physicist's time is spent navigating and arranging display windows to locate and compare the relevant numerical values (MLC position, collimator rotation, field size, MU, etc.). Here, we describe the development of a software tool that guides the physicist by aggregating and succinctly displaying machine parameter data relevant to the physics second-check process. Methods: A data retrieval software tool was developed using Python to aggregate data and generate a list of machine parameters that are commonly verified during the physics second-check process. This software tool imported values from (i) the TPS RT Plan DICOM file and (ii) the MOSAIQ (R&V) Structured Query Language (SQL) database. The machine parameters aggregated for this study included MLC positions, X and Y jaw positions, collimator rotation, gantry rotation, MU, dose rate, wedges and accessories, cumulative dose, energy, machine name, couch angle, and more. Results: A GUI interface was developed to generate a side-by-side display of the aggregated machine parameter values for each field, presented to the physicist for direct visual comparison. This software tool was tested for 3D conformal, static IMRT, sliding window IMRT, and VMAT treatment plans. Conclusion: This software tool facilitated the data collection needed for the physicist to conduct a second-check, yielding an optimized second-check workflow that was both more user-friendly and time-efficient. Using this software tool, the physicist was able to spend less time searching through the TPS PDF plan document and the R&V system and to focus the second-check efforts on assessing patient-specific plan quality.
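The core comparison step of such a tool can be sketched as a field-by-field diff. Here plain dictionaries stand in for the DICOM RT Plan and the MOSAIQ SQL queries, and the parameter names and tolerances are illustrative assumptions, not the tool's actual schema:

```python
# Field-by-field comparison of TPS plan values against R&V values.
# Numeric parameters are compared within a tolerance; string
# parameters (machine name, energy) must match exactly.
TOLERANCES = {"mu": 0.5, "gantry_deg": 0.1, "collimator_deg": 0.1}

def compare_field(tps, rv):
    """Return (parameter, planned, delivered) for every mismatch."""
    mismatches = []
    for key, planned in tps.items():
        delivered = rv.get(key)
        if isinstance(planned, (int, float)) and isinstance(delivered, (int, float)):
            ok = abs(planned - delivered) <= TOLERANCES.get(key, 0.0)
        else:
            ok = planned == delivered  # exact match for machine name, energy, ...
        if not ok:
            mismatches.append((key, planned, delivered))
    return mismatches

tps_field = {"machine": "TrueBeam1", "energy": "6X", "mu": 187.0, "gantry_deg": 180.0}
rv_field = {"machine": "TrueBeam1", "energy": "6X", "mu": 187.2, "gantry_deg": 179.0}

for key, planned, delivered in compare_field(tps_field, rv_field):
    print(f"MISMATCH {key}: plan={planned} R&V={delivered}")
```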

  4. Progress with High-Field Superconducting Magnets for High-Energy Colliders

    NASA Astrophysics Data System (ADS)

    Apollinari, Giorgio; Prestemon, Soren; Zlobin, Alexander V.

    2015-10-01

    One of the possible next steps for high-energy physics research relies on a high-energy hadron or muon collider. The energy of a circular collider is limited by the strength of bending dipoles, and its maximum luminosity is determined by the strength of final focus quadrupoles. For this reason, the high-energy physics and accelerator communities have shown much interest in higher-field and higher-gradient superconducting accelerator magnets. The maximum field of NbTi magnets used in all present high-energy machines, including the LHC, is limited to ˜10 T at 1.9 K. Fields above 10 T became possible with the use of Nb3Sn superconductors. Nb3Sn accelerator magnets can provide operating fields up to ˜15 T and can significantly increase the coil temperature margin. Accelerator magnets with operating fields above 15 T require high-temperature superconductors. This review discusses the status and main results of Nb3Sn accelerator magnet research and development and work toward 20-T magnets.

  5. Progress with high-field superconducting magnets for high-energy colliders

    DOE PAGES

    Apollinari, Giorgio; Prestemon, Soren; Zlobin, Alexander V.

    2015-10-01

    One of the possible next steps for high-energy physics research relies on a high-energy hadron or muon collider. The energy of a circular collider is limited by the strength of bending dipoles, and its maximum luminosity is determined by the strength of final focus quadrupoles. For this reason, the high-energy physics and accelerator communities have shown much interest in higher-field and higher-gradient superconducting accelerator magnets. The maximum field of NbTi magnets used in all present high-energy machines, including the LHC, is limited to ~10 T at 1.9 K. Fields above 10 T became possible with the use of Nb3Sn superconductors. Nb3Sn accelerator magnets can provide operating fields up to ~15 T and can significantly increase the coil temperature margin. Accelerator magnets with operating fields above 15 T require high-temperature superconductors. This review discusses the status and main results of Nb3Sn accelerator magnet research and development and work toward 20-T magnets.

  6. A journey into medical physics as viewed by a physicist

    NASA Astrophysics Data System (ADS)

    Gueye, Paul

    2007-03-01

    The world of physics is usually linked to a large variety of subjects spanning astrophysics, nuclear/high energy physics, materials and optical sciences, plasma physics, etc. Less is known about the exciting world of medical physics, which includes radiation therapy physics, medical diagnostic and imaging physics, nuclear medicine physics, and medical radiation safety. These physicists are typically based in hospital departments of radiation oncology or radiology, and provide technical support for patient diagnosis and treatment in a clinical environment. This talk will focus on providing a bridge between selected areas of physics and their medical applications. The journey will start from our understanding of high energy beam production and transport beamlines for external beam treatment of diseases (e.g., electron, gamma, X-ray and proton machines) as they relate to accelerator physics. We will then embrace the world of nuclear/high energy physics, where detector development provides a unique tool for understanding the low energy beam distributions emitted from radioactive sources used in the brachytherapy treatment modality. Because the ultimate goal of radiation-based therapy is its killing power on tumor cells, the next topic will be microdosimetry, where the responses of biological systems can be studied via electromagnetic systems. Finally, the impact on the imaging world will be covered using tools heavily used in plasma physics, fluid mechanics, and Monte Carlo simulations. These various scientific areas provide unique opportunities for faculty and students at universities, as well as for staff from research centers and laboratories, to contribute to this field. We will conclude with the educational training related to medical physics programs.

  7. Improving Lidar Turbulence Estimates for Wind Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.

    2016-10-06

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
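As an illustration of one physics-based correction, the instrument-noise contribution can be removed from the measured velocity variance before forming TI = σ/U. The time series and noise level below are synthetic, and this sketch is not the full L-TERRA model:

```python
import numpy as np

# Instrument-noise correction for lidar turbulence intensity:
# subtract the (assumed known) noise variance from the measured
# velocity variance before forming TI = sigma_u / U.
rng = np.random.default_rng(2)
true_wind = 8.0 + 1.0 * rng.standard_normal(600)  # synthetic 10-min series, m/s
noise_sd = 0.5                                    # assumed lidar noise, m/s
measured = true_wind + noise_sd * rng.standard_normal(600)

U = measured.mean()
ti_raw = measured.std(ddof=1) / U
ti_corrected = np.sqrt(max(measured.var(ddof=1) - noise_sd**2, 0.0)) / U
ti_true = true_wind.std(ddof=1) / true_wind.mean()

print(f"raw TI: {ti_raw:.3f}  corrected TI: {ti_corrected:.3f}  true TI: {ti_true:.3f}")
```

Uncorrelated noise inflates the measured variance, so the raw TI overestimates the true TI; subtracting the noise variance recovers a value much closer to the tower reference.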

  8. Engineering a lunar photolithoautotroph to thrive on the moon - life or simulacrum?

    NASA Astrophysics Data System (ADS)

    Ellery, A. A.

    2018-07-01

    Recent work in developing self-replicating machines has approached the problem as an engineering problem, using engineering materials and methods to implement an engineering analogue of a hitherto uniquely biological function. The question is: can anything be learned that might be relevant to an astrobiological context, in which the problem is to determine the general form of biology independent of the Earth? Compared with other non-terrestrial biology disciplines, engineered life is more demanding. Engineering a self-replicating machine tackles real environments, unlike artificial life, which avoids the problem of physical instantiation altogether by examining software models. Engineering a self-replicating machine is also more demanding than synthetic biology, as no library of functional components exists; everything must be constructed de novo. Biological systems already have the capacity to self-replicate, but no engineered machine has yet been constructed with the same ability; this is our primary goal. On the basis of the von Neumann analysis of self-replication, self-replication is a by-product of universal construction capability: a universal constructor is a machine that can construct anything (in a functional sense) given the appropriate instructions (DNA/RNA), energy (ATP) and materials (food). In the biological cell, the universal construction mechanism is the ribosome. The ribosome is a biological assembly line for constructing proteins, while DNA constitutes a design specification. For a photoautotroph, the energy source is ambient and the food is inorganic. We submit that engineering a self-replicating machine opens up new areas of astrobiology to be explored at the limits of life.

  9. Analysis of NREL Cold-Drink Vending Machines for Energy Savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deru, M.; Torcellini, P.; Bottom, K.

    NREL Staff, as part of Sustainable NREL, an initiative to improve the overall energy and environmental performance of the lab, decided to control how its vending machines used energy. The cold-drink vending machines across the lab were analyzed for potential energy savings opportunities. This report gives the monitoring and the analysis of two energy conservation measures applied to the cold-drink vending machines at NREL.

  10. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

    Cyber-physical systems (CPS) have recently emerged as a new technology that can provide promising approaches to demand-side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high-energy-consumption devices that have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand-side management in cyber-physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on the short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand-side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is then used to measure the demand-side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO methods, the problem of DSM in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method improves optimization by 5% compared with the traditional PSO method.
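The PSO step at the heart of such an optimization can be sketched on a toy demand-side problem. The time-of-use prices, per-period bounds, and quadratic demand penalty are all assumed for illustration; the paper's CPSO variant and its load-forecasting model are not reproduced:

```python
import numpy as np

# Minimal particle swarm optimization of a toy demand-side cost:
# schedule energy use over 4 periods under time-of-use prices while
# penalizing deviation from the required total.
rng = np.random.default_rng(3)
price = np.array([0.4, 0.9, 1.2, 0.6])  # price per period (assumed)
demand = 10.0                           # total energy required (assumed)

def cost(x):
    x = np.clip(x, 0.0, 5.0)            # per-period energy bounds
    return price @ x + 50.0 * (x.sum() - demand) ** 2

n, dim, iters = 30, 4, 200
pos = rng.uniform(0.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Standard PSO velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print(f"best cost: {cost(gbest):.2f}")
print(f"schedule: {np.round(np.clip(gbest, 0.0, 5.0), 2)}")
```

The optimum loads the two cheapest periods, which is the qualitative behaviour a price-driven DSM scheme is designed to produce.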

  11. Neural Representations of Physics Concepts.

    PubMed

    Mason, Robert A; Just, Marcel Adam

    2016-06-01

    We used functional MRI (fMRI) to assess neural representations of physics concepts (momentum, energy, etc.) in juniors, seniors, and graduate students majoring in physics or engineering. Our goal was to identify the underlying neural dimensions of these representations. Using factor analysis to reduce the number of dimensions of activation, we obtained four physics-related factors that were mapped to sets of voxels. The four factors were interpretable as causal motion visualization, periodicity, algebraic form, and energy flow. The individual concepts were identifiable from their fMRI signatures with a mean rank accuracy of .75 using a machine-learning (multivoxel) classifier. Furthermore, there was commonality in participants' neural representation of physics; a classifier trained on data from all but one participant identified the concepts in the left-out participant (mean accuracy = .71 across all nine participant samples). The findings indicate that abstract scientific concepts acquired in an educational setting evoke activation patterns that are identifiable and common, indicating that science education builds abstract knowledge using inherent, repurposed brain systems. © The Author(s) 2016.
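The rank-accuracy metric reported above can be sketched with a nearest-correlation classifier on synthetic activation patterns. The dimensions, noise level, and use of known prototypes are all illustrative assumptions; the study trained classifiers on real fMRI data with leave-one-out schemes:

```python
import numpy as np

# Rank-accuracy evaluation: correlate each pattern against
# per-concept prototypes and score the normalized rank of the
# correct concept (1.0 = top, 0.5 = chance). Synthetic data:
# 4 "concepts", 50 voxels, 6 repetitions each.
rng = np.random.default_rng(4)
n_concepts, n_voxels, n_reps = 4, 50, 6
prototypes = rng.standard_normal((n_concepts, n_voxels))
# Each observed pattern = concept prototype + measurement noise.
obs = prototypes[:, None, :] + 0.8 * rng.standard_normal((n_concepts, n_reps, n_voxels))

ranks = []
for c in range(n_concepts):
    for r in range(n_reps):
        x = obs[c, r]
        sims = [np.corrcoef(x, p)[0, 1] for p in prototypes]
        order = np.argsort(sims)[::-1]           # best match first
        rank = int(np.where(order == c)[0][0])   # 0 = correct concept on top
        ranks.append(1.0 - rank / (n_concepts - 1))
mean_rank_accuracy = float(np.mean(ranks))
print(f"mean rank accuracy: {mean_rank_accuracy:.2f} (chance = 0.5)")
```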

  12. Wireless Monitoring of Induction Machine Rotor Physical Variables

    PubMed Central

    Doolan Fernandes, Jefferson; Carvalho Souza, Francisco Elvis; de Paiva, José Alvaro

    2017-01-01

    With the widespread use of electric machines, there is a growing need to extract information from the machines to improve their control systems and maintenance management. The present work shows the development of an embedded system to perform the monitoring of the rotor physical variables of a squirrel cage induction motor. The system comprises: a circuit to acquire the desired rotor variable(s) and value(s) and send them to the computer; a rectifier and power storage circuit that converts alternating current into direct current and also stores energy for a certain amount of time to wait for the motor’s shutdown; and a magnetic generator that harvests energy from the rotating field to power the circuits mentioned above. The embedded system is set on the rotor of a 5 HP squirrel cage induction motor, making it difficult to power the system because it is rotating. This problem can be solved with the construction of a magnetic generator device to avoid the need for batteries or collector rings; data are sent to the computer using a wireless NRF24L01 module. For the proposed system, initial validation tests were made using a temperature sensor (DS18b20), as this variable is known as the most important when identifying the need for maintenance and control systems. A few tests have shown promising results that, with further improvements, can prove the feasibility of using sensors in the rotor. PMID:29156564
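The rotor-to-computer link described (DS18b20 temperature readings sent over an NRF24L01 radio) implies a compact payload format. The field layout below is an assumption for illustration, since the paper does not specify its wire format:

```python
import struct

# Sketch of rotor-side telemetry framing: pack a DS18b20-style
# temperature reading into a small fixed payload that fits the
# NRF24L01's 32-byte frame limit. Field layout is an assumption.
def encode_reading(node_id: int, sample_no: int, temp_c: float) -> bytes:
    # The DS18b20 reports 1/16 degC resolution, so the temperature
    # is sent as a scaled signed 16-bit integer.
    raw = round(temp_c * 16)
    return struct.pack("<BHh", node_id, sample_no, raw)

def decode_reading(payload: bytes):
    node_id, sample_no, raw = struct.unpack("<BHh", payload)
    return node_id, sample_no, raw / 16.0

frame = encode_reading(node_id=1, sample_no=42, temp_c=63.8125)
assert len(frame) <= 32  # NRF24L01 payload limit
print(decode_reading(frame))
```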

  13. Wireless Monitoring of Induction Machine Rotor Physical Variables.

    PubMed

    Doolan Fernandes, Jefferson; Carvalho Souza, Francisco Elvis; Cipriano Maniçoba, Glauco George; Salazar, Andrés Ortiz; de Paiva, José Alvaro

    2017-11-18

    With the widespread use of electric machines, there is a growing need to extract information from the machines to improve their control systems and maintenance management. The present work shows the development of an embedded system to perform the monitoring of the rotor physical variables of a squirrel cage induction motor. The system comprises: a circuit to acquire the desired rotor variable(s) and value(s) and send them to the computer; a rectifier and power storage circuit that converts alternating current into direct current and also stores energy for a certain amount of time to wait for the motor's shutdown; and a magnetic generator that harvests energy from the rotating field to power the circuits mentioned above. The embedded system is set on the rotor of a 5 HP squirrel cage induction motor, making it difficult to power the system because it is rotating. This problem can be solved with the construction of a magnetic generator device to avoid the need for batteries or collector rings; data are sent to the computer using a wireless NRF24L01 module. For the proposed system, initial validation tests were made using a temperature sensor (DS18b20), as this variable is known as the most important when identifying the need for maintenance and control systems. A few tests have shown promising results that, with further improvements, can prove the feasibility of using sensors in the rotor.

  14. Chapter 16 - Predictive Analytics for Comprehensive Energy Systems State Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yingchen; Yang, Rui; Hodge, Brian S

    Energy sustainability is a subject of concern to many nations in the modern world. It is critical for electric power systems to diversify the energy supply to include systems with different physical characteristics, such as wind energy, solar energy, electrochemical energy storage, thermal storage, bio-energy systems, geothermal, and ocean energy. Each system has its own range of control variables and targets. To be able to operate such a complex energy system, big-data analytics become critical to achieving the goals of predicting energy supplies and consumption patterns, assessing system operating conditions, and estimating system states, all providing situational awareness to power system operators. This chapter presents data analytics and machine learning-based approaches to enable predictive situational awareness of power systems.

  15. Teaching And Training Tools For The Undergraduate: Experience With A Rebuilt AN-400 Accelerator

    NASA Astrophysics Data System (ADS)

    Roberts, Andrew D.

    2011-06-01

    There is an increasingly recognized need for people trained in a broad range of applied nuclear science techniques, as indicated by reports from the American Physical Society and elsewhere. Anecdotal evidence suggests that opportunities for hands-on training with small particle accelerators have diminished in the US, as development programs established in the 1960s and 1970s have been decommissioned over recent decades. Despite the reduced interest in the use of low energy accelerators in fundamental research, these machines can offer a powerful platform for bringing unique training opportunities to the undergraduate curriculum in nuclear physics, engineering and technology. We report here on the new MSU Applied Nuclear Science Lab, centered around the rebuild of an AN-400 electrostatic accelerator. This machine is run entirely by undergraduate students under faculty supervision, allowing a great deal of freedom in its use without restrictions from graduate or external project demands.

  17. Review of EuCARD project on accelerator infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2013-01-01

    The aim of big infrastructural and research programs in Europe (like the pan-European Framework Programs), and of the individual projects realized inside these programs, is to structure the European Research Area (ERA) in such a way that it can compete with the leaders of the world. One of these projects is EuCARD (European Coordination of Accelerator Research and Development), whose aim is to structure and modernize accelerator research infrastructure, including accelerators for big free electron laser machines. This article presents the development of EuCARD between the annual meeting in Warsaw in April 2012 and the SC meeting in Uppsala in December 2012. The background of all these efforts is the achievements of the LHC machine and its associated detectors in the race for new physics. The LHC machine works in the p-p, Pb-p and Pb-Pb regimes (protons and lead ions). Recently, the LHC discovery of a Higgs-like boson has started vivid debates on the further potential of this machine and its future. The periodic EuCARD conferences, workshops and meetings concern the building of research infrastructure, including advanced photonic and electronic systems for servicing large high energy physics experiments. A few basic groups of such systems are debated: measurement and control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency and phase. The aim of the discussion is not only to summarize the current status but also to make plans and prepare practically for building new infrastructures. Accelerator science and technology is one of the key enablers of developments in particle physics and photon physics, as well as of applications in medicine and industry. Accelerator technology is intensely developed in all developed nations and regions of the world. The EuCARD project contains many subjects related directly and indirectly to photon physics and photonics, as well as optoelectronics, electronics and the integration of these with large research infrastructure.

  18. Technical Note: Defining cyclotron-based clinical scanning proton machines in a FLUKA Monte Carlo system.

    PubMed

    Fiorini, Francesca; Schreuder, Niek; Van den Heuvel, Frank

    2018-02-01

    Cyclotron-based pencil beam scanning (PBS) proton machines nowadays represent the majority of, and the most affordable choice for, proton therapy facilities; however, their representation in Monte Carlo (MC) codes is more complex than for passively scattered proton systems or synchrotron-based PBS machines. This is because degraders are used to decrease the energy from the cyclotron maximum to the desired energy, resulting in a spot size, divergence, and energy spread that depend on the amount of degradation. This manuscript outlines a generalized methodology to characterize a cyclotron-based PBS machine in a general-purpose MC code. The code can then be used to generate clinically relevant plans starting from commercial TPS plans. The described beam is produced at the Provision Proton Therapy Center (Knoxville, TN, USA) using cyclotron-based IBA Proteus Plus equipment. We characterized the Provision beam in FLUKA using the experimental commissioning data. The code was then validated against experimental data in water phantoms for single pencil beams and larger irregular fields. Comparisons with RayStation TPS plans are also presented. Comparisons of experimental, simulated, and planned dose depositions in water show that the same doses are calculated by both programs inside the target areas, while penumbra differences are found at the field edges. These differences are lower for the MC, with a γ(3%-3 mm) index never below 95%. Extensive explanations are given of how MC codes can be adapted to simulate cyclotron-based scanning proton machines, with the aim of using the MC as a TPS verification tool to check and improve clinical plans. For all the tested cases, we showed that dose differences with respect to experimental data are lower for the MC than for the TPS, implying that the created FLUKA beam model better describes the experimental beam. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
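    The γ(3%-3 mm) figure above is a standard gamma-index pass criterion combining a dose-difference tolerance with a distance-to-agreement tolerance. A minimal one-dimensional sketch of the computation, on hypothetical Gaussian dose profiles (not the paper's data):

    ```python
    import numpy as np

    def gamma_1d(ref_dose, eval_dose, x, dose_tol=0.03, dist_tol=3.0):
        """Global 1-D gamma index: for each reference point, the minimum over
        all evaluated points of sqrt((dose diff/tol)^2 + (distance/tol)^2)."""
        d_max = ref_dose.max()
        gammas = np.empty_like(ref_dose)
        for i, (xi, di) in enumerate(zip(x, ref_dose)):
            dd = (eval_dose - di) / (dose_tol * d_max)  # dose-difference term
            dx = (x - xi) / dist_tol                    # distance term, in mm
            gammas[i] = np.sqrt(dd**2 + dx**2).min()
        return gammas

    # hypothetical profiles; the evaluated one is shifted by 1 mm
    x = np.linspace(0.0, 100.0, 101)            # positions in mm
    ref = np.exp(-((x - 50.0) / 20.0) ** 2)     # "measured" profile
    ev = np.exp(-((x - 51.0) / 20.0) ** 2)      # "calculated" profile
    g = gamma_1d(ref, ev, x)
    pass_rate = (g <= 1.0).mean()               # fraction of points passing
    ```

    A point passes when its gamma value is at most 1; a 1 mm shift sits comfortably inside the 3 mm distance tolerance, so every point passes here.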

  19. Doubly fed induction machine

    DOEpatents

    Skeist, S. Merrill; Baker, Richard H.

    2005-10-11

    An electro-mechanical energy conversion system coupled between an energy source and an energy load, including: an energy converter device having a doubly fed induction machine coupled between the energy source and the energy load to convert the energy from the energy source and to transfer the converted energy to the energy load; and an energy transfer multiplexer coupled to the energy converter device to control the flow of power or energy through the doubly fed induction machine.

  20. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan

    From the inception of power systems, synchronous machines have acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, power electronics interfaces are playing a growing role as they are the primary interface for several types of renewable energy sources and storage technologies. As the role of power electronics in systems continues to grow, it is crucial to investigate the properties of bulk power systems in low inertia settings. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. Furthermore, the inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings and, hence, differing levels of inertia. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the interaction between the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.

  1. THE NATURE OF ENERGY TRANSFER TO ELECTRODES IN A PULSE DISCHARGE WITH SMALL GAPS,

    DTIC Science & Technology

    (*SPARK MACHINING, ELECTRIC DISCHARGES), (*ELECTROMAGNETIC PULSES, SPARK MACHINING), ELECTROEROSIVE MACHINING, ENERGY, ELECTRON IRRADIATION, ION BOMBARDMENT, THERMAL CONDUCTIVITY, FILMS, KINETIC ENERGY, ZONE MELTING, USSR

  2. Electro-mechanical energy conversion system having a permanent magnet machine with stator, resonant transfer link and energy converter controls

    DOEpatents

    Skeist, S. Merrill; Baker, Richard H.

    2006-01-10

    An electro-mechanical energy conversion system coupled between an energy source and an energy load, comprising: an energy converter device including a permanent magnet induction machine coupled between the energy source and the energy load to convert the energy from the energy source and to transfer the converted energy to the energy load; and an energy transfer multiplexer to control the flow of power or energy through the permanent magnet induction machine.

  3. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; et al.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN; part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not only, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models, with particular emphasis on the hadronic and nuclear sector.
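    As an illustration of the Monte Carlo transport idea underlying such codes (a toy example, not FLUKA's physics), free paths sampled from an exponential distribution reproduce the analytic Beer-Lambert attenuation of photons through a slab; the values of mu and L below are arbitrary assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # photons crossing a slab of thickness L with attenuation coefficient mu;
    # each free path is sampled as s = -ln(u)/mu with u uniform in [0, 1)
    mu = 0.2   # attenuation coefficient in 1/cm (assumed value)
    L = 5.0    # slab thickness in cm (assumed value)
    n = 100_000

    paths = -np.log(rng.random(n)) / mu   # sampled free path of each photon
    transmitted = (paths > L).mean()      # fraction crossing uncollided
    expected = np.exp(-mu * L)            # analytic Beer-Lambert transmission
    ```

    With 100,000 histories the sampled transmission agrees with exp(-mu*L) to well under a percent; production codes layer many interaction channels and geometry on top of exactly this sampling step.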

  4. Development of a low energy micro sheet forming machine

    NASA Astrophysics Data System (ADS)

    Razali, A. R.; Ann, C. T.; Shariff, H. M.; Kasim, N. I.; Musa, M. A.; Ahmad, A. F.

    2017-10-01

    It is expected that with the miniaturization of the materials being processed, energy consumption is also `miniaturized' proportionally. The focus of this study was to design a low energy micro-sheet-forming machine for thin sheet metal applications and to fabricate a low-power, direct-current-powered micro-sheet-forming machine. A prototype of a low energy system for a micro-sheet-forming machine, including mechanical and electronic elements, was developed. The machine was tested for its performance in terms of natural frequency, punching forces, punching speed and capability, and energy consumption (single punch and frequency-time based). Based on the experiments, the machine can perform 600 strokes per minute, and the process is unaffected by the machine's natural frequency. It was also found that sub-joule energy was required for a single stroke of the punching/blanking process. Carbon steel shim up to 100 µm thick was successfully tested and punched. It is concluded that a low-power forming machine is feasible to develop and can replace high-powered machinery to form micro-products/parts.

  5. Machine learning properties of materials and molecules with entropy-regularized kernels

    NASA Astrophysics Data System (ADS)

    Ceriotti, Michele; Bartók, Albert; Csányi, Gábor; de, Sandip

    Application of machine-learning methods to physics, chemistry and materials science is gaining traction as a strategy to obtain accurate predictions of the properties of matter at a fraction of the typical cost of quantum mechanical electronic structure calculations. In this endeavor, one can leverage general-purpose frameworks for supervised learning. It is however very important that the input data - for instance the positions of atoms in a molecule or solid - is processed into a form that reflects all the underlying physical symmetries of the problem, and that possesses the regularity properties that are required by machine-learning algorithms. Here we introduce a general strategy to build a representation of this kind. We will start from existing approaches to compare local environments (basically, groups of atoms), and combine them using techniques borrowed from optimal transport theory, discussing the relation between this idea and additive energy decompositions. We will present a few examples demonstrating the potential of this approach as a tool to predict molecular and materials properties with an accuracy on par with state-of-the-art electronic structure methods. MARVEL NCCR (Swiss National Science Foundation) and ERC StG HBMAP (European Research Council, G.A. 677013).
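    The supervised-learning step in such property-prediction pipelines is often kernel ridge regression on a similarity measure between structures. A generic sketch with toy random descriptors and a synthetic target (illustrative only; not the environment-comparison kernels of the work described above):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # toy "descriptors": 40 structures with 5 features each; the target is a
    # smooth synthetic function standing in for an energy per structure
    X = rng.normal(size=(40, 5))
    y = np.sin(X[:, 0]) + 0.1 * X[:, 1]

    def gaussian_kernel(A, B, sigma=1.5):
        """Gaussian (RBF) similarity between the rows of A and the rows of B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma**2))

    # kernel ridge regression: solve (K + lam*I) alpha = y
    lam = 1e-3
    K = gaussian_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    # predictions are kernel-weighted sums of the training weights;
    # with a small regularizer the fit on the training set is tight
    y_pred = K @ alpha
    rmse = np.sqrt(np.mean((y_pred - y) ** 2))
    ```

    The physics enters entirely through the kernel: replacing the toy Gaussian on raw coordinates with a symmetry-adapted environment kernel is what makes the regression transferable across molecules and materials.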

  6. Existing machine propulsion is transformed by state-of-the-art gearbox apparatus saves at least 50% energy

    NASA Astrophysics Data System (ADS)

    Abramov, V.

    2013-12-01

    This innovation, shown on www.repowermachine.com, was a finalist in the Clean-tech and Energy category of Minnesota's 2012 TEKNE AWARDS. Vehicles are pushed by the force of friction between their wheels and the land, or between their propellers and the water or air, according to Newton's third law of motion. The force of friction depends on the vehicle weight, which sets the highest wheel or propeller torque needed to move the vehicle from a stop; it does not depend on motor power. Why does an existing SUV of 2,000 lb use a 550 hp motor when the first vehicle had a 0.75 hp motor (Carl Benz's patent #37435, January 29, 1886, Germany)? Gas or magnetic field builds the needed wheel torque too slowly, so huge motor power is required to accelerate the SUV from 0 to 100 mph in 5 seconds. An acceleration system based on gas or magnetic field uses additional energy to raise the motor shaft idle speed, and it reduces the highest torque available from a motor of given physical volume, because the motor power must be increased to equal or exceed the power implied by the vehicle weight. On this argument, transmission torque multiplication is not needed and acts as a second brake. The motors of ships, locomotives, helicopters, CNC machine tools, etc., directly turn wheels, propellers or spindles, or avoid gear-transmission designs. How does one follow the physical law of the lever to save energy? Existing machine propulsion is transformed by a gearbox apparatus comprising the fewest gears, and possibly shafts, selected from the above state-of-the-art 1,000 gearbox designs. It is installed in place of the transmission of the existing propulsion, which is thereby transformed into non-accelerated propulsion. This cuts about 80% of the mechanical energy that the acceleration system wastes in the form of motor heat, cuts the time to reach each speed to 1-2 seconds, produces all needed speeds, and uses only the idle speed of a cheaper motor with reduced power and cost that also replaces the existing motor. There is an opportunity to eliminate vehicle/machine road traffic in cities, which creates additional unquantified GHG emissions. The method is capable of creating 144 forward/72 reverse torque/overdrive speeds with one gear fewer than a heavy-duty truck gearbox of 18 forward/2 reverse speeds plus 10 compound gearboxes, improving vehicle maneuverability, and of reducing motor size by up to 5x5x5x5x5x5 = 15,625 times with 7 shafts. Thus an SUV with non-accelerated propulsion comprising a GAEES of 24 overdrive speeds uses only the idle speed, or torque, of a 20 hp motor, which is sufficient to move the SUV from a stop. Heavy-duty truck: a chosen GAEEF of 36 torque/overdrive and 18 reverse speeds with 20 gears/5 shafts (in comparison to its 18 torque/2 reverse speeds with 29 gears/4 shafts) reduces the motor power from 400 hp to 50 hp, increasing energy economy 400/50 = 8 times. Public transportation: an existing cruise ship/locomotive with a chosen GAEES of 64 torque/overdrive speeds and 32 reverse speeds with 22 gears/7 shafts allows a reduction from 3000 hp to 200 hp, for an energy economy of 3000/200 = 15 times.

  7. Index to NASA Tech Briefs, 1974

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The following information was given for 1974: (1) abstracts of reports dealing with new technology derived from the research and development activities of NASA or the U.S. Atomic Energy Commission, arranged by subjects: electronics/electrical, electronics/electrical systems, physical sciences, materials/chemistry, life sciences, mechanics, machines, equipment and tools, fabrication technology, and computer programs, (2) indexes for the above documents: subject, personal author, originating center.

  8. Industry 4.0 - How will the nonwoven production of tomorrow look like?

    NASA Astrophysics Data System (ADS)

    Cloppenburg, F.; Münkel, A.; Gloy, Y.; Gries, T.

    2017-10-01

    Industry 4.0 stands for the ongoing fourth industrial revolution, which uses cyber-physical systems. In the textile industry the terms of Industry 4.0 are not yet sufficiently known. First developments of Industry 4.0 are mainly visible in the weaving industry. The cost structure of the nonwoven industry is unique within the textile industry: high shares of personnel, energy and machine costs are distinctive for nonwoven producers. Therefore, Industry 4.0 developments in the nonwoven industry should concentrate on reducing these shares by using the work force efficiently and by increasing the productivity of first-rate quality, thereby decreasing waste production and downtimes. Using the McKinsey digital compass, three main working fields are necessary: self-optimizing nonwoven machines, big data analytics and assistance systems. Concepts for the nonwoven industry are shown, like the “EasyNonwoven” concept, which aims at economically optimizing machine settings using self-optimization routines.

  9. Micromechanical Machining Processes and their Application to Aerospace Structures, Devices and Systems

    NASA Technical Reports Server (NTRS)

    Friedrich, Craig R.; Warrington, Robert O.

    1995-01-01

    Micromechanical machining processes are those micro fabrication techniques which directly remove work piece material by either a physical cutting tool or an energy process. These processes are direct and therefore they can help reduce the cost and time for prototype development of micro mechanical components and systems. This is especially true for aerospace applications where size and weight are critical, and reliability and the operating environment are an integral part of the design and development process. The micromechanical machining processes are rapidly being recognized as a complementary set of tools to traditional lithographic processes (such as LIGA) for the fabrication of micromechanical components. Worldwide efforts in the U.S., Germany, and Japan are leading to results which sometimes rival lithography at a fraction of the time and cost. Efforts to develop processes and systems specific to aerospace applications are well underway.

  10. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    NASA Astrophysics Data System (ADS)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between power consumption and machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption has been found using analysis of variance. The validity of the developed empirical model is proved using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry to minimize the power consumption of machine tools.
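    The response-surface workflow described above amounts to fitting a second-order polynomial in the machining parameters and then searching the fitted surface for a minimum. A sketch with synthetic data in place of the paper's measurements (all parameter ranges and coefficients below are assumed for illustration):

    ```python
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)

    # hypothetical full-factorial design: cutting speed (m/min),
    # feed (mm/rev), depth of cut (mm)
    speeds = [60.0, 90.0, 120.0]
    feeds = [0.10, 0.20, 0.30]
    depths = [0.5, 1.0, 1.5]
    X = np.array(list(product(speeds, feeds, depths)))

    def design_matrix(X):
        """Second-order (quadratic) response-surface terms."""
        v, f, d = X.T
        return np.column_stack([np.ones(len(X)), v, f, d,
                                v * f, v * d, f * d, v**2, f**2, d**2])

    # synthetic "measured" power (W): an assumed quadratic trend plus noise
    true_beta = np.array([200.0, -1.0, 300.0, 50.0,
                          2.0, 0.2, 20.0, 0.01, 400.0, 5.0])
    y = design_matrix(X) @ true_beta + rng.normal(0.0, 2.0, len(X))

    # least-squares fit of the response surface to the observations
    beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

    # search a fine grid of the fitted surface for the minimum predicted power
    grid = np.array(list(product(np.linspace(60, 120, 31),
                                 np.linspace(0.10, 0.30, 21),
                                 np.linspace(0.5, 1.5, 21))))
    pred = design_matrix(grid) @ beta
    best = grid[pred.argmin()]   # parameter set minimizing predicted power
    ```

    The desirability-function step of the paper generalizes this single-objective search to several responses at once; the fit-then-optimize structure is the same.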

  11. Experimental study on Response Parameters of Ni-rich NiTi Shape Memory Alloy during Wire Electric Discharge Machining

    NASA Astrophysics Data System (ADS)

    Bisaria, Himanshu; Shandilya, Pragya

    2018-03-01

    Nowadays NiTi SMAs are gaining prominence due to their unique properties such as superelasticity, the shape memory effect, high fatigue strength and many other enriched physical and mechanical properties. The current study explores the effect of machining parameters, namely peak current (Ip), pulse off time (TOFF), and pulse on time (TON), on the wire wear ratio (WWR) and dimensional deviation (DD) in WEDM. It was found that high WWR and DD were mainly ascribed to high discharge energy. The WWR and DD increased with increasing pulse on time and peak current, whereas a high pulse off time was favourable for low WWR and DD.

  12. Reinventing the Accelerator for the High Energy Frontier

    ScienceCinema

    Rosenzweig, James [UCLA, Los Angeles, California, United States]

    2017-12-09

    The history of discovery in high-energy physics has been intimately connected with progress in methods of accelerating particles for the past 75 years. This remains true today, as the post-LHC era in particle physics will require significant innovation and investment in a superconducting linear collider. The choice of the linear collider as the next-generation discovery machine, and the selection of superconducting technology has rather suddenly thrown promising competing techniques -- such as very large hadron colliders, muon colliders, and high-field, high frequency linear colliders -- into the background. We discuss the state of such conventional options, and the likelihood of their eventual success. We then follow with a much longer view: a survey of a new, burgeoning frontier in high energy accelerators, where intense lasers, charged particle beams, and plasmas are all combined in a cross-disciplinary effort to reinvent the accelerator from its fundamental principles on up.

  13. Physics design of a 10 MeV injector test stand for an accelerator-driven subcritical system

    NASA Astrophysics Data System (ADS)

    Yan, Fang; Pei, Shilun; Geng, Huiping; Meng, Cai; Zhao, Yaliang; Sun, Biao; Cheng, Peng; Yang, Zheng; Ouyang, Huafu; Li, Zhihui; Tang, Jingyu; Wang, Jianli; Sui, Yefeng; Dai, Jianping; Sha, Peng; Ge, Rui

    2015-05-01

    The 10 MeV accelerator-driven subcritical system (ADS) Injector I test stand at the Institute of High Energy Physics (IHEP) is a testing facility dedicated to demonstrating one of the two injector design schemes (Injector Scheme-I, which works at 325 MHz) for the ADS project in China. The injector is composed of two parts, the linac and the beam dump line. The former is designed on the basis of a 325 MHz four-vane type copper radio frequency quadrupole and superconducting (SC) spoke cavities with β = 0.12. The latter is designed to transport the beam coming out of the SC section of the linac to the beam dump, where the beam transverse profile is greatly enlarged and made uniform to simplify the beam target design. The SC section consists of two cryomodules with 14 β = 0.12 spoke cavities, 14 solenoids and 14 BPMs in total. The first challenge in the physics design comes from the space required for the cryomodule separation, where the periodic lattice is broken at the relatively low energy of ~5 MeV. Another challenge is the beam dump line design, as it will be the first beam dump line in the world built using a step-field magnet for transverse beam expansion and uniformization. This paper gives an overview of the physics design study together with the design principles and machine construction considerations. The results of the optimized design, the fabrication status and end-to-end simulations including machine errors are presented.

  14. Using integral dispersion relations to extend the LHC reach for new physics

    NASA Astrophysics Data System (ADS)

    Denton, Peter B.; Weiler, Thomas J.

    2014-02-01

    Many models of electroweak symmetry breaking predict new particles with masses at or just beyond LHC energies. Even if these particles are too massive to be produced on-shell at the LHC, it may be possible to see evidence of their existence through the use of integral dispersion relations (IDRs). Making use of Cauchy's integral formula and the analyticity of the scattering amplitude, IDRs are sensitive in principle to changes in the cross section at arbitrarily large energies. We investigate some models of new physics. We find that a sudden, order-one increase in the cross section above new particle mass thresholds can be inferred well below the threshold energy. On the other hand, for two more physical models of particle production, we show that the reach in energy and the signal strength of the IDR technique is greatly reduced. The peak sensitivity for the IDR technique is shown to occur when the new particle masses are near the machine energy, an energy where direct production of new particles is kinematically disallowed, phase-space suppressed, or, if applicable, suppressed by the soft parton distribution functions. Thus, IDRs do extend the reach of the LHC, but only to a window around Mχ ~ √s_LHC.
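    For reference, a textbook once-subtracted integral dispersion relation for the crossing-even forward amplitude, written here in a generic standard form (not necessarily the authors' exact expression):

    ```latex
    % real part of the even forward amplitude from Im f_+, with the optical
    % theorem relating Im f_+ to the total cross section
    \operatorname{Re} f_+(E) = f_+(0)
      + \frac{2E^2}{\pi}\,\mathrm{P}\!\int_{m}^{\infty}
        \frac{dE'}{E'}\,\frac{\operatorname{Im} f_+(E')}{E'^{\,2} - E^2},
    \qquad
    \operatorname{Im} f_+(E) = \frac{k}{4\pi}\,\sigma_{\rm tot}(E).
    ```

    The principal-value integral runs over all energies E' above threshold, which is how a change in σ_tot above a new-particle threshold feeds back into Re f_+ measured well below it.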

  15. Women, work and pregnancy outcome.

    PubMed

    Huffman, S

    1988-01-01

    In developing countries, one-third of infants are born weighing less than 2500 grams. In a study conducted in Ethiopia among women consuming about 1600 kcal/day, those who were very physically active during pregnancy bore smaller babies, and gained less weight during pregnancy, than those who were not so active. Average birth weight was 3068 grams for the first group and 3270 grams for the less active. The active group of women gained an average of 6.5 kilograms, and the less active 9.2 kilograms. Women who did not engage in heavy work during pregnancy, although they were undernourished, apparently did not bear growth-retarded babies. Indirect evidence for the effect of physical activity on pregnancy outcome comes from studies conducted in Taiwan and the Gambia. These studies, and others from Malawi, Burkina Faso, and Kenya, have shown that women's energy expenditures vary greatly with the agricultural season. Daily housekeeping tasks, however, also consume a lot of women's energy. Technologies that allow women to reduce energy expenditure can have beneficial effects, if they do not simultaneously reduce their incomes. For instance, programs improving water or fuel availability, or reducing fuel needs, reduce women's energy expenditures. Food processing mills can help too, if women have access to them and are thus not in danger of being displaced from their jobs and losing necessary income. Examples of technology improving women's tasks are pedal drying machines for rice in Bangladesh and the use of a grater and pressing machine to prepare gari in Ghana; but growing thicker rice stalks in Indonesia displaced women workers and reduced income.

  16. Allocating dissipation across a molecular machine cycle to maximize flux

    PubMed Central

    Brown, Aidan I.; Sivak, David A.

    2017-01-01

    Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016
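    The allocation question above can be explored numerically. A minimal sketch of steady-state flux for a unicyclic machine, using a symmetric splitting of each transition's dissipation into forward and backward rates (a common modeling convention, not the paper's exact parameterization):

    ```python
    import numpy as np

    def cycle_flux(dG):
        """Steady-state probability flux around a unicyclic machine whose i-th
        transition dissipates dG[i] (in units of kT). Rates use a symmetric
        splitting: w+_i = exp(+dG[i]/2), w-_i = exp(-dG[i]/2)."""
        dG = np.asarray(dG, dtype=float)
        n = len(dG)
        wp = np.exp(dG / 2.0)    # forward rates, state i -> i+1
        wm = np.exp(-dG / 2.0)   # backward rates, state i+1 -> i
        W = np.zeros((n, n))     # master-equation generator, dp/dt = W @ p
        for i in range(n):
            j = (i + 1) % n
            W[j, i] += wp[i]
            W[i, i] -= wp[i]
            W[i, j] += wm[i]
            W[j, j] -= wm[i]
        # the stationary distribution spans the (one-dimensional) null space of W
        _, _, vt = np.linalg.svd(W)
        p = np.abs(vt[-1])
        p /= p.sum()
        return p[0] * wp[0] - p[1] * wm[0]   # net flux through transition 0 -> 1

    # same total budget of 6 kT per cycle, allocated evenly vs unevenly
    even = cycle_flux([2.0, 2.0, 2.0])
    uneven = cycle_flux([4.0, 1.0, 1.0])
    ```

    Any positive total dissipation drives a positive net flux, and scanning allocations of a fixed budget with a routine like this is one way to reproduce the kind of flux-maximization analysis the abstract describes.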

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clendenin, James E

    The International Committee supported the proposal of the Chairman of the XVIII International Linac Conference to issue a new Compendium of linear accelerators. The last one was published in 1976. The Local Organizing Committee of Linac96 decided to set up a sub-committee for this purpose. Contrary to the catalogues of high energy accelerators, which compile accelerators with energies above 1 GeV, we have not defined a specific limit in energy. Microtrons and cyclotrons are not in this compendium, and data from the thousands of medical and industrial linacs has not been collected. Therefore, only scientific linacs are listed in the present compendium. Each linac found in this research and involved in a physics context was considered. It could be used, for example, either as an injector for high energy accelerators, or in nuclear physics, materials physics, free electron lasers or synchrotron light machines. Linear accelerators are developed on three continents only: America, Asia, and Europe. This geographical distribution is kept as a basis. The compendium contains the parameters and status of scientific linacs. Most of these linacs are operational; however, many facilities under construction or design studies are also included. A special mention is made at the end for studies of future linear colliders.

  18. Determination of initial conditions for heat exchanger placed in furnace by burning pellets

    NASA Astrophysics Data System (ADS)

    Durčanský, Peter; Jandačka, Jozef; Kapjor, Andrej

    2014-08-01

    The objective of an experimental facility and subsequent measurements is generally to verify the expected physical properties and to identify the real behavior of the proposed system, or of a part thereof. The design of a heat exchanger for a combined energy machine requires a large number of parameters to be identified and verified. Among these are the boundary conditions of the heat exchanger and the pellet burner.

  19. Initial operation of the Lockheed Martin T4B experiment

    NASA Astrophysics Data System (ADS)

    Garrett, M. L.; Blinzer, A.; Ebersohn, F.; Gucker, S.; Heinrich, J.; Lohff, C.; McGuire, T.; Montecalvo, N.; Raymond, A.; Rhoads, J.; Ross, P.; Sommers, B.; Strandberg, E.; Sullivan, R.; Walker, J.

    2017-10-01

    The T4B experiment is a linear, encapsulated ring-cusp confinement device, designed to develop a physics and technology basis for a follow-on high-beta (β ∼ 1) machine. The experiment consists of 13 magnetic field coils (11 external, 2 internal) that produce a series of on-axis field nulls surrounded by modest magnetic fields of up to 0.3 T. The primary plasma source used on T4B is a lanthanum hexaboride (LaB6) cathode, capable of coupling over 100 kW into the plasma. Initial testing focused on commissioning of components and integration of diagnostics. Diagnostics include both long- and short-wavelength interferometry, bolometry, visible and X-ray spectroscopy, Langmuir and B-dot probes, Thomson scattering, flux loops, and fast camera imagery. Low energy discharges were used to begin validation of physics models and simulation efforts. Following the initial machine check-out, neutral beam injection (NBI) was integrated onto the device. Detailed results will be presented. © 2017 Lockheed Martin Corporation. All Rights Reserved.

  20. The effect of cutting conditions on power inputs when machining

    NASA Astrophysics Data System (ADS)

    Petrushin, S. I.; Gruby, S. V.; Nosirsoda, Sh C.

    2016-08-01

    Any technological process that modifies material properties or product form necessarily consumes a certain amount of power, and when developing new technologies one should weigh the benefits of their implementation against the power inputs they incur. It is revealed that edge cutting machining is the most energy-efficient of present-day forming procedures, which include physical and technical methods such as electrochemical, electroerosion, ultrasound, and laser processing, as well as rapid prototyping technologies. An expanded formula for the calculation of power inputs is deduced, which takes into consideration the cutting mode together with the tip radius, the form of the replaceable multifaceted insert, and its wear. Taking as an example the cutting of graphite iron by assembled cutting tools with replaceable multifaceted inserts, the authors show that high-feed cutting is more power-efficient than high-speed cutting.
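
    The classical starting point for such power-input estimates, before the authors' expanded corrections for tip radius and insert wear, is the specific-cutting-energy model P = k_c · MRR. A minimal sketch (the k_c value and cutting data below are illustrative textbook figures, not taken from the paper):

    ```python
    # Illustrative cutting-power estimate from the classical
    # specific-cutting-energy model: P = k_c * MRR, with MRR = v * f * d.
    # The k_c value used below is a typical textbook figure, not the paper's.

    def cutting_power_w(v_m_min: float, f_mm_rev: float, d_mm: float,
                        k_c_j_mm3: float) -> float:
        """Cutting power in watts.

        v_m_min   -- cutting speed, m/min
        f_mm_rev  -- feed, mm/rev
        d_mm      -- depth of cut, mm
        k_c_j_mm3 -- specific cutting energy, J/mm^3
        """
        mrr_mm3_s = (v_m_min * 1000.0 / 60.0) * f_mm_rev * d_mm  # mm^3/s
        return k_c_j_mm3 * mrr_mm3_s

    # Example: cast iron with an assumed k_c of 1.5 J/mm^3
    p = cutting_power_w(120.0, 0.4, 2.0, 1.5)
    print(round(p, 1))  # cutting power in W
    ```

    Raising the feed f at fixed removal rate lets the speed v drop, which in practice also reduces the machine-side losses; this is the intuition behind the authors' preference for high-feed over high-speed cutting.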

  1. The ALICE Experiment at CERN LHC: Status and First Results

    NASA Astrophysics Data System (ADS)

    Vercellin, Ermanno

    The ALICE experiment is aimed at studying the properties of the hot and dense matter produced in heavy-ion collisions at LHC energies. In the first years of LHC operation the ALICE physics program will be focused on Pb-Pb and p-p collisions; the latter, on top of their intrinsic interest, will provide the necessary baseline for the heavy-ion data. After its installation and a long commissioning with cosmic rays, in late fall 2009 ALICE participated (very successfully) in the first LHC run, collecting data in p-p collisions at a c.m. energy of 900 GeV. After a short winter stop, LHC operations have resumed; the machine is now able to accelerate proton beams up to 3.5 TeV, and ALICE has undertaken the data-taking campaign at 7 TeV c.m. energy. After an overview of the ALICE physics goals and a short description of the detector layout, the ALICE performance in p-p collisions will be presented. The main physics results achieved so far will be highlighted, as well as the main aspects of the ongoing data analysis.

  2. Studies of Missing Energy Decays at Belle II

    NASA Astrophysics Data System (ADS)

    Guan, Yinghui

    The Belle II experiment at the SuperKEKB collider is a major upgrade of the KEK “B factory” facility in Tsukuba, Japan. The machine is designed for an instantaneous luminosity of 8 × 10^35 cm^-2 s^-1, and the experiment is expected to accumulate a data sample of about 50 ab^-1. With this amount of data, decays sensitive to physics beyond the Standard Model can be studied with unprecedented precision. One promising set of modes comprises processes with missing energy such as B+ → τ+ν, B → D(∗)τν, and B → K(∗)νν¯ decays. The B → K(∗)νν¯ decay provides one of the cleanest experimental probes of the flavour-changing neutral current process b → sνν¯, which is sensitive to physics beyond the Standard Model. However, the missing energy of the neutrinos in the final state makes the measurement challenging and requires full reconstruction of the spectator B meson in e+e‑ → Υ(4S) → BB¯ events. This report discusses the expected sensitivities of Belle II for these rare decays.

  3. The future of the Large Hadron Collider and CERN.

    PubMed

    Heuer, Rolf-Dieter

    2012-02-28

    This paper presents the Large Hadron Collider (LHC) and its current scientific programme and outlines options for high-energy colliders at the energy frontier for the years to come. The immediate plans include the exploitation of the LHC at its design luminosity and energy, as well as upgrades to the LHC and its injectors. This may be followed by a linear electron-positron collider, based on the technology being developed by the Compact Linear Collider and the International Linear Collider collaborations, or by a high-energy electron-proton machine. This contribution describes the past, present and future directions, all of which have a unique value to add to experimental particle physics, and concludes by outlining key messages for the way forward.

  4. The association between state bans on soda only and adolescent substitution with other sugar-sweetened beverages: a cross-sectional study.

    PubMed

    Taber, Daniel R; Chriqui, Jamie F; Vuillaume, Renee; Kelder, Steven H; Chaloupka, Frank J

    2015-07-27

    Across the United States, many states have actively banned the sale of soda in high schools, and evidence suggests that students' in-school access to soda has declined as a result. However, schools may be substituting soda with other sugar-sweetened beverages (SSBs), and national trends indicate that adolescents are consuming more sports drinks and energy drinks. This study examined whether students consumed more non-soda SSBs in states that banned the sale of soda in school. Student data on consumption of various SSBs and in-school access to vending machines that sold SSBs were obtained from the National Youth Physical Activity and Nutrition Study (NYPANS), conducted in 2010. Student data were linked to state laws regarding the sale of soda in school in 2010. Students were cross-classified based on their access to vending machines and whether their state banned soda in school, creating 4 comparison groups. Zero-inflated negative binomial models were used to compare these 4 groups with respect to students’ self-reported consumption of diet soda, sports drinks, energy drinks, coffee/tea, or other SSBs. Students who had access to vending machines in a state that did not ban soda were the reference group. Models were adjusted for race/ethnicity, sex, grade, home food access, state median income, and U.S. Census region. Students consumed more servings of sports drinks, energy drinks, coffee/tea, and other SSBs if they resided in a state that banned soda in school but attended a school with vending machines that sold other SSBs. Similar results were observed where schools did not have vending machines but the state allowed soda to be sold in school. Intake was generally not elevated where both states and schools limited SSB availability – i.e., states banned soda and schools did not have SSB vending machines. State laws that ban soda but allow other SSBs may lead students to substitute other non-soda SSBs. Additional longitudinal research is needed to confirm this. Elevated SSB intake was not observed when both states and schools took steps to remove SSBs from school.
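
    The zero-inflated negative binomial model used above handles the many students who report zero consumption by mixing a point mass at zero with a count distribution. A minimal sketch of the mixture's probability mass function (parameter values are hypothetical, not estimates from NYPANS data):

    ```python
    import math

    def nb_pmf(k: int, r: float, p: float) -> float:
        """Negative binomial pmf: number of failures k before r successes,
        success probability p (r may be non-integer)."""
        log_coef = math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
        return math.exp(log_coef + r * math.log(p) + k * math.log(1.0 - p))

    def zinb_pmf(k: int, pi: float, r: float, p: float) -> float:
        """Zero-inflated NB: with probability pi the count is a 'structural'
        zero (never-consumers); otherwise it is drawn from NB(r, p)."""
        base = nb_pmf(k, r, p)
        return pi + (1.0 - pi) * base if k == 0 else (1.0 - pi) * base

    # The mixture is a proper distribution: its probabilities sum to 1.
    total = sum(zinb_pmf(k, pi=0.3, r=1.5, p=0.4) for k in range(500))
    print(round(total, 6))  # ≈ 1.0
    ```

    The zero inflation term is what separates "never drinks SSBs" from "drinks SSBs but happened to report zero", which an ordinary count model cannot do.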

  5. The association between state bans on soda only and adolescent substitution with other sugar-sweetened beverages: a cross-sectional study

    PubMed Central

    2015-01-01

    Background Across the United States, many states have actively banned the sale of soda in high schools, and evidence suggests that students’ in-school access to soda has declined as a result. However, schools may be substituting soda with other sugar-sweetened beverages (SSBs), and national trends indicate that adolescents are consuming more sports drinks and energy drinks. This study examined whether students consumed more non-soda SSBs in states that banned the sale of soda in school. Methods Student data on consumption of various SSBs and in-school access to vending machines that sold SSBs were obtained from the National Youth Physical Activity and Nutrition Study (NYPANS), conducted in 2010. Student data were linked to state laws regarding the sale of soda in school in 2010. Students were cross-classified based on their access to vending machines and whether their state banned soda in school, creating 4 comparison groups. Zero-inflated negative binomial models were used to compare these 4 groups with respect to students’ self-reported consumption of diet soda, sports drinks, energy drinks, coffee/tea, or other SSBs. Students who had access to vending machines in a state that did not ban soda were the reference group. Models were adjusted for race/ethnicity, sex, grade, home food access, state median income, and U.S. Census region. Results Students consumed more servings of sports drinks, energy drinks, coffee/tea, and other SSBs if they resided in a state that banned soda in school but attended a school with vending machines that sold other SSBs. Similar results were observed where schools did not have vending machines but the state allowed soda to be sold in school. Intake was generally not elevated where both states and schools limited SSB availability – i.e., states banned soda and schools did not have SSB vending machines. Conclusion State laws that ban soda but allow other SSBs may lead students to substitute other non-soda SSBs. Additional longitudinal research is needed to confirm this. Elevated SSB intake was not observed when both states and schools took steps to remove SSBs from school. PMID:26221969

  6. Smart Screening System (S3) In Taconite Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daryoush Allaei; Ryan Wartman; David Tarnowski

    2006-03-01

    The conventional screening machines used in processing plants have had undesirably high noise and vibration levels. They have also had unsatisfactorily low screening efficiency, high energy consumption, high maintenance costs, low productivity, and poor worker safety. These conventional vibrating machines have been used in almost every processing plant. Most current material separation technology uses heavy and inefficient electric motors with an unbalanced rotating mass to generate the shaking. In addition to being excessively noisy, inefficient, and high-maintenance, these vibrating machines are often the bottleneck in the entire process. Furthermore, these motors, along with the vibrating machines and supporting structure, shake other machines and structures in the vicinity, which increases maintenance costs while reducing worker health and safety. The conventional vibrating fine screens at taconite processing plants have had the same problems, resulting in lower screening efficiency, higher energy and maintenance costs, lower productivity, and worker safety concerns. The focus of this work is the design of a high-performance screening machine suitable for taconite processing plants. SmartScreens™ technology uses miniaturized motors, based on smart materials, to generate the shaking. The underlying technologies are Energy Flow Control™ and Vibration Control by Confinement™; these concepts are used to direct and confine energy flow efficiently and effectively to the screening function. The SmartScreens™ technology addresses problems related to noise and vibration, screening efficiency, productivity, maintenance cost, and worker safety. Successful development of SmartScreens™ technology will bring drastic changes to the screening and physical separation industry. The final designs for the key components of the SmartScreens™ have been developed. The key components include the smart motor and associated electronics, resonators, and supporting structural elements. It is shown that the smart motors have acceptable life and performance. Resonator (or motion amplifier) designs are selected based on the final system requirements and vibration characteristics. All components for a fully functional prototype have been fabricated, and the development program is on schedule. The last semi-annual report described the completion of the design refinement phase, which resulted in a Smart Screen design that meets performance targets both in the dry condition and with taconite slurry flow using PZT motors. This system was successfully demonstrated for the DOE and partner companies at the Coleraine Mineral Research Laboratory in Coleraine, Minnesota. Since then, the fabrication of the dry application prototype (incorporating an electromagnetic drive mechanism and a new deblinding concept) has been completed and successfully tested at QRDC's lab.

  7. "Pack²": VM Resource Scheduling for Fine-Grained Application SLAs in Highly Consolidated Environment

    ERIC Educational Resources Information Center

    Sukwong, Orathai

    2013-01-01

    Virtualization enables the consolidation of multiple servers on a single physical machine, increasing infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal for infrastructure cost saving in a cloud. However, the consolidation…

  8. Science 101: Q--What Is the Physics behind Simple Machines?

    ERIC Educational Resources Information Center

    Robertson, Bill

    2013-01-01

    Bill Robertson thinks that questioning the physics behind simple machines is a great idea because when he encounters the subject of simple machines in textbooks, activities, and classrooms, he seldom encounters a scientific explanation of how they work. Instead, what one often sees is a discussion of load, effort, fulcrum, actual mechanical…

  9. Spectral and spatial characterisation of laser-driven positron beams

    DOE PAGES

    Sarri, G.; Warwick, J.; Schumaker, W.; ...

    2016-10-18

    The generation of high-quality relativistic positron beams is a central area of research in experimental physics, due to their potential relevance in a wide range of scientific and engineering areas, from fundamental science to practical applications. There is now growing interest in developing hybrid machines that will combine plasma-based acceleration techniques with more conventional radio-frequency accelerators, in order to minimise the size and cost of these machines. Here we report on recent experiments on laser-driven generation of high-quality positron beams using a relatively low energy and potentially table-top laser system. The results obtained indicate that current technology allows the creation, in a compact setup, of positron beams suitable for injection into radio-frequency accelerators.

  10. Jet-images — deep learning edition

    DOE PAGES

    de Oliveira, Luke; Kagan, Michael; Mackey, Lester; ...

    2016-07-13

    Building on the notion of a particle physics detector as a camera and of the collimated streams of high energy particles, or jets, that it measures as images, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature-driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. Finally, this interplay between physically-motivated feature-driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and to gain a deeper understanding of the physics within jets.
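
    The jet-image idea, treating the detector as a camera, amounts to binning particle (η, φ, pT) triples into a pixel grid whose intensities are summed transverse momentum. A minimal sketch with synthetic particles (grid size and extent are illustrative choices, not the paper's exact preprocessing):

    ```python
    import numpy as np

    def jet_image(eta, phi, pt, bins=25, extent=1.25):
        """Pixelate particles into a 2D 'jet image': each pixel sums the
        transverse momentum of the particles falling in that (eta, phi) cell."""
        img, _, _ = np.histogram2d(
            eta, phi, bins=bins,
            range=[[-extent, extent], [-extent, extent]],
            weights=pt)
        return img

    # Synthetic stand-in for a jet: 40 particles clustered around the jet axis.
    rng = np.random.default_rng(0)
    n = 40
    img = jet_image(rng.normal(0.0, 0.4, n), rng.normal(0.0, 0.4, n),
                    rng.exponential(5.0, n))
    print(img.shape)  # (25, 25)
    ```

    The resulting fixed-size array is what allows convolutional architectures, designed for photographs, to be applied to jets unchanged.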

  11. Jet-images — deep learning edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Oliveira, Luke; Kagan, Michael; Mackey, Lester

    Building on the notion of a particle physics detector as a camera and of the collimated streams of high energy particles, or jets, that it measures as images, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature-driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. Finally, this interplay between physically-motivated feature-driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and to gain a deeper understanding of the physics within jets.

  12. Lean energy analysis of CNC lathe

    NASA Astrophysics Data System (ADS)

    Liana, N. A.; Amsyar, N.; Hilmy, I.; Yusof, MD

    2018-01-01

    The industrial sector in Malaysia is one of the sectors with the highest energy demand, which may lead to future power shortages and increased production costs for companies. Industry should implement suitable initiatives to address these issues, for example by improving machining systems. In the past, energy analyses in industry focused mainly on lighting, HVAC, and office usage; the future trend is to include the manufacturing process in the energy analysis as well. A study on lean energy analysis of a machining process is presented, discussing improvement of the energy efficiency of a lathe by tuning the cutting parameters of the turning process. The energy consumption of a lathe machine was analyzed in order to identify the effect of the cutting parameters on energy consumption. It was found that the parameter combination of the third run (spindle speed: 1065 rpm, depth of cut: 1.5 mm, feed rate: 0.3 mm/rev) was the most preferable for the turning process, as it consumed the least energy.
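
    Under a constant-power approximation, the energy drawn per turning pass follows directly from the machining time implied by spindle speed and feed. A rough sketch using the third-run spindle speed and feed quoted above (the cut length and power draw are assumed values, not measurements from the study):

    ```python
    # Energy per turning pass, assuming roughly constant power draw:
    # t = 60 * L / (n * f)  and  E = P * t.
    # Cut length and power below are assumed for illustration.

    def turning_time_s(length_mm: float, n_rpm: float, f_mm_rev: float) -> float:
        """Time to traverse a cut of given length at spindle speed n, feed f."""
        return 60.0 * length_mm / (n_rpm * f_mm_rev)

    def energy_kj(power_w: float, t_s: float) -> float:
        """Energy in kJ for a constant power draw over time t."""
        return power_w * t_s / 1000.0

    t = turning_time_s(length_mm=100.0, n_rpm=1065.0, f_mm_rev=0.3)
    print(round(t, 2))                     # seconds per pass
    print(round(energy_kj(1200.0, t), 2))  # kJ at an assumed 1.2 kW draw
    ```

    This makes the trade-off explicit: raising feed or spindle speed shortens the pass, so idle and auxiliary power are drawn for less time, which is one reason a faster parameter set can consume less total energy.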

  13. CEPC-SPPC accelerator status towards CDR

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2017-12-01

    In this paper we give an introduction to the Circular Electron Positron Collider (CEPC). The scientific background, physics goals, collider design requirements, and conceptual design principles of the CEPC are described. On the accelerator side, the optimization of parameter designs for the CEPC with different energies, machine lengths, and layout options (single ring, crab-waist collision partial double ring, advanced partial double ring, fully partial double ring, etc.) has been discussed systematically and compared. The CEPC accelerator baseline and alternative designs have been proposed based on the luminosity potential in relation to the design goals. The CEPC sub-systems, such as the collider main ring, booster, and electron-positron injector, are also introduced. The detector and the Machine-Detector Interface (MDI) design are briefly mentioned. Finally, the optimization design of the Super Proton-Proton Collider (SppC), with its energy and luminosity potentials, in the same tunnel as the CEPC is also discussed. The CEPC-SppC Progress Report (2015-2016) has been published.

  14. 10 CFR 431.294 - Uniform test method for the measurement of energy consumption of refrigerated bottled or canned...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... consumption of refrigerated bottled or canned beverage vending machines. 431.294 Section 431.294 Energy... EQUIPMENT Refrigerated Bottled or Canned Beverage Vending Machines Test Procedures § 431.294 Uniform test... machines. (a) Scope. This section provides test procedures for measuring, pursuant to EPCA, the energy...

  15. Performance Analyses of 38 kWe Turbo-Machine Unit for Space Reactor Power Systems

    NASA Astrophysics Data System (ADS)

    Gallo, Bruno M.; El-Genk, Mohamed S.

    2008-01-01

    This paper develops a design and investigates the performance of a 38 kWe turbo-machine unit for space nuclear reactor power systems with Closed Brayton Cycle (CBC) energy conversion. The compressor and turbine of this unit are scaled versions of NASA's BRU developed in the sixties and seventies. The performance results of the turbo-machine unit are calculated for rotational speeds up to 45 krpm, variable reactor thermal power and system pressure, and fixed turbine and compressor inlet temperatures of 1144 K and 400 K. The analyses used a detailed turbo-machine model developed at the University of New Mexico that accounts for the various energy losses in the compressor and turbine and for the effect of compressibility of the He-Xe (40 g/mole) working fluid with increased flow rate. The model also accounts for the changes in the physical and transport properties of the working fluid with temperature and pressure. Results show that a unit efficiency of 24.5% is achievable at a rotational speed of 45 krpm and a system pressure of 0.75 MPa, assuming shaft and electrical generator efficiencies of 86.7% and 90%. The corresponding net electric power output of the unit is 38.5 kWe, the flow rate of the working fluid is 1.667 kg/s, and the pressure ratio and polytropic efficiency are 1.60 and 83.1% for the compressor, and 1.51 and 88.3% for the turbine.
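
    The back-of-envelope relation behind such unit analyses is W_net = ṁ · cp · (ΔT_turbine − ΔT_compressor) · η_shaft · η_gen, with cp of the monatomic He-Xe mixture fixed by its molar mass. A rough sketch using the flow rate and efficiencies quoted above (the temperature differences across turbine and compressor are assumed for illustration, not taken from the paper's detailed model):

    ```python
    # Simple net-electric-power estimate for a closed Brayton cycle unit.
    # Flow rate and shaft/generator efficiencies are from the abstract;
    # the turbine/compressor temperature differences are assumed values.

    R = 8.314  # universal gas constant, J/(mol K)

    def cp_monatomic(molar_mass_kg: float) -> float:
        """Specific heat of a monatomic ideal-gas mixture (He-Xe), J/(kg K)."""
        return 2.5 * R / molar_mass_kg

    def net_electric_kw(m_dot, dT_turbine, dT_compressor,
                        molar_mass_kg=0.040, eta_shaft=0.867, eta_gen=0.90):
        """Net electric output: shaft work minus losses in shaft/generator."""
        cp = cp_monatomic(molar_mass_kg)
        w_shaft = m_dot * cp * (dT_turbine - dT_compressor)  # W
        return w_shaft * eta_shaft * eta_gen / 1000.0

    print(round(net_electric_kw(1.667, dT_turbine=120.0, dT_compressor=63.0), 1))
    ```

    The heavy Xe fraction exists mainly to lower cp (via the high molar mass), which shrinks the turbomachinery for a given power.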

  16. Evaluation of CFETR as a Fusion Nuclear Science Facility using multiple system codes

    NASA Astrophysics Data System (ADS)

    Chan, V. S.; Costley, A. E.; Wan, B. N.; Garofalo, A. M.; Leuer, J. A.

    2015-02-01

    This paper presents the results of a multi-system codes benchmarking study of the recently published China Fusion Engineering Test Reactor (CFETR) pre-conceptual design (Wan et al 2014 IEEE Trans. Plasma Sci. 42 495). Two system codes, the General Atomics System Code (GASC) and the Tokamak Energy System Code (TESC), using different methodologies to arrive at CFETR performance parameters under the same CFETR constraints, show that the correlation between the physics performance and the fusion performance is consistent and that the computed parameters are in good agreement. Optimization of the first wall surface for tritium breeding and the minimization of the machine size are highly compatible. Variations of the plasma currents and profiles lead to changes in the required normalized physics performance; however, they do not significantly affect the optimized size of the machine. GASC and TESC have also been used to explore a lower aspect ratio, larger volume plasma taking advantage of the engineering flexibility in the CFETR design. Assuming the ITER steady-state scenario physics, the larger plasma together with a moderately higher BT and Ip can result in a high-gain Qfus ˜ 12, Pfus ˜ 1 GW machine approaching DEMO-like performance. It is concluded that the CFETR baseline mode can meet the minimum goal of the Fusion Nuclear Science Facility (FNSF) mission and that advanced physics will enable it to address comprehensively the outstanding critical technology gaps on the path to a demonstration reactor (DEMO). Before proceeding with CFETR construction, steady-state operation has to be demonstrated, further development is needed to solve the divertor heat load issue, and blankets have to be designed with a tritium breeding ratio (TBR) > 1 as a target.

  17. Study of a Variable Mass Atwood's Machine Using a Smartphone

    ERIC Educational Resources Information Center

    Lopez, Dany; Caprile, Isidora; Corvacho, Fernando; Reyes, Orfa

    2018-01-01

    The Atwood machine was invented in 1784 by George Atwood and this system has been widely studied both theoretically and experimentally over the years. Nowadays, it is commonplace that many experimental physics courses include both Atwood's machine and variable mass to introduce more complex concepts in physics. To study the dynamics of the masses…

  18. Translations on USSR Science and Technology, Physical Sciences and Technology, Number 16

    DTIC Science & Technology

    1977-08-05

    INVESTIGATION OF SPLITTING OF LIGHT NUCLEI WITH HIGH-ENERGY γ-RAYS WITH THE METHOD OF WILSON'S CHAMBER OPERATING IN POWERFUL BEAMS OF ELECTRONIC... boast high reliability, high speed, and extremely modest power requirements. Information on the Screen: visual display devices greatly facilitate... The area of application of these units includes navigation, control of power systems, machine tools, and manufacturing processes. The capabilities of

  19. 10 CFR 431.292 - Definitions concerning refrigerated bottled or canned beverage vending machines.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., and functional (or hydraulic) characteristics that affect energy consumption, energy efficiency, water... 10 Energy 3 2012-01-01 2012-01-01 false Definitions concerning refrigerated bottled or canned beverage vending machines. 431.292 Section 431.292 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY...

  20. 10 CFR 431.292 - Definitions concerning refrigerated bottled or canned beverage vending machines.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., and functional (or hydraulic) characteristics that affect energy consumption, energy efficiency, water... 10 Energy 3 2013-01-01 2013-01-01 false Definitions concerning refrigerated bottled or canned beverage vending machines. 431.292 Section 431.292 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY...

  1. 10 CFR 431.292 - Definitions concerning refrigerated bottled or canned beverage vending machines.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., and functional (or hydraulic) characteristics that affect energy consumption, energy efficiency, water... 10 Energy 3 2014-01-01 2014-01-01 false Definitions concerning refrigerated bottled or canned beverage vending machines. 431.292 Section 431.292 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY...

  2. The Beginning of the Physics of Leptons

    NASA Astrophysics Data System (ADS)

    Ting, Samuel C. C.

    Over the last 30 years the study of lepton pairs from both hadron and electron accelerators and colliders has led to the discovery of the J, ϒ, Z and W particles. The study of acoplanar eμ pairs plus missing energy led to the discovery of the heavy lepton, now called the τ lepton. Indeed, the study of lepton pairs with and without missing energy has become the main method at high energy colliders for searching for new particles. This paper presents some of the important contributions made by Antonino Zichichi over a 10-year period at CERN and Frascati in opening this new field of physics. These include the development of instrumentation to distinguish leptons from hadrons, the first experiment on lepton pair production at hadron machines, precision tests of electrodynamics at very small distances, the production of hadrons from e+e- collisions and, most importantly, his invention of a new method, e+e- → eμ + missing momenta, experimentally proving that, thanks to his new electron and muon detection technology, these signals have very little background.

  3. Multisensor data fusion for physical activity assessment.

    PubMed

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John W; Freedson, Patty S

    2012-03-01

    This paper presents a sensor fusion method for assessing physical activity (PA) of human subjects, based on support vector machines (SVMs). Specifically, acceleration and ventilation measured by a wearable multisensor device on 50 test subjects performing 13 types of activities of varying intensities are analyzed, from which activity type and energy expenditure are derived. The results show that the method correctly recognized the 13 activity types 88.1% of the time, which is 12.3% higher than using a hip accelerometer alone. Also, the method predicted energy expenditure with a root mean square error of 0.42 METs, 22.2% lower than using a hip accelerometer alone. Furthermore, the fusion method was effective in reducing the subject-to-subject variability (standard deviation of recognition accuracies across subjects) in activity recognition, especially when data from the ventilation sensor were added to the fusion model. These results demonstrate that the multisensor fusion technique presented is more effective in identifying activity type and energy expenditure than the traditional accelerometer-alone-based methods.
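
    The core fusion idea above, that a second sensor channel adds predictive information beyond acceleration alone, can be illustrated with a toy regression. The paper used SVMs on real wearable measurements; the synthetic least-squares sketch below (all data and coefficients are made up) only demonstrates the feature-fusion principle:

    ```python
    import numpy as np

    # Toy illustration: predicting energy expenditure (METs) from an
    # acceleration feature alone vs. acceleration + ventilation.
    # Data are synthetic; coefficients below are arbitrary assumptions.

    rng = np.random.default_rng(42)
    n = 200
    accel = rng.uniform(0.0, 3.0, n)   # hip acceleration feature (a.u.)
    vent = rng.uniform(5.0, 60.0, n)   # ventilation, L/min (synthetic)
    mets = 1.0 + 1.2 * accel + 0.05 * vent + rng.normal(0.0, 0.3, n)

    def fit_rmse(X, y):
        """Least-squares fit with intercept; returns training RMSE."""
        A = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

    rmse_accel = fit_rmse(accel[:, None], mets)
    rmse_fused = fit_rmse(np.column_stack([accel, vent]), mets)
    print(rmse_fused <= rmse_accel)  # adding a sensor never hurts the LS fit
    ```

    On held-out data the gain is not guaranteed, which is why the paper's cross-subject evaluation of the SVM fusion model is the meaningful test.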

  4. New tools for jet analysis in high energy collisions

    NASA Astrophysics Data System (ADS)

    Duffty, Daniel

    Our understanding of the fundamental interactions of particles has come far in the last century, and is still pushing forward. As we build ever more powerful machines to probe higher and higher energies, we will need to develop new tools to not only understand the new physics objects we are trying to detect, but even to understand the environment that we are searching in. We examine methods of identifying both boosted objects and low energy jets which will be shrouded in a sea of noise from other parts of the detector. We display the power of boosted-b tagging in a simulated W search. We also examine the effect of pileup on low energy jet reconstructions. For this purpose we develop a new priority-based jet algorithm, "p-jets", to cluster the energy that belongs together, but ignore the rest.

  5. Effect of Width of Kerf on Machining Accuracy and Subsurface Layer After WEDM

    NASA Astrophysics Data System (ADS)

    Mouralova, K.; Kovar, J.; Klakurkova, L.; Prokes, T.

    2018-02-01

    Wire electrical discharge machining is an unconventional machining technology that applies physical principles to material removal. The material is removed by a series of recurring current discharges between the workpiece and the tool electrode, and a 'kerf' is created between the wire and the material being machined. The width of the kerf is directly dependent not only on the diameter of the wire used, but also on the machine parameter settings and, in particular, on the set of mechanical and physical properties of the material being machined. To ensure precise machining, it is important to keep the width of the kerf as small as possible. The present study evaluates the width of the kerf for four different metallic materials (some of which were subsequently heat treated using several methods) with different machine parameter settings. The kerf is investigated on metallographic cross sections using light and electron microscopy.

  6. Can we build a more efficient airplane? Using applied questions to teach physics

    NASA Astrophysics Data System (ADS)

    Bhatia, Aatish

    2014-03-01

    For students and for the science-interested public, applied questions can serve as a hook to learn introductory physics. Can we radically improve the energy efficiency of modern day aircraft? Are solar planes like the Solar Impulse the future of travel? How do migratory birds like the alpine swift fly nonstop for nearly seven months? Using examples from aeronautical engineering and biology, I'll discuss how undergraduate physics can shed light on these questions about transport, and place fundamental constraints on the flight properties of flying machines, whether birds or planes. Education research has shown that learners are likely to forget vast content knowledge unless they get to apply this knowledge to novel and unfamiliar situations. By applying physics to real-life problems, students can learn to build and apply quantitative models, making use of skills such as order of magnitude estimates, dimensional analysis, and reasoning about uncertainty. This applied skillset allows students to transfer their knowledge outside the classroom, and helps build connections between traditionally distinct content areas. I'll also describe the results of an education experiment at Rutgers University where my colleagues and I redesigned a 100+ student introductory physics course for social science and humanities majors to address applied questions such as evaluating the energy cost of transport, and asking whether the United States could obtain all its energy from renewable sources.

  7. A Comprehensive Understanding of Machine and Material Behaviors During Inertia Friction Welding

    NASA Astrophysics Data System (ADS)

    Tung, Daniel J.

    Inertia Friction Welding (IFW), a critical process in many industries, currently relies on trial-and-error experimentation to optimize process parameters. Although this Edisonian approach is effective, the high time and dollar costs incurred during process development are the driving force for better design approaches. Thermal-stress finite element modeling has been increasingly used in the literature to aid process development; however, several fundamental questions on machine and material behaviors remain unanswered. The work presented here aims to produce an analytical foundation that significantly reduces the costly physical experimentation currently required to design the inertia welding of production parts. In particular, the work is centered on the following two major areas. First, machine behavior during IFW, which critically determines deformation and heating, had not been well understood to date. To properly characterize IFW machine behavior, a novel method based on torque measurements was developed to measure machine efficiency, i.e. the fraction of the flywheel's initial kinetic energy that contributes to workpiece heating and deformation. The measured efficiency was validated by both simple energy balance calculations and more sophisticated finite element modeling. For the first time, the dependence of efficiency on both process parameters (flywheel size, initial rotational velocity, axial load, and surface roughness) and materials (1018 steel, Low Solvus High Refractory (LSHR) alloy, and Waspaloy) was quantified using the torque-based measurement method. The effect of process parameters on machine efficiency was analyzed to establish simple-to-use yet powerful equations for the selection and optimization of IFW process parameters; however, design criteria such as geometry and material optimization were not addressed. Second, there had been a lack of understanding of bond formation during IFW.
In the present research, an interrupted welding study was developed utilizing purposefully designed dissimilar metal couples to investigate bond formation for this specific material combination. The inertia welding process was interrupted at various times as the flywheel velocity decreased. The fraction of areas with intermixed metals was quantified to reveal bond formation during IFW. The results revealed a relationship between the upset and the fraction of bonded material, which, interestingly, was found to be consistent with that established in the roll-bonding literature. The relationship is critical to studying the bonding mechanism and surface interactions during IFW. Moreover, it is essential for accurately interpreting the modeling results to determine the extent of bonding using the computed strains near the workpiece interface. With this method developed, similar data can now be collected for additional similar and dissimilar material combinations. In summary, in the quest to develop, validate, and execute a modeling framework to study the inertia friction weldability of different alloy systems, particularly Fe- and Ni-base alloys, many new discoveries have been made that enhance the body of knowledge surrounding IFW. The data and trends discussed in this dissertation constitute a physics-based framework for understanding the machine and material behaviors during IFW. Such a physics-based framework is essential to significantly reduce the costly trial-and-error experimentation currently required to successfully and consistently perform the inertia welding of production parts.
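
    The energy balance underlying the torque-based efficiency measurement can be sketched in a few lines. The sketch below uses hypothetical flywheel numbers (moment of inertia, rotational speed, delivered energy); only the relations E0 = ½Iω² and η = E_delivered/E0 come from the abstract.

```python
import math

def flywheel_energy(moment_of_inertia, rpm):
    """Initial kinetic energy of the flywheel, E0 = 1/2 * I * omega^2 (J)."""
    omega = rpm * 2 * math.pi / 60.0  # convert rev/min to rad/s
    return 0.5 * moment_of_inertia * omega ** 2

def machine_efficiency(energy_delivered, moment_of_inertia, rpm):
    """Fraction of E0 that reaches the workpiece as heating and deformation."""
    return energy_delivered / flywheel_energy(moment_of_inertia, rpm)

# Hypothetical values for illustration only.
e0 = flywheel_energy(moment_of_inertia=5.0, rpm=1500)  # J
eta = machine_efficiency(energy_delivered=0.8 * e0,
                         moment_of_inertia=5.0, rpm=1500)
print(round(eta, 2))  # 0.8
```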

  8. Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks

    PubMed Central

    Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo

    2012-01-01

    Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method yield more efficient performance than existing methods. PMID:23202190

  9. Making extreme computations possible with virtual machines

    NASA Astrophysics Data System (ADS)

    Reuter, J.; Chokoufe Nejad, B.; Ohl, T.

    2016-10-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which reduces the size by an order of magnitude. The byte-code is interpreted by a virtual machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm as an extension of the Optimizing Matrix Element Generator (O'Mega). The byte-code matrix elements are available as an alternative input for the event generator WHIZARD. The byte-code interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
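
    The abstract does not describe O'Mega's actual instruction set, but the general idea (replacing compiled amplitude code with compact byte-code run by a small interpreter) can be illustrated with a toy stack machine; all opcodes below are invented for illustration.

```python
# Toy stack-based virtual machine: a program is a list of (opcode, operand)
# pairs. This only illustrates the byte-code-plus-interpreter idea; the real
# O'Mega/WHIZARD byte-code format is not described in the abstract.
PUSH, ADD, MUL = range(3)

def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# Evaluate (2 + 3) * 4 from its byte-code form.
program = [(PUSH, 2), (PUSH, 3), (ADD, None), (PUSH, 4), (MUL, None)]
print(run(program))  # 20
```

The compactness win comes from the fact that one small interpreter serves every process, so only the data-like byte-code grows with multiplicity.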

  10. Physics-informed machine learning for inorganic scintillator discovery

    NASA Astrophysics Data System (ADS)

    Pilania, G.; McClellan, K. J.; Stanek, C. R.; Uberuaga, B. P.

    2018-06-01

    Applications of inorganic scintillators—activated with lanthanide dopants, such as Ce and Eu—are found in diverse fields. As a strict requirement to exhibit scintillation, the 4f ground state (with the electronic configuration [Xe]4f^n 5d^0) and the lowest 5d^1 excited state (with the electronic configuration [Xe]4f^(n-1) 5d^1) levels induced by the activator must lie within the host bandgap. Here we introduce a new machine learning (ML) based search strategy for high-throughput chemical space explorations to discover and design novel inorganic scintillators. Building upon well-known physics-based chemical trends for the host-dependent electron binding energies within the 4f and 5d^1 energy levels of lanthanide ions and available experimental data, the developed ML model—coupled with knowledge of the vacuum-referred valence and conduction band edges computed from first principles—can rapidly and reliably estimate the positions of the activator's energy levels relative to the valence and conduction band edges of any given host chemistry. Using perovskite oxides and elpasolite halides as examples, the presented approach has been demonstrated to be able to (i) capture systematic chemical trends across host chemistries and (ii) effectively screen promising compounds in a high-throughput manner. While a number of other application-specific performance requirements need to be considered for a viable scintillator, the scheme developed here can be a practically useful tool to systematically down-select the most promising candidate materials in a first line of screening for a subsequent in-depth investigation.
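
    The screening criterion stated above (both the activator's 4f ground state and its lowest 5d^1 excited state must lie inside the host bandgap) amounts to a simple filter. A minimal sketch, with hypothetical host band edges and dopant levels on a common energy scale:

```python
def levels_in_gap(e_4f, e_5d1, vbm, cbm):
    """True if both activator levels lie inside the host bandgap (VBM < E < CBM)."""
    return all(vbm < e < cbm for e in (e_4f, e_5d1))

# Hypothetical candidate hosts: (name, VBM, CBM) in eV on a common scale.
hosts = [("host_A", -8.0, -2.0), ("host_B", -7.0, -5.5)]
e_4f, e_5d1 = -6.5, -3.0  # illustrative activator levels, not real Ce data

promising = [name for name, vbm, cbm in hosts
             if levels_in_gap(e_4f, e_5d1, vbm, cbm)]
print(promising)  # ['host_A']
```

In the actual workflow the band edges would come from first-principles calculations and the activator levels from the trained ML model; the final down-select is this same containment test applied across thousands of chemistries.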

  11. Sustainable manufacturing by calculating the energy demand during turning of AISI 1045 steel

    NASA Astrophysics Data System (ADS)

    Nur, R.; Nasrullah, B.; Suyuti, M. A.; Apollo

    2018-01-01

    Sustainable development has become an important issue for many fields, including production, industry, and manufacturing. To achieve sustainable development, industry should be able to carry out production processes that are both sustainable and environmentally friendly. There is therefore a need to minimize the energy demand of the machining process. This paper presents a method for calculating energy consumption in the machining process, particularly turning, obtained by summing the individual energy contributions: the electrical energy consumed during machining preparation, the electrical energy consumed during the cutting process, and the electrical energy required to produce the cutting tool. A case study was performed on dry turning of mild carbon steel using a coated carbide tool. This approach can be used to determine the total electrical energy consumed in a specific machining process. It was concluded that energy consumption increases with higher cutting speed as well as with increased feed rate.
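
    The accounting described above reduces to summing three electrical energy contributions. A minimal sketch, with hypothetical values for a single turning pass:

```python
def cutting_energy(power_w, time_s):
    """Electrical energy of the cutting pass from average power draw and time (J)."""
    return power_w * time_s

def total_machining_energy(e_preparation, e_cutting, e_tool):
    """Total electrical energy (J) = preparation + cutting + cutting-tool production."""
    return e_preparation + e_cutting + e_tool

# Hypothetical values for a single dry-turning pass.
e_cut = cutting_energy(power_w=2500.0, time_s=120.0)  # 300 kJ cutting energy
e_total = total_machining_energy(e_preparation=50_000.0,
                                 e_cutting=e_cut,
                                 e_tool=30_000.0)
print(e_total)  # 380000.0
```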

  12. Energy Savings and Persistence from an Energy Services Performance Contract at an Army Base

    DTIC Science & Technology

    2011-10-01

    control system upgrades, lighting retrofits, vending machine controls, and cooling tower variable frequency drives (VFDs). To accomplish the...controls were installed in the vending machines, and for the 87018 thermal plant, cooling tower VFDs were implemented. To develop baseline models...identify the reasons for improved or deteriorated energy performance of the buildings. For example, periodic submetering of the vending machines

  13. An Intelligent and Interactive Simulation and Tutoring Environment for Exploring and Learning Simple Machines

    NASA Astrophysics Data System (ADS)

    Myneni, Lakshman Sundeep

    Students in middle school science classes have difficulty mastering physics concepts such as energy and work, taught in the context of simple machines. Moreover, students' naive conceptions of physics often remain unchanged after completing a science class. To address this problem, I developed an intelligent tutoring system, called the Virtual Physics System (ViPS), which coaches students through problem solving with one class of simple machines, pulley systems. The tutor uses a unique cognitive approach to teaching simple machines, and includes innovations in three areas. (1) It employs a teaching strategy that focuses on highlighting links among concepts of the domain that are essential for conceptual understanding yet are seldom learned by students. (2) Concepts are taught through a combination of effective human tutoring techniques (e.g., hinting) and simulations. (3) For each student, the system identifies which misconceptions he or she has, from a common set of student misconceptions gathered from domain experts, and tailors tutoring to match the correct line of scientific reasoning regarding the misconceptions. ViPS was implemented as a platform on which students can design and simulate pulley system experiments, integrated with a constraint-based tutor that intervenes to teach and assist students when they make errors during problem solving. ViPS has a web-based client-server architecture, and has been implemented using Java technologies. ViPS differs from existing physics simulations and tutoring systems in several original features. (1) It is the first system to integrate a simulation-based virtual experimentation platform with an intelligent tutoring component. (2) It uses a novel approach, based on Bayesian networks, to help students construct correct pulley systems for experimental simulation.
(3) It identifies student misconceptions based on a novel decision tree applied to student pretest scores, and tailors tutoring to individual students based on detected misconceptions. ViPS has been evaluated through usability and usefulness experiments with undergraduate engineering students taking their first college-level engineering physics course and undergraduate pre-service teachers taking their first college-level physics course. These experiments demonstrated that ViPS is highly usable and effective. Students using ViPS reduced their misconceptions, and students conducting virtual experiments in ViPS learned more than students who conducted experiments with physical pulley systems. Interestingly, it was also found that college students exhibited many of the same misconceptions that have been identified in middle school students.

  14. Basic Machines - The "Nuts and Bolts" of Technical Physics Minicourse, Career Oriented Pre-Technical Physics. Preliminary Edition.

    ERIC Educational Resources Information Center

    Bullock, Bob; And Others

    This minicourse was prepared for use with secondary physics students in the Dallas Independent School District and is one option in a physics program which provides for the selection of topics on the basis of student career needs and interests. This minicourse was aimed at two levels in the study of basic machines. The "light" level…

  15. The International Linear Collider

    NASA Astrophysics Data System (ADS)

    List, Benno

    2014-04-01

    The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.

  16. Issues in Space Physics in Need of Reconnection with Laboratory Physics

    NASA Astrophysics Data System (ADS)

    Coppi, B.

    2017-10-01

    Predicted space observations, such as the ``foot'' in front of collisionless shocks or the occurrence of magnetic reconnection in the Earth's magnetotail leading to auroral substorms, have highlighted the fruitful connection between laboratory and space plasma physics. The emergence of high energy astrophysics has since benefited from the contribution of experiments devised for fusion research to the understanding of issues such as the angular momentum transport processes that play a key role in allowing accretion of matter onto a central object (e.g. a black hole). The theory proposed for the occurrence of spontaneous rotation in toroidal plasmas was suggested by that developed for accretion. Particle density values of ≈10^15 cm^-3, which are estimated to be those of plasmas surrounding known galactic black holes, have in fact been produced by the Alcator and other machines. Collective modes excited in the presence of high energy particle populations in laboratory plasmas (e.g. when the ``slide away'' regime has been produced) have found successful applications in space. Magnetic reconnection theory developments, and the mode-particle resonances associated with them, have led to envisioning new processes for novel high energy particle acceleration. Sponsored in part by the U.S. DoE.

  17. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    NASA Astrophysics Data System (ADS)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance, compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure involving the loss of 16 disks. Finally, both cloud storage systems are demonstrated to function as back-end storage for a filesystem, which is used to deliver high-energy physics software.

  18. Electron Accelerators for Research at the Frontiers of Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartline, Beverly; Grunder, Hermann

    1986-10-01

    Electron accelerators for the frontiers of nuclear physics must provide high duty factor (≥80%) for coincidence measurements; few-hundred-MeV through few-GeV energy for work in the nucleonic, hadronic, and confinement regimes; energy resolution of ~10^-4; and high current (≥100 µA). To fulfill these requirements, new machines and upgrades of existing ones are being planned or constructed. Representative microtron-based facilities are the upgrade of MAMI at the University of Mainz (West Germany), the proposed two-stage cascade microtron at the University of Illinois (U.S.A.), and the three-stage Troitsk ``polytron'' (USSR). Representative projects to add pulse stretcher rings to existing linacs are the upgrades at MIT-Bates (U.S.A.) and at NIKHEF-K (Netherlands). Recent advances in superconducting rf technology, especially in cavity design and fabrication, have made large superconducting cw linacs feasible. Recirculating superconducting cw linacs are under construction.

  19. A Validation Framework for the Long Term Preservation of High Energy Physics Data

    NASA Astrophysics Data System (ADS)

    Ozerov, Dmitri; South, David M.

    2014-06-01

    The study group on data preservation in high energy physics, DPHEP, is moving to a new collaboration structure, which will focus on the implementation of preservation projects, such as those described in the group's large-scale report published in 2012. One such project is the development of a validation framework, which checks the compatibility of evolving computing environments and technologies with the experiments' software for as long as possible, with the aim of substantially extending the lifetime of the analysis software, and hence the usability of the data. The framework is designed to automatically test and validate the software and data of an experiment against changes and upgrades to the computing environment, as well as changes to the experiment software itself. Technically, this is realised using a framework capable of hosting a number of virtual machine images, built with different configurations of operating systems and the relevant software, including any necessary external dependencies.

  20. The New Big Science: What's New, What's Not, and What's the Difference

    NASA Astrophysics Data System (ADS)

    Westfall, Catherine

    2016-03-01

    This talk will start with a brief recap of the development of the ``Big Science'' epitomized by high energy physics, that is, the science that flourished after WWII based on accelerators, teams, and price tags that grew ever larger. I will then explain the transformation that started in the 1980s and culminated in the 1990s when the Cold War ended and the next big machine needed to advance high energy physics, the multi-billion dollar Superconducting Supercollider (SSC), was cancelled. I will go on to outline the curious series of events that ushered in the New Big Science, a form of research well suited to a post-Cold War environment that valued practical rather than esoteric projects. To show the impact of the New Big Science I will describe how decisions were ``set into concrete'' during the development of experimental equipment at the Thomas Jefferson National Accelerator Facility in Newport News, Virginia.

  1. Low energy and high energy dumps for ELI-NP accelerator facility: rationale and Monte Carlo calculations - results

    NASA Astrophysics Data System (ADS)

    Esposito, A.; Frasciello, O.; Pelliccioni, M.

    2017-09-01

    ELI-NP will be a new international research infrastructure facility for laser-based Nuclear Physics to be built in Magurele, south-west of Bucharest, Romania. For the machine to operate as an intense γ-ray source based on Compton back-scattering, electron beams are employed, undergoing a two-stage acceleration to 320 MeV and 740 MeV (and, with an eventual energy upgrade, also to 840 MeV) beam energies. In order to assess the radiation safety issues, concerning the effectiveness of the dumps in absorbing the primary electron beams, the generated prompt radiation field and the residual dose rates coming from the activation of constituent materials, as well as the shielding of the adjacent environments against both prompt and residual radiation fields, an extensive design study by means of Monte Carlo simulations with the FLUKA code was performed, for both the low energy 320 MeV and high energy 720 MeV (840 MeV) beam dumps. For the low energy dump we also discuss the rationale for the choice to place it in the building basement, instead of installing it in one of the shielding walls at the machine level, as originally conceived. Ambient dose equivalent rate constraints, according to the Romanian radiation-protection law in force, were 0.1 µSv/h everywhere outside the shielding walls and 1.4 µSv/h outside the high energy dump area. The dumps' placements and layouts are shown to be fully compliant with the dose constraints and environmental impact.

  2. Using Perturbed Physics Ensembles and Machine Learning to Select Parameters for Reducing Regional Biases in a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Li, S.; Rupp, D. E.; Hawkins, L.; Mote, P.; McNeall, D. J.; Sarah, S.; Wallom, D.; Betts, R. A.

    2017-12-01

    This study investigates the potential to reduce known summer hot/dry biases over the Pacific Northwest in the UK Met Office's atmospheric model (HadAM3P) by simultaneously varying multiple model parameters. The bias-reduction process is done through a series of steps: 1) Generation of a perturbed physics ensemble (PPE) through the volunteer computing network weather@home; 2) Using machine learning to train "cheap" and fast statistical emulators of the climate model, to rule out regions of parameter space that lead to model variants that do not satisfy observational constraints, where the observational constraints (e.g., top-of-atmosphere energy flux, magnitude of annual temperature cycle, summer/winter temperature and precipitation) are introduced sequentially; 3) Designing a new PPE by "pre-filtering" using the emulator results. Steps 1) through 3) are repeated until results are considered to be satisfactory (3 times in our case). The process includes a sensitivity analysis to find dominant parameters for various model output metrics, which reduces the number of parameters to be perturbed with each new PPE. Relative to observational uncertainty, we achieve regional improvements without introducing large biases in other parts of the globe. Our results illustrate the potential of using machine learning to train cheap and fast statistical emulators of the climate model, in combination with PPEs, in systematic model improvement.
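
    The emulator-based pre-filtering in steps 2 and 3 can be sketched as follows; a simple least-squares line stands in for the statistical emulator (the study would typically use something richer, such as a Gaussian process), and all parameter values and flux biases are hypothetical:

```python
# Sketch of emulator pre-filtering: fit a cheap emulator to PPE results,
# then rule out parameter values whose emulated output violates an
# observational constraint. All numbers are hypothetical.

def fit_linear(xs, ys):
    """Least-squares fit y ~ a + b*x, a deliberately cheap stand-in emulator."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Pretend PPE runs: parameter value -> simulated TOA energy flux bias (W/m^2).
params = [0.0, 0.25, 0.5, 0.75, 1.0]
biases = [3.1, 2.2, 1.0, 0.1, -0.9]

a, b = fit_linear(params, biases)
emulate = lambda p: a + b * p  # cheap prediction, no climate model run needed

# Keep only candidate parameter values whose emulated bias passes the constraint.
candidates = [p / 100 for p in range(101)]
plausible = [p for p in candidates if abs(emulate(p)) < 1.0]
print(len(plausible) > 0)  # True
```

The next PPE is then drawn only from the `plausible` region, which is what makes the iteration affordable: the expensive model runs are spent where the emulator says the constraints can still be met.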

  3. Smart Screening System (S3) In Taconite Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daryoush Allaei; Angus Morison; David Tarnowski

    2005-09-01

    The conventional screening machines used in processing plants have had undesirably high noise and vibration levels. They have also had unsatisfactorily low screening efficiency, high energy consumption, high maintenance cost, low productivity, and poor worker safety. These conventional vibrating machines have been used in almost every processing plant. Most current material separation technology uses heavy and inefficient electric motors with an unbalanced rotating mass to generate the shaking. In addition to being excessively noisy, inefficient, and high-maintenance, these vibrating machines are often the bottleneck in the entire process. Furthermore, these motors, along with the vibrating machines and supporting structure, shake other machines and structures in the vicinity. The latter increases maintenance costs while reducing worker health and safety. The conventional vibrating fine screens at taconite processing plants have had the same problems as those listed above. This has resulted in lower screening efficiency, higher energy and maintenance costs, lower productivity, and worker safety concerns. The focus of this work is on the design of a high-performance screening machine suitable for taconite processing plants. SmartScreens™ technology uses miniaturized motors, based on smart materials, to generate the shaking. The underlying technologies are Energy Flow Control™ and Vibration Control by Confinement™. These concepts are used to direct and confine energy flow efficiently and effectively to the screening function. The SmartScreens™ technology addresses problems related to noise and vibration, screening efficiency, productivity, maintenance cost, and worker safety. Successful development of SmartScreens™ technology will bring drastic changes to the screening and physical separation industry. The final designs for key components of the SmartScreens™ system have been developed.
The key components include the smart motor and associated electronics, resonators, and supporting structural elements. It is shown that the smart motors have acceptable life and performance. Resonator (or motion amplifier) designs are selected based on the final system requirements and vibration characteristics. All the components for a fully functional prototype have been fabricated. The development program is on schedule. The last semi-annual report described the process of FE model validation and correlation with experimental data in terms of dynamic performance and predicted stresses. It also detailed efforts toward making the supporting structure less important to system performance. Finally, an introduction to the dry application concept was presented. Since then, the design refinement phase has been completed. This has resulted in a Smart Screen design that meets performance targets both in the dry condition and with taconite slurry flow using PZT motors. Furthermore, this system was successfully demonstrated for the DOE and partner companies at the Coleraine Mineral Research Laboratory in Coleraine, Minnesota.

  4. VIEW EAST-LEFT-BUILDING 2 PHYSICAL TESTING HOUSE (1928) RIGHT-BUILDING 7 MACHINE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW EAST-LEFT-BUILDING 2 PHYSICAL TESTING HOUSE (1928) RIGHT-BUILDING 7 MACHINE SHOP (1901 SECTION) - John A. Roebling's Sons Company & American Steel & Wire Company, South Broad, Clark, Elmer, Mott & Hudson Streets, Trenton, Mercer County, NJ

  5. Effects of machining conditions on the specific cutting energy of carbon fibre reinforced polymer composites

    NASA Astrophysics Data System (ADS)

    Azmi, A. I.; Syahmi, A. Z.; Naquib, M.; Lih, T. C.; Mansor, A. F.; Khalil, A. N. M.

    2017-10-01

    This article presents an approach to evaluate the effects of different machining conditions on the specific cutting energy of carbon fibre reinforced polymer (CFRP) composites. Although research on the machinability of CFRP composites has been very substantial, the existing literature rarely discusses energy consumption and the specific cutting energy. A series of turning experiments was carried out on two different CFRP composites in order to determine the power and specific energy constants and, eventually, to evaluate their variation with machining conditions. A good agreement was found between power and material removal rate using a simple linear relationship. Further analyses revealed that a power-law function best describes the effect of feed rate on the specific cutting energy. At lower feed rates the specific cutting energy rises steeply, reflecting the nature of finishing operations, whereas at higher feed rates the change in specific cutting energy is minimal, reflecting the nature of roughing operations.
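
    The two relationships reported above (power linear in material removal rate, specific cutting energy following a power law in feed rate) can be sketched as follows; the constants are hypothetical, not the fitted values from the paper:

```python
def specific_cutting_energy(power_w, mrr_mm3_s):
    """Specific cutting energy (J/mm^3) = power / material removal rate."""
    return power_w / mrr_mm3_s

def power_law_sce(feed, c=2.5, k=0.4):
    """Hypothetical power-law fit u = C * f^(-k) for the feed-rate effect."""
    return c * feed ** (-k)

# At low feed rate the specific energy is high (finishing regime);
# at high feed rate it flattens out (roughing regime).
low, high = power_law_sce(0.05), power_law_sce(0.4)
print(low > high)  # True
```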

  6. SiD Linear Collider Detector R&D, DOE Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brau, James E.; Demarteau, Marcel

    2015-05-15

    The Department of Energy’s Office of High Energy Physics supported the SiD university detector R&D projects in FY10, FY11, and FY12 with no-cost extensions through February, 2015. The R&D projects were designed to advance the SiD capabilities to address the fundamental questions of particle physics at the International Linear Collider (ILC): • What is the mechanism responsible for electroweak symmetry breaking and the generation of mass? • How do the forces unify? • Does the structure of space-time at small distances show evidence for extra dimensions? • What are the connections between the fundamental particles and forces and cosmology? Silicon detectors are used extensively in SiD and are well-matched to the challenges presented by ILC physics and the ILC machine environment. They are fast, robust against machine-induced background, and capable of very fine segmentation. SiD is based on silicon tracking and silicon-tungsten sampling calorimetry, complemented by powerful pixel vertex detection, and outer hadronic calorimetry and muon detection. Radiation hard forward detectors which can be read out pulse by pulse are required. Advanced calorimetry based on a particle flow algorithm (PFA) provides excellent jet energy resolution. The 5 Tesla solenoid is outside the calorimeter to improve energy resolution. PFA calorimetry requires fine granularity for both electromagnetic and hadronic calorimeters, leading naturally to finely segmented silicon-tungsten electromagnetic calorimetry. Since silicon-tungsten calorimetry is expensive, the detector architecture is compact. Precise tracking is achieved with the large magnetic field and high precision silicon microstrips. An ancillary benefit of the large magnetic field is better control of the e⁺e⁻ pair backgrounds, permitting a smaller radius beampipe and improved impact parameter resolution. Finally, SiD is designed with a cost constraint in mind.
Significant advances and new capabilities have been made and are described in this report.

  7. Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brower, Richard C.

    This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.

  8. Overview and recent results of the Magnetized Shock Experiment (MSX)

    NASA Astrophysics Data System (ADS)

    Weber, T. E.; Smith, R. J.; Hsu, S. C.; Omelchenko, Y.

    2015-11-01

    Recent machine and diagnostics upgrades to the Magnetized Shock Experiment (MSX) at LANL have enabled unprecedented access to the physical processes arising from stagnating magnetized (β ~ 1), collisionless, highly supersonic (M ,MA ~ 10) flows, similar in dimensionless parameters to those found in both space and astrophysical shocks. Hot (100s of eV during translation), dense (1022 - 1023 m-3) Field Reversed Configuration (FRC) plasmoids are accelerated to high velocities (100s of km/s) and subsequently impact against a static target such as a strong parallel or anti-parallel (reconnection-wise) magnetic mirror, a solid obstacle, or neutral gas cloud to recreate the physics of interest with characteristic length and time scales that are both large enough to observe yet small enough to fit within the experiment. Long-lived (>50 μs) stagnated plasmas with density enhancement much greater than predicted by fluid theory (>4x) are observed, accompanied by discontinuous plasma structures indicating shocks and jetting (visible emission and interferometry) and copious >1 keV x-ray emission. An overview of the experimental program will be presented, including machine design and capabilities, diagnostics, and an examination of the physical processes that occur during stagnation against a variety of targets. Supported by the DOE Office of Fusion Energy Sciences under contract DE-AC52-06NA25369.

  9. A non-LTE analysis of high energy density Kr plasmas on Z and NIF

    NASA Astrophysics Data System (ADS)

    Dasgupta, A.; Clark, R. W.; Ouart, N.; Giuliani, J.; Velikovich, A.; Ampleford, D. J.; Hansen, S. B.; Jennings, C.; Harvey-Thompson, A. J.; Jones, B.; Flanagan, T. M.; Bell, K. S.; Apruzese, J. P.; Fournier, K. B.; Scott, H. A.; May, M. J.; Barrios, M. A.; Colvin, J. D.; Kemp, G. E.

    2016-10-01

    Multi-keV X-ray radiation sources have a wide range of applications, from biomedical studies and research on thermonuclear fusion to materials science and astrophysics. The refurbished Z pulsed power machine at the Sandia National Laboratories produces intense multi-keV X-rays from argon Z-pinches, but for a krypton Z-pinch the yield decreases much faster with atomic number ZA than for similar sources on the National Ignition Facility (NIF) laser at the Lawrence Livermore National Laboratory. To investigate whether fundamental energy deposition differences between pulsed power and lasers could account for the yield differences, we consider the Kr plasma on the two machines. The analysis assumes a plasma that is not in local thermodynamic equilibrium, with detailed coupling between the hydrodynamics, the radiation field, and the ionization physics. While for the plasma parameters of interest the details of krypton's M-shell are not crucial, both the L-shell and the K-shell must be modeled in reasonable detail, including the state-specific dielectronic recombination processes that significantly affect Kr's ionization balance and the resulting X-ray spectrum. We present a detailed description of the atomic model, provide synthetic K- and L-shell spectra, and compare these with the available experimental data from the Z-machine and from NIF to show that the K-shell yield behavior versus ZA is indeed related to the energy input characteristics. This work aims to understand the probable causes of the differences in the X-ray conversion efficiencies of several radiation sources on Z and NIF.

  10. ENERGY STAR Certified Vending Machines

    EPA Pesticide Factsheets

    Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Refrigerated Beverage Vending Machines that are effective as of March 1, 2013. A detailed listing of key efficiency criteria is available at

  11. Steps in the bacterial flagellar motor.

    PubMed

    Mora, Thierry; Yu, Howard; Sowa, Yoshiyuki; Wingreen, Ned S

    2009-10-01

    The bacterial flagellar motor is a highly efficient rotary machine used by many bacteria to propel themselves. It has recently been shown that at low speeds its rotation proceeds in steps. Here we propose a simple physical model, based on the storage of energy in protein springs, that accounts for this stepping behavior as a random walk in a tilted corrugated potential that combines torque and contact forces. We argue that the absolute angular position of the rotor is crucial for understanding step properties and show this hypothesis to be consistent with the available data, in particular the observation that backward steps are smaller on average than forward steps. We also predict a sublinear speed versus torque relationship for fixed load at low torque, and a peak in rotor diffusion as a function of torque. Our model provides a comprehensive framework for understanding and analyzing stepping behavior in the bacterial flagellar motor and proposes novel, testable predictions. More broadly, the storage of energy in protein springs by the flagellar motor may provide useful general insights into the design of highly efficient molecular machines.
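    The "random walk in a tilted corrugated potential" picture above can be sketched with a toy Metropolis simulation. All parameter values below are illustrative, not the paper's fitted numbers:

```python
import math
import random

def tilted_potential(theta, torque=1.0, amplitude=1.0, n_wells=26):
    # linear tilt from the motor torque plus a periodic "contact" corrugation
    return -torque * theta + amplitude * math.cos(n_wells * theta)

def simulate(n_moves=50000, dtheta=0.02, beta=1.0, seed=7):
    """Metropolis random walk on the potential; returns the final angle."""
    random.seed(seed)
    theta = 0.0
    for _ in range(n_moves):
        trial = theta + random.choice((-dtheta, dtheta))
        dV = tilted_potential(trial) - tilted_potential(theta)
        # accept downhill moves always, uphill moves with Boltzmann weight
        if dV <= 0.0 or random.random() < math.exp(-beta * dV):
            theta = trial
    return theta

final_angle = simulate()
```

    Because the tilt lowers the potential in the forward direction, forward well-hops are more likely than backward ones, reproducing qualitatively the asymmetry between forward and backward steps described in the abstract.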

  12. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
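    The local-model ingredients listed above (states, event alphabet, initial state, partial transition function, event timing) can be sketched as follows. The two-state "arm" submachine and its event names are hypothetical, invented for illustration:

```python
# hypothetical two-state submachine: a robotic arm that is 'idle' or 'busy'
arm = {
    "states": {"idle", "busy"},
    # event alphabet: event -> (source state, target state, duration)
    "events": {"start": ("idle", "busy", 5.0),
               "finish": ("busy", "idle", 0.0)},
    "initial": "idle",
}

def step(model, state, event):
    """Partial transition function: next state if the event is defined
    in the current state, else None (the event is blocked)."""
    src, dst, _dur = model["events"].get(event, (None, None, None))
    return dst if state == src else None

def run(model, events):
    """Run an event string; returns (final state, total time),
    or None if some event is blocked."""
    state, t = model["initial"], 0.0
    for e in events:
        nxt = step(model, state, e)
        if nxt is None:
            return None
        t += model["events"][e][2]
        state = nxt
    return state, t
```

    For example, `run(arm, ["start", "finish", "start"])` ends in state `"busy"` after 10.0 time units, while `run(arm, ["finish"])` is blocked from the initial state. A global model would be the synchronous product of several such local models.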

  13. Adaptive machine and its thermodynamic costs

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Wang, Q. A.

    2013-03-01

    We study the minimal thermodynamically consistent model of an adaptive machine that transfers particles from a higher chemical potential reservoir to a lower one. This model describes the essentials of inhomogeneous catalysis. It is supposed to function with the maximal current under uncertain chemical potentials: if they change, the machine tunes its own structure, fitting it to the maximal current under the new conditions. This adaptation is possible under two limitations: (i) the degree of freedom that controls the machine's structure has to have stored energy (described via a negative temperature). The origin of this result is traced back to the Le Chatelier principle. (ii) The machine has to malfunction in a constant environment due to structural fluctuations, whose relative magnitude is controlled solely by the stored energy. We argue that several features of the adaptive machine are similar to those of living organisms (energy storage, aging).

  14. The Physics of Ultrabroadband Frequency Comb Generation and Optimized Combs for Measurements in Fundamental Physics

    DTIC Science & Technology

    2016-07-02

    Applications of Bessel beams include superresolution machining: the threshold effect of ablation means that the structure diameter is less than the beam diameter, so fs pulses at 800 nm yield 200 nm structures. Approved for public release: distribution unlimited.

  15. A machine learning approach for predicting the relationship between energy resources and economic development

    NASA Astrophysics Data System (ADS)

    Cogoljević, Dušan; Alizamir, Meysam; Piljan, Ivan; Piljan, Tatjana; Prljić, Katarina; Zimonjić, Stefan

    2018-04-01

    The linkage between energy resources and economic development is a topic of great interest. Research in this area is also motivated by contemporary concerns about global climate change, carbon emissions, fluctuating crude oil prices, and the security of energy supply. The purpose of this research is to develop and apply a machine learning approach to predict gross domestic product (GDP) based on the mix of energy resources. Our results indicate that GDP predictive accuracy can be improved slightly by applying a machine learning approach.
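    A minimal version of "predict GDP from the energy mix" is a regression on energy-resource features. The sketch below uses ordinary least squares (a simpler stand-in for the paper's machine-learning model) on synthetic data; the feature names and coefficients are invented for illustration:

```python
import random

def fit_linear(X, y):
    """Ordinary least squares via the normal equations A w = b,
    solved with Gaussian elimination. An intercept column is added."""
    rows = [[1.0] + list(r) for r in X]
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for col in range(n):                       # elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):             # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def predict(w, x):
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

# synthetic data: GDP = 2*renewables + 0.5*fossil + 10 (illustrative only)
random.seed(0)
data = [(random.uniform(0, 50), random.uniform(0, 100)) for _ in range(40)]
gdp = [2.0 * r + 0.5 * f + 10.0 for r, f in data]
w = fit_linear(data, gdp)
```

    On this noiseless toy data the fit recovers the generating coefficients exactly; real energy-economics data would of course require the richer models the paper discusses.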

  16. Application of machine learning techniques to lepton energy reconstruction in water Cherenkov detectors

    NASA Astrophysics Data System (ADS)

    Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.

    2018-04-01

    The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.
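    The lookup-table baseline versus machine-learning contrast drawn above can be illustrated on a toy detector where observed charge is proportional to lepton energy. The binning and the k-nearest-neighbour regressor below are illustrative stand-ins, not the TITUS reconstruction:

```python
import bisect

def lookup_estimator(charges, energies, n_bins=10):
    """Lookup-table baseline: mean energy in equal-width charge bins."""
    lo, hi = min(charges), max(charges)
    edges = [lo + (hi - lo) * i / n_bins for i in range(1, n_bins)]
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for q, e in zip(charges, energies):
        i = bisect.bisect_right(edges, q)
        sums[i] += e
        counts[i] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return lambda q: means[bisect.bisect_right(edges, q)]

def knn_estimator(charges, energies, k=5):
    """Simple machine-learning stand-in: k-nearest-neighbour regression."""
    pairs = sorted(zip(charges, energies))
    def estimate(q):
        nearest = sorted(pairs, key=lambda p: abs(p[0] - q))[:k]
        return sum(e for _, e in nearest) / k
    return estimate

# toy data: charge equals energy, so the ideal estimate at q is q itself
charges = list(range(100))
energies = [float(q) for q in charges]
table = lookup_estimator(charges, energies)
knn = knn_estimator(charges, energies)
```

    Even on this trivial data the coarse table is biased toward the bin mean (e.g. it returns 54.5 for a charge of 50), while the local regressor returns 50.0, which is the kind of resolution gain the abstract reports at a much larger scale.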

  17. School Vending Machine Purchasing Behavior: Results from the 2005 YouthStyles Survey

    ERIC Educational Resources Information Center

    Thompson, Olivia M.; Yaroch, Amy L.; Moser, Richard P.; Rutten, Lila J. Finney; Agurs-Collins, Tanya

    2010-01-01

    Background: Competitive foods are often available in school vending machines. Providing youth with access to school vending machines, and thus competitive foods, is of concern, considering the continued high prevalence of childhood obesity: competitive foods tend to be energy dense and nutrient poor and can contribute to increased energy intake in…

  18. Changes in the physical status of the typical and leached chernozems of Kursk oblast within 40 years

    NASA Astrophysics Data System (ADS)

    Kuznetsova, I. V.

    2013-04-01

    The changes in the physical properties of the chernozems in the Central Russian province of the forest-steppe zone (Kursk oblast) from 1964 to 2002 are analyzed in relation to the corresponding changes in agrotechnology, agroeconomy, and agroecology. Three periods of soil transformation are distinguished. The first period was characterized by the use of machines exerting relatively low pressure on the soil and by a dynamic equilibrium between the physical state of the soils and the processes of humification-mineralization of the soil organic matter. The use of power-intensive machines in the next period resulted in greater soil compaction and negative changes in the soil physical properties. At the same time, the physical properties of the chernozems remained close to optimum on fields where heavy machines were not used. The third period was characterized by the use of heavy machines, by reduced rates of organic and mineral fertilization, and by disturbances of the crop rotation systems caused by economic difficulties. The negative trends in the soil physical properties observed during the preceding period continued.

  19. FINAL REPORT. DOE Grant Award Number DE-SC0004062

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiesa, Luisa

    With the support of the DOE-OFES Early Career Award and Tufts startup support, the PI has developed experimental and analytical expertise in the electromechanical characterization of Low Temperature Superconductor (LTS) and High Temperature Superconductor (HTS) wires for high magnetic field applications. These superconducting wires and cables are used in fusion and high-energy physics magnet applications. In a short period of time, the PI has built a laboratory and research group with unique capabilities that include both experimental and numerical modeling efforts to improve the design and performance of superconducting cables and magnets. All the projects in the PI's laboratory explore the fundamental electromechanical behavior of superconductors, but the types of materials, geometries, and operating conditions are chosen to be directly relevant to real machines, in particular fusion machines like ITER.

  20. Agreement Between Institutional Measurements and Treatment Planning System Calculations for Basic Dosimetric Parameters as Measured by the Imaging and Radiation Oncology Core-Houston

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, James R.; Followill, David S.; Imaging and Radiation Oncology Core-Houston, The University of Texas Health Science Center-Houston, Houston, Texas

    Purpose: To compare radiation machine measurement data collected by the Imaging and Radiation Oncology Core at Houston (IROC-H) with institutional treatment planning system (TPS) values and to identify parameters with large differences in agreement; the findings will help institutions focus their efforts to improve the accuracy of their TPS models. Methods and Materials: Between 2000 and 2014, IROC-H visited more than 250 institutions and conducted independent measurements of machine dosimetric data points, including percentage depth dose, output factors, off-axis factors, multileaf collimator small fields, and wedge data. We compared these data with the institutional TPS values for the same points by energy, class, and parameter to identify differences and similarities, using criteria involving both the medians and standard deviations for Varian linear accelerators. Distributions of differences between machine measurements and institutional TPS values were generated for basic dosimetric parameters. Results: On average, intensity modulated radiation therapy-style and stereotactic body radiation therapy-style output factors and upper physical wedge output factors were the most problematic. Percentage depth dose, jaw output factors, and enhanced dynamic wedge output factors agreed best between the IROC-H measurements and the TPS values. Although small differences were shown between 2 common TPS systems, neither was superior to the other. Parameter agreement was constant over time from 2000 to 2014. Conclusions: Differences in basic dosimetric parameters between machine measurements and TPS values vary widely depending on the parameter, although agreement does not seem to vary by TPS and has not changed over time. Intensity modulated radiation therapy-style output factors, stereotactic body radiation therapy-style output factors, and upper physical wedge output factors had the largest disagreement and should be carefully modeled to ensure accuracy.

  1. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
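    The "adaptive scheduling of virtual machines on hypervisor hosts" mentioned above can be sketched as a greedy capacity-aware placement. This is an illustrative toy, not the INFN-Naples management scripts; host names, VM names and sizes are invented:

```python
def place_vms(vms, hosts):
    """Greedy sketch of adaptive VM scheduling: each VM goes to the
    hypervisor with the most free RAM, largest VMs placed first."""
    free = dict(hosts)                                   # host -> free RAM (GB)
    placement = {}
    for name, ram in sorted(vms, key=lambda v: -v[1]):   # biggest VMs first
        host = max(free, key=free.get)                   # least-loaded host
        if free[host] < ram:
            raise RuntimeError("no hypervisor has capacity for " + name)
        free[host] -= ram
        placement[name] = host
    return placement

# hypothetical cluster and workload
hosts = {"hv1": 32, "hv2": 32}
vms = [("web", 16), ("db", 24), ("cache", 8)]
plan = place_vms(vms, hosts)
```

    In production the same idea would be driven by live metrics and combined with live migration and automated restart on hypervisor failure, as the abstract describes.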

  2. RIP-REMOTE INTERACTIVE PARTICLE-TRACER

    NASA Technical Reports Server (NTRS)

    Rogers, S. E.

    1994-01-01

    Remote Interactive Particle-tracing (RIP) is a distributed-graphics program which computes particle traces for computational fluid dynamics (CFD) solution data sets. A particle trace is a line which shows the path a massless particle in a fluid will take; it is a visual image of where the fluid is going. The program is able to compute and display particle traces at a speed of about one trace per second because it runs on two machines concurrently. The data used by the program is contained in two files. The solution file contains data on density, momentum and energy quantities of a flow field at discrete points in three-dimensional space, while the grid file contains the physical coordinates of each of the discrete points. RIP requires two computers. A local graphics workstation interfaces with the user for program control and graphics manipulation, and a remote machine interfaces with the solution data set and performs time-intensive computations. The program utilizes two machines in a distributed mode for two reasons. First, the data to be used by the program is usually generated on the supercomputer. RIP avoids having to convert and transfer the data, eliminating any memory limitations of the local machine. Second, as computing the particle traces can be computationally expensive, RIP utilizes the power of the supercomputer for this task. Although the remote site code was developed on a CRAY, it is possible to port this to any supercomputer class machine with a UNIX-like operating system. Integration of a velocity field from a starting physical location produces the particle trace. The remote machine computes the particle traces using the particle-tracing subroutines from PLOT3D/AMES, a CFD post-processing graphics program available from COSMIC (ARC-12779). These routines use a second-order predictor-corrector method to integrate the velocity field. Then the remote program sends graphics tokens to the local machine via a remote-graphics library. 
The local machine interprets the graphics tokens and draws the particle traces. The program is menu driven. RIP is implemented on the Silicon Graphics IRIS 3000 (local workstation) with the IRIX operating system and on the CRAY2 (remote station) with the UNICOS 1.0 or 2.0 operating system. The IRIS 4D can be used in place of the IRIS 3000. The program is written in C (67%) and FORTRAN 77 (43%) and has an IRIS memory requirement of 4 MB. The remote and local stations must use the same user ID. PLOT3D/AMES unformatted data sets are required for the remote machine. The program was developed in 1988.
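    The second-order predictor-corrector integration of the velocity field mentioned above is essentially Heun's method. A minimal 2-D sketch (not the PLOT3D/AMES routines themselves):

```python
import math

def trace(velocity, start, dt=0.05, n_steps=200):
    """Particle trace by second-order predictor-corrector (Heun)
    integration of a 2-D velocity field (x, y) -> (u, v)."""
    x, y = start
    path = [start]
    for _ in range(n_steps):
        u1, v1 = velocity(x, y)
        xp, yp = x + dt * u1, y + dt * v1        # predictor (Euler step)
        u2, v2 = velocity(xp, yp)
        x = x + 0.5 * dt * (u1 + u2)             # corrector (trapezoidal)
        y = y + 0.5 * dt * (v1 + v2)
        path.append((x, y))
    return path

# example: solid-body rotation v = (-y, x); the trace should stay on a circle
circle = trace(lambda x, y: (-y, x), (1.0, 0.0))
r_final = math.hypot(*circle[-1])
```

    On the rotation field the second-order scheme keeps the radius close to 1 over many steps, where a first-order Euler trace would spiral outward noticeably.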

  3. The East, the West and the universal machine.

    PubMed

    Marchal, Bruno

    2017-12-01

    After reviewing the basics of the theology of Universal Numbers/Machines, as detailed in Marchal (2007), I illustrate how that body of thought might be used to shed some light upon the apparent dichotomy in Eastern/Western spirituality. This paper relies entirely on my previous interdisciplinary work in mathematical logic, computer science and machine theology, where "theology" is used in the sense of Plato: it is the truth, or the "truth-theory" (in the sense of logicians), about a machine that the machine can either deduce from some of its primitive beliefs or intuit in some sense that is eventually made clear through the modal logic of machine self-reference. Such a theology appears to be testable, because it has been shown that physics must necessarily be retrieved from it when we assume the mechanist hypothesis in the cognitive sciences, and in a unique, precise (introspective) way, so that we need only compare the physics of the introspective machine with the physics inferred from human observation; up to now, it is the only theory known to fit both the existence of personal "consciousness" (undoubtable yet unjustifiable truth) and quanta and quantum relationships (Marchal, 1998; Marchal, 2004; Marchal, 2013; Marchal, 2015). Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. 10 CFR 431.294 - Uniform test method for the measurement of energy consumption of refrigerated bottled or canned...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... consumption of refrigerated bottled or canned beverage vending machines. 431.294 Section 431.294 Energy... method for the measurement of energy consumption of refrigerated bottled or canned beverage vending... test procedure for energy consumption of refrigerated bottled or canned beverage vending machines shall...

  5. Children's Beliefs about the Fantasy/Reality Status of Hypothesized Machines

    ERIC Educational Resources Information Center

    Cook, Claire; Sobel, David M.

    2011-01-01

    Four-year-olds, 6-year-olds, and adults were asked to make judgments about the reality status of four different types of machines: real machines that children and adults interact with on a daily basis, real machines that children and adults interact with rarely (if at all), and impossible machines that violated a real-world physical or biological…

  6. Increasing energy efficiency level of building production based on applying modern mechanization facilities

    NASA Astrophysics Data System (ADS)

    Prokhorov, Sergey

    2017-10-01

    The building industry is currently going through hard times. The cost of operating machines and mechanisms in construction and installation work accounts for a substantial share of total building construction expenses. A highly efficient method is therefore needed that not only increases production but also reduces the direct costs of operating the machine fleet and improves its energy efficiency. To achieve this goal we plan to use modern work-production methods, high-tech and energy-saving machine tools and technologies, and optimal mechanization sets. The optimization criteria are the prime cost of operation and the efficiency of the set. In solving this task we conclude that mechanization studies and energy audits, juxtaposed with production figures, prime costs, and energy-resource expenditures, make it possible to assemble a well-matched machine fleet, improve environmental performance, and increase the quality of construction and installation work.

  7. AFM surface imaging of AISI D2 tool steel machined by the EDM process

    NASA Astrophysics Data System (ADS)

    Guu, Y. H.

    2005-04-01

    The surface morphology, surface roughness and micro-cracks of AISI D2 tool steel machined by the electrical discharge machining (EDM) process were analyzed by means of the atomic force microscopy (AFM) technique. Experimental results indicate that the surface texture after EDM is determined by the discharge energy during processing. An excellent machined finish can be obtained by setting the machine parameters at a low pulse energy. The surface roughness and the depth of the micro-cracks were proportional to the power input. Furthermore, the information the AFM yielded about the depth of the micro-cracks is particularly important for the post-treatment of AISI D2 tool steel machined by EDM.

  8. The Fluid Foil: The Seventh Simple Machine

    ERIC Educational Resources Information Center

    Mitts, Charles R.

    2012-01-01

    A simple machine does one of two things: create a mechanical advantage (lever) or change the direction of an applied force (pulley). Fluid foils are unique among simple machines because they not only change the direction of an applied force (wheel and axle); they convert fluid energy into mechanical energy (wind and Kaplan turbines) or vice versa,…

  9. Single-shot high aspect ratio bulk nanostructuring of fused silica using chirp-controlled ultrafast laser Bessel beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhuyan, M. K.; Velpula, P. K.; Colombier, J. P.

    2014-01-13

    We report single-shot, high aspect ratio nanovoid fabrication in bulk fused silica using zeroth order chirp-controlled ultrafast laser Bessel beams. We identify a unique laser pulse length and energy dependence of the physical characteristics of machined structures over which nanovoids of diameter in the range 200–400 nm and aspect ratios exceeding 1000 can be fabricated. A mechanism based on the axial energy deposition of nonlinear ultrashort Bessel beams and subsequent material densification or rarefaction in fused silica is proposed, intricating the non-diffractive nature with the diffusing character of laser-generated free carriers. Fluid flow through nanochannel is also demonstrated.

  10. Optimal Control of Induction Machines to Minimize Transient Energy Losses

    NASA Astrophysics Data System (ADS)

    Plathottam, Siby Jose

    Induction machines are electromechanical energy conversion devices comprised of a stator and a rotor. Torque is generated by the interaction between the rotating magnetic field from the stator and the current induced in the rotor conductors. Their speed and torque output can be precisely controlled by manipulating the magnitude, frequency, and phase of the three input sinusoidal voltage waveforms. Their ruggedness, low cost, and high efficiency have made them a ubiquitous component of nearly every industrial application. Thus, even a small improvement in their energy efficiency tends to yield large electrical energy savings over the lifetime of the machine. Hence, increasing energy efficiency (reducing energy losses) in induction machines is a constrained optimization problem that has attracted attention from researchers. The energy conversion efficiency of induction machines depends on both the speed-torque operating point and the input voltage waveform. It also depends on whether the machine is in the transient or steady state. Maximizing energy efficiency during steady state is a static optimization problem that has been extensively studied, with commercial solutions available. On the other hand, improving energy efficiency during transients is a dynamic optimization problem that is sparsely studied. This dissertation focuses exclusively on improving energy efficiency during transients. It treats the transient energy loss minimization problem as an optimal control problem which consists of a dynamic model of the machine and a cost functional. The rotor-field-oriented, current-fed model of the induction machine is selected as the dynamic model. The rotor speed and rotor d-axis flux are the state variables in the dynamic model. The stator currents, referred to as the d- and q-axis currents, are the control inputs.
A cost functional is proposed that assigns a cost both to the energy losses in the induction machine and to deviations from the desired speed-torque-magnetic flux setpoints. Using Pontryagin's minimum principle, a set of necessary conditions that must be satisfied by the optimal control trajectories is derived. The conditions take the form of a two-point boundary value problem that can be solved numerically. The conjugate gradient method, modified using the Hestenes-Stiefel formula, was used to obtain the numerical solution of both the control and state trajectories. Using the distinctive shape of the numerical trajectories as inspiration, analytical expressions were derived for the state and control trajectories. It was shown that the trajectory could be fully described by finding the solution of a one-dimensional optimization problem. The sensitivity of both the optimal trajectory and the optimal energy efficiency to different induction machine parameters was analyzed. A non-iterative solution that can use feedback for generating optimal control trajectories in real time was explored. It was found that an artificial neural network could be trained on the numerical solutions and made to emulate the optimal control trajectories with a high degree of accuracy. Hence a neural network along with supervisory logic was implemented and used in a real-time simulation to control the finite element method model of the induction machine. The results were compared with three other control regimes, and the optimal control system was found to have the highest energy efficiency for the same drive cycle.
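    The structure of the problem, a cost on control effort (standing in for resistive losses) plus a penalty on missing the speed setpoint, can be illustrated on a drastically simplified plant. This toy uses plain gradient descent on the control sequence rather than the dissertation's conjugate-gradient two-point BVP solve, and `dw/dt = u` is not the induction-machine dynamics; all parameters are invented:

```python
def optimal_transient(n=20, dt=0.05, w_target=1.0, r=1.0, q=100.0,
                      iters=2000, lr=0.05):
    """Minimize J = r*sum(u_k^2) + q*(w_N - w_target)^2 for the toy
    plant w_{k+1} = w_k + dt*u_k, by gradient descent on u."""
    u = [0.0] * n
    for _ in range(iters):
        w_final = sum(dt * uk for uk in u)          # final speed from w_0 = 0
        lam = 2.0 * q * (w_final - w_target)        # terminal-cost gradient
        # dJ/du_k = 2*r*u_k + lam*dt, identical structure for every k
        u = [uk - lr * (2.0 * r * uk + lam * dt) for uk in u]
    return u

u_opt = optimal_transient()
w_final = sum(0.05 * uk for uk in u_opt)
```

    By symmetry the optimum spreads the effort evenly, u_k = q*dt*w_target / (r + q*n*dt^2) = 5/6 here, which the iteration converges to; the real problem replaces this flat profile with the shaped trajectories the dissertation derives.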

  11. Design and performance studies of a hadronic calorimeter for a FCC-hh experiment

    NASA Astrophysics Data System (ADS)

    Faltova, J.

    2018-03-01

    The hadron-hadron Future Circular Collider (FCC-hh) project studies the physics reach of a proton-proton machine with a centre-of-mass energy of 100 TeV and peak luminosities five times greater than at the High-Luminosity LHC (HL-LHC). The high-energy regime of the FCC-hh opens new opportunities for the discovery of physics beyond the standard model. At 100 TeV a large fraction of the W, Z, H bosons and top quarks are produced with a significant boost, which requires efficient reconstruction of very energetic objects decaying hadronically. The reconstruction of these boosted objects sets the calorimeter performance requirements in terms of energy resolution, containment of highly energetic hadron showers, and high transverse granularity. We present the current baseline technologies for the calorimeter system in the barrel region of the FCC-hh reference detector: a liquid-argon electromagnetic calorimeter and a scintillator-steel hadronic calorimeter. The focus of this paper is on the hadronic calorimeter and the performance studies for hadrons. The reconstruction of single particles and the achieved energy resolution for the combined system of the electromagnetic and hadronic calorimeters are discussed.
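    Calorimeter energy resolution of the kind studied above is conventionally parameterized as sigma/E = a/sqrt(E) added in quadrature with a constant term b. The sketch below shows the parameterization and how a and b can be recovered from sampled resolutions, since (sigma/E)^2 is linear in 1/E; the values a = 0.5 and b = 0.03 are illustrative, not FCC-hh results:

```python
import math

def resolution(E, a=0.5, b=0.03):
    """sigma/E = a/sqrt(E) (+) b in quadrature; E in GeV.
    a: stochastic term, b: constant term (illustrative values)."""
    return math.sqrt(a * a / E + b * b)

def fit_terms(energies, res):
    """Recover (a, b): (sigma/E)^2 = a^2*(1/E) + b^2 is a straight line,
    so a simple least-squares line fit in x = 1/E suffices."""
    xs = [1.0 / E for E in energies]
    ys = [r * r for r in res]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.sqrt(slope), math.sqrt(intercept)

Es = [10, 20, 50, 100, 200]
a_fit, b_fit = fit_terms(Es, [resolution(E) for E in Es])
```

    The quadrature form captures why the stochastic term dominates at low energy while the constant term (calibration, non-uniformity, leakage) limits the resolution for the highly energetic showers the abstract emphasizes.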

  12. Calibration of raw accelerometer data to measure physical activity: A systematic review.

    PubMed

    de Almeida Mendes, Márcio; da Silva, Inácio C M; Ramires, Virgílio V; Reichert, Felipe F; Martins, Rafaela C; Tomasi, Elaine

    2018-03-01

    Most calibration studies based on accelerometry have relied on count-based analyses. In contrast, calibration studies based on raw acceleration signals are relatively recent and their evidence is still incipient. The aim of the current study was to systematically review the literature in order to summarize the methodological characteristics and results of raw-data calibration studies. The review was conducted up to May 2017 using four databases: PubMed, Scopus, SPORTDiscus and Web of Science. The methodological quality of the included studies was evaluated using the Landis and Koch guidelines. Initially, 1669 titles were identified and, after assessing titles, abstracts and full articles, 20 studies were included. All studies were conducted in high-income countries, most of them with relatively small samples and specific population groups. Physical activity protocols differed among studies, and indirect calorimetry was the most commonly used criterion measure. High mean values of sensitivity, specificity and accuracy were observed for the intensity thresholds of cut-point-based studies (93.7%, 91.9% and 95.8%, respectively). The most frequent statistical approach was machine-learning-based modelling, in which the mean coefficient of determination for predicting physical activity energy expenditure was 0.70. Regarding the recognition of physical activity types, the mean values of accuracy for sedentary, household and locomotive activities were 82.9%, 55.4% and 89.7%, respectively. In conclusion, considering the construct of physical activity that each approach assesses, linear regression, machine-learning and cut-point-based approaches all presented promising validity parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
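
A cut-point-based intensity classification of raw acceleration data, as used by several of the reviewed studies, can be sketched as follows. The ENMO summary metric and the milli-g thresholds below are illustrative assumptions, not values taken from the review.

```python
import numpy as np

# Illustrative intensity cut-points in milli-g (ascending); real studies
# derive such thresholds empirically against a criterion measure.
CUT_POINTS = {"sedentary": 0.0, "light": 50.0, "moderate": 100.0, "vigorous": 400.0}

def enmo(acc, g=1.0):
    """Euclidean Norm Minus One of an (n, 3) raw acceleration array (in g),
    with negative values clipped to zero."""
    return np.maximum(np.linalg.norm(acc, axis=1) - g, 0.0)

def classify_epochs(acc, fs=30, epoch_s=5):
    """Average ENMO over fixed-length epochs and map each epoch to the
    highest cut-point it reaches."""
    e = enmo(acc) * 1000.0                       # g -> milli-g
    samples = fs * epoch_s
    n = len(e) // samples
    means = e[: n * samples].reshape(n, samples).mean(axis=1)
    labels = []
    for m in means:
        label = "sedentary"
        for name, threshold in CUT_POINTS.items():  # dicts keep insertion order
            if m >= threshold:
                label = name
        labels.append(label)
    return means, labels
```

A device held still with one axis along gravity yields ENMO of zero (sedentary), while a sustained 1.2 g signal maps to the 200 milli-g ("moderate") band under these assumed thresholds.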

  13. Thermodynamic work from operational principles

    NASA Astrophysics Data System (ADS)

    Gallego, R.; Eisert, J.; Wilming, H.

    2016-10-01

    In recent years we have witnessed a concentrated effort to make sense of thermodynamics for small-scale systems. One of the main difficulties is to capture a suitable notion of work that realistically models the purpose of quantum machines, in a way analogous to the role played, for macroscopic machines, by the energy stored in the idealisation of a lifted weight. Despite several attempts to resolve this issue by putting forward specific models, these are far from realistically capturing the transitions that a quantum machine is expected to perform. In this work, we adopt a novel strategy by considering arbitrary kinds of systems that one can attach to a quantum thermal machine and defining work quantifiers. These are functions that measure the value of a transition and generalise the concept of work beyond the models familiar from phenomenological thermodynamics. We do so by imposing simple operational axioms that any reasonable work quantifier must fulfil and by deriving from them stringent mathematical conditions with a clear physical interpretation. Our approach allows us to derive much of the structure of the theory of thermodynamics without taking the definition of work as a primitive. We can derive, for any work quantifier, a quantitative second law in the sense of bounding the work that can be performed using some non-equilibrium resource by the work that is needed to create it. We also discuss in detail the role of reversibility and correlations in connection with the second law. Furthermore, we recover the usual identification of work with energy in degrees of freedom with vanishing entropy as a particular case of our formalism. Our mathematical results can be formulated abstractly and are general enough to carry over to resource theories other than quantum thermodynamics.

  14. Machinability of lithium disilicate glass ceramic in in vitro dental diamond bur adjusting process.

    PubMed

    Song, Xiao-Fei; Ren, Hai-Tao; Yin, Ling

    2016-01-01

    Esthetic high-strength lithium disilicate glass ceramics (LDGC) are used for monolithic crowns and bridges produced in dental CAD/CAM and intraoral adjusting processes, and their machinability affects restorative quality. A machinability study was carried out in simulated oral clinical machining of LDGC with a dental handpiece and diamond burs, covering diamond tool wear and chip control, machining forces and energy, and surface finish and integrity. Machining forces, speeds and energy in in vitro dental adjusting of LDGC were measured with a high-speed data acquisition and force-sensor system. Machined LDGC surfaces were assessed using three-dimensional non-contact chromatic confocal optical profilometry and scanning electron microscopy (SEM). Diamond bur morphology and LDGC chip shapes were also examined using SEM. Minimal tool wear but significant LDGC chip accumulation was found. Machining forces and energy depended significantly on machining conditions (p<0.05) and were significantly higher than for other glass ceramics (p<0.05). Machining speeds dropped more rapidly with increased removal rates than for other glass ceramics (p<0.05). Two material machinability indices associated with hardness, Young's modulus and fracture toughness were derived from the normal force-removal rate relations; these ranked LDGC the most difficult to machine among glass ceramics. Surface roughness of machined LDGC was comparable to that of other glass ceramics. The removal mechanisms of LDGC were dominated by penetration-induced brittle fracture and shear-induced plastic deformation. Unlike most other glass ceramics, distinct intergranular and transgranular fractures of lithium disilicate crystals were found in LDGC. This research provides fundamental data for dental clinicians on the machinability of LDGC in intraoral adjustments. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Dependency of the Reynolds number on the water flow through the perforated tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Závodný, Zdenko, E-mail: zdenko.zavodny@stuba.sk; Bereznai, Jozef, E-mail: jozef.bereznai@stuba.sk; Urban, František

    Safe and effective loading of nuclear reactor fuel assemblies demands qualitative and quantitative analysis of the relationship between the coolant temperature at the fuel assembly outlet, measured by a thermocouple, and the mean coolant temperature profile in the thermocouple plane. It is not possible to perform the analysis directly in the reactor, so it is carried out using measurements on a physical model together with CFD models of the fuel assembly coolant flow. The CFD models have to be verified and validated against the temperature and velocity profiles obtained from measurements of the cooling water flowing in the physical model of the fuel assembly. A simplified physical model with a perforated central tube and its validated CFD model serve as the basis for the design of a second physical model of the fuel assembly of the nuclear reactor VVER 440. The physical model will be manufactured and installed in the laboratory of the Institute of Energy Machines, Faculty of Mechanical Engineering of the Slovak University of Technology in Bratislava.

  16. Learning Simple Machines through Cross-Age Collaborations

    ERIC Educational Resources Information Center

    Lancor, Rachael; Schiebel, Amy

    2008-01-01

    In this project, introductory college physics students (noneducation majors) were asked to teach simple machines to a class of second graders. This nontraditional activity proved to be a successful way to encourage college students to think critically about physics and how it applied to their everyday lives. The noneducation majors benefited by…

  17. Support vector machines classifiers of physical activities in preschoolers

    USDA-ARS?s Scientific Manuscript database

    The goal of this study is to develop, test, and compare multinomial logistic regression (MLR) and support vector machines (SVM) in classifying preschool-aged children physical activity data acquired from an accelerometer. In this study, 69 children aged 3-5 years old were asked to participate in a s...

  18. Safety of stationary grinding machines - impact resistance of work zone enclosures.

    PubMed

    Mewes, Detlef; Adler, Christian

    2017-09-01

    Guards on machine tools are intended to protect persons from being injured by parts ejected with high kinetic energy from the work zone of the machine. Stationary grinding machines are a typical example. Generally such machines are provided with abrasive product guards closely enveloping the grinding wheel. However, many machining tasks do not allow the use of abrasive product guards. In such cases, the work zone enclosure has to be dimensioned so that, in case of failure, grinding wheel fragments remain inside the machine's working zone. To obtain data for the dimensioning of work zone enclosures on stationary grinding machines, which must be operated without an abrasive product guard, burst tests were conducted with vitrified grinding wheels. The studies show that, contrary to widely held opinion, narrower grinding wheels can be more critical concerning the impact resistance than wider wheels although their fragment energy is smaller.

  19. Research in Chinese-English Machine Translation. Final Report.

    ERIC Educational Resources Information Center

    Wang, William S-Y.; And Others

    This report documents results of a two-year effort toward the study and investigation of the design of a prototype system for Chinese-English machine translation in the general area of physics. Previous work in Chinese-English machine translation is reviewed. Grammatical considerations in machine translation are discussed and detailed aspects of…

  20. Special electrical machines: Sources and converters of energy

    NASA Astrophysics Data System (ADS)

    Bertinov, A. I.; But, D. A.; Miziurin, S. R.; Alievskii, B. L.; Sineva, N. V.

    The principles underlying the operation of electromechanical and dynamic energy converters are discussed, along with those for the direct conversion of solar, thermal, and chemical energy into electrical energy. The theory for electromechanical and dynamic converters is formulated using a generalized model for the electromechanical conversion of energy. Particular attention is given to electrical machinery designed for special purposes. Features of superconductor electrical machines are discussed.

  1. Reduced physical activity and risk of chronic disease: the biology behind the consequences.

    PubMed

    Booth, Frank W; Laye, Matthew J; Lees, Simon J; Rector, R Scott; Thyfault, John P

    2008-03-01

    This review focuses on three preserved, ancient, biological mechanisms (physical activity, insulin sensitivity, and fat storage). Genes in humans and rodents were selected in an environment of high physical activity that favored an optimization of aerobic metabolic pathways to conserve energy for a potential, future food deficiency. Today machines and other technologies have replaced much of the physical activity that selected optimal gene expression for energy metabolism. Distressingly, the negative by-product of a lack of ancient physical activity levels in our modern civilization is an increased risk of chronic disease. We have been employing a rodent wheel-lock model to approximate the reduction in physical activity in humans from the level under which genes were selected to a lower level observed in modern daily functioning. Thus far, two major changes have been identified when rats undertaking daily, natural voluntary running on wheels experience an abrupt cessation of the running (wheel lock model). First, insulin sensitivity in the epitrochlearis muscle of rats falls to sedentary values after 2 days of the cessation of running, confirming the decline to sedentary values in whole-body insulin sensitivity when physically active humans stop high levels of daily exercise. Second, visceral fat increases within 1 week after rats cease daily running, confirming the plasticity of human visceral fat. This review focuses on the supporting data for the aforementioned two outcomes. Our primary goal is to better understand how a physically inactive lifestyle initiates maladaptations that cause chronic disease.

  2. Effect of Machining Parameters on Oxidation Behavior of Mild Steel

    NASA Astrophysics Data System (ADS)

    Majumdar, P.; Shekhar, S.; Mondal, K.

    2015-01-01

    This study aims to find a correlation between machining parameters, the resultant microstructure, and the isothermal oxidation behavior of lathe-machined mild steel in the temperature range of 660-710 °C. The tool rake angles "α" used were +20°, 0°, and -20°, and the cutting speeds used were 41, 232, and 541 mm/s. Under isothermal conditions, non-machined and machined mild steel samples follow parabolic oxidation kinetics with activation energies of 181 and ~400 kJ/mol, respectively. Exaggerated grain growth of the machined surface was observed, whereas the center of the machined sample showed minimal grain growth during oxidation at higher temperatures. Grain growth on the surface was attributed to the reduction, during high-temperature oxidation, of the strain energy that had accumulated in the sub-surface region during machining. It was also observed that the characteristic surface oxide controlled the oxidation behavior of the machined samples. This study clearly demonstrates the effect of equivalent strain, roughness, and grain size due to machining, and of subsequent grain growth, on the oxidation behavior of mild steel.
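
Activation energies like those reported here are typically extracted from parabolic rate constants via an Arrhenius fit, which can be illustrated on synthetic data. The pre-exponential factor and temperature grid below are assumptions; only the 181 kJ/mol figure comes from the abstract.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_activation_energy(temps_K, k_p):
    """Recover the activation energy Q from parabolic rate constants via a
    linear fit of ln(k_p) against 1/T:  ln k_p = ln k0 - Q/(R*T)."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(np.asarray(k_p)), 1)
    return -slope * R  # J/mol

# Synthetic data: assumed pre-exponential factor, Q set to the 181 kJ/mol
# reported for the non-machined samples, temperatures spanning ~660-710 C.
Q_true = 181e3
temps = np.array([933.0, 958.0, 983.0])          # kelvin
k_vals = 1e-3 * np.exp(-Q_true / (R * temps))    # parabolic rate constants
```

With noise-free synthetic rate constants the fit recovers Q essentially exactly; with measured data the same slope estimate carries the experimental scatter.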

  3. A study of energy consumption in turning process using lubrication of nanoparticles enhanced coconut oil (NECO)

    NASA Astrophysics Data System (ADS)

    Mansor, A. F.; Zakaria, M. S.; Azmi, A. I.; Khalil, A. N. M.; Musa, N. A.

    2017-10-01

    Cutting fluids play a very important role in machining, increasing tool life, improving surface finish, and reducing energy consumption. Compared with petrochemical and synthetic cutting fluids, vegetable-oil-based lubricants are safer for operators, environmentally friendly, and increasingly popular in industrial applications. This paper aims to determine the benefit of using a vegetable oil (coconut oil) enhanced with nanoparticles (CuO) as a lubricant with respect to energy consumption during machining. The energy was measured for each run of a two-level factorial experimental layout. The results show that the nanoparticle-enhanced lubricant can reduce energy consumption during the machining process.

  4. Improving wave forecasting by integrating ensemble modelling and machine learning

    NASA Astrophysics Data System (ADS)

    O'Donncha, F.; Zhang, Y.; James, S. C.

    2017-12-01

    Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
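
The learning-aggregation step described above, in which past forecasts and observations yield a weight per ensemble member, might look like the following minimal sketch. The inverse-RMSE weighting is one simple choice of aggregation rule, assumed here for illustration, not necessarily the one used in the study.

```python
import numpy as np

def aggregate_forecasts(forecasts_hist, obs_hist, forecasts_now, eps=1e-9):
    """Combine ensemble members into one best-estimate forecast, weighting
    each member by the inverse of its past RMSE against observations.

    forecasts_hist : (n_members, n_past) past forecasts per member
    obs_hist       : (n_past,) matching observations
    forecasts_now  : (n_members,) current forecasts to aggregate
    """
    errors = forecasts_hist - obs_hist               # broadcast over members
    rmse = np.sqrt((errors ** 2).mean(axis=1))
    weights = 1.0 / (rmse + eps)                     # better past skill -> larger weight
    weights /= weights.sum()
    return weights @ forecasts_now
```

A member that tracked past observations closely dominates the weighted sum, so the aggregate leans toward historically skillful models rather than averaging all members equally.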

  5. Physical Properties of Nyamplung Oil (Calophyllum inophyllum L) for Biodiesel Production

    NASA Astrophysics Data System (ADS)

    Dewang, Syamsir; Suriani; Hadriani, Siti; Bannu; Abdullah, B.

    2017-05-01

    The worldwide energy crisis caused by excessive energy consumption drives the search for alternative energy sources to meet energy requirements. The use of energy from environmentally friendly plant-based materials is one effort to help communities meet national energy needs. Processing of Nyamplung (Calophyllum inophyllum L) oil involves drying and pressing to produce crude oil. A degumming process is then performed to remove the sap contained in the oil. The next step is to reduce the free fatty acid (FFA) content to below 2%, since FFA can corrode the engine during use. Density measurements were performed, for different catalyst reaction times, to verify that the oil meets international standards. A density of 0.92108 g/cm3 was obtained after 3 hours of transesterification, and the best yield, 98.2%, was measured after 2 hours of stirring during transesterification.

  6. Characterizing Slow Slip Applying Machine Learning

    NASA Astrophysics Data System (ADS)

    Hulbert, C.; Rouet-Leduc, B.; Bolton, D. C.; Ren, C. X.; Marone, C.; Johnson, P. A.

    2017-12-01

    Over the last two decades it has become apparent from strain and GPS measurements that slow slip on earthquake faults is a widespread phenomenon. Slow slip is also inferred from small-amplitude seismic signals known as tremor and low-frequency earthquakes (LFEs) and has been reproduced in laboratory studies, providing useful physical insight into the frictional properties associated with the behavior. From such laboratory studies we ask whether we can obtain quantitative information regarding the physics of friction from only the recorded continuous acoustic data originating from the fault zone. We show that by applying machine learning to the acoustic signal, we can infer upcoming slow slip failure initiation as well as slip termination, and that we can also infer the magnitudes by a second machine learning procedure based on predicted inter-event times. We speculate that by applying this or other machine learning approaches to continuous seismic data, new information regarding the physics of faulting could be obtained.

  7. A non-LTE analysis of high energy density Kr plasmas on Z and NIF

    DOE PAGES

    Dasgupta, A.; Clark, R. W.; Ouart, N.; ...

    2016-10-20

    We report that multi-keV X-ray radiation sources have a wide range of applications, from biomedical studies and research on thermonuclear fusion to materials science and astrophysics. The refurbished Z pulsed power machine at the Sandia National Laboratories produces intense multi-keV X-rays from argon Z-pinches, but for a krypton Z-pinch the yield decreases much faster with atomic number Z A than for similar sources on the National Ignition Facility (NIF) laser at the Lawrence Livermore National Laboratory. To investigate whether fundamental energy deposition differences between pulsed power and lasers could account for the yield differences, we consider the Kr plasma on the two machines. The analysis assumes a plasma not in local thermodynamic equilibrium, with a detailed coupling between the hydrodynamics, the radiation field, and the ionization physics. While for the plasma parameters of interest the details of krypton's M-shell are not crucial, both the L-shell and the K-shell must be modeled in reasonable detail, including the state-specific dielectronic recombination processes that significantly affect Kr's ionization balance and the resulting X-ray spectrum. We present a detailed description of the atomic model, provide synthetic K- and L-shell spectra, and compare these with the available experimental data from the Z-machine and from NIF to show that the K-shell yield behavior versus Z A is indeed related to the energy input characteristics. In conclusion, this work aims at understanding the probable causes that might explain the differences in the X-ray conversion efficiencies of several radiation sources on Z and NIF.

  9. An experimental investigation of pulsed laser-assisted machining of AISI 52100 steel

    NASA Astrophysics Data System (ADS)

    Panjehpour, Afshin; Soleymani Yazdi, Mohammad R.; Shoja-Razavi, Reza

    2014-11-01

    Grinding and hard turning are widely used for machining of hardened bearing steel parts. Laser-assisted machining (LAM) has emerged as an efficient alternative to grinding and hard turning for hardened steel parts. In most cases, continuous-wave lasers were used as a heat source to cause localized heating prior to material removal by a cutting tool. In this study, an experimental investigation of pulsed laser-assisted machining of AISI 52100 bearing steel was conducted. The effects of process parameters (i.e., laser mean power, pulse frequency, pulse energy, cutting speed and feed rate) on state variables (i.e., material removal temperature, specific cutting energy, surface roughness, microstructure, tool wear and chip formation) were investigated. At laser mean power of 425 W with frequency of 120 Hz and cutting speed of 70 m/min, the benefit of LAM was shown by 25% decrease in specific cutting energy and 18% improvement in surface roughness, as compared to those of the conventional machining. It was shown that at constant laser power, the increase of laser pulse energy causes the rapid increase in tool wear rate. Pulsed laser allowed efficient control of surface temperature and heat penetration in material removal region. Examination of the machined subsurface microstructure and microhardness profiles showed no change under LAM and conventional machining. Continuous chips with more uniform plastic deformation were produced in LAM.

  10. Homopolar machine for reversible energy storage and transfer systems

    DOEpatents

    Stillwagon, Roy E.

    1978-01-01

    A homopolar machine designed to operate as a generator and motor in reversibly storing and transferring energy between the machine and a magnetic load coil for a thermo-nuclear reactor. The machine rotor comprises hollow thin-walled cylinders or sleeves which form the basis of the system by utilizing substantially all of the rotor mass as a conductor thus making it possible to transfer substantially all the rotor kinetic energy electrically to the load coil in a highly economical and efficient manner. The rotor is divided into multiple separate cylinders or sleeves of modular design, connected in series and arranged to rotate in opposite directions but maintain the supply of current in a single direction to the machine terminals. A stator concentrically disposed around the sleeves consists of a hollow cylinder having a number of excitation coils each located radially outward from the ends of adjacent sleeves. Current collected at an end of each sleeve by sleeve slip rings and brushes is transferred through terminals to the magnetic load coil. Thereafter, electrical energy returned from the coil then flows through the machine which causes the sleeves to motor up to the desired speed in preparation for repetition of the cycle. To eliminate drag on the rotor between current pulses, the brush rigging is designed to lift brushes from all slip rings in the machine.

  11. Homopolar machine for reversible energy storage and transfer systems

    DOEpatents

    Stillwagon, Roy E.

    1981-01-01

    A homopolar machine designed to operate as a generator and motor in reversibly storing and transferring energy between the machine and a magnetic load coil for a thermo-nuclear reactor. The machine rotor comprises hollow thin-walled cylinders or sleeves which form the basis of the system by utilizing substantially all of the rotor mass as a conductor thus making it possible to transfer substantially all the rotor kinetic energy electrically to the load coil in a highly economical and efficient manner. The rotor is divided into multiple separate cylinders or sleeves of modular design, connected in series and arranged to rotate in opposite directions but maintain the supply of current in a single direction to the machine terminals. A stator concentrically disposed around the sleeves consists of a hollow cylinder having a number of excitation coils each located radially outward from the ends of adjacent sleeves. Current collected at an end of each sleeve by sleeve slip rings and brushes is transferred through terminals to the magnetic load coil. Thereafter, electrical energy returned from the coil then flows through the machine which causes the sleeves to motor up to the desired speed in preparation for repetition of the cycle. To eliminate drag on the rotor between current pulses, the brush rigging is designed to lift brushes from all slip rings in the machine.

  12. Selected topics in particle accelerators: Proceedings of the CAP meetings. Volume 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1995-10-01

    This Report includes copies of transparencies and notes from the presentations made at the Center for Accelerator Physics at Brookhaven National Laboratory. Editing and changes to the authors' contributions in this Report were made only to fulfill the publication requirements. This volume includes notes and transparencies on nine presentations: ``The Energy Exchange and Efficiency Consideration in Klystrons``, ``Some Properties of Microwave RF Sources for Future Colliders + Overview of Microwave Generation Activity at the University of Maryland``, ``Field Quality Improvements in Superconducting Magnets for RHIC``, ``Hadronic B-Physics``, ``Spiking Pulses from Free Electron Lasers: Observations and Computational Models``, ``Crystalline Beams in Circular Accelerators``, ``Accumulator Ring for AGS & Recent AGS Performance``, ``RHIC Project Machine Status``, and ``Gamma-Gamma Colliders.``

  13. European organization for nuclear research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenbacher, H.; Tavlet, M.

    1987-09-10

    The CERN Intersecting Storage Rings (ISR) operated from 1971 to 1984. During that time high-energy physics experiments were carried out with 30 GeV colliding proton beams. At the end of this period the machine was decommissioned and dismantled. This involved the movement of about 1000 machine elements, e.g., magnets, vacuum pumps, rf cavities, etc., 2500 racks, 7000 shielding blocks, 3500 km of cables and 7 km of beam piping. All these items were considered to be radioactive until the contrary was proven. They were then sorted, either for storage and reuse or as radioactive or non-radioactive waste. The paper describes the radiation protection surveillance of this project, which lasted for five months. It includes the radiation protection standards, the control of personnel and materials, typical radioactivity levels and isotopes, as well as the final cleaning and decommissioning of an originally restricted radiation area to a freely accessible area.

  14. Hard permanent magnet development trends and their application to A.C. machines

    NASA Technical Reports Server (NTRS)

    Mildrum, H. F.

    1981-01-01

    The physical and magnetic properties of Mn-Al-C, Fe-Cr-Co, and RE-TM (rare earth-transition metal intermetallics) in polymer-bonded, soft-metal-bonded, or sintered form are considered for a.c. machine usage. The manufacturing processes for the magnetic materials are reviewed, and the mechanical and electrical properties of the magnetic materials are compared, with consideration given to the reference Alnico magnet. The Mn-Al-C magnets have the same magnetic properties and costs as Alnico units and operate well at low temperatures, but have poor high-temperature performance. Fe-Cr-Co magnets also have cost comparable to Alnico magnets and operate at high or low temperature, but are brittle and contain Co. RE-Co magnets possess a high energy density and operate well over a wide temperature range, but are expensive. Recommendations for exploring rare-earth alternatives are offered.

  15. Machine Learning Prediction of the Energy Gap of Graphene Nanoflakes Using Topological Autocorrelation Vectors.

    PubMed

    Fernandez, Michael; Abreu, Jose I; Shi, Hongqing; Barnard, Amanda S

    2016-11-14

    The possibility of band gap engineering in graphene opens countless new opportunities for application in nanoelectronics. In this work, the energy gaps of 622 computationally optimized graphene nanoflakes were mapped to topological autocorrelation vectors using machine learning techniques. Machine learning modeling revealed that the most relevant correlations appear at topological distances in the range of 1 to 42 with prediction accuracy higher than 80%. The data-driven model can statistically discriminate between graphene nanoflakes with different energy gaps on the basis of their molecular topology.
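
A topological (Moreau-Broto style) autocorrelation vector of the kind used as input features above can be sketched from a molecular graph's adjacency matrix. The atomic property values and maximum distance below are illustrative assumptions.

```python
import numpy as np
from collections import deque

def topo_distances(adj):
    """All-pairs shortest-path lengths of an unweighted graph via BFS."""
    n = len(adj)
    dist = np.full((n, n), -1, dtype=int)
    for s in range(n):
        dist[s, s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and dist[s, v] < 0:
                    dist[s, v] = dist[s, u] + 1
                    queue.append(v)
    return dist

def autocorrelation_vector(adj, props, max_d=5):
    """Autocorrelation features: AC(d) sums p_i * p_j over all unordered
    atom pairs at topological distance d (self-pairs for d = 0)."""
    dist = topo_distances(adj)
    props = np.asarray(props, dtype=float)
    pairs = np.outer(props, props)
    ac = np.zeros(max_d + 1)
    for d in range(max_d + 1):
        if d == 0:
            ac[d] = (props ** 2).sum()
        else:
            ac[d] = pairs[dist == d].sum() / 2.0   # each pair appears twice in the matrix
    return ac
```

Such fixed-length vectors can be fed to any regression model, which is presumably how graph topology was mapped to energy gaps; the relevant distance range (1 to 42 in the abstract) simply sets `max_d`.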

  16. A new wind energy conversion system

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.

    1975-01-01

    It is presupposed that vertical-axis wind energy machines will be superior to horizontal-axis machines on a power output/cost basis, and the design of a new wind energy machine is presented. The design employs cones with sharp lips and smooth surfaces to maximize drag and minimize skin friction. The cones are mounted on a vertical axis in such a way as to assist torque development. Storing wind energy as compressed air is considered optimal for three reasons: (1) the efficiency of compression is fairly high compared to the conversion of mechanical energy to electrical energy in storage batteries; (2) the release of stored energy through an air motor has high efficiency; and (3) design, construction, and maintenance of an all-mechanical system is usually simpler than for a mechanical-to-electrical conversion system.

  17. Disinfection of sewage wastewater and sludge by electron treatment

    NASA Astrophysics Data System (ADS)

    Trump, J. G.; Merrill, E. W.; Wright, K. A.

The use of machine-accelerated electrons to disinfect sewage wastewater and sludge is discussed. The method is shown to be practical and energy-efficient for the broad-spectrum disinfection of pathogenic organisms in municipal wastewaters and in the sludge removed from them. Studies of biological, chemical and physical effects are reported. Electron treatment is suggested as an alternative to chlorination for the disinfection of municipal liquid wastes. Disposal of sewage sludge is recommended as an agricultural resource by subsurface land injection, or as a nutrient for fish populations by widespread ocean dispersal.

  18. Integration of Openstack cloud resources in BES III computing cluster

    NASA Astrophysics Data System (ADS)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

Cloud computing provides a new technical means for the data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and resource usage is static. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  19. Wear of carbide inserts with complex surface treatment when milling nickel alloy

    NASA Astrophysics Data System (ADS)

    Fedorov, Sergey; Swe, Min Htet; Kapitanov, Alexey; Egorov, Sergey

    2018-03-01

One effective way of strengthening hard alloys is to create structured layers on their surface with a gradient distribution of physical and mechanical properties between the wear-resistant coating and the base material. The article discusses the influence of the near-surface layer, which is modified by low-energy high-current electron-beam alloying, and of the upper anti-friction layer in a multi-component coating on the wear mechanism of replaceable multifaceted plates in the dry milling of difficult-to-machine nickel alloys.

  20. Ground Fault Overvoltage With Inverter-Interfaced Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ropp, Michael; Hoke, Anderson; Chakraborty, Sudipta

    Ground Fault Overvoltage can occur in situations in which a four-wire distribution circuit is energized by an ungrounded voltage source during a single phase to ground fault. The phenomenon is well-documented with ungrounded synchronous machines, but there is considerable discussion about whether inverters cause this phenomenon, and consequently whether inverters require effective grounding. This paper examines the overvoltages that can be supported by inverters during single phase to ground faults via theory, simulation and experiment, identifies the relevant physical mechanisms, quantifies expected levels of overvoltage, and makes recommendations for optimal mitigation.

  1. On the role of exchange of power and information signals in control and stability of the human-robot interaction

    NASA Technical Reports Server (NTRS)

    Kazerooni, H.

    1991-01-01

A human's ability to perform physical tasks is limited, not only by his intelligence, but by his physical strength. If, in an appropriate environment, a machine's mechanical power is closely integrated with a human arm's mechanical power under the control of the human intellect, the resulting system will be superior to a loosely integrated combination of a human and a fully automated robot. Therefore, we must develop a fundamental solution to the problem of 'extending' human mechanical power. The work presented here defines 'extenders' as a class of robot manipulators worn by humans to increase human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. The human, in physical contact with the extender, exchanges power and information signals with the extender. The aim is to determine the fundamental building blocks of an intelligent controller, a controller which allows interaction between humans and a broad class of computer-controlled machines via simultaneous exchange of both power and information signals. The prevalent trend in automation has been to physically separate the human from the machine, so the human must always send information signals via an intermediary device (e.g., joystick, pushbutton, light switch). Extenders, however, are perfect examples of self-powered machines that are built and controlled for the optimal exchange of power and information signals with humans. The human wearing the extender is in physical contact with the machine, so power transfer is unavoidable, and information signals from the human help to control the machine. Commands are transferred to the extender via the contact forces and the EMG signals between the wearer and the extender. The extender augments human motor ability without accepting any explicit commands: it accepts the EMG signals and the contact force between the person's arm and the extender, and the extender 'translates' them into a desired position.
In this unique configuration, mechanical power transfer between the human and the extender occurs because the human is pushing against the extender. The extender transfers to the human's hand, in feedback fashion, a scaled-down version of the actual external load which the extender is manipulating. This natural feedback force on the human's hand allows him to 'feel' a modified version of the external forces on the extender. The information signals from the human (e.g., EMG signals) to the computer reflect human cognitive ability, and the power transfer between the human and the machine (e.g., physical interaction) reflects human physical ability. Thus the information transfer to the machine augments cognitive ability, and the power transfer augments motor ability. These two actions are coupled through the human cognitive/motor dynamic behavior. The goal is to derive the control rules for a class of computer-controlled machines that augment human physical and cognitive abilities in certain manipulative tasks.

  2. Exploring Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Wales, David J.

    2018-04-01

    Recent advances in the potential energy landscapes approach are highlighted, including both theoretical and computational contributions. Treating the high dimensionality of molecular and condensed matter systems of contemporary interest is important for understanding how emergent properties are encoded in the landscape and for calculating these properties while faithfully representing barriers between different morphologies. The pathways characterized in full dimensionality, which are used to construct kinetic transition networks, may prove useful in guiding such calculations. The energy landscape perspective has also produced new procedures for structure prediction and analysis of thermodynamic properties. Basin-hopping global optimization, with alternative acceptance criteria and generalizations to multiple metric spaces, has been used to treat systems ranging from biomolecules to nanoalloy clusters and condensed matter. This review also illustrates how all this methodology, developed in the context of chemical physics, can be transferred to landscapes defined by cost functions associated with machine learning.
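Basin-hopping, mentioned above, alternates a random perturbation, a local minimization, and a Metropolis accept/reject test on the minimized energies. The following minimal sketch is illustrative only (the 1D double-well landscape, the crude numeric-gradient local minimizer, and all parameter values are invented for this example):

```python
import math
import random

def basin_hopping(f, x0, n_hops=200, step=2.0, temp=1.0, seed=0):
    """Basin-hopping: perturb, locally minimize, accept via Metropolis criterion."""
    rng = random.Random(seed)

    def local_min(x, lr=1e-2, iters=2000, h=1e-6):
        # crude gradient descent with a central-difference numeric derivative
        for _ in range(iters):
            g = (f(x + h) - f(x - h)) / (2 * h)
            x -= lr * g
        return x

    x = local_min(x0)
    best_x, best_f = x, f(x)
    for _ in range(n_hops):
        cand = local_min(x + rng.uniform(-step, step))
        df = f(cand) - f(x)
        if df < 0 or rng.random() < math.exp(-df / temp):
            x = cand                      # hop accepted between basins
            if f(x) < best_f:
                best_x, best_f = x, f(x)
    return best_x, best_f

# Double-well landscape: local minimum near x = +0.96, global minimum near -1.04.
f = lambda v: (v * v - 1) ** 2 + 0.3 * v
x, fx = basin_hopping(f, x0=1.0)
print(round(x, 2))  # lands in the global basin, near -1.04
```

Because acceptance is decided on the *minimized* energies, the walk effectively moves on the transformed, staircase-like landscape of basin bottoms, which is what lets it escape the starting well.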

  3. Uncertainty analysis of absorbed dose calculations from thermoluminescence dosimeters.

    PubMed

    Kirby, T H; Hanson, W F; Johnston, D A

    1992-01-01

Thermoluminescence dosimeters (TLD) are widely used to verify absorbed doses delivered from radiation therapy beams. Specifically, they are used by the Radiological Physics Center for mailed dosimetry for verification of therapy machine output. The effects of the random experimental uncertainties of various factors on dose calculations from TLD signals are examined, including fading, dose-response nonlinearity, and energy-response corrections, as well as the reproducibility of TL signal measurements and TLD reader calibration. Individual uncertainties are combined to estimate the total uncertainty due to random fluctuations. The Radiological Physics Center's (RPC) mail-out TLD system, utilizing throwaway LiF powder to monitor high-energy photon and electron beam outputs, is analyzed in detail. The technique may also be applicable to other TLD systems. It is shown that statements of +/- 2% dose uncertainty and a +/- 5% action criterion for TLD dosimetry are reasonable when related to uncertainties in the dose calculations, provided the standard deviation (s.d.) of TL readings is 1.5% or better.
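Combining independent random uncertainties into a total, as described in this record, is conventionally done in quadrature. A minimal sketch follows; the per-factor values are illustrative placeholders, not the RPC's published figures:

```python
import math

def combined_uncertainty(rel_sds):
    """Combine independent random relative uncertainties (in %) in quadrature."""
    return math.sqrt(sum(s * s for s in rel_sds))

# Illustrative per-factor relative standard deviations, in percent
factors = {
    "TL reading reproducibility": 1.5,
    "fading correction": 0.8,
    "dose-response nonlinearity": 0.7,
    "energy response correction": 1.0,
    "reader calibration": 0.9,
}
total = combined_uncertainty(factors.values())
print(round(total, 2))  # 2.28
```

With individual contributions around 1% to 1.5%, the quadrature total lands near 2%, which is how a +/- 2% (1 s.d.) dose-uncertainty statement can be consistent with several separate correction factors.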

  4. Cybersecurity and Optimization in Smart “Autonomous” Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mylrea, Michael E.; Gourisetti, Sri Nikhil Gup

Significant resources have been invested in making buildings “smart” by digitizing, networking and automating key systems and operations. Smart autonomous buildings create new energy efficiency, economic and environmental opportunities. But as buildings become increasingly networked to the Internet, they can also become more vulnerable to various cyber threats. Automated and Internet-connected building systems, equipment, controls, and sensors can significantly increase cyber and physical vulnerabilities that threaten the confidentiality, integrity, and availability of critical systems in organizations. Securing smart autonomous buildings presents a national security and economic challenge to the nation. Ignoring this challenge threatens business continuity and the availability of critical infrastructures that are enabled by smart buildings. In this chapter, the authors address challenges and explore new opportunities in securing smart buildings that are enhanced by machine learning, cognitive sensing, artificial intelligence (AI) and smart-energy technologies. The chapter begins by identifying cyber-threats and challenges to smart autonomous buildings. Then it provides recommendations on how AI-enabled solutions can help smart buildings and facilities better protect, detect and respond to cyber-physical threats and vulnerabilities. Next, the chapter provides case studies that examine how combining AI with innovative smart-energy technologies can increase both cybersecurity and energy efficiency savings in buildings. The chapter concludes by proposing recommendations for future cybersecurity and energy optimization research examining AI-enabled smart-energy technology.

  5. Protein structure modeling for CASP10 by multiple layers of global optimization.

    PubMed

    Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2014-02-01

In the template-based modeling (TBM) category of the CASP10 experiment, we introduced a new protocol called the protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of the TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance-restraint terms of Lorentzian type (derived from multiple templates) and new terms that combine physical energy terms such as the dynamic fragment assembly (DFA) energy, the DFIRE statistical potential, a hydrogen-bonding term, etc. These physical energy terms are expected to guide the structure modeling, especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on the random forest machine-learning algorithm to screen templates, multiple alignments, and final models. For the TBM targets of CASP10, we find that, due to the combination of the three stages of CSA global optimization and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than those of the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richter, B.

In this paper I have reviewed the possibilities for new colliders that might be available in the 1990's. One or more new proton colliders should be available in the late 1990s, based on the plans of Europe, the US, and the USSR. The two very high energy machines, LHC and SSC, are quite expensive, and their construction will be decided more by the politicians' view of the availability of resources than by the physicists' view of the need for new machines. Certainly something will be built, but the question is when. New electron colliders beyond LEP II could be available in the late 1990's as well. Most of the people who have looked at this problem believe that at a minimum three years of R&D are required before a proposal can be made, two years will be required to convince the authorities to go ahead, and five years will be required to build such a machine. Thus the earliest time a new electron collider at high energy could be available is around 1998. A strong international R&D program will be required to meet that schedule. In the field of B factories, PSI's proposal is the first serious step beyond the capabilities of CESR. There are other promising techniques, but these need more R&D. The least R&D would be required for the asymmetric storage ring systems, while the most would be required for high luminosity linear colliders. For the next decade, high energy physics will be doing its work at the high energy frontier with Tevatron I and II, UNK, SLC, LEP I and II, and HERA. The opportunities for science presented by experiments at these facilities are very great, and it is to be hoped that the pressure for funding to construct the next generation facilities will not badly affect the operating budgets of the ones we now have or which will soon be turning on. 9 refs., 12 figs., 6 tabs.

  7. Energy Survey of Machine Tools: Separating Power Information of the Main Transmission System During Machining Process

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Liu, Fei; Hu, Shaohua; Yin, Zhenbiao

The major power information of the main transmission system in machine tools (MTSMT) during the machining process includes the effective output power (i.e. cutting power), the input power and power loss of the mechanical transmission system, and the main motor power loss. This information is easy to obtain in the laboratory but difficult to evaluate during a manufacturing process. To solve this problem, a separation method is proposed here to extract the MTSMT power information during the machining process. In this method, the energy flow and the mathematical models of the major power information of MTSMT during the machining process are set up first. Based on the mathematical models and basic data tables obtained from experiments, the above-mentioned power information can then be separated simply by measuring the real-time total input power of the spindle motor. The operating procedure of this method is also given.
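The separation idea can be illustrated with a simplified model. The additive loss form below (no-load power plus a load loss proportional to cutting power) is a common simplification in the machine-tool energy literature, not necessarily this paper's exact model, and the calibration numbers are hypothetical:

```python
def separate_cutting_power(p_input, p_idle, alpha):
    """
    Separate effective cutting power from the measured spindle input power,
    under the assumed (simplified) loss model:
        P_input = P_idle + P_cut + alpha * P_cut
    where P_idle is the no-load power at the current spindle speed (from a
    calibration table) and alpha * P_cut models the additional load loss.
    Solving for P_cut gives:
    """
    return (p_input - p_idle) / (1.0 + alpha)

# Hypothetical calibration: idle power 1200 W, load-loss coefficient 0.12,
# and a measured real-time input power of 2320 W during cutting.
p_cut = separate_cutting_power(p_input=2320.0, p_idle=1200.0, alpha=0.12)
print(round(p_cut, 1))  # 1000.0
```

In this picture, the "basic data tables obtained from experiments" correspond to measured values of the no-load power and the load-loss coefficient across spindle speeds, after which only the total input power needs to be measured online.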

  8. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    PubMed Central

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. In the first stage, it executes the genetic algorithm in parallel and distributed fashion on several selected physical hosts. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is taken as the optimum of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
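The flavor of the GA stage can be conveyed with a toy single-population sketch. The two-stage distribution across hosts, the QoS terms, and the energy model of the actual DPGA are omitted here, and all names and parameters are invented for illustration:

```python
import random

def ga_vm_placement(vm_cpu, host_cap, pop=40, gens=120, seed=1):
    """
    Toy genetic algorithm for VM placement: chromosome[i] = host assigned to
    VM i; fitness heavily penalizes capacity violations and rewards using
    fewer powered-on hosts (a proxy for energy cost).
    """
    rng = random.Random(seed)
    n_vm, n_host = len(vm_cpu), len(host_cap)

    def fitness(ch):
        load = [0.0] * n_host
        for vm, h in enumerate(ch):
            load[h] += vm_cpu[vm]
        overload = sum(max(0.0, load[h] - host_cap[h]) for h in range(n_host))
        active = sum(1 for h in range(n_host) if load[h] > 0)
        return -(1000.0 * overload + active)  # higher is better

    population = [[rng.randrange(n_host) for _ in range(n_vm)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vm)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # mutation: move one VM
                child[rng.randrange(n_vm)] = rng.randrange(n_host)
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)

# 6 VMs onto 4 hosts of capacity 5; a good packing needs only 2 active hosts.
best, score = ga_vm_placement([2, 2, 2, 1, 1, 1], [5, 5, 5, 5])
print(score)  # -(active hosts) when feasible; -2.0 for an optimal 2-host packing
```

A feasible solution (no overload) has a score between -4 and -1; the severe overload penalty drives the population toward consolidated, capacity-respecting placements.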

  9. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    PubMed

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) of placement strategy for virtual machines deployment on cloud platform. In the first stage, it executes the genetic algorithm in parallel and distributed fashion on several selected physical hosts. Then it continues to execute the genetic algorithm of the second stage with solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is taken as the optimum of the proposed approach. The experimental results show that the proposed placement strategy of VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.

  10. 48 CFR 908.7117 - Tabulating machine cards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Tabulating machine cards. 908.7117 Section 908.7117 Federal Acquisition Regulations System DEPARTMENT OF ENERGY COMPETITION... Tabulating machine cards. DOE offices shall acquire tabulating machine cards in accordance with FPMR 41 CFR...

  11. 48 CFR 908.7117 - Tabulating machine cards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Tabulating machine cards. 908.7117 Section 908.7117 Federal Acquisition Regulations System DEPARTMENT OF ENERGY COMPETITION... Tabulating machine cards. DOE offices shall acquire tabulating machine cards in accordance with FPMR 41 CFR...

  12. Solving a Higgs optimization problem with quantum annealing for machine learning.

    PubMed

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-18

The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using detailed but imperfect simulations of the physical processes involved, which often results in incorrect labelling of background processes or signals (label noise) and in systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
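The classical-annealing baseline can be illustrated with a generic simulated-annealing minimizer for a small Ising instance. The Hamiltonian below is a toy ferromagnet with a bias field, not the weak-classifier correlation model constructed in the paper, and all names are invented for illustration:

```python
import math
import random

def simulated_annealing_ising(J, h, steps=5000, t0=2.0, t1=0.01, seed=0):
    """Minimize E(s) = sum_{i<j} J[i][j]*s_i*s_j + sum_i h[i]*s_i, s_i in {-1,+1},
    by single-spin-flip Metropolis moves under a geometric cooling schedule."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(s):
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        return e

    e = energy(s)
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)   # geometric cooling from t0 to t1
        i = rng.randrange(n)
        s[i] = -s[i]                        # propose a single spin flip
        e_new = energy(s)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                       # accept (Metropolis criterion)
        else:
            s[i] = -s[i]                    # reject: undo the flip
    return s, e

# 3-spin toy instance: ferromagnetic couplings plus a field favoring all-up.
J = [[0, -1, -1], [0, 0, -1], [0, 0, 0]]
h = [-0.5, 0.0, 0.0]
s, e = simulated_annealing_ising(J, h)
print(s, e)  # typically the ground state [1, 1, 1] with energy -3.5
```

In the paper's setting the binary variables select weak classifiers and the couplings encode their correlations; here the same Metropolis machinery is simply applied to a hand-built instance with a known ground state.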

  13. Solving a Higgs optimization problem with quantum annealing for machine learning

    NASA Astrophysics Data System (ADS)

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-01

The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using detailed but imperfect simulations of the physical processes involved, which often results in incorrect labelling of background processes or signals (label noise) and in systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, V.L.; Wiezcorek, M.A.

This report gives the results of the environmental activities and monitoring programs at the Princeton Plasma Physics Laboratory (PPPL) for CY93. The report is prepared to provide the U.S. Department of Energy (DOE) and the public with information on the level of radioactive and non-radioactive pollutants, if any, added to the environment as a result of PPPL operations, as well as environmental initiatives, assessments, and programs that were undertaken in 1993. The objective of the Annual Site Environmental Report is to document evidence that DOE facility environmental protection programs adequately protect the environment and the public health. The Princeton Plasma Physics Laboratory has engaged in fusion energy research since 1951. The long-range goal of the U.S. Magnetic Fusion Energy Research Program is to develop and demonstrate the practical application of fusion power as an alternate energy source. In 1993, PPPL had both of its large tokamak devices in operation: the Tokamak Fusion Test Reactor (TFTR) and the Princeton Beta Experiment-Modification (PBX-M). PBX-M completed its modifications and upgrades and resumed operation in November 1991. TFTR began the deuterium-tritium (D-T) experiments in December 1993 and set new records by producing over six million watts of fusion power. The engineering design phase of the Tokamak Physics Experiment (TPX), which replaced the cancelled Burning Plasma Experiment in 1992 as PPPL's next machine, began in 1993 with the planned start-up set for the year 2001. In 1993, the Environmental Assessment (EA) for the TFTR Shutdown and Removal (S&R) and TPX was prepared for submittal to the regulatory agencies.

  15. Assessment and Validation of Machine Learning Methods for Predicting Molecular Atomization Energies.

    PubMed

    Hansen, Katja; Montavon, Grégoire; Biegler, Franziska; Fazli, Siamac; Rupp, Matthias; Scheffler, Matthias; von Lilienfeld, O Anatole; Tkatchenko, Alexandre; Müller, Klaus-Robert

    2013-08-13

The accurate and reliable prediction of properties of molecules typically requires computationally intensive quantum-chemical calculations. Recently, machine learning techniques applied to ab initio calculations have been proposed as an efficient approach for describing the energies of molecules in their given ground-state structure throughout chemical compound space (Rupp et al. Phys. Rev. Lett. 2012, 108, 058301). In this paper we outline a number of established machine learning techniques and investigate the influence of the molecular representation on the methods' performance. The best methods achieve prediction errors of 3 kcal/mol for the atomization energies of a wide variety of molecules. Rationales for this performance improvement are given, together with pitfalls and challenges when applying machine learning approaches to the prediction of quantum-mechanical observables.

  16. Man-systems integration and the man-machine interface

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1990-01-01

    Viewgraphs on man-systems integration and the man-machine interface are presented. Man-systems integration applies the systems' approach to the integration of the user and the machine to form an effective, symbiotic Man-Machine System (MMS). A MMS is a combination of one or more human beings and one or more physical components that are integrated through the common purpose of achieving some objective. The human operator interacts with the system through the Man-Machine Interface (MMI).

  17. Harvesting forest biomass for energy in Minnesota: An assessment of guidelines, costs and logistics

    NASA Astrophysics Data System (ADS)

    Saleh, Dalia El Sayed Abbas Mohamed

The emerging market for renewable energy in Minnesota has generated a growing interest in utilizing more forest biomass for energy. However, this growing interest is paralleled by limited knowledge of the environmental impacts and cost effectiveness of utilizing this resource. To address environmental and economic viability concerns, this dissertation addresses three areas related to biomass harvest: First, existing biomass harvesting guidelines and sustainability considerations are examined. Second, the potential contribution of biomass energy production to reduce the costs of hazardous fuel reduction treatments in these trials is assessed. Third, the logistics of biomass production trials are analyzed. Findings show that: (1) Existing forest-related guidelines are not sufficient to allow large-scale production of biomass energy from forest residue sustainably. Biomass energy guidelines need to be based on scientific assessments of how repeated and large-scale biomass production is going to affect soil, water and habitat values, in an integrated and individual manner over time. Furthermore, such guidelines would need to recommend production logistics (planning, implementation, and coordination of operations) necessary for a potential supply with the least site and environmental impacts. (2) The costs of biomass production trials were assessed and compared with conventional treatment costs. In these trials, conventional mechanical treatment costs were lower than biomass energy production costs less income from biomass sale. However, a sensitivity analysis indicated that cost reductions are possible under certain site, prescription, and distance conditions.
(3) Semi-structured interviews with forest machine operators indicate that existing fuel reduction prescriptions need to be more realistic in making recommendations that can overcome operational barriers (technical and physical) and planning and coordination concerns (guidelines and communications) identified by machine operators, and which are necessary for a viable biomass energy production system. The results of this dissertation suggest that once biomass energy production is intended, incorporating an early understanding of production logistics while developing environmentally sensitive guidelines and site-specific prescriptions can improve biomass energy production, costs, performance and sustainability.

  18. Study of a variable mass Atwood's machine using a smartphone

    NASA Astrophysics Data System (ADS)

    Lopez, Dany; Caprile, Isidora; Corvacho, Fernando; Reyes, Orfa

    2018-03-01

The Atwood machine was invented in 1784 by George Atwood, and this system has been widely studied both theoretically and experimentally over the years. Nowadays, many experimental physics courses include both the Atwood machine and variable-mass systems to introduce more complex concepts in physics. To study the dynamics of the masses that compose the variable-mass Atwood machine, laboratories typically use a smart pulley. The first work to introduce a smartphone as data-acquisition equipment for studying acceleration in the Atwood machine was that of M. Monteiro et al. Since then, there has been no further information available on the use of smartphones in variable-mass systems. This prompted us to study this kind of system by means of data obtained with a smartphone and to show the practicality of using smartphones in complex experimental situations.

  19. Analyzing non-LTE Kr plasmas produced in high energy density experiments: from the Z machine to the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Dasgupta, Arati

    2015-11-01

Designing high-fluence photon sources above 10 keV is a challenge for high energy density plasmas. This has motivated radiation-source development investigations of Kr, with K-shell energies around 13 keV. Recent pulsed-power-driven gas-puff experiments on the refurbished Z machine at Sandia have produced intense X-rays in the multi-keV photon energy range. K-shell radiative yields and efficiencies are very high for Ar, but rapidly decrease for higher atomic number (ZA) elements such as Kr. It has been suggested that an optimum exists corresponding to a trade-off between the increase of photon energy for higher ZA elements and the corresponding fall-off in radiative power. However, the conversion efficiency on NIF, where the drive, energy deposition process, and target dynamics are different, does not fall off with higher ZA as rapidly as on Z. We have developed detailed atomic structure and collisional data for the full K- and L-shells and partial M-shell of Kr using the Flexible Atomic Code (FAC). Our non-LTE atomic model includes all collisional and recombination processes, including state-specific dielectronic recombination (DR), that significantly affect the ionization balance and spectra of Kr plasmas at the temperatures and densities of concern. The model couples ionization physics, radiation production and transport, and magnetohydrodynamics. In this talk, I will give a detailed description of the model and discuss 1D Kr simulations employing a multifrequency radiation transport scheme. Synthetic K- and L-shell spectra will be compared with available experimental data. This talk will analyze experimental data indicative of the differences between the Z and NIF experiments and discuss how they affect the K-shell radiative output of Kr plasma. Work supported by DOE/NNSA.

  20. Investigation of a tubular dual-stator flux-switching permanent-magnet linear generator for free-piston energy converter

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Tong, Chengde; Yu, Bin; Zhu, Shaohong; Zhu, Jianguo

    2015-05-01

This paper describes a tubular dual-stator flux-switching permanent-magnet (PM) linear generator for a free-piston energy converter. The operating principle, topology, and design considerations of the machine are investigated. Taking into account the motion characteristics of the free-piston Stirling engine, a tubular dual-stator PM linear generator is designed by the finite element method. Some major structural parameters, such as the outer and inner radii of the mover, PM thickness, mover tooth width, and the tooth widths of the outer and inner stators, are optimized to improve machine performance metrics such as thrust capability and power density. In comparison with conventional single-stator PM machines, such as the moving-magnet linear machine and the flux-switching linear machine, the proposed dual-stator flux-switching PM machine offers higher mass power density, higher volume power density, and a lighter mover.

  1. Simple Machines. Physical Science in Action[TM]. Schlessinger Science Library. [Videotape].

    ERIC Educational Resources Information Center

    2000

    In today's world, kids are aware that there are machines all around them. What they may not realize is that the function of all machines is to make work easier in some way. Simple Machines uses engaging visuals and colorful graphics to explain the concept of work and how humans use certain basic tools to help get work done. Students will learn…

  2. All about Simple Machines. Physical Science for Children[TM]. Schlessinger Science Library. [Videotape].

    ERIC Educational Resources Information Center

    2000

    All kids know the word "work." But they probably don't understand that work happens whenever a force is used to move something--whether it's lifting a heavy object or playing on a see-saw. All About Simple Machines introduces kids to the concepts of forces, work and how machines are used to make work easier. Six simple machines are…

  3. Linear microbunching analysis for recirculation machines

    DOE PAGES

    Tsai, C. -Y.; Douglas, D.; Li, R.; ...

    2016-11-28

Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  4. Power training using pneumatic machines vs. plate-loaded machines to improve muscle power in older adults.

    PubMed

    Balachandran, Anoop T; Gandia, Kristine; Jacobs, Kevin A; Streiner, David L; Eltoukhy, Moataz; Signorile, Joseph F

    2017-11-01

Power training has been shown to be more effective than conventional resistance training for improving physical function in older adults; however, most trials have used pneumatic machines during training. Considering that the general public typically has access to plate-loaded machines, the effectiveness and safety of power training using plate-loaded machines compared to pneumatic machines is an important consideration. The purpose of this investigation was to compare the effects of high-velocity training using pneumatic machines (Pn) versus standard plate-loaded machines (PL). Independently living older adults, 60 years or older, were randomized into two groups: pneumatic machine (Pn, n=19) and plate-loaded machine (PL, n=17). After 12 weeks of high-velocity training twice per week, groups were analyzed using an intention-to-treat approach. Primary outcomes were lower body power measured using a linear transducer and upper body power measured using a medicine ball throw. Secondary outcomes included lower and upper body muscle strength, the Physical Performance Battery (PPB), the gallon jug test, the timed up-and-go test, and self-reported function using the Patient Reported Outcomes Measurement Information System (PROMIS) and an online video questionnaire. Outcome assessors were blinded to group membership. Lower body power significantly improved in both groups (Pn: 19%, PL: 31%), with no significant difference between the groups (Cohen's d=0.4, 95% CI (-1.1, 0.3)). Upper body power significantly improved only in the PL group, but showed no significant difference between the groups (Pn: 3%, PL: 6%). For balance, there was a significant difference between the groups favoring the Pn group (d=0.7, 95% CI (0.1, 1.4)); however, there were no statistically significant differences between groups for PPB, gallon jug transfer, muscle strength, timed up-and-go or self-reported function. No serious adverse events were reported in either of the groups. 
Pneumatic and plate-loaded machines were both effective in improving lower body power and physical function in older adults. The results suggest that power training can be safely and effectively performed by older adults using either pneumatic or plate-loaded machines. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Magnetic-confinement fusion

    NASA Astrophysics Data System (ADS)

    Ongena, J.; Koch, R.; Wolf, R.; Zohm, H.

    2016-05-01

    Our modern society requires environmentally friendly solutions for energy production. Energy can be released not only from the fission of heavy nuclei but also from the fusion of light nuclei. Nuclear fusion is an important option for a clean and safe solution for our long-term energy needs. The extremely high temperatures required for the fusion reaction are routinely realized in several magnetic-fusion machines. Since the early 1990s, up to 16 MW of fusion power has been released in pulses of a few seconds, corresponding to a power multiplication close to break-even. Our understanding of the very complex behaviour of a magnetized plasma at temperatures between 150 and 200 million °C surrounded by cold walls has also advanced substantially. This steady progress has resulted in the construction of ITER, a fusion device with a planned fusion power output of 500 MW in pulses of 400 s. ITER should provide answers to remaining important questions on the integration of physics and technology, through a full-size demonstration of a tenfold power multiplication, and on nuclear safety aspects. Here we review the basic physics underlying magnetic fusion: past achievements, present efforts and the prospects for future production of electrical energy. We also discuss questions related to the safety, waste management and decommissioning of a future fusion power plant.

  6. Product design for energy reduction in concurrent engineering: An Inverted Pyramid Approach

    NASA Astrophysics Data System (ADS)

    Alkadi, Nasr M.

Energy factors in product design in concurrent engineering (CE) are becoming an emerging dimension for several reasons: (a) the rising interest in "green design and manufacturing"; (b) national energy security concerns and the dramatic increase in energy prices; (c) global competition in the marketplace and global climate change commitments, including carbon taxes and emission trading systems; and (d) the widespread recognition of the need for sustainable development. This research presents a methodology for incorporating energy factors into the concurrent engineering product development process to significantly reduce the manufacturing energy requirement. The work presented here is the first attempt at integrating design for energy into the concurrent engineering framework. It adds an important tool to the DFX toolbox for evaluating the impact of design decisions on a product's manufacturing energy requirement early in the design phase. The research hypothesis states that "Product Manufacturing Energy Requirement is a Function of Design Parameters". The hypothesis was tested by conducting experimental work in machining and heat treating, which took place at the manufacturing lab of the Industrial and Management Systems Engineering (IMSE) Department at West Virginia University (WVU) and at a major U.S. steel manufacturing plant, respectively. The objective of the machining experiment was to study the effect of changing specific product design parameters (material type and diameter) and process design parameters (metal removal rate) on the input power requirement of a gear-head lathe through defined sets of machining experiments. The objective of the heat-treating experiment was to study the effect of varying product charging temperature on the fuel consumption of a walking-beam reheat furnace. 
The experimental work in both directions has revealed important insights into energy utilization in machining and heat-treating processes and its variance with product, process, and system design parameters. An in-depth evaluation of how design and manufacturing normally happen in concurrent engineering provided a framework to develop energy system levels in machining within the concurrent engineering environment using the "Inverted Pyramid Approach" (IPA). The IPA features varying levels of energy-based output information depending on the input design parameters that are available during each stage (level) of the product design. The experimental work, the in-depth evaluation of design and manufacturing in CE, and the developed energy system levels in machining provided a solid base for developing the model for design for energy reduction in CE. The model was used to analyze an example part in which 12 evolving designs were thoroughly reviewed to investigate the sensitivity of energy to design parameters in machining. The model allows product design teams to address manufacturing energy concerns early during the design stage. As a result, ranges for energy-sensitive design parameters impacting product manufacturing energy consumption are found at earlier levels. As the designer proceeds to deeper levels in the model, these ranges tighten, resulting in significant energy reductions.

  7. Automation of energy demand forecasting

    NASA Astrophysics Data System (ADS)

    Siddique, Sanzad

Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model searching process for econometric models. Further improvements in the accuracy of the energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity. Algorithms for learning domain knowledge from time series data using the machine learning methods are also presented. The novel search-based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model searching technique is capable of finding an appropriate forecasting model. Further experimental results demonstrate an improved forecasting accuracy achieved by using the novel machine learning techniques introduced in this thesis. This thesis presents an analysis of how the machine learning techniques learn domain knowledge. The learned domain knowledge is used to improve the forecast accuracy.
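The idea of searching a candidate model space by validation error can be sketched in a few lines. This is a minimal illustration only, not the thesis's actual algorithm: the candidate set (polynomial trends of increasing degree) and the synthetic demand signal are assumptions made for the example.

```python
import numpy as np

def search_best_model(t, y, degrees=(1, 2, 3, 4), holdout=0.25):
    """Automated model search: fit each candidate on the early part of
    the series and keep the one with the lowest error on the held-out tail."""
    n_train = int(len(t) * (1 - holdout))
    t_tr, y_tr = t[:n_train], y[:n_train]
    t_va, y_va = t[n_train:], y[n_train:]
    best_deg, best_err = None, np.inf
    for d in degrees:
        coef = np.polyfit(t_tr, y_tr, d)                  # fit candidate
        err = np.mean((np.polyval(coef, t_va) - y_va) ** 2)
        if err < best_err:
            best_deg, best_err = d, err
    return best_deg, best_err

# Synthetic "demand" signal with a quadratic trend plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
y = 5.0 + 2.0 * t + 0.3 * t**2 + rng.normal(0.0, 0.1, t.size)
deg, err = search_best_model(t, y)
```

Because the holdout is the tail of the series, overfit high-degree candidates extrapolate poorly and the search settles on a low-degree trend without manual intervention.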

  8. Feasibility study of a brine boiling machine by solar energy

    NASA Astrophysics Data System (ADS)

    Phayom, W.

    2018-06-01

This study presents the technical and operational feasibility of a brine boiling machine using solar energy, instead of firewood or husk, for salt production. The solar salt brine boiling machine consisted of a boiling chamber whose thermal efficiency was enhanced through use of a solar brine heater. The stainless-steel solar salt brine boiling chamber had dimensions of 60 cm x 70 cm x 20 cm. The steel brine heater had dimensions of 70 cm x 80 cm x 20 cm. The tilt angle of both the boiling chamber and the brine heater was 20 degrees from horizontal. The brine temperature in the reservoir tank was 42°C, with a flow rate of 6.64 L/h discharging into the solar boiling machine. It was found that the thermal efficiency and overall efficiency of the solar salt brine boiling machine were 0.63 and 0.38, respectively, at a solar irradiance of 787.6 W/m2. The results show that using solar energy for salt production is feasible.

  9. General Theory of the Double Fed Synchronous Machine. Ph.D. Thesis - Swiss Technological Univ., 1950

    NASA Technical Reports Server (NTRS)

    El-Magrabi, M. G.

    1982-01-01

    Motor and generator operation of a double-fed synchronous machine were studied and physically and mathematically treated. Experiments with different connections, voltages, etc. were carried out. It was concluded that a certain degree of asymmetry is necessary for the best utilization of the machine.

  10. Optical alignment of electrodes on electrical discharge machines

    NASA Technical Reports Server (NTRS)

    Boissevain, A. G.; Nelson, B. W.

    1972-01-01

    Shadowgraph system projects magnified image on screen so that alignment of small electrodes mounted on electrical discharge machines can be corrected and verified. Technique may be adapted to other machine tool equipment where physical contact cannot be made during inspection and access to tool limits conventional runout checking procedures.

  11. 14 CFR 382.3 - What do the terms in this rule mean?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... devices and medications. Automated airport kiosk means a self-service transaction machine that a carrier... machine means a continuous positive airway pressure machine. Department or DOT means the United States..., emotional or mental illness, and specific learning disabilities. The term physical or mental impairment...

  12. Physics 30 Program Machine-Scorable Open-Ended Questions: Unit 2: Electric and Magnetic Forces. Diploma Examinations Program.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    This document outlines the use of machine-scorable open-ended questions for the evaluation of Physics 30 in Alberta. Contents include: (1) an introduction to the questions; (2) sample instruction sheet; (3) fifteen sample items; (4) item information including the key, difficulty, and source of each item; (5) solutions to items having multiple…

  13. Graphene, a material for high temperature devices – intrinsic carrier density, carrier drift velocity, and lattice energy

    PubMed Central

    Yin, Yan; Cheng, Zengguang; Wang, Li; Jin, Kuijuan; Wang, Wenzhong

    2014-01-01

Heat has always been a killing matter for traditional semiconductor machines. The underlying physical reason is that the intrinsic carrier density of a device made from a traditional semiconductor material increases very rapidly with rising temperature. Once the temperature is high enough that this density surpasses the chemical doping or gating effect, any p-n junction or transistor made from the semiconductor will fail to function. Here, we measure the intrinsic Fermi level (|EF| = 2.93 kBT), intrinsic carrier density (nin = 3.87 × 106 cm−2K−2·T2), carrier drift velocity, and G mode phonon energy of graphene devices and their temperature dependencies up to 2400 K. Our results show that the intrinsic carrier density of graphene is an order of magnitude less sensitive to temperature than those of Si or Ge, and reveal the great potential of graphene as a material for high-temperature devices. We also observe a linear decline of saturation drift velocity with increasing temperature, and identify the temperature coefficients of the intrinsic G mode phonon energy. This knowledge is vital for understanding the physical phenomena of graphene under high power or high temperature. PMID:25044003
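The quadratic fit quoted in the abstract, nin = 3.87 × 10^6 cm^-2 K^-2 · T^2, can be evaluated directly; the sketch below simply plugs in numbers (the comparison temperatures are chosen for illustration, not taken from the paper).

```python
def intrinsic_carrier_density(T_kelvin):
    """Intrinsic carrier density of graphene in cm^-2, from the
    quadratic fit reported in the abstract: n = 3.87e6 * T^2."""
    return 3.87e6 * T_kelvin ** 2

n_room = intrinsic_carrier_density(300.0)   # ~3.5e11 cm^-2 at room temperature
n_hot = intrinsic_carrier_density(2400.0)   # ~2.2e13 cm^-2 at the highest measured T
# A power-law (T^2) dependence means an 8x rise in temperature gives only
# a 64x rise in carrier density -- far gentler than the exponential
# growth of intrinsic carriers in Si or Ge.
```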

  14. Prospects for Higgs physics at energies up to 100 TeV.

    PubMed

    Baglio, Julien; Djouadi, Abdelhak; Quevillon, Jérémie

    2016-11-01

    We summarize the prospects for Higgs boson physics at future proton-proton colliders with centre of mass (c.m.) energies up to 100 TeV. We first provide the production cross sections for the Higgs boson of the Standard Model from 13 TeV to 100 TeV, in the main production mechanisms and in subleading but important ones such as double Higgs production, triple production and associated production with two gauge bosons or with a single top quark. We then discuss the production of Higgs particles in beyond the Standard Model scenarios, starting with the one in the continuum of a pair of scalar, fermionic and vector dark matter particles in Higgs-portal models in various channels with virtual Higgs exchange. The cross sections for the production of the heavier CP-even and CP-odd neutral Higgs states and the charged Higgs states in two-Higgs doublet models, with a specific study of the case of the Minimal Supersymmetric Standard Model, are then given. The sensitivity of a 100 TeV proton machine to probe the new Higgs states is discussed and compared to that of the LHC with a c.m. energy of 14 TeV and at high luminosity.

  15. Machine learning for many-body physics: The case of the Anderson impurity model

    DOE PAGES

    Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; ...

    2014-10-31

    We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
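The Legendre parametrization the authors favor can be illustrated in a few lines: a smooth function on a finite interval is compressed into a small number of Legendre coefficients and reconstructed with small error. The target function below is a generic smooth stand-in, not an actual imaginary-time Green's function.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Stand-in for a smooth Green's-function-like object on [-1, 1]
x = np.linspace(-1.0, 1.0, 400)
g = np.exp(-2.0 * x) / (1.0 + x**2)

# Compress into a limited number of Legendre coefficients ...
coeffs = L.legfit(x, g, deg=12)
# ... and reconstruct the function from those coefficients alone
g_rec = L.legval(x, coeffs)

max_err = np.max(np.abs(g - g_rec))
# Smooth functions converge rapidly in a Legendre basis, which is why a
# short coefficient vector makes a convenient machine-learning target.
```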

  16. Machine learning for many-body physics: The case of the Anderson impurity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole

    We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.

  17. The universal numbers. From Biology to Physics.

    PubMed

    Marchal, Bruno

    2015-12-01

I will explain how the mathematicians have discovered the universal numbers, or abstract computers, and I will explain some abstract biology, mainly self-reproduction and embryogenesis. Then I will explain how and why, and in which sense, some of those numbers can dream, and why their dreams can glue together and must, when we assume computationalism in cognitive science, generate a phenomenological physics, as part of a larger phenomenological theology (in the sense of the Greek theologians). The title should have been "From Biology to Physics, through the Phenomenological Theology of the Universal Numbers", if that were not too long for a title. The theology will consist mainly, as in some (neo)platonist Greek-Indian-Chinese tradition, in the truth about numbers' relative relations, with each other and with themselves. The main difference between Aristotle and Plato is that Aristotle (especially in its common and modern Christian interpretation) makes reality WYSIWYG (What You See Is What You Get: reality is what we observe and measure, i.e. the natural material physical science), whereas for Plato and the (rational) mystics, what we see might be only the shadow or the border of something else, which might be non-physical (mathematical, arithmetical, theological, …). Since Gödel, we know that Truth, even just the Arithmetical Truth, is vastly bigger than what the machine can rationally justify. Yet, with Church's thesis, and the mechanizability of the diagonalizations involved, machines can apprehend this and can justify their limitations, and get some sense of what might be true beyond what they can prove or justify rationally. Indeed, the incompleteness phenomenon introduces a gap between what is provable by some machine and what is true about that machine, and, as Gödel saw already in 1931, the existence of that gap is accessible to the machine itself, once it has enough provability abilities. 
Incompleteness separates truth from provability, and machines can justify this in some way. More importantly, incompleteness entails the distinction between many intensional variants of provability. For example, the absence of reflexion (beweisbar(⌜A⌝) → A, with beweisbar being Gödel's provability predicate) makes it impossible for the machine's provability to obey the axioms usually taken for a theory of knowledge. The most important consequence of this for the machine's possible phenomenology is that it provides sense, indeed arithmetical sense, to intensional variants of provability, like the logics of provability-and-truth, which at the propositional level can be mirrored by the logic of provable-and-true statements (beweisbar(⌜A⌝) ∧ A). It is incompleteness which makes this logic different from the logic of provability. Other variants, like provable-and-consistent, or provable-and-consistent-and-true, appear in the same way, and inherit the incompleteness splitting, unlike beweisbar(⌜A⌝) ∧ A. I will recall the thought experiments which motivate the use of those intensional variants to associate a knower and an observer in some canonical way with the machines or the numbers. We will in this way get an abstract and phenomenological theology of a machine M through the true logics of its true self-referential abilities (even if not provable, or knowable, by the machine itself), in those different intensional senses. Cognitive science and theoretical physics motivate the study of those logics with the arithmetical interpretation of the atomic sentences restricted to the "verifiable" (Σ1) sentences, which is the way to study the theology of the computationalist machine. This provides a logic of the observable, as expected by the Universal Dovetailer Argument, which will be recalled briefly, and which can lead to a comparison of the machine's logic of physics with the empirical logic of the physicists (like quantum logic). This also leads to a series of open problems. 
Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Low blow Charpy impact of silicon carbides

    NASA Technical Reports Server (NTRS)

    Abe, H.; Chandan, H. C.; Bradt, R. C.

    1978-01-01

    The room-temperature impact resistance of several commercial silicon carbides was examined using an instrumented pendulum-type machine and Charpy-type specimens. Energy balance compliance methods and fracture toughness approaches, both applicable to other ceramics, were used for analysis. The results illustrate the importance of separating the machine and the specimen energy contributions and confirm the equivalence of KIc and KId. The material's impact energy was simply the specimen's stored elastic strain energy at fracture.

  19. The Four Lives of a Nuclear Accelerator

    NASA Astrophysics Data System (ADS)

    Wiescher, Michael

    2017-06-01

Electrostatic accelerators emerged as a major tool in research and industry in the second half of the twentieth century. In low-energy nuclear physics in particular, they have been essential for addressing a number of critical research questions, from nuclear structure to nuclear astrophysics. This article describes this development through the example of a single machine that has been used for nearly sixty years at the forefront of scientific research in nuclear physics. The article summarizes the concept of electrostatic accelerators and outlines how this accelerator developed from a bare support function to an independent research tool that has been utilized in different research environments and institutions, and now looks forward to a new life as part of the CASPAR experiment at the 4850-foot level of the Sanford Underground Research Facility.

  20. DOE pushes for useful quantum computing

    NASA Astrophysics Data System (ADS)

    Cho, Adrian

    2018-01-01

    The U.S. Department of Energy (DOE) is joining the quest to develop quantum computers, devices that would exploit quantum mechanics to crack problems that overwhelm conventional computers. The initiative comes as Google and other companies race to build a quantum computer that can demonstrate "quantum supremacy" by beating classical computers on a test problem. But reaching that milestone will not mean practical uses are at hand, and the new $40 million DOE effort is intended to spur the development of useful quantum computing algorithms for its work in chemistry, materials science, nuclear physics, and particle physics. With the resources at its 17 national laboratories, DOE could play a key role in developing the machines, researchers say, although finding problems with which quantum computers can help isn't so easy.

  1. A Bridge Too Far: The Demise of the Superconducting Super Collider, 1989-1993

    NASA Astrophysics Data System (ADS)

    Riordan, Michael

    2015-04-01

In October 1993 the US Congress terminated the Superconducting Super Collider -- at over $10 billion the largest and costliest basic-science project ever attempted. It was a disastrous loss for the nation's once-dominant high-energy physics community, which has been slowly declining since then. With the 2012 discovery of the Higgs boson at CERN's Large Hadron Collider, Europe has assumed world leadership in this field. A combination of fiscal austerity, continuing SSC cost overruns, intense Congressional scrutiny, lack of major foreign contributions, waning Presidential support, and the widespread public perception of mismanagement led to the project's demise nearly five years after it had begun. Its termination occurred against the political backdrop of changing scientific needs as US science policy shifted to a post-Cold War footing during the early 1990s. And the growing cost of the SSC inevitably exerted undue pressure upon other worthy research, thus weakening its support in Congress and the broader scientific community. As underscored by the Higgs boson discovery, at a mass substantially below that of the top quark, the SSC did not need to collide protons at 40 TeV in order to attain its premier physics goal. The selection of this design energy was governed more by politics than by physics, given that the Europeans could build the LHC by eventually installing superconducting magnets in the LEP tunnel, then under construction in the mid-1980s. In hindsight, there were good alternative projects the US high-energy physics community could have pursued that did not involve building a gargantuan, multibillion-dollar machine at a green-field site in Texas. Research supported by the National Science Foundation, Department of Energy, and the Richard Lounsbery Foundation.

  2. Bearingless AC Homopolar Machine Design and Control for Distributed Flywheel Energy Storage

    NASA Astrophysics Data System (ADS)

    Severson, Eric Loren

    The increasing ownership of electric vehicles, in-home solar and wind generation, and wider penetration of renewable energies onto the power grid has created a need for grid-based energy storage to provide energy-neutral services. These services include frequency regulation, which requires short response-times, high power ramping capabilities, and several charge cycles over the course of one day; and diurnal load-/generation-following services to offset the inherent mismatch between renewable generation and the power grid's load profile, which requires low self-discharge so that a reasonable efficiency is obtained over a 24 hour storage interval. To realize the maximum benefits of energy storage, the technology should be modular and have minimum geographic constraints, so that it is easily scalable according to local demands. Furthermore, the technology must be economically viable to participate in the energy markets. There is currently no storage technology that is able to simultaneously meet all of these needs. This dissertation focuses on developing a new energy storage device based on flywheel technology to meet these needs. It is shown that the bearingless ac homopolar machine can be used to overcome key obstacles in flywheel technology, namely: unacceptable self-discharge and overall system cost and complexity. Bearingless machines combine the functionality of a magnetic bearing and a motor/generator into a single electromechanical device. Design of these machines is particularly challenging due to cross-coupling effects and trade-offs between motor and magnetic bearing capabilities. The bearingless ac homopolar machine adds to these design challenges due to its 3D flux paths requiring computationally expensive 3D finite element analysis. At the time this dissertation was started, bearingless ac homopolar machines were a highly immature technology. 
This dissertation advances the state-of-the-art of these machines through research contributions in the areas of magnetic modeling, winding design, control, and power-electronic drive implementation. While these contributions are oriented towards facilitating more optimal flywheel designs, they will also be useful in applying the bearingless ac homopolar machine in other applications. Example designs are considered through finite element analysis and experimental validation is provided from a proof-of-concept prototype that has been designed and constructed as a part of this dissertation.

  3. Trajectories of the ribosome as a Brownian nanomachine

    DOE PAGES

    Dashti, Ali; Schwander, Peter; Langlois, Robert; ...

    2014-11-24

A Brownian machine, a tiny device buffeted by the random motions of molecules in its environment, is capable of exploiting these thermal motions for many of the conformational changes in its work cycle. Such machines are now thought to be ubiquitous, with the ribosome, a molecular machine responsible for protein synthesis, increasingly regarded as prototypical. We present a new analytical approach capable of determining the free-energy landscape and the continuous trajectories of molecular machines from a large number of snapshots obtained by cryogenic electron microscopy. We demonstrate this approach in the context of experimental cryogenic electron microscope images of a large ensemble of nontranslating ribosomes purified from yeast cells. The free-energy landscape is seen to contain a closed path of low energy, along which the ribosome exhibits conformational changes known to be associated with the elongation cycle. This approach allows model-free quantitative analysis of the degrees of freedom and the energy landscape underlying continuous conformational changes in nanomachines, including those important for biological function.

  4. Quantitative approaches to energy and glucose homeostasis: machine learning and modelling for precision understanding and prediction

    PubMed Central

    Murphy, Kevin G.; Jones, Nick S.

    2018-01-01

    Obesity is a major global public health problem. Understanding how energy homeostasis is regulated, and can become dysregulated, is crucial for developing new treatments for obesity. Detailed recording of individual behaviour and new imaging modalities offer the prospect of medically relevant models of energy homeostasis that are both understandable and individually predictive. The profusion of data from these sources has led to an interest in applying machine learning techniques to gain insight from these large, relatively unstructured datasets. We review both physiological models and machine learning results across a diverse range of applications in energy homeostasis, and highlight how modelling and machine learning can work together to improve predictive ability. We collect quantitative details in a comprehensive mathematical supplement. We also discuss the prospects of forecasting homeostatic behaviour and stress the importance of characterizing stochasticity within and between individuals in order to provide practical, tailored forecasts and guidance to combat the spread of obesity. PMID:29367240

  5. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data.

    PubMed

    Janik, M; Bossew, P; Kurihara, O

    2018-07-15

    Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled time series of indoor radon (²²²Rn). The physical assumption underlying the modelling is that the Rn concentration in air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model and apply it to sections where the controls are available but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study: random forest, the gradient boosting machine and deep learning. For comparison, we apply classical multiple regression in a generalized linear model version. Performance of the models is evaluated through different metrics; the performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying machine learning, we show, as our second purpose, that missing data or periods of Rn series data can reasonably be reconstructed and resampled on a regular grid, provided that data for appropriate physical controls are available. The techniques also identify the degree to which the assumed controls contribute to imputing missing Rn values. Our third purpose, no less important from the viewpoint of physics, is to identify the degree to which physical (in this case environmental) variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity and day of the year. The first two are physical predictors, while "day of the year" is a statistical proxy or surrogate for missing or unknown predictors.
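The gap-filling idea described above can be sketched in a few lines: fit a gradient boosting model on the complete sections of a multivariate series, then predict the response where only the controls are known. The data, dependence on the controls, and 20% missingness below are synthetic illustrations, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
day = np.arange(n) % 365
temp = 10 + 10 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 1, n)
humid = 60 + 20 * np.cos(2 * np.pi * day / 365) + rng.normal(0, 2, n)
# Hypothetical dependence of radon on the environmental controls, plus noise
radon = 40 + 1.5 * temp + 0.3 * humid + rng.normal(0, 3, n)

X = np.column_stack([temp, humid, day])
mask = rng.random(n) < 0.2          # pretend 20% of Rn readings are missing

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[~mask], radon[~mask])   # "learn" on the complete sections
radon_filled = radon.copy()
radon_filled[mask] = model.predict(X[mask])  # impute the gaps

rmse = np.sqrt(np.mean((radon_filled[mask] - radon[mask]) ** 2))
print(f"imputation RMSE: {rmse:.2f}")
```

Feature importances from the fitted model (`model.feature_importances_`) play the role of the paper's predictor-relevance analysis.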

  6. Toy Story: what I have learned from playing with toys about the physics of living cells

    NASA Astrophysics Data System (ADS)

    Austin, Robert H.

    2011-02-01

    Yogi Berra once noted that "You can observe a lot just by watching." A similar remark can be made about toys: you can learn a lot of physics by playing with certain children's toys, and given that physics also applies to life, you could hope that it would also be possible to learn about the physics of living cells by close observation of toys, loosely defined. I'll start out with a couple of toys, rubber duckies and something called a soliton machine, and discuss insights (and failures) in understanding how "energy" moves in biological molecules. I'll bring back the rubber duckies, along with a toy suggested by one of the eccentrics known to roam the halls of academia, to discuss how this led to studies of how cells move and of collective aspects of cell movement. Then I'll talk about mazes and how they led to experiments on evolution and cancer. Hopefully this broad range of toys will show that indeed "You can observe a lot just by watching" some of the fundamental physics of living cells.

  7. Needs of ergonomic design at control units in production industries.

    PubMed

    Levchuk, I; Schäfer, A; Lang, K-H; Gebhardt, Hj; Klussmann, A

    2012-01-01

    During the last decades, an increasing use of innovative technologies in manufacturing has been observed. A large amount of physical workload was eliminated by the change from conventional machine tools to computer-controlled units, and CNC systems have spread through current production processes. As a result, machine operators today mostly have an observational function. This has increased static work (e.g., standing, sitting) and cognitive demands (e.g., process observation). Machine operators carry high responsibility, because mistakes may lead to human injuries as well as to product losses, and in consequence to high monetary losses for the company as well. For a CNC machine, being usable often means being efficient. Intuitive usability and an ergonomic organization of CNC workplaces can be an essential basis for reducing the risk of operating failures as well as physical complaints (e.g., pain or disease caused by bad body posture during work). In contrast to conventional machines, CNC machines comprise both hardware and software. Intuitive and clear operation of CNC systems is a requirement for quick learning of new systems. Within this study, a survey was carried out among trainees learning the operation of CNC machines.

  8. Impact resistance of guards on grinding machines.

    PubMed

    Mewes, Detlef; Mewes, Olaf; Herbst, Peter

    2011-01-01

    Guards on machine tools are meant to protect persons from injuries caused by parts ejected with high kinetic energy from the machine's working zone. With respect to stationary grinding machines, Standard No. EN 13218:2002 therefore specifies minimum wall thicknesses for guards. These values are mainly based on estimates and experience rather than on systematic experimental investigation. This paper shows to what extent simple impact tests with standardizable projectiles can be used as a basis for evaluating the impact resistance of guards, provided that not only the kinetic energy of the projectiles but also, among other properties, their geometry corresponds to the abrasive product fragments to be expected.

  9. Geometry and surface damage in micro electrical discharge machining of micro-holes

    NASA Astrophysics Data System (ADS)

    Ekmekci, Bülent; Sayar, Atakan; Tecelli Öpöz, Tahsin; Erden, Abdulkadir

    2009-10-01

    Geometry and subsurface damage of blind micro-holes produced by micro electrical discharge machining (micro-EDM) are investigated experimentally to explore their dependence on pulse energy. For this purpose, micro-holes are machined with various pulse energies on plastic mold steel samples using a tungsten carbide tool electrode and a hydrocarbon-based dielectric liquid. Variations in the micro-hole geometry, micro-hole depth and over-cut in micro-hole diameter are measured. Then, unconventional etching agents are applied to the cross sections to examine microstructural alterations within the substrate. It is observed that the heat-damaged segment is composed of three distinctive layers, which have relatively high thicknesses and vary noticeably with drilling depth. Crack formation is identified on some sections of the micro-holes even when low pulse energies are utilized during machining. It is concluded that the cracking mechanism differs from that of cracks encountered on surfaces machined by the conventional EDM process. Moreover, an electrically conductive bridge between work material and debris particles is possible at the end tip during machining, which leads to electric discharges between the piled segments of debris particles and the tool electrode.

  10. Irrelevance of the Power Stroke for the Directionality, Stopping Force, and Optimal Efficiency of Chemically Driven Molecular Machines

    PubMed Central

    Astumian, R. Dean

    2015-01-01

    A simple model for a chemically driven molecular walker shows that the elastic energy stored by the molecule and released during the conformational change known as the power-stroke (i.e., the free-energy difference between the pre- and post-power-stroke states) is irrelevant for determining the directionality, stopping force, and efficiency of the motor. Further, the apportionment of the dependence on the externally applied force between the forward and reverse rate constants of the power-stroke (or indeed among all rate constants) is irrelevant for determining the directionality, stopping force, and efficiency of the motor. Arguments based on the principle of microscopic reversibility demonstrate that this result is general for all chemically driven molecular machines, and even more broadly that the relative energies of the states of the motor have no role in determining the directionality, stopping force, or optimal efficiency of the machine. Instead, the directionality, stopping force, and optimal efficiency are determined solely by the relative heights of the energy barriers between the states. Molecular recognition—the ability of a molecular machine to discriminate between substrate and product depending on the state of the machine—is far more important for determining the intrinsic directionality and thermodynamics of chemo-mechanical coupling than are the details of the internal mechanical conformational motions of the machine. In contrast to the conclusions for chemical driving, a power-stroke is very important for the directionality and efficiency of light-driven molecular machines and for molecular machines driven by external modulation of thermodynamic parameters. PMID:25606678

  11. Energy harvesting using AC machines with high effective pole count

    NASA Astrophysics Data System (ADS)

    Geiger, Richard Theodore

    In this thesis, ways to improve the power conversion of rotating generators at low rotor speeds in energy harvesting applications were investigated. One method is to increase the pole count, which increases the generator back-EMF without also increasing the I²R losses, thereby increasing both torque density and conversion efficiency. One machine topology with a high effective pole count is the hybrid "stepper" machine. However, the large self-inductance of these machines decreases their power factor and hence the maximum power that can be delivered to a load. This effect can be cancelled by adding capacitors in series with the stepper windings. A circuit was designed and implemented to automatically vary the series capacitance over the entire speed range investigated. The addition of the series capacitors improved the power output of the stepper machine by up to 700%. At low rotor speeds, with the addition of series capacitance, the power output of the hybrid "stepper" was more than 200% that of a similarly sized PMDC brushed motor. Finally, a hybrid lumped-parameter / finite element model was used to investigate the impact of the number, shape and size of the rotor and stator teeth on machine performance. A typical off-the-shelf hybrid stepper machine has significant cogging torque by design, which is a major problem in most small energy harvesting applications. In this thesis it was shown that the cogging and ripple torque can be dramatically reduced. These findings confirm that high-pole-count topologies, and specifically the hybrid stepper configuration, are an attractive choice for energy harvesting applications.
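The series-capacitor idea can be checked with a back-of-the-envelope calculation: at electrical frequency ω, a capacitor C = 1/(ω²L) cancels the winding reactance ωL, leaving only resistance to limit the current. The machine parameters below are illustrative assumptions, not values from the thesis prototype.

```python
import math

L = 50e-3        # winding self-inductance [H] (assumed)
R = 10.0         # winding plus load resistance [ohm] (assumed)
emf = 20.0       # back-EMF magnitude [V] (assumed)
f_e = 400.0      # electrical frequency [Hz]: pole pairs x mechanical speed
w = 2 * math.pi * f_e

Z_no_cap = complex(R, w * L)                 # impedance without compensation
C = 1.0 / (w ** 2 * L)                       # resonant series capacitance
Z_cap = complex(R, w * L - 1.0 / (w * C))    # reactances cancel, leaving Z = R

P_no_cap = (emf / abs(Z_no_cap)) ** 2 * R / 2   # average power into R
P_cap = (emf / abs(Z_cap)) ** 2 * R / 2

print(f"C = {C * 1e6:.1f} uF, power gain = {P_cap / P_no_cap:.1f}x")
```

Because the resonant C depends on ω, and hence on rotor speed, a fixed capacitor only compensates one operating point; this is why the thesis describes a circuit that automatically varies the series capacitance across the speed range.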

  12. Contemporary machine learning: techniques for practitioners in the physical sciences

    NASA Astrophysics Data System (ADS)

    Spears, Brian

    2017-10-01

    Machine learning is the science of using computers to find relationships in data without explicitly knowing or programming those relationships in advance. Often without realizing it, we employ machine learning every day as we use our phones or drive our cars. Over the last few years, machine learning has found increasingly broad application in the physical sciences. This most often involves building a model relationship between a dependent, measurable output and an associated set of controllable, but complicated, independent inputs. The methods are applicable both to experimental observations and to databases of simulated output from large, detailed numerical simulations. In this tutorial, we will present an overview of current tools and techniques in machine learning - a jumping-off point for researchers interested in using machine learning to advance their work. We will discuss supervised learning techniques for modeling complicated functions, beginning with familiar regression schemes, then advancing to more sophisticated decision trees, modern neural networks, and deep learning methods. Next, we will cover unsupervised learning and techniques for reducing the dimensionality of input spaces and for clustering data. We'll show example applications from both magnetic and inertial confinement fusion. Along the way, we will describe methods for practitioners to help ensure that their models generalize from their training data to as-yet-unseen test data. We will finally point out some limitations to modern machine learning and speculate on some ways that practitioners from the physical sciences may be particularly suited to help. This work was performed by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
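The tutorial's two themes, supervised modeling checked for generalization and unsupervised dimensionality reduction plus clustering, can be illustrated in a few lines of scikit-learn. The data here are synthetic stand-ins for experimental or simulated outputs, not fusion data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 5))            # 5 "controllable inputs"
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

# Supervised: does the model generalize from training data to unseen data?
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)                     # R^2 on held-out points

# Unsupervised: reduce the input dimensionality, then cluster
Z = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)

print(f"held-out R^2 = {r2:.2f}, cluster sizes = {np.bincount(labels)}")
```

The held-out score is the simplest of the generalization checks the tutorial mentions; cross-validation extends the same idea.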

  13. Learn about Physical Science: Simple Machines. [CD-ROM].

    ERIC Educational Resources Information Center

    2000

    This CD-ROM, designed for students in grades K-2, explores the world of simple machines. It allows students to delve into the mechanical world and learn the ways in which simple machines make work easier. Animated demonstrations are provided of the lever, pulley, wheel, screw, wedge, and inclined plane. Activities include practical matching and…

  14. Fun with Physics in the Elementary School.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    Primary grade pupils can become fascinated with simple machines. This paper suggests that teachers have simple machines in the classroom for a unit of study. It proposes some guidelines to create a unit of study for six simple machines that include the fulcrum, inclined plane, pulley, wheel and axle, wedge, and screw. Friction, gravity, force, and…

  15. Simple Machine Junk Cars

    ERIC Educational Resources Information Center

    Herald, Christine

    2010-01-01

    During the month of May, the author's eighth-grade physical science students study the six simple machines through hands-on activities, reading assignments, videos, and notes. At the end of the month, they can easily identify the six types of simple machine: inclined plane, wheel and axle, pulley, screw, wedge, and lever. To conclude this unit,…

  16. Horizontal-axis clothes washer market poised for expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, K.L.

    1994-12-31

    The availability of energy- and water-efficient horizontal-axis washing machines in the North American market is growing, as US and European manufacturers position for an expected long-term market shift toward horizontal-axis (H-axis) technology. Four of the five major producers of washing machines in the US are developing or considering new H-axis models. New entrants, including US-based Staber Industries and several European manufacturers, are also expected to compete in this market. The intensified interest in H-axis technology is partly driven by speculation that new US energy efficiency standards, to be proposed in 1996 and implemented in 1999, will effectively mandate H-axis machines. H-axis washers typically use one-third to two-thirds less energy, water, and detergent than vertical-axis machines. Some models also reduce the energy needed to dry the laundry, since their higher spin speeds extract more water than is typical with vertical-axis designs. H-axis washing machines are the focus of two broadly-based efforts to support coordinated research and incentive programs by electric, gas, and water utilities: The High-Efficiency Laundry Metering/Marketing Analysis (THELMA), and the Consortium for Energy Efficiency (CEE) High-Efficiency Clothes Washer Initiative. These efforts may help to pave the way for new types of marketing partnerships among utilities and other parties that could help to speed adoption of H-axis washers.

  17. Understanding the underlying mechanism of HA-subtyping at the level of physico-chemical characteristics of protein.

    PubMed

    Ebrahimi, Mansour; Aghagolzadeh, Parisa; Shamabadi, Narges; Tahmasebi, Ahmad; Alsharifi, Mohammed; Adelson, David L; Hemmatzadeh, Farhid; Ebrahimie, Esmaeil

    2014-01-01

    The evolution of the influenza A virus to increase its host range is a major concern worldwide. The molecular mechanisms of increasing host range are largely unknown. Influenza surface proteins play determining roles in the reorganization of host sialic-acid receptors and host range. In an attempt to uncover the physico-chemical attributes which govern HA subtyping, we performed a large-scale functional analysis of over 7000 sequences of 16 different HA subtypes. A large number (896) of physico-chemical protein characteristics were calculated for each HA sequence. Then, 10 different attribute weighting algorithms were used to find the key characteristics distinguishing HA subtypes. Furthermore, to discover machine learning models which can predict HA subtypes, various Decision Tree, Support Vector Machine, Naïve Bayes, and Neural Network models were trained on the calculated protein characteristics dataset as well as on 10 trimmed datasets generated by the attribute weighting algorithms. The prediction accuracies of the machine learning methods were evaluated by 10-fold cross-validation. The results highlighted the frequency of Gln (selected by 80% of attribute weighting algorithms), the percentage/frequency of Tyr, the percentage of Cys, and the frequencies of Trp and Glu (selected by 70% of attribute weighting algorithms) as the key features associated with HA subtyping. The Random Forest tree induction algorithm and the RBF kernel function of SVM (scaled by grid search) showed a high accuracy of 98% in clustering and predicting HA subtypes based on protein attributes. Decision tree models were successful in monitoring the short mutation/reassortment paths by which the influenza virus can gain the key protein structure of another HA subtype and increase its host range in a short period of time with less energy consumption. Extracting and mining a large number of amino acid attributes of HA subtypes of influenza A virus through supervised algorithms represents a new avenue for understanding and predicting the possible future structure of influenza pandemics.
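The model-comparison step, 10-fold cross-validation of a Random Forest against an RBF-kernel SVM on a table of per-sequence protein attributes, can be sketched as below. The features and labels are synthetic placeholders for the 896 physico-chemical characteristics and the 16 HA subtypes.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 800 "sequences", 50 numeric protein attributes, 4 classes
X, y = make_classification(n_samples=800, n_features=50, n_informative=20,
                           n_classes=4, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
# SVMs need scaled features; a grid search over C and gamma would mimic
# the paper's "scaled by grid search" tuning
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))

rf_acc = cross_val_score(rf, X, y, cv=10).mean()     # 10-fold CV accuracy
svm_acc = cross_val_score(svm, X, y, cv=10).mean()
print(f"RF: {rf_acc:.3f}  SVM-RBF: {svm_acc:.3f}")
```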

  18. Understanding the Underlying Mechanism of HA-Subtyping at the Level of Physico-Chemical Characteristics of Protein

    PubMed Central

    Ebrahimi, Mansour; Aghagolzadeh, Parisa; Shamabadi, Narges; Tahmasebi, Ahmad; Alsharifi, Mohammed; Adelson, David L.

    2014-01-01

    The evolution of the influenza A virus to increase its host range is a major concern worldwide. The molecular mechanisms of increasing host range are largely unknown. Influenza surface proteins play determining roles in the reorganization of host sialic-acid receptors and host range. In an attempt to uncover the physico-chemical attributes which govern HA subtyping, we performed a large-scale functional analysis of over 7000 sequences of 16 different HA subtypes. A large number (896) of physico-chemical protein characteristics were calculated for each HA sequence. Then, 10 different attribute weighting algorithms were used to find the key characteristics distinguishing HA subtypes. Furthermore, to discover machine learning models which can predict HA subtypes, various Decision Tree, Support Vector Machine, Naïve Bayes, and Neural Network models were trained on the calculated protein characteristics dataset as well as on 10 trimmed datasets generated by the attribute weighting algorithms. The prediction accuracies of the machine learning methods were evaluated by 10-fold cross-validation. The results highlighted the frequency of Gln (selected by 80% of attribute weighting algorithms), the percentage/frequency of Tyr, the percentage of Cys, and the frequencies of Trp and Glu (selected by 70% of attribute weighting algorithms) as the key features associated with HA subtyping. The Random Forest tree induction algorithm and the RBF kernel function of SVM (scaled by grid search) showed a high accuracy of 98% in clustering and predicting HA subtypes based on protein attributes. Decision tree models were successful in monitoring the short mutation/reassortment paths by which the influenza virus can gain the key protein structure of another HA subtype and increase its host range in a short period of time with less energy consumption. Extracting and mining a large number of amino acid attributes of HA subtypes of influenza A virus through supervised algorithms represents a new avenue for understanding and predicting the possible future structure of influenza pandemics. PMID:24809455

  19. Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA

    PubMed Central

    Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.

    2017-01-01

    The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problem in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings), so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rule implementation can be made non-deterministic. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs, where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for, to quote Richard Feynman, 'there's plenty of room at the bottom'.
This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099

  20. Topological energy storage of work generated by nanomotors.

    PubMed

    Weysser, Fabian; Benzerara, Olivier; Johner, Albert; Kulić, Igor M

    2015-01-28

    Most macroscopic machines rely on wheels and gears, yet rigid gears are entirely impractical on the nano-scale. Here we propose a more practical method to couple any rotary engine to other mechanical elements on the nano- and micro-scale. We argue that a rotary molecular motor attached to an entangled polymer energy storage unit, which together form what we call the "tanglotron" device, is a viable concept that can be experimentally implemented. We derive the torque-entanglement relationship for a tanglotron (its "equation of state") and show that it can be understood by simple statistical mechanics arguments. We find that a typical entanglement at low packing density costs around 6 kT. In the high-entanglement regime, the free energy diverges logarithmically close to a maximal geometric packing density. We outline several promising applications of the tanglotron idea and conclude that the transmission, storage and back-conversion of topological entanglement energy are not only physically feasible but also practical for a number of reasons.

  1. Projected Regression Methods for Inverting Fredholm Integrals: Formalism and Application to Analytical Continuation

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind, studying an important example for the quantum materials community: the analytical continuation problem of quantum many-body physics, which involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned, and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions, which are then projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well as or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138; LFA was supported by the Columbia University IDS-ROADS project, UR009033-05, which also provided partial support to RN and LH.
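The strategy described in the abstract can be sketched with NumPy: because the forward map (spectrum to data) is stable, one can generate a database of (data, spectrum) pairs, fit a regression from data back to spectra, and project predictions onto the physical constraints (non-negativity, unit normalization). The toy kernel and Gaussian-mixture spectra below are stand-ins for the analytic-continuation problem, and ridge regression stands in for the paper's more elaborate regression machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.linspace(-5, 5, 80)                  # frequency grid for the spectrum
tau = np.linspace(0.1, 3.0, 40)             # grid where "data" is measured
K = np.exp(-np.outer(tau, np.abs(w)))       # toy ill-conditioned kernel

def random_spectrum():
    # positive, normalized sum of three Gaussians
    A = sum(np.exp(-(w - rng.uniform(-3, 3)) ** 2 / rng.uniform(0.2, 1.0))
            for _ in range(3))
    return A / A.sum()

# Database of input-output pairs from the (cheap, stable) forward problem
train_A = np.array([random_spectrum() for _ in range(2000)])
train_G = train_A @ K.T + 1e-4 * rng.normal(size=(2000, len(tau)))

# Ridge regression from data G to spectrum A (regularized least squares)
lam = 1e-3
W = np.linalg.solve(train_G.T @ train_G + lam * np.eye(len(tau)),
                    train_G.T @ train_A)

A_true = random_spectrum()
G_obs = K @ A_true + 1e-4 * rng.normal(size=len(tau))
A_hat = G_obs @ W
A_hat = np.clip(A_hat, 0, None)             # project: non-negative...
A_hat /= A_hat.sum()                        # ...and normalized

err = np.abs(A_hat - A_true).sum()
print(f"L1 error of recovered spectrum: {err:.3f}")
```

The clip-and-renormalize step is the simplest version of the constraint projection; the regularization parameter `lam` plays the role MaxEnt's entropy term plays in the conventional approach.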

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiersen, W.; Heitzenroeder, P.; Neilson, G. H.

    The National Compact Stellarator Experiment (NCSX) is being constructed at the Princeton Plasma Physics Laboratory (PPPL) in partnership with the Oak Ridge National Laboratory (ORNL). The stellarator core is designed to produce a compact 3-D plasma that combines stellarator and tokamak physics advantages. The engineering challenges of NCSX stem from its complex geometry. From the project's start in April, 2003 to September, 2004, the fabrication specifications for the project's two long-lead components, the modular coil winding forms and the vacuum vessel, were developed. An industrial manufacturing R&D program refined the processes for their fabrication as well as production cost and schedule estimates. The project passed a series of reviews and established its performance baseline with the Department of Energy. In September 2004, fabrication was approved and contracts for these components were awarded. The suppliers have completed the engineering and tooling preparations and are in production. Meanwhile, the project completed preparations for winding the coils at PPPL by installing a coil manufacturing facility and developing all necessary processes through R&D. The main activities for the next two years will be component manufacture, coil winding, and sub-assembly of the vacuum vessel and coil subsets. Machine sector sub-assembly, machine assembly, and testing will follow, leading to First Plasma in July 2009.

  3. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2014-01-01

    Building Energy Modeling (BEM) is an approach to modeling the energy usage of buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
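The surrogate-calibration workflow described above can be sketched end to end: build a database of simulator runs, train a fast machine-learned agent on it, then search parameter space so the agent's output matches measured data. The "simulator" below is a cheap two-parameter stand-in for EnergyPlus, and all parameter names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import differential_evolution

def simulator(theta):
    # stand-in for an EnergyPlus run: 2 parameters -> 12 monthly energy values
    ins, inf = theta                  # "insulation", "infiltration" (made up)
    months = np.arange(12)
    heating = 60 * (1 + np.cos(2 * np.pi * months / 12)) * (1.2 - ins)
    leakage = 40 * inf * (1 + 0.3 * np.sin(2 * np.pi * months / 12))
    return 20 + heating + leakage

rng = np.random.default_rng(0)
thetas = rng.uniform(0.0, 1.0, size=(500, 2))
runs = np.array([simulator(t) for t in thetas])   # the simulation "database"

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(thetas, runs)                       # the trained "agent"

theta_true = np.array([0.7, 0.3])
measured = simulator(theta_true)                  # stands in for metered data

def mismatch(theta):
    pred = surrogate.predict(np.asarray(theta).reshape(1, -1))[0]
    return np.mean((pred - measured) ** 2)

# Calibrate by minimizing the mismatch through the fast surrogate
res = differential_evolution(mismatch, bounds=[(0, 1), (0, 1)],
                             seed=0, maxiter=40)
print("calibrated parameters:", np.round(res.x, 2))
```

Each surrogate evaluation costs milliseconds, so the optimizer can afford thousands of trials that would be prohibitive against the real simulator.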

  4. Minimization of energy and surface roughness of the products machined by milling

    NASA Astrophysics Data System (ADS)

    Belloufi, A.; Abdelkrim, M.; Bouakba, M.; Rezgui, I.

    2017-08-01

    Metal cutting represents a large portion of the manufacturing industries, which makes this process a major consumer of energy. Energy consumption is an indirect source of carbon footprint, since CO2 emissions come from the production of energy; high energy consumption therefore leads to high cost and a large amount of CO2 emissions. To date, much research has been done on metal cutting, but the environmental problems of these processes are rarely discussed. The right selection of cutting parameters is an effective method to reduce energy consumption because of the direct relationship between energy consumption and cutting parameters in machining processes. One objective of this research is therefore to propose an optimization strategy suitable for machining processes (milling) to achieve the optimum cutting conditions based on the criterion of the energy consumed during milling. In this paper the problem of the energy consumed in milling is solved by a chosen optimization method. The optimization is carried out according to the different requirements of roughing and finishing under various technological constraints.
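A minimal version of the optimization described above, minimizing energy per part over cutting speed and feed under a surface-roughness constraint, can be sketched with SciPy. The power model, roughness model, coefficients and limits below are illustrative assumptions, not values from the paper.

```python
from scipy.optimize import minimize

P_idle = 1.2e3      # standby power of the machine [W] (assumed)
k_c = 2.0e3         # specific cutting energy [J/cm^3] (assumed)
V_part = 50.0       # material to remove per part [cm^3] (assumed)

def energy(x):
    vc, f = x                       # cutting speed [m/min], feed [mm/rev]
    mrr = 0.05 * vc * f             # toy material-removal-rate model [cm^3/s]
    t_cut = V_part / mrr            # machining time [s]
    return (P_idle + k_c * mrr) * t_cut   # energy per part [J]

def roughness_margin(x):
    vc, f = x
    # Ra limit minus a toy roughness model; must stay >= 0 (finishing case)
    return 3.0 - 50.0 * f ** 2 / vc

res = minimize(energy, x0=[100.0, 0.1],
               bounds=[(50, 300), (0.05, 0.5)],
               constraints=[{"type": "ineq", "fun": roughness_margin}])
vc_opt, f_opt = res.x
print(f"vc = {vc_opt:.0f} m/min, f = {f_opt:.3f} mm/rev, "
      f"E = {res.fun / 1e3:.1f} kJ")
```

With this toy model the cutting-energy term k_c·V_part is fixed, so minimizing energy amounts to minimizing the time the idle power is drawn, i.e. maximizing the removal rate within the roughness constraint; the roughing and finishing cases differ only in how tight that constraint is.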

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barty, C J

    A renaissance in nuclear physics is occurring around the world because of a new kind of incredibly bright gamma-ray light source that can be created with short-pulse lasers and energetic electron beams. These highly Mono-Energetic Gamma-ray (MEGa-ray) sources produce narrow, laser-like beams of incoherent, tunable gamma-rays and are enabling access to and manipulation of the nucleus of the atom with photons, or so-called 'Nuclear Photonics'. Just as in the early days of the laser, when photon manipulation of the valence electron structure of the atom became possible and enabled new applications and science, nuclear photonics with laser-based gamma-ray sources promises both to open up wide areas of practical isotope-related materials applications and to enable new discovery-class nuclear science. In the United States, the development of high-brightness and high-flux MEGa-ray sources is being actively pursued at the Lawrence Livermore National Laboratory (LLNL) in Livermore, California near San Francisco. The LLNL work aims to create by 2013 a machine that will advance the state of the art in source peak brightness by 6 orders of magnitude. This machine will create beams of 1 to 2.3 MeV photons with color purity matching that of common lasers. In Europe a similar but higher-photon-energy gamma source has been included as part of the core capability that will be established at the Extreme Light Infrastructure Nuclear Physics (ELI-NP) facility in Magurele, Romania, outside of Bucharest. This machine is expected to have an end-point gamma energy in the range of 13 MeV. The machine will be co-located with two world-class, 10 Petawatt laser systems, thus allowing combined intense-laser and gamma-ray interaction experiments. Such capability will be unique in the world. In this talk, Dr. Chris Barty from LLNL will review the state of the art in MEGa-ray source design, construction and experiments and will describe both the ongoing projects around the world as well as some of the exciting applications that these machines will enable. The optimized interaction of short-duration, pulsed lasers with relativistic electron beams (inverse laser-Compton scattering) is the key to unrivaled MeV-scale photon source monochromaticity, pulse brightness and flux. In the MeV spectral range, such Mono-Energetic Gamma-ray (MEGa-ray) sources can have many orders of magnitude higher peak brilliance than even the world's largest synchrotrons. They can efficiently perturb and excite the isotope-specific resonant structure of the nucleus in a manner similar to resonant laser excitation of the valence electron structure of the atom.
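
    The end-point energy of such a source can be estimated from the standard inverse-Compton relation for a head-on collision in the relativistic limit (electron recoil neglected), E_gamma ≈ 4·gamma²·E_laser. The beam and laser values below are assumed for illustration only, not taken from the LLNL design:

```python
# Head-on inverse Compton scattering, relativistic limit, recoil neglected:
# peak scattered photon energy E_gamma ~= 4 * gamma^2 * E_laser.
def compton_edge_ev(electron_energy_mev, laser_photon_ev):
    gamma = electron_energy_mev / 0.511   # electron rest energy 0.511 MeV
    return 4.0 * gamma ** 2 * laser_photon_ev

# Assumed illustrative numbers (not the LLNL design values): a 250 MeV
# electron beam and a 532 nm (2.33 eV) laser land near the quoted 2.3 MeV.
print(compton_edge_ev(250.0, 2.33) / 1e6, "MeV")
```

    Tunability follows directly from the quadratic dependence on the electron energy: changing the beam energy sweeps the gamma-ray end point.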

  6. Energy: Machines, Science (Experimental): 5311.03.

    ERIC Educational Resources Information Center

    Castaldi, June P.

    This unit of instruction was designed as an introductory course in energy involving six simple machines, electricity, magnetism, and motion. The booklet lists the relevant state-adopted texts and states the performance objectives for the unit. It provides an outline of the course content and suggests experiments, demonstrations, field trips, and…

  7. The Hooey Machine.

    ERIC Educational Resources Information Center

    Scarnati, James T.; Tice, Craig J.

    1992-01-01

    Describes how students can make and use Hooey Machines to learn how mechanical energy can be transferred from one object to another within a system. The Hooey Machine is made using a pencil, eight thumbtacks, one pushpin, tape, scissors, graph paper, and a plastic lid. (PR)

  8. Steady State Advanced Tokamak (SSAT): The mission and the machine

    NASA Astrophysics Data System (ADS)

    Thomassen, K.; Goldston, R.; Nevins, B.; Neilson, H.; Shannon, T.; Montgomery, B.

    1992-03-01

    Extending the tokamak concept to the steady state regime and pursuing advances in tokamak physics are important and complementary steps for the magnetic fusion energy program. The required transition away from inductive current drive will provide exciting opportunities for advances in tokamak physics, as well as important impetus to drive advances in fusion technology. Recognizing this, the Fusion Policy Advisory Committee and the U.S. National Energy Strategy identified the development of steady state tokamak physics and technology, and improvements in the tokamak concept, as vital elements in the magnetic fusion energy development plan. Both called for the construction of a steady state tokamak facility to address these plan elements. Advances in physics that produce better confinement and higher pressure limits are required for a similar unit size reactor. Regimes with largely self-driven plasma current are required to permit a steady-state tokamak reactor with acceptable recirculating power. Reliable techniques of disruption control will be needed to achieve the availability goals of an economic reactor. Thus the central role of this new tokamak facility is to point the way to a more attractive demonstration reactor (DEMO) than the present data base would support. To meet these challenges, we propose a new 'Steady State Advanced Tokamak' (SSAT) facility that would develop and demonstrate an optimized steady state tokamak operating mode. While other tokamaks in the world program employ superconducting toroidal field coils, SSAT would be the first major tokamak to operate with a fully superconducting coil set in the elongated, divertor geometry planned for ITER and DEMO.

  9. Machine learnt bond order potential to model metal-organic (Co-C) heterostructures.

    PubMed

    Narayanan, Badri; Chan, Henry; Kinaci, Alper; Sen, Fatih G; Gray, Stephen K; Chan, Maria K Y; Sankaranarayanan, Subramanian K R S

    2017-11-30

    A fundamental understanding of the inter-relationships between structure, morphology, atomic scale dynamics, chemistry, and physical properties of mixed metallic-covalent systems is essential to design novel functional materials for applications in flexible nano-electronics, energy storage and catalysis. To achieve such knowledge, it is imperative to develop robust and computationally efficient atomistic models that describe atomic interactions accurately within a single framework. Here, we present a unified Tersoff-Brenner type bond order potential (BOP) for a Co-C system, trained against lattice parameters, cohesive energies, equation of state, and elastic constants of different crystalline phases of cobalt as well as orthorhombic Co2C derived from density functional theory (DFT) calculations. The independent BOP parameters are determined using a combination of supervised machine learning (genetic algorithms) and local minimization via the simplex method. Our newly developed BOP accurately describes the structural, thermodynamic, mechanical, and surface properties of both the elemental components as well as the carbide phases, in excellent agreement with DFT calculations and experiments. Using our machine-learnt BOP, we performed large-scale molecular dynamics simulations to investigate the effect of metal/carbon concentration on the structure and mechanical properties of porous architectures obtained via self-assembly of cobalt nanoparticles and fullerene molecules. Such porous structures have implications in flexible electronics, where materials with high electrical conductivity and low elastic stiffness are desired. Using unsupervised machine learning (clustering), we identify the pore structure, pore-distribution, and metallic conduction pathways in self-assembled structures at different C/Co ratios. 
We find that as the C/Co ratio increases, the connectivity between the Co nanoparticles becomes limited, likely resulting in low electrical conductivity; on the other hand, such C-rich hybrid structures are highly flexible (i.e., low stiffness). The BOP model developed in this work is a valuable tool to investigate atomic scale processes, structure-property relationships, and temperature/pressure response of Co-C systems, as well as design organic-inorganic hybrid structures with a desired set of properties.
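
    The parameter-fitting strategy (a supervised genetic-algorithm search against DFT-derived reference data) can be illustrated on a toy problem: recovering Morse pair-potential parameters from reference energies with a minimal elitist GA. The potential, data, and GA settings are all invented stand-ins for the far richer BOP fitting in the paper.

```python
import math
import random

random.seed(0)

# Reference energy-distance data standing in for DFT training targets
# (a toy Morse curve with "hidden" parameters to be recovered).
def morse(r, d, a, r0):
    x = 1.0 - math.exp(-a * (r - r0))
    return d * x * x - d

RS = [1.8 + 0.1 * i for i in range(15)]
TARGET = (2.0, 1.5, 2.2)                       # hidden reference parameters
REF = [morse(r, *TARGET) for r in RS]

def fitness(p):                                # negative squared error
    return -sum((morse(r, *p) - e) ** 2 for r, e in zip(RS, REF))

# Minimal GA: elitist truncation selection plus Gaussian mutation, a
# simplified stand-in for the paper's GA + simplex refinement.
pop = [(random.uniform(0.5, 4.0), random.uniform(0.5, 3.0),
        random.uniform(1.5, 3.0)) for _ in range(40)]
init_best = max(pop, key=fitness)
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                         # elitism: best survive as-is
    pop = parents + [
        tuple(g + random.gauss(0, 0.1) for g in random.choice(parents))
        for _ in range(30)]
best = max(pop, key=fitness)
print(init_best, -fitness(init_best))
print(best, -fitness(best))
```

    Because the parents are carried over unchanged each generation, the best error is monotonically non-increasing; a simplex-style local minimizer, as used in the paper, would then polish the GA's best candidate.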

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.

  11. Biomachining - A new approach for micromachining of metals

    NASA Astrophysics Data System (ADS)

    Vigneshwaran, S. C. Sakthi; Ramakrishnan, R.; Arun Prakash, C.; Sashank, C.

    2018-04-01

    Machining is the process of removing material from a workpiece. Machining can be done by physical, chemical or biological methods. Though physical and chemical methods have been widely used in machining processes, they have their own disadvantages, such as the development of a heat-affected zone and the use of hazardous chemicals. Biomachining is a machining process in which bacteria are used to remove material from metal parts. Chemolithotrophic bacteria such as Acidithiobacillus ferrooxidans have been used in the biomachining of metals such as copper and iron. These bacteria are used because of their ability to catalyze the oxidation of inorganic substances. Biomachining is a suitable process for the micromachining of metals. This paper reviews the biomachining process and the various mechanisms involved. It also outlines the parameters/factors to be considered in biomachining and their effect on the metal removal rate.

  12. Supervised Time Series Event Detector for Building Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-04-13

    A machine learning based approach is developed to detect events that have rarely been seen in the historical data. The data can include building energy consumption, sensor data, environmental data and any data that may affect the building's energy consumption. The algorithm is a modified nonlinear Bayesian support vector machine, which examines daily energy consumption profiles, detects days with abnormal events, and diagnoses the cause of those events.
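
    The idea of flagging days whose consumption profile deviates from the norm can be sketched without the paper's modified Bayesian SVM; here a much simpler distance-to-mean-profile detector, on entirely synthetic daily profiles, stands in for it.

```python
import math

# Synthetic daily load profiles (24 hourly readings): nine ordinary days and
# one day with an abnormal evening spike. A simple distance-to-mean-profile
# score stands in for the paper's modified Bayesian SVM.
def profile(peak_hour, peak):
    return [peak * math.exp(-((h - peak_hour) ** 2) / 8.0) for h in range(24)]

days = [profile(13, 10.0) for _ in range(9)] + [profile(20, 25.0)]
mean = [sum(d[h] for d in days) / len(days) for h in range(24)]

def score(day):
    """Euclidean distance of a day's profile from the mean profile."""
    return math.sqrt(sum((x - m) ** 2 for x, m in zip(day, mean)))

scores = [score(d) for d in days]
threshold = 2.0 * sorted(scores)[len(scores) // 2]   # 2x the median score
anomalies = [i for i, s in enumerate(scores) if s > threshold]
print(anomalies)   # the spiked day stands out
```

    A supervised SVM, as in the record, would additionally learn which profile shapes correspond to which event causes rather than only scoring deviation.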

  13. Study of Man-Machine Communications Systems for Disabled Persons (The Handicapped). Volume V. Final Report.

    ERIC Educational Resources Information Center

    Kafafian, Haig

    Instructions are given for teaching severely physically and/or neurologically handicapped students to use the 14-key Cybertype man-machine communications system, an electric writing machine with a simplified keyboard to enable persons with limited motor ability or coordination to communicate in written form. Explained are the various possible…

  14. The compound Atwood machine problem

    NASA Astrophysics Data System (ADS)

    Lopes Coelho, R.

    2017-05-01

    The present paper reports progress in physics teaching, in the sense that a problem that had been closed to students for being too difficult is gained for the high school curriculum. This problem is the compound Atwood machine with three bodies. Its introduction into high school classes is based on a recent study on the weighing of an Atwood machine.

  15. Galaxy Classification using Machine Learning

    NASA Astrophysics Data System (ADS)

    Fowler, Lucas; Schawinski, Kevin; Brandt, Ben-Elias; Widmer, Nicole

    2017-01-01

    We present our current research into the use of machine learning to classify galaxy imaging data with various convolutional neural network configurations in TensorFlow. We are investigating how five-band Sloan Digital Sky Survey imaging data can be used to train on physical properties such as redshift, star formation rate, mass and morphology. We also investigate the performance of artificially redshifted images in recovering physical properties as image quality degrades.

  16. 48 CFR 908.7103 - Office machines.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Office machines. 908.7103 Section 908.7103 Federal Acquisition Regulations System DEPARTMENT OF ENERGY COMPETITION ACQUISITION PLANNING REQUIRED SOURCES OF SUPPLIES AND SERVICES Acquisition of Special Items 908.7103 Office machines...

  17. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor. Copyright © 2016 the American Physiological Society.
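
    The training strategy (a fast surrogate fitted to targets computed by the expensive physics-based model on synthetic cases) can be miniaturized as follows. The "physics" function, its single feature, and the polynomial surrogate are invented for illustration and bear no relation to the actual CFD or ML models in the paper.

```python
import random

random.seed(1)

# Invented "physics-based" model: a scalar function of a single geometric
# feature (think percent stenosis). NOT the paper's CFD model.
def physics_ffr(stenosis):
    return 1.0 - 0.9 * stenosis ** 3

# Synthetic training database with targets from the physics model, echoing
# the paper's strategy of training on synthetically generated anatomies.
xs = [random.random() for _ in range(200)]
ys = [physics_ffr(x) for x in xs]

def fit_poly(xs, ys, deg):
    """Least-squares polynomial fit: normal equations + Gaussian elimination."""
    n = deg + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                       # elimination w/ partial pivoting
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):               # back substitution
        coef[i] = (b[i] - sum(a[i][j] * coef[j]
                              for j in range(i + 1, n))) / a[i][i]
    return coef

coef = fit_poly(xs, ys, 3)

def surrogate(x):
    return sum(c * x ** i for i, c in enumerate(coef))

# The surrogate evaluates in microseconds, and here it matches the "physics"
# model almost exactly because the target happens to be a cubic.
err = max(abs(surrogate(i / 50) - physics_ffr(i / 50)) for i in range(51))
print(err)
```

    The speed gain in the paper comes from the same structural move: evaluating a cheap learned function instead of re-running the physics solver per patient.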

  18. Small communal laundries in block of flats: Planning, Equipment, Handicap Adaption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedersen, B.

    1980-01-01

    The primary requirement for a communal laundry is that it be adapted to the laundry quantities, laundry needs, and available time of the households. In addition, the equipment must be such that the work involved and the consumption of energy and water are kept as low as possible. It is also important that the laundry facility be regarded as an attractive work environment. The following topics are discussed: Small communal laundries offer many advantages (In the same building, Possibilities for unscheduled laundering, Economically advantageous, Easy to agree on laundering times); Calculation of laundry capacity; Equipment in the laundry (Washing machines, Spin dryer, Tumbler dryer and drying cabinets, Work table, Sink unit, Cold mangle); Information on equipment; Energy conservation measures (Heat exchanger, Outdoor drying); Location of equipment; Work areas which also suit the physically handicapped; Work postures are improved if the machines are placed on a higher level; Layouts; Standards for laundries.

  19. RESTful M2M Gateway for Remote Wireless Monitoring for District Central Heating Networks

    PubMed Central

    Cheng, Bo; Wei, Zesan

    2014-01-01

    In recent years, the increased interest in energy conservation and environmental protection, combined with the development of modern communication and computer technology, has resulted in the replacement of distributed heating by central heating in urban areas. This paper proposes a Representational State Transfer (REST) Machine-to-Machine (M2M) gateway for wireless remote monitoring of a district central heating network. In particular, we focus on the resource-oriented RESTful M2M gateway architecture, present a uniform device-abstraction approach based on Open Service Gateway Initiative (OSGi) technology, implement the resource-address mapping mechanism between RESTful resources and the physical sensor devices, present a buffer queue combined with a polling method to implement data scheduling and Quality of Service (QoS) guarantees, and give the RESTful M2M gateway's open-service Application Programming Interface (API) set. The performance has been measured and analyzed. Finally, the conclusions and future work are presented. PMID:25436650
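
    The resource-address mapping idea (REST-style paths resolved onto physical sensor devices) can be sketched with a plain dictionary dispatcher. The sensor registry and paths below are hypothetical, and the real gateway's OSGi machinery, buffering, and QoS handling are omitted.

```python
import json

# Hypothetical sensor registry; the real gateway maps RESTful resources onto
# physical devices through OSGi services, which this dict dispatcher elides.
sensors = {
    "s1": {"type": "temperature", "value": 61.5},
    "s2": {"type": "flow", "value": 3.2},
}

def handle_get(path):
    """Resolve a REST-style path such as /sensors/s1 to (status, JSON body)."""
    parts = [p for p in path.split("/") if p]
    if parts == ["sensors"]:
        return 200, json.dumps(sorted(sensors))          # collection resource
    if len(parts) == 2 and parts[0] == "sensors" and parts[1] in sensors:
        return 200, json.dumps(sensors[parts[1]])        # item resource
    return 404, json.dumps({"error": "no such resource"})

print(handle_get("/sensors/s1"))
```

    Attaching this dispatcher to an HTTP server would complete the gateway's northbound interface; the mapping itself is the part the abstract emphasizes.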

  20. RESTful M2M gateway for remote wireless monitoring for district central heating networks.

    PubMed

    Cheng, Bo; Wei, Zesan

    2014-11-27

    In recent years, the increased interest in energy conservation and environmental protection, combined with the development of modern communication and computer technology, has resulted in the replacement of distributed heating by central heating in urban areas. This paper proposes a Representational State Transfer (REST) Machine-to-Machine (M2M) gateway for wireless remote monitoring of a district central heating network. In particular, we focus on the resource-oriented RESTful M2M gateway architecture, present a uniform device-abstraction approach based on Open Service Gateway Initiative (OSGi) technology, implement the resource-address mapping mechanism between RESTful resources and the physical sensor devices, present a buffer queue combined with a polling method to implement data scheduling and Quality of Service (QoS) guarantees, and give the RESTful M2M gateway's open-service Application Programming Interface (API) set. The performance has been measured and analyzed. Finally, the conclusions and future work are presented.

  1. First experimental evidence of hydrodynamic tunneling of ultra-relativistic protons in extended solid copper target at the CERN HiRadMat facility

    NASA Astrophysics Data System (ADS)

    Schmidt, R.; Blanco Sancho, J.; Burkart, F.; Grenier, D.; Wollmann, D.; Tahir, N. A.; Shutov, A.; Piriz, A. R.

    2014-08-01

    A novel experiment has been performed at the CERN HiRadMat test facility to study the impact of the 440 GeV proton beam generated by the Super Proton Synchrotron on extended solid copper cylindrical targets. Substantial hydrodynamic tunneling of the protons in the target material has been observed that leads to significant lengthening of the projectile range, which confirms our previous theoretical predictions [N. A. Tahir et al., Phys. Rev. Spec. Top.-Accel. Beams 15, 051003 (2012)]. Simulation results show very good agreement with the experimental measurements. These results have very important implications on the machine protection design for powerful machines like the Large Hadron Collider (LHC), the future High Luminosity LHC, and the proposed huge 80 km circumference Future Circular Collider, which is currently being discussed at CERN. Another very interesting outcome of this work is that one may also study the field of High Energy Density Physics at this test facility.

  2. Power electromagnetic strike machine for engineering-geological surveys

    NASA Astrophysics Data System (ADS)

    Usanov, K. M.; Volgin, A. V.; Chetverikov, E. A.; Kargin, V. A.; Moiseev, A. P.; Ivanova, Z. I.

    2017-10-01

    When implementing dynamic sensing of soils and pulsed non-explosive seismic exploration, the most common and effective method is impact (strike) action, provided by pneumatic, hydraulic, and electrical strike machines of various structures and parameters. The creation of compact portable strike machines that do not require transportation by mechanized means is important. A promising direction in the development of strike machines is the use of a pulsed electromagnetic actuator, characterized by relatively low energy consumption and relatively high specific performance and efficiency, and providing direct conversion of electrical energy into the mechanical work of a strike mass with a linear movement trajectory. The results of these studies allowed the construction, on the basis of linear electromagnetic motors, of portable electromagnetic pulse machines for dynamic sensing of soils and land seismic pulse exploration of small depths.

  3. Fragmentation Energy-Saving Theory of Full Face Rock Tunnel Boring Machine Disc Cutters

    NASA Astrophysics Data System (ADS)

    Zhang, Zhao-Huang; Gong, Guo-Fang; Gao, Qing-Feng; Sun, Fei

    2017-07-01

    Attempts to minimize energy consumption of a tunnel boring machine disc cutter during the process of fragmentation have largely focused on optimizing disc-cutter spacing, as determined by the minimum specific energy required for fragmentation; however, indentation tests showed that rock deforms plastically beneath the cutters. Equations for thrust were developed for both the traditional, popularly employed disc cutter and a new design based on three-dimensional theory. The respective energy consumptions for penetration, rolling, and side-slip fragmentation were obtained. A change in disc-cutter fragmentation angles resulted in a change in the nature of the interaction between the cutter and rock, which lowered the specific energy of fragmentation. During actual field excavations to the same penetration length, the combined energy consumption for fragmentation using the newly designed cutters was 15% lower than that when using the traditional design. This paper presents a theory for energy saving in tunnel boring machines. Investigation results showed that the disc cutters designed using this theory were more durable than traditional designs, and effectively lowered the energy consumption.

  4. SVM-based multi-sensor fusion for free-living physical activity assessment.

    PubMed

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty S

    2011-01-01

    This paper presents a sensor fusion method for assessing physical activity (PA) of human subjects, based on the support vector machines (SVMs). Specifically, acceleration and ventilation measured by a wearable multi-sensor device on 50 test subjects performing 13 types of activities of varying intensities are analyzed, from which the activity types and related energy expenditures are derived. The result shows that the method correctly recognized the 13 activity types 84.7% of the time, which is 26% higher than using a hip accelerometer alone. Also, the method predicted the associated energy expenditure with a root mean square error of 0.43 METs, 43% lower than using a hip accelerometer alone. Furthermore, the fusion method was effective in reducing the subject-to-subject variability (standard deviation of recognition accuracies across subjects) in activity recognition, especially when data from the ventilation sensor was added to the fusion model. These results demonstrate that the multi-sensor fusion technique presented is more effective in assessing activities of varying intensities than the traditional accelerometer-alone based methods.
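
    Why fusing ventilation with acceleration helps can be shown on synthetic data: two activities with identical acceleration statistics but different ventilation, classified by a linear SVM trained with a hinge-loss subgradient (a simplified stand-in for the paper's SVM formulation). All distributions and training settings here are invented.

```python
import random

random.seed(2)

# Two toy activities: the same acceleration statistics for both, but
# different ventilation. A linear SVM (hinge-loss subgradient descent)
# separates them only when both features are fused.
def sample(label):
    acc = random.gauss(1.0, 0.3)                  # identical for both classes
    vent = random.gauss(1.0 if label < 0 else 2.0, 0.2)
    return [acc, vent], label

data = [sample(-1) for _ in range(100)] + [sample(1) for _ in range(100)]

def train_svm(data, feats, epochs=200, lr=0.05, lam=0.01):
    w = [0.0] * len(feats)
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            xs = [x[i] for i in feats]
            margin = y * (sum(wi * xi for wi, xi in zip(w, xs)) + b)
            if margin < 1:                        # hinge-loss subgradient step
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, xs)]
                b += lr * y
            else:                                 # only L2 shrinkage
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def accuracy(model, data, feats):
    w, b = model
    ok = sum(1 for x, y in data
             if y * (sum(wi * x[i] for wi, i in zip(w, feats)) + b) > 0)
    return ok / len(data)

acc_only = accuracy(train_svm(data, [0]), data, [0])   # accelerometer alone
fused = accuracy(train_svm(data, [0, 1]), data, [0, 1])
print(acc_only, fused)
```

    The accelerometer-only model hovers near chance because the classes share that feature's distribution, mirroring the paper's finding that adding the ventilation sensor improves recognition.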

  5. Synthetically chemical-electrical mechanism for controlling large scale reversible deformation of liquid metal objects

    PubMed Central

    Zhang, Jie; Sheng, Lei; Liu, Jing

    2014-01-01

    Reversible deformation of a machine holds enormous promise across many scientific areas ranging from mechanical engineering to applied physics. So far, such capabilities have been hard to achieve with conventional rigid materials, or have depended mainly on elastomeric materials, which, however, offer rather limited performance and require complicated manipulation. Here, we show a basic strategy, fundamentally different from the existing ones, to realize large-scale reversible deformation by controlling the working materials via the synthetically chemical-electrical mechanism (SCHEME). The approach uses an object of liquid metal gallium whose surface area can spread up to five times its original size and back again under low energy consumption. In particular, the alterable surface tension, based on a combination of chemical dissolution and electrochemical oxidation, accounts for the reversible shape transformation, which works much more flexibly than many former deformation principles that convert electrical energy into mechanical movement. A series of very unusual phenomena regarding the reversible configurational shifts are disclosed and the dominant factors clarified. This study opens a generalized way to combine liquid metal, serving as a shape-variable element, with the SCHEME to compose functional soft machines, which implies huge potential for developing future smart robots to fulfill various complicated tasks. PMID:25408295

  6. Recognizing molecular patterns by machine learning: An agnostic structural definition of the hydrogen bond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasparotto, Piero; Ceriotti, Michele, E-mail: michele.ceriotti@epfl.ch

    The concept of chemical bonding can ultimately be seen as a rationalization of the recurring structural patterns observed in molecules and solids. Chemical intuition is nothing but the ability to recognize and predict such patterns, and how they transform into one another. Here, we discuss how to use a computer to identify atomic patterns automatically, so as to provide an algorithmic definition of a bond based solely on structural information. We concentrate in particular on hydrogen bonding – a central concept to our understanding of the physical chemistry of water, biological systems, and many technologically important materials. Since the hydrogen bond is a somewhat fuzzy entity that covers a broad range of energies and distances, many different criteria have been proposed and used over the years, based either on sophisticated electronic structure calculations followed by an energy decomposition analysis, or on somewhat arbitrary choices of a range of structural parameters that is deemed to correspond to a hydrogen-bonded configuration. We introduce here a definition that is univocal, unbiased, and adaptive, based on our machine-learning analysis of an atomistic simulation. The strategy we propose could be easily adapted to similar scenarios, where one has to recognize or classify structural patterns in a material or chemical compound.
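
    The spirit of a data-driven bond definition can be conveyed by a toy example: a 1D two-cluster k-means on simulated donor-acceptor distances, which yields a boundary between "bonded" and "non-bonded" populations without a hand-picked cutoff. The distance distributions are invented, and the paper's actual analysis is far richer than this sketch.

```python
import random

random.seed(3)

# Simulated donor-acceptor distances (angstrom-like units): a tight
# "hydrogen-bonded" peak plus a broad non-bonded background. Both
# distributions are invented for illustration.
dists = ([random.gauss(2.8, 0.1) for _ in range(300)] +   # bonded peak
         [random.gauss(4.5, 0.4) for _ in range(300)])    # non-bonded

c = [min(dists), max(dists)]                # initial centroids
for _ in range(25):                          # 1D k-means, k = 2
    groups = ([d for d in dists if abs(d - c[0]) <= abs(d - c[1])],
              [d for d in dists if abs(d - c[0]) > abs(d - c[1])])
    c = [sum(g) / len(g) for g in groups]

cutoff = (c[0] + c[1]) / 2                   # data-driven boundary
print(round(c[0], 2), round(c[1], 2), round(cutoff, 2))
```

    The boundary emerges from the data rather than from a chosen geometric criterion, which is the conceptual move the abstract describes, generalized there to richer structural fingerprints.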

  7. Synthetically chemical-electrical mechanism for controlling large scale reversible deformation of liquid metal objects.

    PubMed

    Zhang, Jie; Sheng, Lei; Liu, Jing

    2014-11-19

    Reversible deformation of a machine holds enormous promise across many scientific areas ranging from mechanical engineering to applied physics. So far, such capabilities have been hard to achieve with conventional rigid materials, or have depended mainly on elastomeric materials, which, however, offer rather limited performance and require complicated manipulation. Here, we show a basic strategy, fundamentally different from the existing ones, to realize large-scale reversible deformation by controlling the working materials via the synthetically chemical-electrical mechanism (SCHEME). The approach uses an object of liquid metal gallium whose surface area can spread up to five times its original size and back again under low energy consumption. In particular, the alterable surface tension, based on a combination of chemical dissolution and electrochemical oxidation, accounts for the reversible shape transformation, which works much more flexibly than many former deformation principles that convert electrical energy into mechanical movement. A series of very unusual phenomena regarding the reversible configurational shifts are disclosed and the dominant factors clarified. This study opens a generalized way to combine liquid metal, serving as a shape-variable element, with the SCHEME to compose functional soft machines, which implies huge potential for developing future smart robots to fulfill various complicated tasks.

  8. Recognizing molecular patterns by machine learning: an agnostic structural definition of the hydrogen bond.

    PubMed

    Gasparotto, Piero; Ceriotti, Michele

    2014-11-07

    The concept of chemical bonding can ultimately be seen as a rationalization of the recurring structural patterns observed in molecules and solids. Chemical intuition is nothing but the ability to recognize and predict such patterns, and how they transform into one another. Here, we discuss how to use a computer to identify atomic patterns automatically, so as to provide an algorithmic definition of a bond based solely on structural information. We concentrate in particular on hydrogen bonding--a central concept to our understanding of the physical chemistry of water, biological systems, and many technologically important materials. Since the hydrogen bond is a somewhat fuzzy entity that covers a broad range of energies and distances, many different criteria have been proposed and used over the years, based either on sophisticated electronic structure calculations followed by an energy decomposition analysis, or on somewhat arbitrary choices of a range of structural parameters that is deemed to correspond to a hydrogen-bonded configuration. We introduce here a definition that is univocal, unbiased, and adaptive, based on our machine-learning analysis of an atomistic simulation. The strategy we propose could be easily adapted to similar scenarios, where one has to recognize or classify structural patterns in a material or chemical compound.

  9. Synthetically chemical-electrical mechanism for controlling large scale reversible deformation of liquid metal objects

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Sheng, Lei; Liu, Jing

    2014-11-01

    Reversible deformation of a machine holds enormous promise across many scientific areas ranging from mechanical engineering to applied physics. So far, such capabilities have been hard to achieve with conventional rigid materials, or have depended mainly on elastomeric materials, which, however, offer rather limited performance and require complicated manipulation. Here, we show a basic strategy, fundamentally different from the existing ones, to realize large-scale reversible deformation by controlling the working materials via the synthetically chemical-electrical mechanism (SCHEME). The approach uses an object of liquid metal gallium whose surface area can spread up to five times its original size and back again under low energy consumption. In particular, the alterable surface tension, based on a combination of chemical dissolution and electrochemical oxidation, accounts for the reversible shape transformation, which works much more flexibly than many former deformation principles that convert electrical energy into mechanical movement. A series of very unusual phenomena regarding the reversible configurational shifts are disclosed and the dominant factors clarified. This study opens a generalized way to combine liquid metal, serving as a shape-variable element, with the SCHEME to compose functional soft machines, which implies huge potential for developing future smart robots to fulfill various complicated tasks.

  10. Addressing uncertainty in atomistic machine learning.

    PubMed

    Peterson, Andrew A; Christensen, Rune; Khorshidi, Alireza

    2017-05-10

    Machine-learning regression has been demonstrated to precisely emulate the potential energy and forces that are output from more expensive electronic-structure calculations. However, to predict new regions of the potential energy surface, an assessment must be made of the credibility of the predictions. In this perspective, we address the types of errors that might arise in atomistic machine learning, the unique aspects of atomistic simulations that make machine-learning challenging, and highlight how uncertainty analysis can be used to assess the validity of machine-learning predictions. We suggest this will allow researchers to more fully use machine learning for the routine acceleration of large, high-accuracy, or extended-time simulations. In our demonstrations, we use a bootstrap ensemble of neural network-based calculators, and show that the width of the ensemble can provide an estimate of the uncertainty when the width is comparable to that in the training data. Intriguingly, we also show that the uncertainty can be localized to specific atoms in the simulation, which may offer hints for the generation of training data to strategically improve the machine-learned representation.
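
    The bootstrap-ensemble idea can be demonstrated in miniature: several straight-line fits trained on resampled noisy data, with the spread of their predictions serving as the uncertainty estimate, which widens away from the training region. The paper uses ensembles of neural-network calculators; the linear model here is only to keep the sketch small.

```python
import random

random.seed(4)

# Noisy 1D training data on [0, 0.9]; the true relation is y = 2x.
xs = [i / 10 for i in range(10)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]
pts = list(zip(xs, ys))

def fit_line(sample):
    """Ordinary least-squares straight line through the sample."""
    n = len(sample)
    sx = sum(x for x, _ in sample); sy = sum(y for _, y in sample)
    sxx = sum(x * x for x, _ in sample); sxy = sum(x * y for x, y in sample)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# Bootstrap ensemble: each member is fit on a resample (with replacement).
ensemble = [fit_line([random.choice(pts) for _ in pts]) for _ in range(30)]

def spread(x):
    """Standard deviation of ensemble predictions at x."""
    preds = [m * x + b for m, b in ensemble]
    mean = sum(preds) / len(preds)
    return (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5

print(spread(0.5), spread(5.0))  # inside vs. far outside the training range
```

    The growth of the spread under extrapolation is the behavior the perspective exploits: when the ensemble disagrees, the prediction is flagged as untrustworthy and the region becomes a candidate for new training data.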

  11. Nuclear Fusion prize laudation Nuclear Fusion prize laudation

    NASA Astrophysics Data System (ADS)

    Burkart, W.

    2011-01-01

    Clean energy in abundance will be of critical importance to the pursuit of world peace and development. As part of the IAEA's activities to facilitate the dissemination of fusion related science and technology, the journal Nuclear Fusion is intended to contribute to the realization of such energy from fusion. In 2010, we celebrated the 50th anniversary of the IAEA journal. The excellence of research published in the journal is attested to by its high citation index. The IAEA recognizes excellence by means of an annual prize awarded to the authors of papers judged to have made the greatest impact. On the occasion of the 2010 IAEA Fusion Energy Conference in Daejeon, Republic of Korea at the welcome dinner hosted by the city of Daejeon, we celebrated the achievements of the 2009 and 2010 Nuclear Fusion prize winners. Steve Sabbagh, from the Department of Applied Physics and Applied Mathematics, Columbia University, New York is the winner of the 2009 award for his paper: 'Resistive wall stabilized operation in rotating high beta NSTX plasmas' [1]. This is a landmark paper which reports record parameters of beta in a large spherical torus plasma and presents a thorough investigation of the physics of resistive wall mode (RWM) instability. The paper makes a significant contribution to the critical topic of RWM stabilization. John Rice, from the Plasma Science and Fusion Center, MIT, Cambridge is the winner of the 2010 award for his paper: 'Inter-machine comparison of intrinsic toroidal rotation in tokamaks' [2]. The 2010 award is for a seminal paper that analyzes results across a range of machines in order to develop a universal scaling that can be used to predict intrinsic rotation. This paper has already triggered a wealth of experimental and theoretical work. I congratulate both authors and their colleagues on these exceptional papers. W. 
    Burkart, Deputy Director General, Department of Nuclear Sciences and Applications, International Atomic Energy Agency, Vienna, Austria. References: [1] Sabbagh S. et al 2006 Nucl. Fusion 46 635-44; [2] Rice J.E. et al 2007 Nucl. Fusion 47 1618-24.

  12. Investigation of Combined Motor/Magnetic Bearings for Flywheel Energy Storage Systems

    NASA Technical Reports Server (NTRS)

    Hofmann, Heath

    2003-01-01

    Dr. Hofmann's work in the summer of 2003 consisted of two separate projects. In the first part of the summer, Dr. Hofmann prepared and collected information regarding rotor losses in synchronous machines; in particular, machines with low rotor losses operating in vacuum and supported by magnetic bearings, such as the motor/generator for flywheel energy storage systems. This work culminated in a presentation at NASA Glenn Research Center on this topic. In the second part, Dr. Hofmann investigated an approach to flywheel energy storage where the phases of the flywheel motor/generator are connected in parallel with the phases of an induction machine driving a mechanical actuator. With this approach, additional power electronics for driving the flywheel unit are not required. Simulations of the connection of a flywheel energy storage system to a model of an electromechanical actuator testbed at NASA Glenn were performed that validated the proposed approach. A proof-of-concept experiment using the D1 flywheel unit at NASA Glenn and a Sundstrand induction machine connected to a dynamometer was successfully conducted.

  13. On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products.

    PubMed

    Varshney, Kush R; Alemzadeh, Homa

    2017-09-01

    Machine learning algorithms increasingly influence our decisions and interact with us in all parts of our daily lives. Therefore, just as we consider the safety of power plants, highways, and a variety of other engineered socio-technical systems, we must also take into account the safety of systems involving machine learning. Heretofore, the definition of safety has not been formalized in a machine learning context. In this article, we do so by defining machine learning safety in terms of risk, epistemic uncertainty, and the harm incurred by unwanted outcomes. We then use this definition to examine safety in all sorts of applications in cyber-physical systems, decision sciences, and data products. We find that the foundational principle of modern statistical machine learning, empirical risk minimization, is not always a sufficient objective. We discuss how four different categories of strategies for achieving safety in engineering, including inherently safe design, safety reserves, safe fail, and procedural safeguards can be mapped to a machine learning context. We then discuss example techniques that can be adopted in each category, such as considering interpretability and causality of predictive models, objective functions beyond expected prediction accuracy, human involvement for labeling difficult or rare examples, and user experience design of software and open data.

  14. Microcompartments and Protein Machines in Prokaryotes

    PubMed Central

    Saier, Milton H.

    2013-01-01

    The prokaryotic cell was once thought of as a “bag of enzymes” with little or no intracellular compartmentalization. In this view, most reactions essential for life occurred as a consequence of random molecular collisions involving substrates, cofactors and cytoplasmic enzymes. Our current conception of a prokaryote is far from this view. We now consider a bacterium or an archaeon as a highly structured, non-random collection of functional membrane-embedded and proteinaceous molecular machines, each of which serves a specialized function. In this article we shall present an overview of such microcompartments, including (i) the bacterial cytoskeleton and the apparatuses allowing DNA segregation during cell division, (ii) energy transduction apparatuses involving light-driven proton pumping and ion gradient-driven ATP synthesis, (iii) prokaryotic motility and taxis machines that mediate cell movements in response to gradients of chemicals and physical forces, (iv) machines of protein folding, secretion and degradation, (v) metabolasomes carrying out specific chemical reactions, (vi) 24 hour clocks allowing bacteria to coordinate their metabolic activities with the daily solar cycle and (vii) proteinaceous membrane compartmentalized structures such as sulfur granules and gas vacuoles. Membrane-bounded prokaryotic organelles were considered in a recent JMMB written symposium concerned with membranous compartmentalization in bacteria [Saier and Bogdanov, 2013]. By contrast, in this symposium, we focus on proteinaceous microcompartments. These two symposia, taken together, provide the interested reader with an objective view of the remarkable complexity of what was once thought of as a simple non-compartmentalized cell. PMID:23920489

  15. TEACHING PHYSICS: Atwood's machine: experiments in an accelerating frame

    NASA Astrophysics Data System (ADS)

    Teck Chee, Chia; Hong, Chia Yee

    1999-03-01

    Experiments in an accelerating frame are often difficult to perform, but simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine.

  16. Machine learning for the structure-energy-property landscapes of molecular crystals.

    PubMed

    Musil, Félix; De, Sandip; Yang, Jack; Campbell, Joshua E; Day, Graeme M; Ceriotti, Michele

    2018-02-07

    Molecular crystals play an important role in several fields of science and technology. They frequently crystallize in different polymorphs with substantially different physical properties. To help guide the synthesis of candidate materials, atomic-scale modelling can be used to enumerate the stable polymorphs and to predict their properties, as well as to propose heuristic rules to rationalize the correlations between crystal structure and materials properties. Here we show how a recently developed machine-learning (ML) framework can be used to achieve inexpensive and accurate predictions of the stability and properties of polymorphs, and a data-driven classification that is less biased and more flexible than typical heuristic rules. We discuss, as examples, the lattice energy and property landscapes of pentacene and two azapentacene isomers that are of interest as organic semiconductor materials. We show that we can estimate force field or DFT lattice energies with sub-kJ mol⁻¹ accuracy, using only a few hundred reference configurations, and reduce by a factor of ten the computational effort needed to predict charge mobility in the crystal structures. The automatic structural classification of the polymorphs reveals a more detailed picture of molecular packing than that provided by conventional heuristics, and helps disentangle the role of hydrogen bonded and π-stacking interactions in determining molecular self-assembly. This observation demonstrates that ML is not just a black-box scheme to interpolate between reference calculations, but can also be used as a tool to gain intuitive insights into structure-property relations in molecular crystal engineering.

  17. Gradient boosting machine for modeling the energy consumption of commercial buildings

    DOE PAGES

    Touzani, Samir; Granderson, Jessica; Fernandes, Samuel

    2017-11-26

    Accurate savings estimations are important to promote energy efficiency projects and demonstrate their cost-effectiveness. The increasing presence of advanced metering infrastructure (AMI) in commercial buildings has resulted in a rising availability of high frequency interval data. These data can be used for a variety of energy efficiency applications such as demand response, fault detection and diagnosis, and heating, ventilation, and air conditioning (HVAC) optimization. This large amount of data has also opened the door to the use of advanced statistical learning models, which hold promise for providing accurate building baseline energy consumption predictions, and thus accurate savings estimations. The gradient boosting machine is a powerful machine learning algorithm that is gaining considerable traction in a wide range of data-driven applications, such as ecology, computer vision, and biology. In the present work an energy consumption baseline modeling method based on a gradient boosting machine was proposed. To assess the performance of this method, a recently published testing procedure was used on a large dataset of 410 commercial buildings. The model training periods were varied and several prediction accuracy metrics were used to evaluate the model's performance. The results show that using the gradient boosting machine model improved the R-squared prediction accuracy and the CV(RMSE) in more than 80 percent of the cases, when compared to an industry best practice model that is based on piecewise linear regression, and to a random forest algorithm.
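    The two accuracy metrics named in this abstract, R-squared and CV(RMSE), are straightforward to compute. The sketch below shows one common formulation (the study's exact definitions may differ, e.g. in degrees-of-freedom corrections); the load and prediction values are made up for illustration.

```python
import math

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def cv_rmse(y_true, y_pred):
    """Coefficient of variation of the RMSE, normalised by the mean load."""
    mean = sum(y_true) / len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse) / mean

# Hypothetical hourly loads (kWh) and baseline-model predictions
actual = [1.0, 2.0, 3.0, 4.0]
predicted = [1.1, 1.9, 3.2, 3.8]
print(r_squared(actual, predicted))   # 0.98
print(cv_rmse(actual, predicted))     # ~0.063
```

    A lower CV(RMSE) and a higher R-squared both indicate a better baseline fit, which is why the abstract reports improvements in both.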

  18. Gradient boosting machine for modeling the energy consumption of commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Touzani, Samir; Granderson, Jessica; Fernandes, Samuel

    Accurate savings estimations are important to promote energy efficiency projects and demonstrate their cost-effectiveness. The increasing presence of advanced metering infrastructure (AMI) in commercial buildings has resulted in a rising availability of high frequency interval data. These data can be used for a variety of energy efficiency applications such as demand response, fault detection and diagnosis, and heating, ventilation, and air conditioning (HVAC) optimization. This large amount of data has also opened the door to the use of advanced statistical learning models, which hold promise for providing accurate building baseline energy consumption predictions, and thus accurate savings estimations. The gradient boosting machine is a powerful machine learning algorithm that is gaining considerable traction in a wide range of data-driven applications, such as ecology, computer vision, and biology. In the present work an energy consumption baseline modeling method based on a gradient boosting machine was proposed. To assess the performance of this method, a recently published testing procedure was used on a large dataset of 410 commercial buildings. The model training periods were varied and several prediction accuracy metrics were used to evaluate the model's performance. The results show that using the gradient boosting machine model improved the R-squared prediction accuracy and the CV(RMSE) in more than 80 percent of the cases, when compared to an industry best practice model that is based on piecewise linear regression, and to a random forest algorithm.

  19. No time machine construction in open 2+1 gravity with timelike total energy-momentum

    NASA Astrophysics Data System (ADS)

    Tiglio, Manuel H.

    1998-09-01

    It is shown that in (2+1)-dimensional gravity an open spacetime with timelike sources and timelike total energy-momentum cannot have a stable compactly generated Cauchy horizon. This constitutes a proof of a version of Kabat's conjecture and shows, in particular, not only that a Gott time machine cannot be formed from processes such as the decay of a single cosmic string, as shown by Carroll et al., but that, in a precise sense, a time machine cannot be constructed at all.

  20. The phaco machine: analysing new technology.

    PubMed

    Fishkind, William J

    2013-01-01

    The phaco machine is frequently overlooked as the crucial surgical instrument it is. Understanding how to set its parameters begins with understanding fundamental concepts of machine function. This study analyses the critical concepts of partial occlusion phaco, occlusion phaco and pump technology. In addition, phaco energy categories as well as variations of phaco energy production are explored. Contemporary power modulations and pump controls allow for the enhancement of partial occlusion phacoemulsification. These significant changes in anterior chamber dynamics produce a balanced environment for phaco, fewer complications, and improved patient outcomes.

  1. Social energy: mining energy from the society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jun Jason; Gao, David Wenzhong; Zhang, Yingchen

    The inherent nature of energy, i.e., physicality, sociality and informatization, implies the inevitable and intensive interaction between energy systems and social systems. From this perspective, we define 'social energy' as a complex socio-technical system of energy systems, social systems and the derived artificial virtual systems which characterize the intense intersystem and intra-system interactions. The recent advancement in intelligent technology, including artificial intelligence and machine learning technologies, sensing and communication in Internet of Things technologies, and massive high performance computing and extreme-scale data analytics technologies, enables the possibility of substantial advancement in socio-technical system optimization, scheduling, control and management. In this paper, we provide a discussion on the nature of energy, and then propose the concept and intention of social energy systems for electrical power. A general methodology of establishing and investigating social energy is proposed, which is based on the ACP approach, i.e., 'artificial systems' (A), 'computational experiments' (C) and 'parallel execution' (P), and parallel system methodology. A case study on the University of Denver (DU) campus grid is provided and studied to demonstrate the social energy concept. In the concluding remarks, we discuss the technical pathway, in both the social and natural sciences, to social energy, and our vision on its future.

  2. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods.

    PubMed

    Lise, Stefano; Archambeau, Cedric; Pontil, Massimiliano; Jones, David T

    2009-10-30

    Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino acids are systematically mutated to alanine and changes in free energy of binding (ΔΔG) measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and it compares favourably to other available methods. In particular we find the best performance using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision and a recall of 56% and 65%, respectively. We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been applied separately to biomolecular problems, the results of our investigation indicate that there are substantial benefits to be gained by their integration.
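    The precision and recall figures quoted above follow the standard definitions over the ΔΔG ≥ 2 kcal/mol labelling. A minimal sketch of that evaluation (the ΔΔG values and model predictions below are hypothetical, not from the study):

```python
def precision_recall(truth, predicted):
    """Standard precision/recall over boolean hot-spot labels."""
    tp = sum(1 for t, p in zip(truth, predicted) if t and p)
    fp = sum(1 for t, p in zip(truth, predicted) if not t and p)
    fn = sum(1 for t, p in zip(truth, predicted) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical measured ΔΔG per interface residue (kcal/mol)
ddg = [0.3, 2.5, 4.1, 1.2, 2.0, 0.8]
truth = [d >= 2.0 for d in ddg]                      # experimental hot-spot labels
predicted = [False, True, True, True, False, False]  # hypothetical model output

p, r = precision_recall(truth, predicted)
print(p, r)  # 2/3 precision, 2/3 recall on this toy example
```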

  3. Posture and activity recognition and energy expenditure prediction in a wearable platform.

    PubMed

    Sazonova, Nadezhda; Browning, Raymond; Melanson, Edward; Sazonov, Edward

    2014-01-01

    The use of wearable sensors coupled with the processing power of mobile phones may be an attractive way to provide real-time feedback about physical activity and energy expenditure (EE). Here we describe the use of a shoe-based wearable sensor system (SmartShoe) with a mobile phone for real-time prediction and display of time spent in various postures/physical activities and the resulting EE. To deal with the processing power and memory limitations of the phone, we introduce new algorithms that require substantially less computational power. The algorithms were validated using data from 15 subjects who performed up to 15 different activities of daily living during a four-hour stay in a room calorimeter. Use of Multinomial Logistic Discrimination (MLD) for posture and activity classification resulted in an accuracy comparable to that of Support Vector Machines (SVM) (90% vs. 95%-98%) while reducing the running time by a factor of 190 and the memory requirement by a factor of 104. Per-minute EE estimation using activity-specific models resulted in an accurate EE prediction (RMSE of 0.53 METs vs. RMSE of 0.69 METs using previously reported SVM-branched models). These results demonstrate the successful implementation of a real-time physical activity monitoring and EE prediction system on a wearable platform.
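    One reason multinomial logistic discrimination is so cheap on a phone is that inference reduces to a linear map followed by a softmax. A minimal sketch (the class names, weights, and sensor features below are invented for illustration, not taken from the study):

```python
import math

def softmax(z):
    m = max(z)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def mld_predict(weights, bias, features):
    """One score per posture/activity class: w·x + b, then softmax."""
    scores = [sum(w * x for w, x in zip(wrow, features)) + b
              for wrow, b in zip(weights, bias)]
    probs = softmax(scores)
    return max(range(len(probs)), key=probs.__getitem__), probs

# Hypothetical 3-class model (sit / stand / walk) over 2 sensor features
W = [[1.0, -1.0], [0.0, 0.5], [-1.0, 2.0]]
b = [0.0, 0.1, -0.2]
cls, probs = mld_predict(W, b, [0.2, 1.5])
print(cls)  # index of the most probable class (2 for these numbers)
```

    Compared with kernel SVMs, which must evaluate a kernel against stored support vectors, this inference cost is a single small matrix-vector product, consistent with the large speed and memory savings reported above.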

  4. Investigation of laser ablation of CVD diamond film

    NASA Astrophysics Data System (ADS)

    Chao, Choung-Lii; Chou, W. C.; Ma, Kung-Jen; Chen, Ta-Tung; Liu, Y. M.; Kuo, Y. S.; Chen, Ying-Tung

    2005-04-01

    Diamond, having many advanced physical and mechanical properties, is one of the most important materials used in the mechanical, telecommunication and optoelectronic industries. However, its high hardness and extreme brittleness make diamond extremely difficult to machine by conventional mechanical grinding and polishing. In the present study, the microwave CVD method was employed to produce epitaxial diamond films on silicon single crystal. Laser ablation experiments were then conducted on the obtained diamond films. The underlying material removal mechanisms, the microstructure of the machined surface and the related machining conditions were also investigated. It was found that during laser ablation, the peaks of the diamond grains were removed mainly by photo-thermal effects introduced by the excimer laser. The diamond structure of the protruding grains was transformed by the laser photonic energy into graphite, amorphous diamond and amorphous carbon, which were removed by the subsequent laser shots. As the protruding peaks were gradually removed from the surface, the removal rate decreased. Surface roughness (Ra) was improved from above 1 μm to around 0.1 μm within a few minutes in this study. However, a scanning technique would be required if a large area were to be polished by laser and, as a consequence, it could be very time consuming.

  5. Stepping outside the neighborhood of T at LHC

    NASA Astrophysics Data System (ADS)

    Wiedemann, Urs Achim

    2009-11-01

    “As you are well aware, many in the RHIC community are interested in the LHC heavy-ion program, but have several questions: What can we learn at the LHC that is qualitatively new? Are collisions at LHC similar to RHIC ones, just with a somewhat hotter/denser initial state? If not, why not? These questions are asked in good faith, and this talk is an opportunity to answer them directly to much of the RHIC community.” With these words, the organizers of Quark Matter 2009 in Knoxville invited me to discuss the physics opportunities for heavy ion collisions at the LHC without recalling the standard arguments, which are mainly based on the extended kinematic reach of the machine. In response, I emphasize here that lattice QCD indicates characteristic qualitative differences between thermal physics in the neighborhood of the critical temperature and thermal physics at significantly higher temperatures (T ≈ 400-500 MeV), for which the relevant energy densities will be attainable only at the LHC.

  6. Recovering Galaxy Properties Using Gaussian Process SED Fitting

    NASA Astrophysics Data System (ADS)

    Iyer, Kartheik; Awan, Humna

    2018-01-01

    Information about physical quantities like the stellar mass, star formation rates, and ages of distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys such as SDSS, CANDELS, and LSST. However, noise in the photometric observations is often a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise, or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian Process regression trained on a sample of SEDs corresponding to galaxies from a Semi-Analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of different methods, including simple linear regression, Random Forests, and k-Nearest Neighbours. We find that the Gaussian Process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method is also robust in cases where the distribution of the training data is not identical to that of the target data, which can be extremely useful when generalized to more subtle galaxy properties.
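    The ability of a Gaussian Process to return both a prediction and its uncertainty comes directly from the GP posterior mean and variance. A minimal, self-contained 1-D sketch with an RBF kernel and fixed hyperparameters (real SED fitting is high-dimensional; the data here are toy values):

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel in one dimension."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small systems."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    """Posterior mean and variance of a GP with RBF kernel at x_star."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    k_star = [rbf(x, x_star) for x in xs]
    alpha = solve(K, ys)             # K^{-1} y
    v = solve(K, k_star)             # K^{-1} k_*
    mean = sum(k * a for k, a in zip(k_star, alpha))
    var = rbf(x_star, x_star) - sum(k * w for k, w in zip(k_star, v))
    return mean, var

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
mu, var = gp_predict(xs, ys, 1.0)
print(mu, var)  # mean ≈ 1.0 at a training point, variance ≈ 0
```

    Far from the training data the predicted variance returns to the prior variance, which is exactly the behaviour that makes GP uncertainties useful when the target data are not distributed like the training data.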

  7. The upgraded Large Plasma Device, a machine for studying frontier basic plasma physics.

    PubMed

    Gekelman, W; Pribyl, P; Lucky, Z; Drandell, M; Leneman, D; Maggs, J; Vincena, S; Van Compernolle, B; Tripathi, S K P; Morales, G; Carter, T A; Wang, Y; DeHaas, T

    2016-02-01

    In 1991 a manuscript describing an instrument for studying magnetized plasmas was published in this journal. The Large Plasma Device (LAPD) was upgraded in 2001 and has become a national user facility for the study of basic plasma physics. The upgrade, together with the diagnostics introduced since then, has significantly changed the capabilities of the device. All references to the machine still quote the original RSI paper, which is no longer appropriate. In this work, the properties of the updated LAPD are presented: the strategy of the machine construction, the available diagnostics, the parameters available for experiments, as well as illustrations of several experiments.

  8. Machine learning with quantum relative entropy

    NASA Astrophysics Data System (ADS)

    Tsuda, Koji

    2009-12-01

    Density matrices are a central tool in quantum physics, but they are also used in machine learning. A positive definite matrix called the kernel matrix is used to represent the similarities between examples. Positive definiteness assures that the examples are embedded in a Euclidean space. When a positive definite matrix is learned from data, one has to design an update rule that maintains the positive definiteness. Our update rule, called the matrix exponentiated gradient update, is motivated by the quantum relative entropy. Notably, the relative entropy is an instance of Bregman divergences, which are asymmetric distance measures specifying theoretical properties of machine learning algorithms. Using the calculus commonly used in quantum physics, we prove an upper bound on the generalization error of online learning.
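    The matrix exponentiated gradient update replaces the density matrix W by exp(log W − η∇L), renormalised to unit trace, which preserves positive definiteness by construction. A sketch for symmetric 2×2 matrices, where the matrix exponential and logarithm can be computed from an explicit eigendecomposition (the loss gradient below is a made-up example, not from the paper):

```python
import math

def eig_sym2(M):
    """Eigenvalues and orthonormal eigenvectors of symmetric [[a, b], [b, c]]."""
    (a, b), (_, c) = M
    mean, d = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    l1, l2 = mean + d, mean - d
    if abs(b) > 1e-12:
        v1 = (b, l1 - a)
    else:
        v1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(*v1)
    v1 = (v1[0] / n, v1[1] / n)
    v2 = (-v1[1], v1[0])             # orthogonal complement
    return (l1, l2), (v1, v2)

def mat_func(M, f):
    """Apply a scalar function f to a symmetric 2x2 matrix spectrally."""
    (l1, l2), (v1, v2) = eig_sym2(M)
    out = [[0.0, 0.0], [0.0, 0.0]]
    for lam, v in ((l1, v1), (l2, v2)):
        flam = f(lam)
        for i in range(2):
            for j in range(2):
                out[i][j] += flam * v[i] * v[j]
    return out

def meg_update(W, grad, eta=0.1):
    """Matrix exponentiated gradient step: exp(log W - eta*grad) / trace."""
    logW = mat_func(W, math.log)
    A = [[logW[i][j] - eta * grad[i][j] for j in range(2)] for i in range(2)]
    E = mat_func(A, math.exp)
    tr = E[0][0] + E[1][1]
    return [[E[i][j] / tr for j in range(2)] for i in range(2)]

W = [[0.5, 0.0], [0.0, 0.5]]         # maximally mixed starting density matrix
G = [[1.0, 0.2], [0.2, -1.0]]        # hypothetical loss gradient
W_new = meg_update(W, G)
print(W_new[0][0] + W_new[1][1])     # trace stays 1
```

    Because the exponential of a symmetric matrix always has strictly positive eigenvalues, every iterate remains a valid density matrix, which is the point of the update rule.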

  9. Classification without labels: learning from mixed samples in high energy physics

    NASA Astrophysics Data System (ADS)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-01

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.
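    The key CWoLa result, that the optimal mixed-sample classifier is monotonically related to the optimal signal/background classifier, can be checked numerically in a toy model with Gaussian signal and background densities and two mixtures with different signal fractions (the fractions and Gaussian parameters below are arbitrary choices):

```python
import math

def gauss(x, mu, sigma=1.0):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_lr(x, f1=0.7, f2=0.3):
    """Likelihood ratio between two signal/background mixtures."""
    s, b = gauss(x, 1.0), gauss(x, -1.0)     # toy signal and background pdfs
    return (f1 * s + (1 - f1) * b) / (f2 * s + (1 - f2) * b)

# The mixture likelihood ratio is a strictly increasing function of the pure
# signal/background ratio s/b = exp(2x) whenever f1 > f2, so a classifier
# trained on the mixtures induces the same ordering (same ROC curve) as the
# fully supervised classifier.
xs = [i / 10.0 for i in range(-50, 51)]
lrs = [mixture_lr(x) for x in xs]
print(all(a < b for a, b in zip(lrs, lrs[1:])))  # True
```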

  10. Classification without labels: learning from mixed samples in high energy physics

    DOE PAGES

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-25

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.

  11. Classification without labels: learning from mixed samples in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.

  12. TEACHING PHYSICS: A computer-based revitalization of Atwood's machine

    NASA Astrophysics Data System (ADS)

    Trumper, Ricardo; Gelbman, Moshe

    2000-09-01

    Atwood's machine is used in a microcomputer-based experiment to demonstrate Newton's second law with considerable precision. The friction force on the masses and the moment of inertia of the pulley can also be estimated.
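    The quantities estimated in such an experiment follow from Newton's second law applied to Atwood's machine with a pulley of moment of inertia I and radius r: a = (m₁ − m₂)g / (m₁ + m₂ + I/r²). A quick numerical sketch (the masses and pulley values are illustrative):

```python
def atwood_acceleration(m1, m2, g=9.81, I=0.0, r=1.0):
    """Acceleration of an Atwood machine, including pulley inertia I (kg·m²)."""
    return (m1 - m2) * g / (m1 + m2 + I / r**2)

# Ideal massless, frictionless pulley:
print(atwood_acceleration(0.6, 0.4))                 # ~1.962 m/s^2
# A real pulley's moment of inertia reduces the acceleration:
print(atwood_acceleration(0.6, 0.4, I=1e-4, r=0.02))  # ~1.570 m/s^2
```

    Measuring a for several mass pairs and fitting the relation above is one way the pulley's moment of inertia can be estimated, as the abstract describes.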

  13. Progress Toward Fabrication of Machined Metal Shells for the First Double-Shell Implosions at the National Ignition Facility

    DOE PAGES

    Cardenas, Tana; Schmidt, Derek W.; Loomis, Eric N.; ...

    2018-01-25

    The double-shell platform fielded at the National Ignition Facility requires developments in new machining techniques and robotic assembly stations to meet the experimental specifications. Current double-shell target designs use a dense high-Z inner shell, a foam cushion, and a low-Z outer shell. The design requires that the inner shell be gas filled using a fill tube. This tube impacts the entire machining and assembly design. Other intermediate physics designs have to be fielded to answer physics questions and advance the technology to be able to fabricate the full point design in the near future. One of these intermediate designs ismore » a mid-Z imaging design. The methods of designing, fabricating, and characterizing each of the major components of an imaging double shell are discussed with an emphasis on the fabrication of the machined outer metal shell.« less

  14. Progress Toward Fabrication of Machined Metal Shells for the First Double-Shell Implosions at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardenas, Tana; Schmidt, Derek W.; Loomis, Eric N.

    The double-shell platform fielded at the National Ignition Facility requires developments in new machining techniques and robotic assembly stations to meet the experimental specifications. Current double-shell target designs use a dense high-Z inner shell, a foam cushion, and a low-Z outer shell. The design requires that the inner shell be gas filled using a fill tube. This tube impacts the entire machining and assembly design. Other intermediate physics designs have to be fielded to answer physics questions and advance the technology to be able to fabricate the full point design in the near future. One of these intermediate designs ismore » a mid-Z imaging design. The methods of designing, fabricating, and characterizing each of the major components of an imaging double shell are discussed with an emphasis on the fabrication of the machined outer metal shell.« less

  15. Machine learning of network metrics in ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.

  16. Residential energy use and potential conservation through reduced laundering temperatures in the United States and Canada.

    PubMed

    Sabaliunas, Darius; Pittinger, Charles; Kessel, Cristy; Masscheleyn, Patrick

    2006-04-01

    A residential energy-use model was developed to estimate energy budgets for household laundering practices in the United States and Canada. The thermal energy for heating water and the mechanical energy for agitating clothes in conventional washing machines were calculated for representative households in the United States and Canada. Comparisons of energy consumption among hot-, warm-, and cold-water wash and rinse cycles, horizontal- and vertical-axis washing machines, and gas and electric water heaters were made on a per-wash-load basis. Demographic data on current laundering practices in the United States and Canada were then incorporated to estimate household and national energy consumption on an annual basis for each country. On average, the thermal energy required to heat water using either gas or electric energy constitutes 80% to 85% of the total energy consumed per wash in conventional, vertical-axis (top-loading) washing machines. The balance of the energy used is mechanical energy. Consequently, the potential energy savings per load in converting from hot- and warm- to cold-wash temperatures can be significant. Annual potential energy and cost savings and reductions in carbon dioxide emissions are also estimated for each country, assuming full conversion to cold-wash water temperatures. This study provides useful information to consumers for conserving energy in the home, as well as to manufacturers in the design of more energy-efficient laundry formulations and appliances.
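
    The dominant role of water heating follows from a simple heat-capacity calculation. The sketch below uses assumed figures (water volume, inlet temperature, agitation energy), not the paper's survey data:

```python
# Back-of-envelope sketch of the per-load energy split described above.
# All figures are illustrative assumptions, not the paper's data.

WATER_SPECIFIC_HEAT = 4186.0   # J/(kg*K)

def thermal_energy_joules(water_kg, inlet_c, wash_c):
    """Energy to heat the wash water from inlet to wash temperature."""
    return water_kg * WATER_SPECIFIC_HEAT * max(wash_c - inlet_c, 0.0)

# Assumed top-loader: ~60 kg of water per load, 15 C inlet water
hot = thermal_energy_joules(60.0, 15.0, 50.0)    # hot wash
cold = thermal_energy_joules(60.0, 15.0, 15.0)   # cold wash: no heating
mechanical = 0.4 * 3.6e6   # assume ~0.4 kWh of agitation energy, in joules

# Thermal share of total energy for a hot wash
share = hot / (hot + mechanical)
```

    With these assumed values the thermal share lands in the 80-85% range the abstract reports, while a cold wash eliminates the heating term entirely.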

  17. Safety issues in high speed machining

    NASA Astrophysics Data System (ADS)

    1994-05-01

    There are several risks related to high-speed milling, but so far they have not been systematically identified or studied. Increased loads from high centrifugal forces may produce dramatic hazards: tools, or fragments of a tool, flying off with high kinetic energy may injure surrounding people and damage machines and devices. In the project, mechanical risks were evaluated, theoretical values for the kinetic energies of rotating tools were calculated, the possible damage from flying objects was determined, and measures to eliminate the risks were considered. The noise levels of the high-speed machining center owned by the Helsinki University of Technology (HUT) and the Technical Research Centre of Finland (VTT) were measured in a practical machining situation, and the results were compared to those obtained after basic preventive measures were taken.
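
    The kinetic-energy hazard can be estimated from the spindle speed and fragment geometry. The values below are illustrative assumptions, not the project's measurements:

```python
import math

# Sketch of the hazard estimate: kinetic energy of a fragment released
# from a tool rotating at high spindle speed. Values are illustrative.

def fragment_kinetic_energy(mass_kg, radius_m, rpm):
    omega = 2.0 * math.pi * rpm / 60.0   # angular speed, rad/s
    v = omega * radius_m                 # tangential (tip) speed, m/s
    return 0.5 * mass_kg * v ** 2        # kinetic energy, joules

# A 20 g fragment at 50 mm radius on a 30,000 rpm spindle:
e = fragment_kinetic_energy(0.020, 0.050, 30000.0)   # roughly 250 J
```

    A few hundred joules concentrated in a small fragment is comparable to a bullet's energy, which is why enclosure design matters at these speeds.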

  18. Review of third and next generation synchrotron light sources

    NASA Astrophysics Data System (ADS)

    Bilderback, Donald H.; Elleaume, Pascal; Weckert, Edgar

    2005-05-01

    Synchrotron radiation (SR) is having a very large impact on interdisciplinary science and has been tremendously successful with the arrival of third-generation synchrotron x-ray sources. But the revolution in x-ray science is still gaining momentum. Even as new storage rings are under construction, still more advanced rings are under design (PETRA III and the ultra-high-energy x-ray source), and linac-based sources (the energy recovery linac and the x-ray free electron laser) can take us further into the future, providing the unique synchrotron light that is so highly prized for today's studies in fields such as materials science, physics, chemistry and biology. All these machines rely heavily on the consequences of Einstein's special theory of relativity, which account for the small opening angle of synchrotron radiation in the forward direction and the increasing mass an electron gains as it is accelerated to high energy. These are familiar results to every synchrotron scientist. In this paper we outline the origins of SR and discuss how Einstein's intuition and insight not only marked the physics of the 20th century but also provide the foundation for continuing accelerator developments into the 21st century.
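
    The small forward opening angle mentioned above follows directly from the electron's Lorentz factor: the radiation cone has a half-angle of roughly 1/gamma. A quick calculation for an assumed 6 GeV ring (a typical third-generation energy, not a value from this paper):

```python
# Worked example of the relativistic opening angle: synchrotron radiation
# is emitted into a cone of half-angle ~ 1/gamma, with gamma = E / (m_e c^2).

ELECTRON_REST_ENERGY_GEV = 0.000511   # m_e c^2 in GeV

def lorentz_gamma(beam_energy_gev):
    return beam_energy_gev / ELECTRON_REST_ENERGY_GEV

def opening_angle_rad(beam_energy_gev):
    return 1.0 / lorentz_gamma(beam_energy_gev)

gamma = lorentz_gamma(6.0)        # ~11,700 for a 6 GeV ring
theta = opening_angle_rad(6.0)    # ~85 microradians
```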

  19. Advancing solar energy forecasting through the underlying physics

    NASA Astrophysics Data System (ADS)

    Yang, H.; Ghonima, M. S.; Zhong, X.; Ozge, B.; Kurtz, B.; Wu, E.; Mejia, F. A.; Zamora, M.; Wang, G.; Clemesha, R.; Norris, J. R.; Heus, T.; Kleissl, J. P.

    2017-12-01

    As solar power comprises an increasingly large portion of the energy generation mix, the ability to accurately forecast solar photovoltaic generation becomes increasingly important. Due to the variability of solar power caused by cloud cover, knowledge of both the magnitude and timing of expected solar power production ahead of time facilitates the integration of solar power onto the electric grid by reducing electricity generation from traditional ancillary generators such as gas and oil power plants, as well as decreasing the ramping of all generators, reducing start and shutdown costs, and minimizing solar power curtailment, thereby providing annual economic value. The time scales involved in both the energy markets and solar variability range from intra-hour to several days ahead. This wide range of time horizons led to the development of a multitude of techniques, with each offering unique advantages in specific applications. For example, sky imagery provides site-specific forecasts on the minute-scale. Statistical techniques including machine learning algorithms are commonly used in the intra-day forecast horizon for regional applications, while numerical weather prediction models can provide mesoscale forecasts on both the intra-day and days-ahead time scale. This talk will provide an overview of the challenges unique to each technique and highlight the advances in their ongoing development which come alongside advances in the fundamental physics underneath.

  20. Radiation Damage From Mono-energetic Electrons Up to 200 keV On Biological Systems

    NASA Astrophysics Data System (ADS)

    Prilepskiy, Yuriy

    2006-03-01

    The electron gun of the CEBAF machine at Jefferson Lab (Newport News, VA) is capable of delivering electrons with energies up to 200 keV with a resolution of about 10^-5. This 1.5 GHz beam makes it possible to generate cellular radiation damage within minutes. We have irradiated cancer cells at different energies and currents to investigate their biological responses. This study will make it possible to address the physical processes involved in RBE and LET at a level that supersedes the current data in the literature by orders of magnitude. We will discuss the experimental setup and the results from the first stage of data collected with this novel system. This research is part of a global program to provide detailed information for the understanding of radiation-based cancer treatments.

  1. Higgs self-coupling measurements at a 100 TeV hadron collider

    DOE PAGES

    Barr, Alan J.; Dolan, Matthew J.; Englert, Christoph; ...

    2015-02-03

    An important physics goal of a possible next-generation high-energy hadron collider will be precision characterisation of the Higgs sector and electroweak symmetry breaking. A crucial part of understanding the nature of electroweak symmetry breaking is measuring the Higgs self-interactions. We study di-Higgs production in proton-proton collisions at 100 TeV centre-of-mass energy in order to estimate the sensitivity such a machine would have to variations in the trilinear Higgs coupling around the Standard Model expectation. We focus on the bb̄γγ final state, including possible enhancements in sensitivity from exploiting di-Higgs recoils against a hard jet. In conclusion, we find that it should be possible to measure the trilinear self-coupling to 40% accuracy with 3/ab of data and to 12% with 30/ab.

  2. Quantitative assessment of the enamel machinability in tooth preparation with dental diamond burs.

    PubMed

    Song, Xiao-Fei; Jin, Chen-Xin; Yin, Ling

    2015-01-01

    Enamel cutting using dental handpieces is a critical process in tooth preparation for dental restorations and treatment, but the machinability of enamel is poorly understood. This paper reports the first quantitative assessment of enamel machinability using computer-assisted numerical control, high-speed data acquisition, and force-sensing systems. The enamel machinability, in terms of cutting forces, force ratio, cutting torque, cutting speed and specific cutting energy, was characterized in relation to enamel surface orientation, specific material removal rate and diamond bur grit size. The results show that enamel surface orientation, specific material removal rate and diamond bur grit size critically affected the enamel cutting capability. Cutting buccal/lingual surfaces resulted in significantly higher tangential and normal forces, torques and specific energy (p<0.05) but lower cutting speeds than occlusal surfaces (p<0.05). Increasing the material removal rate for high cutting efficiency using coarse burs yielded remarkable rises in cutting forces and torque (p<0.05) but significant reductions in cutting speed and specific cutting energy (p<0.05). In particular, great variations in cutting forces, torques and specific energy were observed at the specific material removal rate of 3 mm³/min/mm using coarse burs, indicating the cutting limit. This work provides fundamental data and scientific understanding of enamel machinability for clinical dental practice. Copyright © 2014 Elsevier Ltd. All rights reserved.
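
    Specific cutting energy, as used above, is cutting power divided by the material removal rate. A minimal sketch with invented torque and speed values (not the paper's measurements):

```python
import math

# Sketch of the measured quantities: cutting power from torque and spindle
# speed, and specific cutting energy as power per unit material removal
# rate. All numbers are illustrative, not the paper's data.

def cutting_power_w(torque_nm, rpm):
    """Power = torque * angular speed."""
    return torque_nm * 2.0 * math.pi * rpm / 60.0

def specific_cutting_energy(power_w, removal_rate_mm3_per_s):
    """Energy per unit volume of enamel removed, J/mm^3."""
    return power_w / removal_rate_mm3_per_s

p = cutting_power_w(0.002, 200000.0)    # 2 mN*m torque at 200,000 rpm
u = specific_cutting_energy(p, 0.05)    # 0.05 mm^3/s removal rate
```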

  3. An Attachable Electromagnetic Energy Harvester Driven Wireless Sensing System Demonstrating Milling-Processes and Cutter-Wear/Breakage-Condition Monitoring.

    PubMed

    Chung, Tien-Kan; Yeh, Po-Chen; Lee, Hao; Lin, Cheng-Mao; Tseng, Chia-Yung; Lo, Wen-Tuan; Wang, Chieh-Min; Wang, Wen-Chin; Tu, Chi-Jen; Tasi, Pei-Yuan; Chang, Jui-Wen

    2016-02-23

    An attachable electromagnetic-energy-harvester driven wireless vibration-sensing system for monitoring milling-processes and cutter-wear/breakage-conditions is demonstrated. The system includes an electromagnetic energy harvester, three single-axis Micro Electro-Mechanical Systems (MEMS) accelerometers, a wireless chip module, and corresponding circuits. The harvester consisting of magnets with a coil uses electromagnetic induction to harness mechanical energy produced by the rotating spindle in milling processes and consequently convert the harnessed energy to electrical output. The electrical output is rectified by the rectification circuit to power the accelerometers and wireless chip module. The harvester, circuits, accelerometer, and wireless chip are integrated as an energy-harvester driven wireless vibration-sensing system. Therefore, this completes a self-powered wireless vibration sensing system. For system testing, a numerical-controlled machining tool with various milling processes is used. According to the test results, the system is fully self-powered and able to successfully sense vibration in the milling processes. Furthermore, by analyzing the vibration signals (i.e., through analyzing the electrical outputs of the accelerometers), criteria are successfully established for the system for real-time accurate simulations of the milling-processes and cutter-conditions (such as cutter-wear conditions and cutter-breaking occurrence). Due to these results, our approach can be applied to most milling and other machining machines in factories to realize more smart machining technologies.

  4. An Attachable Electromagnetic Energy Harvester Driven Wireless Sensing System Demonstrating Milling-Processes and Cutter-Wear/Breakage-Condition Monitoring

    PubMed Central

    Chung, Tien-Kan; Yeh, Po-Chen; Lee, Hao; Lin, Cheng-Mao; Tseng, Chia-Yung; Lo, Wen-Tuan; Wang, Chieh-Min; Wang, Wen-Chin; Tu, Chi-Jen; Tasi, Pei-Yuan; Chang, Jui-Wen

    2016-01-01

    An attachable electromagnetic-energy-harvester driven wireless vibration-sensing system for monitoring milling-processes and cutter-wear/breakage-conditions is demonstrated. The system includes an electromagnetic energy harvester, three single-axis Micro Electro-Mechanical Systems (MEMS) accelerometers, a wireless chip module, and corresponding circuits. The harvester consisting of magnets with a coil uses electromagnetic induction to harness mechanical energy produced by the rotating spindle in milling processes and consequently convert the harnessed energy to electrical output. The electrical output is rectified by the rectification circuit to power the accelerometers and wireless chip module. The harvester, circuits, accelerometer, and wireless chip are integrated as an energy-harvester driven wireless vibration-sensing system. Therefore, this completes a self-powered wireless vibration sensing system. For system testing, a numerical-controlled machining tool with various milling processes is used. According to the test results, the system is fully self-powered and able to successfully sense vibration in the milling processes. Furthermore, by analyzing the vibration signals (i.e., through analyzing the electrical outputs of the accelerometers), criteria are successfully established for the system for real-time accurate simulations of the milling-processes and cutter-conditions (such as cutter-wear conditions and cutter-breaking occurrence). Due to these results, our approach can be applied to most milling and other machining machines in factories to realize more smart machining technologies. PMID:26907297

  5. Physical mechanism of ultrasonic machining

    NASA Astrophysics Data System (ADS)

    Isaev, A.; Grechishnikov, V.; Kozochkin, M.; Pivkin, P.; Petuhov, Y.; Romanov, V.

    2016-04-01

    In this paper, the main aspects of ultrasonic machining of constructional materials are considered. Influence of coolant on surface parameters is studied. Results of experiments on ultrasonic lathe cutting with application of tangential vibrations and with use of coolant are considered.

  6. Condition monitoring of a prototype turbine. Description of the system and main results

    NASA Astrophysics Data System (ADS)

    Valero, C.; Egusquiza, E.; Presas, A.; Valentin, D.; Egusquiza, M.; Bossio, M.

    2017-04-01

    The rapid growth of new renewable energy sources is directly affecting the required operating range of hydropower plants. To follow the present demand for electricity, it is necessary to generate at different power levels. Because it is easy to regulate and has a huge energy storage capacity, hydropower is the only energy source that can adapt to the demand. Today, the required operating range of turbine units is expected to extend from part load to overload. These extreme operating points can cause severe pressure pulsations, cavitation and vibrations in different parts of the machine. To determine the effects on the machine, vibration measurements on actual machines are necessary. Vibrations can be used for machinery protection and to identify problems in the machine (diagnosis). In this paper, some results obtained in a hydropower plant are presented. The variation of global levels and vibratory signatures has been analysed as a function of gross head, transducer location and operating point.

  7. How much information is in a jet?

    NASA Astrophysics Data System (ADS)

    Datta, Kaustuv; Larkoski, Andrew

    2017-06-01

    Machine learning techniques are increasingly being applied to data analyses at the Large Hadron Collider, especially for discriminating jets with different originating particles. Previous studies of the power of machine learning in jet physics have typically employed image recognition, natural language processing, or other algorithms that have been extensively developed in computer science. While these studies have demonstrated impressive discrimination power, often exceeding that of widely used observables, they have been formulated in a non-constructive manner, and it is not clear what additional information the machines are learning. In this paper, we study machine learning for jet physics constructively, expressing all of the information in a jet in terms of sets of observables that completely and minimally span N-body phase space. For concreteness, we study the application of machine learning to discriminating boosted, hadronic decays of Z bosons from jets initiated by QCD processes. Our results demonstrate that the information in a jet that is useful for discriminating QCD jets from Z bosons is saturated by considering only observables that are sensitive to 4-body (8-dimensional) phase space.

  8. Method for providing slip energy control in permanent magnet electrical machines

    DOEpatents

    Hsu, John S.

    2006-11-14

    An electric machine (40) has a stator (43), a permanent magnet rotor (38) with permanent magnets (39) and a magnetic coupling uncluttered rotor (46) for inducing a slip energy current in secondary coils (47). A dc flux can be produced in the uncluttered rotor when the secondary coils are fed with dc currents. The magnetic coupling uncluttered rotor (46) has magnetic brushes (A, B, C, D) which couple flux in through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments and is applicable to the hybrid electric vehicle. A method of providing a slip energy controller is also disclosed.

  9. Floating Ultrasonic Transducer Inspection System and Method for Nondestructive Evaluation

    NASA Technical Reports Server (NTRS)

    Johnston, Patrick H. (Inventor); Zalameda, Joseph N. (Inventor)

    2016-01-01

    A method for inspecting a structural sample using ultrasonic energy includes positioning an ultrasonic transducer adjacent to a surface of the sample, and then transmitting ultrasonic energy into the sample. Force pulses are applied to the transducer concurrently with transmission of the ultrasonic energy. A host machine processes ultrasonic return pulses from an ultrasonic pulser/receiver to quantify attenuation of the ultrasonic energy within the sample. The host machine detects a defect in the sample using the quantified level of attenuation. The method may include positioning a dry couplant between the ultrasonic transducer and the surface. A system includes an actuator, an ultrasonic transducer, a dry couplant between the transducer and the sample, a scanning device that moves the actuator and transducer, and a measurement system having a pulsed actuator power supply, an ultrasonic pulser/receiver, and a host machine that executes the above method.
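
    The attenuation quantification at the heart of the method can be sketched as a decibel comparison against a reference amplitude. The threshold and amplitudes below are illustrative assumptions, not values from the patent:

```python
import math

# Sketch of "quantifying attenuation": compare the received echo amplitude
# with a reference amplitude, in decibels, and flag a defect when the
# attenuation exceeds a threshold. Illustrative only.

def attenuation_db(reference_amplitude, received_amplitude):
    return 20.0 * math.log10(reference_amplitude / received_amplitude)

def flag_defect(att_db, threshold_db=6.0):
    """Defects scatter/absorb energy, raising attenuation past a threshold."""
    return att_db > threshold_db

a = attenuation_db(1.0, 0.25)    # received echo at 25% of reference
defect = flag_defect(a)          # ~12 dB exceeds the 6 dB threshold
```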

  10. Machine Learning Estimates of Natural Product Conformational Energies

    PubMed Central

    Rupp, Matthias; Bauer, Matthias R.; Wilcken, Rainer; Lange, Andreas; Reutlinger, Michael; Boeckler, Frank M.; Schneider, Gisbert

    2014-01-01

    Machine learning has been used for estimation of potential energy surfaces to speed up molecular dynamics simulations of small systems. We demonstrate that this approach is feasible for significantly larger, structurally complex molecules, taking the natural product Archazolid A, a potent inhibitor of vacuolar-type ATPase, from the myxobacterium Archangium gephyra as an example. Our model estimates energies of new conformations by exploiting information from previous calculations via Gaussian process regression. Predictive variance is used to assess whether a conformation is in the interpolation region, allowing a controlled trade-off between prediction accuracy and computational speed-up. For energies of relaxed conformations at the density functional level of theory (implicit solvent, DFT/BLYP-disp3/def2-TZVP), mean absolute errors of less than 1 kcal/mol were achieved. The study demonstrates that predictive machine learning models can be developed for structurally complex, pharmaceutically relevant compounds, potentially enabling considerable speed-ups in simulations of larger molecular structures. PMID:24453952
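
    The core mechanism above, GP regression whose predictive variance flags whether a query is in the interpolation region, can be illustrated with a deliberately tiny one-dimensional example, far simpler than the paper's model; the kernel choice, data and length scale are invented:

```python
import math

# Minimal 1-D Gaussian process regression sketch (RBF kernel, two noise-free
# training points). The predictive variance is small between the training
# points (interpolation) and large far from them (extrapolation), which is
# the accuracy/speed-up trade-off control the abstract describes. Toy data.

def rbf(x1, x2, length=1.0):
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def gp_predict(xs, ys, x_star):
    """Predictive mean and variance for a 2-point training set."""
    k11, k12, k22 = rbf(xs[0], xs[0]), rbf(xs[0], xs[1]), rbf(xs[1], xs[1])
    jitter = 1e-9                                  # numerical stabiliser
    det = (k11 + jitter) * (k22 + jitter) - k12 * k12
    inv = [[(k22 + jitter) / det, -k12 / det],     # analytic 2x2 inverse
           [-k12 / det, (k11 + jitter) / det]]
    ks = [rbf(x_star, xs[0]), rbf(x_star, xs[1])]
    alpha = [inv[0][0] * ys[0] + inv[0][1] * ys[1],
             inv[1][0] * ys[0] + inv[1][1] * ys[1]]
    mean = ks[0] * alpha[0] + ks[1] * alpha[1]
    kinv_ks = [inv[0][0] * ks[0] + inv[0][1] * ks[1],
               inv[1][0] * ks[0] + inv[1][1] * ks[1]]
    var = rbf(x_star, x_star) - (ks[0] * kinv_ks[0] + ks[1] * kinv_ks[1])
    return mean, var

# Two known conformational energies (arbitrary units) at coordinates 0 and 1:
mean_in, var_in = gp_predict([0.0, 1.0], [0.0, 1.0], 0.5)    # interpolation
mean_out, var_out = gp_predict([0.0, 1.0], [0.0, 1.0], 3.0)  # extrapolation
```

    Because `var_out` is far larger than `var_in`, a threshold on the predictive variance can route extrapolated conformations back to the expensive DFT calculation, exactly the controlled trade-off the abstract describes.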

  11. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

    The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and roughly 50% of the cost of a typical simulation is spent in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and machine learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation: building a fast statistical model that closely approximates a far more expensive simulation. In this paper we discuss the use of machine learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.

  12. Healthier vending machines in workplaces: both possible and effective.

    PubMed

    Gorton, Delvina; Carter, Julie; Cvjetan, Branko; Ni Mhurchu, Cliona

    2010-03-19

    To develop healthier vending guidelines and assess their effect on the nutrient content and sales of snack products sold through hospital vending machines, and on staff satisfaction. Nutrition guidelines for healthier vending machine products were developed and implemented in 14 snack vending machines at two hospital sites in Auckland, New Zealand. The guidelines comprised threshold criteria for energy, saturated fat, sugar, and sodium content of vended foods. Sales data were collected prior to introduction of the guidelines (March-May 2007), and again post-introduction (March-May 2008). A food composition database was used to assess impact of the intervention on nutrient content of purchases. A staff survey was also conducted pre- and post-intervention to assess acceptability. Pre-intervention, 16% of staff used vending machines once a week or more, with little change post-intervention (15%). The guidelines resulted in a substantial reduction in the amount of energy (-24%), total fat (-32%), saturated fat (-41%), and total sugars (-30%) per 100 g product sold. Sales volumes were not affected, and the proportion of staff satisfied with vending machine products increased. Implementation of nutrition guidelines in hospital vending machines led to substantial improvements in nutrient content of vending products sold. Wider implementation of these guidelines is recommended.

  13. A 34-meter VAWT (Vertical Axis Wind Turbine) point design

    NASA Astrophysics Data System (ADS)

    Ashwill, T. D.; Berg, D. E.; Dodd, H. M.; Rumsey, M. A.; Sutherland, H. J.; Veers, P. S.

    The Wind Energy Division at Sandia National Laboratories recently completed a point design based on the 34-m Vertical Axis Wind Turbine (VAWT) Test Bed. The 34-m Test Bed research machine incorporates several innovations that improve Darrieus technology, including increased energy production, over previous machines. The point design differs minimally from the Test Bed; but by removing research-related items, its estimated cost is substantially reduced. The point design is a first step towards a Test-Bed-based commercial machine that would be competitive with conventional sources of power in the mid-1990s.

  14. LHC Status and Upgrade Challenges

    NASA Astrophysics Data System (ADS)

    Smith, Jeffrey

    2009-11-01

    The Large Hadron Collider has had a trying start-up, and a challenging operational future lies ahead. Critical to the machine's performance is controlling a beam of particles whose stored energy is equivalent to 80 kg of TNT. Unavoidable beam losses result in energy deposition throughout the machine, and without adequate protection this power would quench the superconducting magnets. A brief overview of the machine layout and principles of operation will be given, including a summary of the September 2008 accident. The current status of the LHC, the startup schedule and the upgrade options to achieve the target luminosity will be presented.

  15. Episode forecasting in bipolar disorder: Is energy better than mood?

    PubMed

    Ortiz, Abigail; Bradler, Kamil; Hintze, Arend

    2018-01-22

    Bipolar disorder is a severe mood disorder characterized by alternating episodes of mania and depression. Several interventions have been developed to decrease the high admission and suicide rates associated with the illness, including psychoeducation and early episode detection, with mixed results. More recently, machine learning approaches have been used to aid clinical diagnosis or to detect a particular clinical state; however, results are often contradictory, in part because it is unclear which of the many automatically generated data streams contribute most to detecting a particular clinical state. Our aim in this study was to apply machine learning techniques and nonlinear analyses to a physiological time-series dataset in order to find the best predictor for forecasting episodes in mood disorders. We employed three different techniques, entropy calculations and two machine learning approaches (genetic programming and Markov Brains as classifiers), to determine whether mood, energy or sleep was the best predictor with which to forecast a mood episode in a physiological time series. Evening energy was the best predictor for both manic and depressive episodes in all three techniques. This suggests that energy might be a better predictor than mood for forecasting mood episodes in bipolar disorder and that these machine learning approaches are valuable tools for clinical use. Energy should be considered an important factor for episode prediction. Machine learning approaches provide better tools to forecast episodes and to increase our understanding of the processes that underlie mood regulation. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
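
    Of the three techniques, the entropy calculation is the simplest to sketch. Below, Shannon entropy distinguishes a stable from an erratic self-rated evening-energy series; the rating scale and data are invented, and this is not necessarily the authors' exact measure (time-series studies often use variants such as sample entropy):

```python
import math
from collections import Counter

# Sketch of an entropy calculation on a self-reported evening-energy series
# (hypothetical 1-7 scale). A flat, predictable series has low entropy; an
# erratic series, as might precede an episode, has higher entropy. Toy data.

def shannon_entropy(series):
    counts = Counter(series)
    n = len(series)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

stable = [4, 4, 4, 4, 5, 4, 4, 4]
erratic = [2, 6, 3, 7, 1, 5, 2, 6]
h_stable = shannon_entropy(stable)     # low entropy
h_erratic = shannon_entropy(erratic)   # high entropy
```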

  16. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Thanks to the flexibility of virtual machines, their ease of control, and their support for varied system environments, more and more fields, including high energy physics, use virtualization to build distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and efficient and makes resource scheduling independent of job scheduling. First, resources belong to different experiment groups, and user groups map to resource groups (one per experiment) either one-to-one or many-to-one. To keep these mappings manageable, we designed a permission-controlling component that ensures each resource group receives suitable jobs. Second, to elastically allocate resources to the appropriate resource group, resources must be scheduled much like jobs, so this paper designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate number of virtual resources to the requesting resource group. Third, resources occupied for a long time sometimes need to be preempted, so this paper adds a preemption function to the resource scheduler that preempts resources based on group priority. The preemption is soft: when virtual resources are preempted, jobs are not killed but are held and rematched later. This is implemented with the help of HTCondor by storing the held job's information in the scheduler, releasing the job to idle status, and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack. The paper also presents cases from the JUNO and LHAASO experiments. The results indicate that the multi-group, preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission-controlling component is used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and will be expanded to more experiments, including DYW and BES, in the first half of the year. This is evidence that the permission control is effective.
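
    The soft-preemption logic can be sketched schematically. This is a simplified illustration of priority-based preemption with held (not killed) jobs, not IHEP's actual HTCondor integration; all names and priorities are hypothetical:

```python
# Schematic of group-priority soft preemption: when a higher-priority group
# requests a resource and none are idle, a slot is taken from the
# lowest-priority running group, whose job is held for a second match
# rather than killed. Illustrative only; not IHEP's implementation.

class Slot:
    def __init__(self, slot_id):
        self.slot_id = slot_id
        self.group = None       # resource group currently holding the slot
        self.priority = -1

def allocate(slots, group, priority, held_jobs):
    for s in slots:                          # prefer an idle slot
        if s.group is None:
            s.group, s.priority = group, priority
            return s
    victim = min(slots, key=lambda s: s.priority)
    if victim.priority < priority:           # soft preemption
        held_jobs.append(victim.group)       # held, not killed; rematched later
        victim.group, victim.priority = group, priority
        return victim
    return None                              # request must wait

slots = [Slot(0), Slot(1)]
held = []
allocate(slots, "JUNO", 5, held)
allocate(slots, "LHAASO", 5, held)
s = allocate(slots, "CMS", 9, held)          # preempts a priority-5 slot
```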

  17. NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.

    PubMed

    Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C

    2011-09-14

    An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
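
    The geometric idea, motion confined to the constant-U hypersurface, can be illustrated by projecting a trial step so that it is orthogonal to the potential-energy gradient. This is a first-order schematic of the constraint, not the paper's discretized geodesic algorithm:

```python
# Illustrative sketch (not the paper's NVU algorithm) of the underlying
# geometry: removing a step's component along grad U keeps U constant to
# first order, i.e. the motion stays tangent to the hypersurface.

def project_step(step, gradient):
    """Remove the component of `step` along `gradient` (grad U)."""
    g2 = sum(g * g for g in gradient)
    if g2 == 0.0:
        return list(step)
    dot = sum(s * g for s, g in zip(step, gradient))
    return [s - (dot / g2) * g for s, g in zip(step, gradient)]

step = [1.0, 1.0, 0.0]
grad_u = [0.0, 2.0, 0.0]
tangent = project_step(step, grad_u)
# tangent is orthogonal to grad U, so U is unchanged to first order
```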

  18. 70% efficiency of bistate molecular machines explained by information theory, high dimensional geometry and evolutionary convergence.

    PubMed

    Schneider, Thomas D

    2010-10-01

    The relationship between information and energy is key to understanding biological systems. We can display the information in DNA sequences specifically bound by proteins by using sequence logos, and we can measure the corresponding binding energy. These can be compared by noting that one of the forms of the second law of thermodynamics defines the minimum energy dissipation required to gain one bit of information. Under the isothermal conditions that molecular machines function this is kB T ln 2 joules per bit (kB is Boltzmann's constant and T is the absolute temperature). Then an efficiency of binding can be computed by dividing the information in a logo by the free energy of binding after it has been converted to bits. The isothermal efficiencies of not only genetic control systems, but also visual pigments are near 70%. From information and coding theory, the theoretical efficiency limit for bistate molecular machines is ln 2 = 0.6931. Evolutionary convergence to maximum efficiency is limited by the constraint that molecular states must be distinct from each other. The result indicates that natural molecular machines operate close to their information processing maximum (the channel capacity), and implies that nanotechnology can attain this goal.
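
    The efficiency computation described above can be made concrete. The numbers below are illustrative, chosen so the result lands near the reported ~70%; they are not the paper's measured values:

```python
import math

# Worked example of the abstract's efficiency calculation: divide the
# information in a binding site (bits, from a sequence logo) by the binding
# free energy converted to bits at kB*T*ln2 joules per bit. Hypothetical
# numbers, not the paper's measurements.

K_B = 1.380649e-23   # Boltzmann's constant, J/K

def energy_in_bits(free_energy_j, temperature_k):
    """Convert a free energy to bits via kB*T*ln2 joules per bit."""
    return free_energy_j / (K_B * temperature_k * math.log(2))

def isothermal_efficiency(info_bits, free_energy_j, temperature_k=298.0):
    return info_bits / energy_in_bits(free_energy_j, temperature_k)

# Hypothetical site: 19.2 bits of sequence conservation, ~7.9e-20 J binding
eff = isothermal_efficiency(19.2, 7.9e-20)   # close to the ln 2 limit
```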

  19. Beam Loss Monitoring for LHC Machine Protection

    NASA Astrophysics Data System (ADS)

    Holzer, Eva Barbara; Dehning, Bernd; Effinger, Ewald; Emery, Jonathan; Grishin, Viatcheslav; Hajdu, Csaba; Jackson, Stephen; Kurfuerst, Christoph; Marsili, Aurelien; Misiowiec, Marek; Nagel, Markus; Busto, Eduardo Nebot Del; Nordt, Annika; Roderick, Chris; Sapinski, Mariusz; Zamantzas, Christos

    The energy stored in the nominal LHC beams is two times 362 MJ, 100 times the energy of the Tevatron. As little as 1 mJ/cm³ deposited energy quenches a magnet at 7 TeV and 1 J/cm³ causes magnet damage. The beam dumps are the only places to safely dispose of this beam. One of the key systems for machine protection is the beam loss monitoring (BLM) system. About 3600 ionization chambers are installed at likely or critical loss locations around the LHC ring. The losses are integrated in 12 time intervals ranging from 40 μs to 84 s and compared to threshold values defined in 32 energy ranges. A beam abort is requested when potentially dangerous losses are detected or when any of the numerous internal system validation tests fails. In addition, loss data are used for machine set-up and operational verifications. The collimation system, for example, uses the loss data for set-up and regular performance verification. Commissioning and operational experience of the BLM system is presented: The machine protection functionality of the BLM system has been fully reliable; the LHC availability has not been compromised by false beam aborts.
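    The integrate-and-compare logic can be sketched as follows. The window lengths, thresholds, and loss samples below are hypothetical stand-ins for the LHC's 12 integration windows and energy-dependent threshold tables:

```python
# Sketch of multi-window beam-loss thresholding: losses are integrated over
# several running windows and each integral is compared to its own threshold;
# any exceedance requests a beam abort. All numbers here are hypothetical.
samples = [0.1, 0.1, 0.2, 5.0, 0.1, 0.1]  # loss per 40 us tick (arbitrary units)

# (window length in ticks, threshold) pairs -- shorter windows tolerate less.
windows = [(1, 4.0), (2, 6.0), (4, 8.0)]

def abort_requested(samples, windows):
    for n, threshold in windows:
        for i in range(len(samples) - n + 1):
            if sum(samples[i:i + n]) > threshold:
                return True
    return False

print(abort_requested(samples, windows))  # the 5.0 spike trips the 1-tick window
```

    A fast spike trips a short window while a slow, steady loss accumulates until a long window exceeds its (larger) threshold, which is why several integration times are monitored in parallel.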

  20. 70% efficiency of bistate molecular machines explained by information theory, high dimensional geometry and evolutionary convergence

    PubMed Central

    Schneider, Thomas D.

    2010-01-01

    The relationship between information and energy is key to understanding biological systems. We can display the information in DNA sequences specifically bound by proteins by using sequence logos, and we can measure the corresponding binding energy. These can be compared by noting that one of the forms of the second law of thermodynamics defines the minimum energy dissipation required to gain one bit of information. Under the isothermal conditions in which molecular machines function, this is kB T ln 2 joules per bit (kB is Boltzmann's constant and T is the absolute temperature). Then an efficiency of binding can be computed by dividing the information in a logo by the free energy of binding after it has been converted to bits. The isothermal efficiencies of not only genetic control systems, but also visual pigments are near 70%. From information and coding theory, the theoretical efficiency limit for bistate molecular machines is ln 2 = 0.6931. Evolutionary convergence to maximum efficiency is limited by the constraint that molecular states must be distinct from each other. The result indicates that natural molecular machines operate close to their information processing maximum (the channel capacity), and implies that nanotechnology can attain this goal. PMID:20562221

  1. Derailing healthy choices: an audit of vending machines at train stations in NSW.

    PubMed

    Kelly, Bridget; Flood, Victoria M; Bicego, Cecilia; Yeatman, Heather

    2012-04-01

    Train stations provide opportunities for food purchases and many consumers are exposed to these venues daily, on their commute to and from work. This study aimed to describe the food environment that commuters are exposed to at train stations in NSW. One hundred train stations were randomly sampled from the Greater Sydney Metropolitan region, representing a range of demographic areas. A purpose-designed instrument was developed to collect information on the availability, promotion and cost of food and beverages in vending machines. Items were classified as high/low in energy according to NSW school canteen criteria. Of the 206 vending machines identified, 84% of slots were stocked with high-energy food and beverages. The most frequently available items were chips and extruded snacks (33%), sugar-sweetened soft drinks (18%), chocolate (12%) and confectionery (10%). High energy foods were consistently cheaper than lower-energy alternatives. Transport sites may cumulatively contribute to excess energy consumption as the items offered are energy dense. Interventions are required to improve train commuters' access to healthy food and beverages.

  2. Comparison of Test Procedures and Energy Efficiency Criteria in Selected International Standards & Labeling Programs for Copy Machines, External Power Supplies, LED Displays, Residential Gas Cooktops and Televisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Nina; Zhou, Nan; Fridley, David

    2012-03-01

    This report presents a technical review of international minimum energy performance standards (MEPS), voluntary and mandatory energy efficiency labels, and test procedures for five products being considered for new or revised MEPS in China: copy machines, external power supplies, LED displays, residential gas cooktops and flat-screen televisions. For each product, we present an overview of the scope of existing international standards and labeling programs, the energy values and energy performance metrics used, and a description and detailed summary table of the criteria and procedures in the major test standards.

  3. SchNet - A deep learning architecture for molecules and materials

    NASA Astrophysics Data System (ADS)

    Schütt, K. T.; Sauceda, H. E.; Kindermans, P.-J.; Tkatchenko, A.; Müller, K.-R.

    2018-06-01

    Deep learning has led to a paradigm shift in artificial intelligence, including web, text, and image search, speech recognition, as well as bioinformatics, with growing impact in chemical physics. Machine learning, in general, and deep learning, in particular, are ideally suited to representing quantum-mechanical interactions, enabling us to model nonlinear potential-energy surfaces or to enhance the exploration of chemical compound space. Here we present the deep learning architecture SchNet that is specifically designed to model atomistic systems by making use of continuous-filter convolutional layers. We demonstrate the capabilities of SchNet by accurately predicting a range of properties across chemical space for molecules and materials, where our model learns chemically plausible embeddings of atom types across the periodic table. Finally, we employ SchNet to predict potential-energy surfaces and energy-conserving force fields for molecular dynamics simulations of small molecules and perform an exemplary study on the quantum-mechanical properties of C20-fullerene that would have been infeasible with regular ab initio molecular dynamics.
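    The core operation, a continuous-filter convolution, can be sketched in a few lines: per-atom features are mixed with element-wise filters that a small network generates from interatomic distances. The Gaussian expansion, feature sizes, and random weights here are illustrative, not SchNet's trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_feat, n_rbf = 4, 8, 16

positions = rng.normal(size=(n_atoms, 3))
features = rng.normal(size=(n_atoms, n_feat))

def filter_network(d, W):
    # Expand each distance in Gaussian radial basis functions, then map to a
    # per-pair filter with a (random, illustrative) linear layer.
    centers = np.linspace(0.0, 3.0, n_rbf)
    rbf = np.exp(-10.0 * (d[..., None] - centers) ** 2)    # (..., n_rbf)
    return rbf @ W                                         # (..., n_feat)

def cfconv(positions, features, W):
    # x_i' = sum_j x_j * W(|r_i - r_j|): a continuous-filter convolution.
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)                   # (n_atoms, n_atoms)
    filters = filter_network(dist, W)                      # (n_atoms, n_atoms, n_feat)
    mask = 1.0 - np.eye(n_atoms)                           # exclude self-interaction
    return (mask[..., None] * filters * features[None, :, :]).sum(axis=1)

W = rng.normal(size=(n_rbf, n_feat))
out = cfconv(positions, features, W)
print(out.shape)
```

    Because the filter depends on continuous distances rather than a fixed grid, the layer handles arbitrary atom positions, which is what makes it suitable for molecules and materials.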

  4. The NASA-Lewis program on fusion energy for space power and propulsion, 1958-1978

    NASA Technical Reports Server (NTRS)

    Schulze, Norman R.; Roth, J. Reece

    1990-01-01

    An historical synopsis is provided of the NASA-Lewis research program on fusion energy for space power and propulsion systems. It was initiated to explore the potential applications of fusion energy to space power and propulsion systems. Some fusion-related accomplishments and program areas covered include: basic research on the Electric Field Bumpy Torus (EFBT) magnetoelectric fusion containment concept, including identification of its radial transport mechanism and confinement time scaling; operation of the Pilot Rig mirror machine, the first superconducting magnet facility to be used in plasma physics or fusion research; operation of the Superconducting Bumpy Torus magnet facility, first used to generate a toroidal magnetic field; steady-state production of neutrons from D-D reactions; studies of the direct conversion of plasma enthalpy to thrust by a direct fusion rocket via propellant addition and magnetic nozzles; power and propulsion system studies, including D-3He power balance, neutron shielding, and refrigeration requirements; and development of large volume, high field superconducting and cryogenic magnet technology.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Bennett

    The Arizona Commerce Authority (ACA) conducted an Innovation in Advanced Manufacturing Grant Competition to support and grow southern and central Arizona’s Aerospace and Defense (A&D) industry and its supply chain. The problem statement for this grant challenge was that many A&D machining processes utilize older-generation CNC machine tool technologies that can result in an inefficient use of resources – energy, time and materials – compared to the latest state-of-the-art CNC machines. Competitive awards funded projects to develop innovative new tools and technologies that reduce energy consumption for older-generation machine tools and foster working relationships between industry small to medium-sized manufacturing enterprises and third-party solution providers. During the 42-month term of this grant, 12 competitive awards were made. Final reports have been included with this submission.

  6. Responsive materials: A novel design for enhanced machine-augmented composites

    PubMed Central

    Bafekrpour, Ehsan; Molotnikov, Andrey; Weaver, James C.; Brechet, Yves; Estrin, Yuri

    2014-01-01

    The concept of novel responsive materials with a displacement conversion capability was further developed through the design of new machine-augmented composites (MACs). Embedded converter machines and MACs with improved geometry were designed and fabricated by multi-material 3D printing. This technique proved to be very effective in fabricating these novel composites with tuneable elastic moduli of the matrix and the embedded machines and excellent bonding between them. Substantial improvement in the displacement conversion efficiency of the new MACs over the existing ones was demonstrated. Also, the new design trebled the energy absorption of the MACs. Applications in energy absorbers as well as mechanical sensors and actuators are thus envisaged. A further type of MACs with conversion ability, viz. conversion of compressive displacements to torsional ones, was also proposed. PMID:24445490

  7. Soft electroactive actuators and hard ratchet-wheels enable unidirectional locomotion of hybrid machine

    NASA Astrophysics Data System (ADS)

    Sun, Wenjie; Liu, Fan; Ma, Ziqi; Li, Chenghai; Zhou, Jinxiong

    2017-01-01

    Synergistically combining the muscle-like actuation of soft materials with the load-carrying and locomotive capability of hard mechanical components results in hybrid soft machines that can exhibit specific functions. Here, we describe the design, fabrication, modeling and experiment of a hybrid soft machine enabled by marrying a unidirectionally actuated dielectric elastomer (DE) membrane-spring system with ratchet wheels. Subjected to an applied voltage of 8.2 kV at a ramping rate of 820 V/s, the hybrid machine prototype exhibits monotonic uniaxial locomotion with an average velocity of 0.5 mm/s. The underlying physics and working mechanisms of the soft machine are verified and elucidated by finite element simulation.

  8. Pulsed, Hydraulic Coal-Mining Machine

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr.

    1986-01-01

    In the proposed coal-cutting machine, a piston forces water through a nozzle, expelling a pulsed jet that cuts into the coal face. The spring-loaded piston reciprocates at the end of its travel to refill the water chamber. The machine is essentially a one-cylinder, two-cycle, internal-combustion engine, fueled by gasoline, diesel fuel, or hydrogen. Fuel is thus converted more directly into the mechanical energy of the water jet.

  9. Bypassing the Kohn-Sham equations with machine learning.

    PubMed

    Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert

    2017-10-11

    Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.
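    The idea of learning an energy map directly from density-like inputs can be illustrated with kernel ridge regression on a synthetic functional. The data, kernel, and hidden functional below are invented for illustration; this is not the paper's model or its training data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for an energy-density map: "densities" are small feature
# vectors and the "energy" comes from a smooth hidden functional.
def hidden_energy(rho):
    return np.sum(rho ** 2) - 0.5 * np.sum(rho)

densities = rng.uniform(size=(50, 3))
energies = np.array([hidden_energy(rho) for rho in densities])

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Kernel ridge regression: solve (K + lam*I) alpha = y, predict k(x, X) @ alpha.
lam = 1e-6
K = gaussian_kernel(densities, densities)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), energies)

test_rho = rng.uniform(size=(5, 3))
pred = gaussian_kernel(test_rho, densities) @ alpha
true = np.array([hidden_energy(rho) for rho in test_rho])
print(np.max(np.abs(pred - true)))  # small for this smooth toy functional
```

    The appeal noted in the abstract is that such a regressor is evaluated directly, with no self-consistent Kohn-Sham cycle; the hard part the paper addresses is doing this for real densities and their derivatives.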

  10. Mechanical design of walking machines.

    PubMed

    Arikawa, Keisuke; Hirose, Shigeo

    2007-01-15

    The performance of existing actuators, such as electric motors, is very limited, whether in power-to-weight ratio or energy efficiency. In this paper, we discuss a method to design a practical walking machine under this severe constraint, focusing on two concepts: the gravitationally decoupled actuation (GDA) and the coupled drive. The GDA decouples the driving system from the gravitational field to suppress the generation of negative power and improve energy efficiency. The coupled drive, on the other hand, couples the driving system to distribute the output power equally among actuators and maximize the utilization of installed actuator power. First, we depict the GDA and coupled drive in detail. Then, we present actual machines: TITAN-III and VIII, quadruped walking machines designed on the basis of the GDA, and NINJA-I and II, quadruped wall-walking machines designed on the basis of the coupled drive. Finally, we discuss walking machines that travel on three-dimensional terrain (3D terrain), which includes the ground, walls and ceiling. We then demonstrate with computer simulation that we can selectively leverage the GDA and the coupled drive by walking posture control.

  11. Classifying Black Hole States with Machine Learning

    NASA Astrophysics Data System (ADS)

    Huppenkothen, Daniela

    2018-01-01

    Galactic black hole binaries are known to go through different states with apparent signatures in both X-ray light curves and spectra, leading to important implications for accretion physics as well as our knowledge of General Relativity. Existing frameworks of classification are usually based on human interpretation of low-dimensional representations of the data, and generally only apply to fairly small data sets. Machine learning, in contrast, allows for rapid classification of large, high-dimensional data sets. In this talk, I will report on advances made in classification of states observed in Black Hole X-ray Binaries, focusing on the two sources GRS 1915+105 and Cygnus X-1, and show both the successes and limitations of using machine learning to derive physical constraints on these systems.

  12. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David

    2018-05-22

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  13. CP Violation and the Future of Flavor Physics

    NASA Astrophysics Data System (ADS)

    Kiesling, Christian

    2009-12-01

    With the nearing completion of the first-generation experiments at asymmetric e+e- colliders running at the Υ(4S) resonance ("B-Factories"), a new era of high-luminosity machines is on the horizon. We report here on the plans at KEK in Japan to upgrade the KEKB machine ("SuperKEKB") with the goal of achieving an instantaneous luminosity exceeding 8×10³⁵ cm⁻² s⁻¹, which is almost two orders of magnitude higher than KEKB. Together with the machine, the Belle detector will be upgraded as well ("Belle-II"), with significant improvements to increase its background tolerance as well as improving its physics performance. The new generation of experiments is scheduled to take first data in the year 2013.

  14. Machine learning on-a-chip: a high-performance low-power reusable neuron architecture for artificial neural networks in ECG classifications.

    PubMed

    Sun, Yuwen; Cheng, Allen C

    2012-07-01

    Artificial neural networks (ANNs) are a promising machine learning technique in classifying non-linear electrocardiogram (ECG) signals and recognizing abnormal patterns suggesting risks of cardiovascular diseases (CVDs). In this paper, we propose a new reusable neuron architecture (RNA) enabling a performance-efficient and cost-effective silicon implementation for ANNs. The RNA architecture consists of a single layer of physical RNA neurons, each of which is designed to use minimal hardware resource (e.g., a single 2-input multiplier-accumulator is used to compute the dot product of two vectors). By carefully applying the principle of time sharing, RNA can multiplex this single layer of physical neurons to efficiently execute both feed-forward and back-propagation computations of an ANN while conserving the area and reducing the power dissipation of the silicon. A three-layer 51-30-12 ANN is implemented in RNA to perform the ECG classification for CVD detection. This RNA hardware also allows on-chip automatic training update. A quantitative design space exploration in area, power dissipation, and execution speed between RNA and three other implementations representative of different reusable hardware strategies is presented and discussed. Compared with an equivalent software implementation in C executed on an embedded microprocessor, the RNA ASIC achieves three orders of magnitude improvement in both execution speed and energy efficiency. Copyright © 2012 Elsevier Ltd. All rights reserved.
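    The time-sharing idea can be illustrated in software: one "physical" multiply-accumulate loop is reused for every neuron in turn. The 51-30-12 topology is from the abstract; the weights, activation, and input are illustrative, and this is a software analogy, not the ASIC design:

```python
import math
import random

random.seed(0)

def mac_layer(inputs, weights, biases):
    # Evaluate one ANN layer with a single multiply-accumulate "unit":
    # neurons take turns on the same accumulator, mimicking a time-shared
    # hardware neuron.
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, w_row):
            acc += x * w          # the one shared MAC operation
        outputs.append(1.0 / (1.0 + math.exp(-acc)))  # sigmoid activation
    return outputs

def make_layer(n_in, n_out):
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

# 51-30-12 topology from the abstract, with illustrative random parameters.
w1, b1 = make_layer(51, 30)
w2, b2 = make_layer(30, 12)

ecg_features = [random.uniform(-1, 1) for _ in range(51)]
scores = mac_layer(mac_layer(ecg_features, w1, b1), w2, b2)
print(len(scores))
```

    The trade-off the paper quantifies is exactly this serialization: one MAC unit minimizes silicon area and power at the cost of more clock cycles per inference.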

  15. Finite Element Structural Analysis of a Low Energy Micro Sheet Forming Machine Concept Design

    NASA Astrophysics Data System (ADS)

    Razali, A. R.; Ann, C. T.; Ahmad, A. F.; Shariff, H. M.; Kasim, N. I.; Musa, M. A.

    2017-05-01

    It is forecast that, with the miniaturization of the materials being processed, energy consumption will also be ‘miniaturized’ proportionally. The aim of this research is to design a low-energy micro-sheet-forming machine for thin sheet metal. A few concept designs of the machine structure were produced. With the help of FE software, each structure was subjected to a forming force to observe its deflection, so that the best and simplest design could be selected. Comparison studies between mild steel and aluminium alloy 6061 were made to determine the more suitable material. Based on the analysis, the maximum allowable tolerance was set at 2.5 µm, and aluminium alloy 6061 was found to be sufficient.

  16. Development and validation of methods for man-machine interface evaluation. [for shuttles and shuttle payloads

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Micocci, A.

    1975-01-01

    The alternate methods of conducting a man-machine interface evaluation are classified as static and dynamic, and are evaluated. A dynamic evaluation tool is presented to provide for a determination of the effectiveness of the man-machine interface in terms of the sequence of operations (task and task sequences) and in terms of the physical characteristics of the interface. This dynamic checklist approach is recommended for shuttle and shuttle payload man-machine interface evaluations based on reduced preparation time, reduced data, and increased sensitivity of critical problems.

  17. The desktop muon detector: A simple, physics-motivated machine- and electronics-shop project for university students

    NASA Astrophysics Data System (ADS)

    Axani, S. N.; Conrad, J. M.; Kirby, C.

    2017-12-01

    This paper describes the construction of a desktop muon detector, an undergraduate-level physics project that develops machine-shop and electronics-shop technical skills. The desktop muon detector is a self-contained apparatus that employs a plastic scintillator as the detection medium and a silicon photomultiplier for light collection. This detector can be battery powered and is used in conjunction with the provided software. The total cost per detector is approximately $100. We describe physics experiments we have performed, and then suggest several other interesting measurements that are possible with one or more desktop muon detectors.

  18. Phenomenology tools on cloud infrastructures using OpenStack

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.

    2013-04-01

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. On this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.

  19. The word problem: on the computability of the topology of 4-manifolds

    NASA Technical Reports Server (NTRS)

    vanMeter, J. R.

    2005-01-01

    Topological classification of the 4-manifolds bridges computation theory and physics. A proof of the undecidability of the homeomorphy problem for 4-manifolds is outlined here in a clarifying way. It is shown that an arbitrary Turing machine with an arbitrary input can be encoded into the topology of a 4-manifold, such that the 4-manifold is homeomorphic to a certain other 4-manifold if and only if the corresponding Turing machine halts on the associated input. Physical implications are briefly discussed.

  20. Advances in molecular dynamics simulation of ultra-precision machining of hard and brittle materials

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoguang; Li, Qiang; Liu, Tao; Kang, Renke; Jin, Zhuji; Guo, Dongming

    2017-03-01

    Hard and brittle materials, such as silicon, SiC, and optical glasses, are widely used in aerospace, military, integrated circuit, and other fields because of their excellent physical and chemical properties. However, these materials display poor machinability because of their hard and brittle properties. Damage such as surface micro-cracks and subsurface damage often occurs during machining of these materials. Ultra-precision machining is widely used in processing hard and brittle materials to obtain nanoscale machining quality. However, the theoretical mechanism underlying this method remains unclear. This paper provides a review of present research on the molecular dynamics simulation of ultra-precision machining of hard and brittle materials. The future trends in this field are also discussed.

  1. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine.

    PubMed

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-02-06

    In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic motion synthesis algorithm and a biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.
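    The motion-synthesis step rests on Gaussian process regression, whose core can be sketched in one short computation: the posterior mean at new inputs given noisy observations. The kernel, noise level, and 1-D toy data below are illustrative, not the paper's squat-motion model:

```python
import numpy as np

# Minimal Gaussian process regression posterior mean with an RBF kernel.
def rbf(A, B, length=0.2):
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2.0 * length ** 2))

X = np.linspace(0.0, 1.0, 8)       # training inputs (think: squat phase)
y = np.sin(2.0 * np.pi * X)        # training targets (think: a joint angle)
noise = 1e-4                       # observation noise variance

K = rbf(X, X) + noise * np.eye(len(X))
X_new = np.array([0.25, 0.5])
mean = rbf(X_new, X) @ np.linalg.solve(K, y)
print(mean)  # close to sin(pi/2) = 1 and sin(pi) = 0
```

    In the paper's setting the inputs are the chosen independent variables (e.g., body and exercise parameters) and the outputs are motion trajectories; GP regression additionally provides a predictive variance, which is useful for judging how far a synthesized motion strays from the training data.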

  2. A machine learning method to separate cosmic ray electrons from protons from 10 to 100 GeV using DAMPE data

    NASA Astrophysics Data System (ADS)

    Zhao, Hao; Peng, Wen-Xi; Wang, Huan-Yu; Qiao, Rui; Guo, Dong-Ya; Xiao, Hong; Wang, Zhao-Min

    2018-06-01

    DArk Matter Particle Explorer (DAMPE) is a general-purpose high energy cosmic ray and gamma ray observatory, aiming to detect high energy electrons and gammas in the energy range 5 GeV to 10 TeV, and nuclei up to hundreds of TeV. This paper provides a method using machine learning to identify electrons and separate them from gammas, protons, helium and heavy nuclei, using the DAMPE data acquired from 2016 January 1 to 2017 June 30, in the energy range from 10 to 100 GeV.
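    A particle-identification classifier of this kind can be sketched with logistic regression on shower-shape-like features. The features, distributions, and labels below are invented stand-ins, not DAMPE variables or data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for shower-shape discrimination: "electrons" (label 1)
# produce narrower, shallower showers than "protons" (label 0).
n = 400
width = np.concatenate([rng.normal(1.0, 0.2, n), rng.normal(2.0, 0.4, n)])
depth = np.concatenate([rng.normal(0.5, 0.1, n), rng.normal(0.9, 0.2, n)])
X = np.column_stack([width, depth, np.ones(2 * n)])   # last column = bias term
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression trained by plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

accuracy = np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y)
print(accuracy)  # well-separated toy classes give high training accuracy
```

    Real analyses select and validate discriminating variables against simulation and use held-out data for the efficiency and contamination estimates; the sketch only shows the supervised-classification skeleton.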

  3. Genarris: Random generation of molecular crystal structures and fast screening with a Harris approximation

    NASA Astrophysics Data System (ADS)

    Li, Xiayue; Curtis, Farren S.; Rose, Timothy; Schober, Christoph; Vazquez-Mayagoitia, Alvaro; Reuter, Karsten; Oberhofer, Harald; Marom, Noa

    2018-06-01

    We present Genarris, a Python package that performs configuration space screening for molecular crystals of rigid molecules by random sampling with physical constraints. For fast energy evaluations, Genarris employs a Harris approximation, whereby the total density of a molecular crystal is constructed via superposition of single molecule densities. Dispersion-inclusive density functional theory is then used for the Harris density without performing a self-consistency cycle. Genarris uses machine learning for clustering, based on a relative coordinate descriptor developed specifically for molecular crystals, which is shown to be robust in identifying packing motif similarity. In addition to random structure generation, Genarris offers three workflows based on different sequences of successive clustering and selection steps: the "Rigorous" workflow is an exhaustive exploration of the potential energy landscape, the "Energy" workflow produces a set of low energy structures, and the "Diverse" workflow produces a maximally diverse set of structures. The latter is recommended for generating initial populations for genetic algorithms. Here, the implementation of Genarris is reported and its application is demonstrated for three test cases.

  4. Development of Energy Models for Production Systems and Processes to Inform Environmentally Benign Decision-Making

    NASA Astrophysics Data System (ADS)

    Diaz-Elsayed, Nancy

    Between 2008 and 2035 global energy demand is expected to grow by 53%. While most industry-level analyses of manufacturing in the United States (U.S.) have traditionally focused on high energy consumers such as the petroleum, chemical, paper, primary metal, and food sectors, the remaining sectors account for the majority of establishments in the U.S. Specifically, of the establishments participating in the Energy Information Administration's Manufacturing Energy Consumption Survey in 2006, the "non-energy intensive" sectors still consumed 4×10⁹ GJ of energy, i.e., one-quarter of the energy consumed by the manufacturing sectors, which is enough to power 98 million homes for a year. The increasing use of renewable energy sources and the introduction of energy-efficient technologies in manufacturing operations support the advancement towards a cleaner future, but having a good understanding of how the systems and processes function can reduce the environmental burden even further. To facilitate this, methods are developed to model the energy of manufacturing across three hierarchical levels: production equipment, factory operations, and industry; these methods are used to accurately assess the current state and provide effective recommendations to further reduce energy consumption. First, the energy consumption of production equipment is characterized to provide machine operators and product designers with viable methods to estimate the environmental impact of the manufacturing phase of a product. The energy model of production equipment is tested and found to have an average accuracy of 97% for a product requiring machining with a variable material removal rate profile. However, changing the use of production equipment alone will not result in an optimal solution since machines are part of a larger system.
Which machines to use, how to schedule production runs while accounting for idle time, the design of the factory layout to facilitate production, and even the machining parameters --- these decisions affect how much energy is utilized during production. Therefore, at the facility level a methodology is presented for implementing priority queuing while accounting for a high product mix in a discrete event simulation environment. A baseline case is presented and alternative factory designs are suggested, which lead to energy savings of approximately 9%. At the industry level, the majority of energy consumption for manufacturing facilities is utilized for machine drive, process heating, and HVAC. Numerous studies have characterized the energy of manufacturing processes and HVAC equipment, but energy data is often limited for a facility in its entirety since manufacturing companies often lack the appropriate sensors to track it and are hesitant to release this information for confidentiality purposes. Without detailed information about the use of energy in manufacturing sites, the scope of factory studies cannot be adequately defined. Therefore, the breakdown of energy consumption of sectors with discrete production is presented, as well as a case study assessing the electrical energy consumption, greenhouse gas emissions, their associated costs, and labor costs for selected sites in the United States, Japan, Germany, China, and India. By presenting energy models and assessments of production equipment, factory operations, and industry, this dissertation provides a comprehensive assessment of energy trends in manufacturing and recommends methods that can be used beyond these case studies and industries to reduce consumption and contribute to an energy-efficient future.
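    The facility-level argument (idle time drives avoidable consumption, so tighter scheduling saves energy) can be illustrated with a deliberately tiny model; the power draws, job list, and shift lengths are hypothetical:

```python
# Toy illustration of the facility-level point: idle time between jobs costs
# energy, so tighter scheduling cuts consumption. All numbers are hypothetical.
P_BUSY, P_IDLE = 10.0, 3.0  # kW while machining vs. idling

def energy_kwh(jobs, makespan_h):
    # jobs: list of machining durations (hours); the machine idles otherwise.
    busy = sum(jobs)
    return P_BUSY * busy + P_IDLE * (makespan_h - busy)

jobs = [1.0, 0.5, 1.5]
loose = energy_kwh(jobs, makespan_h=8.0)   # jobs spread over a full shift
tight = energy_kwh(jobs, makespan_h=4.0)   # same jobs in half the makespan
saving = (loose - tight) / loose
print(loose, tight, round(saving, 3))
```

    Even with unchanged machining energy, compressing the makespan removes idle hours and yields a double-digit percentage saving in this toy case, the same mechanism the dissertation's discrete event simulation quantifies (roughly 9% in its case study).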

  5. Microcompartments and protein machines in prokaryotes.

    PubMed

    Saier, Milton H

    2013-01-01

    The prokaryotic cell was once thought of as a 'bag of enzymes' with little or no intracellular compartmentalization. In this view, most reactions essential for life occurred as a consequence of random molecular collisions involving substrates, cofactors and cytoplasmic enzymes. Our current conception of a prokaryote is far from this view. We now consider a bacterium or an archaeon as a highly structured, nonrandom collection of functional membrane-embedded and proteinaceous molecular machines, each of which serves a specialized function. In this article we shall present an overview of such microcompartments including (1) the bacterial cytoskeleton and the apparatuses allowing DNA segregation during cell division; (2) energy transduction apparatuses involving light-driven proton pumping and ion gradient-driven ATP synthesis; (3) prokaryotic motility and taxis machines that mediate cell movements in response to gradients of chemicals and physical forces; (4) machines of protein folding, secretion and degradation; (5) metabolosomes carrying out specific chemical reactions; (6) 24-hour clocks allowing bacteria to coordinate their metabolic activities with the daily solar cycle, and (7) proteinaceous membrane compartmentalized structures such as sulfur granules and gas vacuoles. Membrane-bound prokaryotic organelles were considered in a recent Journal of Molecular Microbiology and Biotechnology written symposium concerned with membranous compartmentalization in bacteria [J Mol Microbiol Biotechnol 2013;23:1-192]. By contrast, in this symposium, we focus on proteinaceous microcompartments. These two symposia, taken together, provide the interested reader with an objective view of the remarkable complexity of what was once thought of as a simple noncompartmentalized cell. Copyright © 2013 S. Karger AG, Basel.

  6. Ringing in the new physics: The politics and technology of electron colliders in the United States, 1956--1972

    NASA Astrophysics Data System (ADS)

    Paris, Elizabeth

    The ``November Revolution'' of 1974 and the experiments that followed consolidated the place of the Standard Model in modern particle physics. Much of the evidence on which these conclusions depended was generated by a new type of tool: colliding beam storage rings, which had been considered physically unfeasible twenty years earlier. In 1956 a young experimentalist named Gerry O'Neill dedicated himself to demonstrating that such an apparatus could do useful physics. The storage ring movement encountered numerous obstacles before generating one of the standard machines for high energy research. In fact, it wasn't until 1970 that the U.S. finally broke ground on its first electron-positron collider. Drawing extensively on archival sources and supplementing them with the personal accounts of many of the individuals who took part, Ringing in the New Physics examines this instance of post-World War II techno-science and the new social, political and scientific tensions that characterize it. The motivations are twofold: first, that the chronicle of storage rings may take its place beside mathematical group theory, computer simulations, magnetic spark chambers, and the like as an important contributor to a view of matter and energy which has been the dominant model for the last twenty-five years. In addition, the account provides a case study for the integration of the personal, professional, institutional, and material worlds when examining an episode in the history or sociology of twentieth century science. The story behind the technological development of storage rings holds fascinating insights into the relationship between theory and experiment, collaboration and competition in the physics community, the way scientists obtain funding and their responsibilities to it, and the very nature of what constitutes ``successful'' science in the post- World War II era.

  7. Thermodynamic analysis of resources used in manufacturing processes.

    PubMed

    Gutowski, Timothy G; Branham, Matthew S; Dahmus, Jeffrey B; Jones, Alissa J; Thiriez, Alexandre

    2009-03-01

    In this study we use a thermodynamic framework to characterize the material and energy resources used in manufacturing processes. The analysis and data span a wide range of processes from "conventional" processes such as machining, casting, and injection molding, to the so-called "advanced machining" processes such as electrical discharge machining and abrasive waterjet machining, and to the vapor-phase processes used in semiconductor and nanomaterials fabrication. In all, 20 processes are analyzed. The results show that the intensity of materials and energy used per unit of mass of material processed (measured either as specific energy or exergy) has increased by at least 6 orders of magnitude over the past several decades. The increase of material/energy intensity use has been primarily a consequence of the introduction of new manufacturing processes, rather than changes in traditional technologies. This phenomenon has been driven by the desire for precise small-scale devices and product features and enabled by stable and declining material and energy prices over this period. We illustrate the relevance of thermodynamics (including exergy analysis) for all processes in spite of the fact that long-lasting focus in manufacturing has been on product quality--not necessarily energy/material conversion efficiency. We promote the use of thermodynamics tools for analysis of manufacturing processes within the context of rapidly increasing relevance of sustainable human enterprises. We confirm that exergy analysis can be used to identify where resources are lost in these processes, which is the first step in proposing and/or redesigning new more efficient processes.

  8. Man/Machine Interaction Dynamics And Performance (MMIDAP) capability

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    1991-01-01

    The creation of an ability to study interaction dynamics between a machine and its human operator can be approached from a myriad of directions. The Man/Machine Interaction Dynamics and Performance (MMIDAP) project seeks to create an ability to study the consequences of machine design alternatives relative to the performance of both machine and operator. The class of machines to which this study is directed includes those that require the intelligent physical exertions of a human operator. While Goddard's Flight Telerobotics program was expected to be a major user, basic engineering design and biomedical applications reach far beyond telerobotics. The ongoing efforts of GSFC and its university and small-business collaborators to integrate human performance and musculoskeletal databases with the analysis capabilities needed to study the dynamic actions, reactions, and performance of coupled machine/operator systems are outlined.

  9. Energy Harvesting for Soft-Matter Machines and Electronics

    DTIC Science & Technology

    2016-06-09

    AFRL-AFOSR-VA-TR-2016-0353: Energy Harvesting for Soft-Matter Machines and Electronics. Final report by Carmel Majidi, Department of Mechanical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213-3815, US. Distribution A: approved for public release.

  10. Practical aspects of the use of three-phase alternating current electric machines in electricity storage system

    NASA Astrophysics Data System (ADS)

    Ciucur, Violeta

    2015-02-01

    Among three-phase alternating current electric machines, the question arises as to which is more advantageous for use in an electrical energy storage system based on pumped water. The two main candidates in this dispute are the synchronous machine and the asynchronous machine. The permanent-magnet synchronous machine configuration is considered because it offers advantages over the conventional synchronous machine, first by removing the need for a separate excitation winding. From the standpoint of the losses of the two machine types, the magnetic flux density is adjusted optimally to minimize the copper losses as well as the hysteresis and eddy-current losses.

  11. Determination of the Lowest-Energy States for the Model Distribution of Trained Restricted Boltzmann Machines Using a 1000 Qubit D-Wave 2X Quantum Computer.

    PubMed

    Koshka, Yaroslav; Perera, Dilina; Hall, Spencer; Novotny, M A

    2017-07-01

    The possibility of using a quantum computer D-Wave 2X with more than 1000 qubits to determine the global minimum of the energy landscape of trained restricted Boltzmann machines is investigated. In order to overcome the problem of limited interconnectivity in the D-Wave architecture, the proposed RBM embedding combines multiple qubits to represent a particular RBM unit. The results for the lowest-energy (the ground state) and some of the higher-energy states found by the D-Wave 2X were compared with those of the classical simulated annealing (SA) algorithm. In many cases, the D-Wave machine successfully found the same RBM lowest-energy state as that found by SA. In some examples, the D-Wave machine returned a state corresponding to one of the higher-energy local minima found by SA. The inherently nonperfect embedding of the RBM into the Chimera lattice explored in this work (i.e., multiple qubits combined into a single RBM unit were found not to be guaranteed to be all aligned) and the existence of small, persistent biases in the D-Wave hardware may cause a discrepancy between the D-Wave and the SA results. In some of the investigated cases, introduction of a small bias field into the energy function or optimization of the chain-strength parameter in the D-Wave embedding successfully addressed difficulties of the particular RBM embedding. With further development of the D-Wave hardware, the approach will be suitable for much larger numbers of RBM units.
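
The classical baseline against which the D-Wave results were compared, simulated annealing over the joint RBM energy, can be sketched as follows. The tiny weights and the geometric cooling schedule below are illustrative assumptions, not the paper's setup:

```python
import math
import random

def rbm_energy(v, h, W, a, b):
    """E(v, h) = -a.v - b.h - v^T W h for binary visible/hidden units."""
    e = -sum(ai * vi for ai, vi in zip(a, v))
    e -= sum(bj * hj for bj, hj in zip(b, h))
    e -= sum(W[i][j] * v[i] * h[j]
             for i in range(len(v)) for j in range(len(h)))
    return e

def anneal(W, a, b, steps=20000, t0=2.0, t1=0.05, seed=0):
    """Single-flip Metropolis annealing over the joint (v, h) state,
    with a geometric cooling schedule from t0 down to t1."""
    rng = random.Random(seed)
    v = [rng.randint(0, 1) for _ in a]
    h = [rng.randint(0, 1) for _ in b]
    e = rbm_energy(v, h, W, a, b)
    best = (e, v[:], h[:])
    for k in range(steps):
        T = t0 * (t1 / t0) ** (k / steps)
        i = rng.randrange(len(v) + len(h))
        unit = v if i < len(v) else h
        j = i if i < len(v) else i - len(v)
        unit[j] ^= 1                            # propose a flip
        e2 = rbm_energy(v, h, W, a, b)
        if e2 <= e or rng.random() < math.exp((e - e2) / T):
            e = e2                              # accept the move
            if e < best[0]:
                best = (e, v[:], h[:])
        else:
            unit[j] ^= 1                        # reject: undo the flip
    return best
```

On the D-Wave side the same energy function is mapped onto the Chimera lattice, with chains of physical qubits standing in for each RBM unit; SA needs no such embedding.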

  12. Catalysis of heat-to-work conversion in quantum machines

    PubMed Central

    Ghosh, A.; Latune, C. L.; Davidovich, L.; Kurizki, G.

    2017-01-01

    We propose a hitherto-unexplored concept in quantum thermodynamics: catalysis of heat-to-work conversion by quantum nonlinear pumping of the piston mode which extracts work from the machine. This concept is analogous to chemical reaction catalysis: Small energy investment by the catalyst (pump) may yield a large increase in heat-to-work conversion. Since it is powered by thermal baths, the catalyzed machine adheres to the Carnot bound, but may strongly enhance its efficiency and power compared with its noncatalyzed counterparts. This enhancement stems from the increased ability of the squeezed piston to store work. Remarkably, the fraction of piston energy that is convertible into work may then approach unity. The present machine and its counterparts powered by squeezed baths share a common feature: Neither is a genuine heat engine. However, a squeezed pump that catalyzes heat-to-work conversion by small investment of work is much more advantageous than a squeezed bath that simply transduces part of the work invested in its squeezing into work performed by the machine. PMID:29087326

  13. Catalysis of heat-to-work conversion in quantum machines

    NASA Astrophysics Data System (ADS)

    Ghosh, A.; Latune, C. L.; Davidovich, L.; Kurizki, G.

    2017-11-01

    We propose a hitherto-unexplored concept in quantum thermodynamics: catalysis of heat-to-work conversion by quantum nonlinear pumping of the piston mode which extracts work from the machine. This concept is analogous to chemical reaction catalysis: Small energy investment by the catalyst (pump) may yield a large increase in heat-to-work conversion. Since it is powered by thermal baths, the catalyzed machine adheres to the Carnot bound, but may strongly enhance its efficiency and power compared with its noncatalyzed counterparts. This enhancement stems from the increased ability of the squeezed piston to store work. Remarkably, the fraction of piston energy that is convertible into work may then approach unity. The present machine and its counterparts powered by squeezed baths share a common feature: Neither is a genuine heat engine. However, a squeezed pump that catalyzes heat-to-work conversion by small investment of work is much more advantageous than a squeezed bath that simply transduces part of the work invested in its squeezing into work performed by the machine.

  14. Catalysis of heat-to-work conversion in quantum machines.

    PubMed

    Ghosh, A; Latune, C L; Davidovich, L; Kurizki, G

    2017-11-14

    We propose a hitherto-unexplored concept in quantum thermodynamics: catalysis of heat-to-work conversion by quantum nonlinear pumping of the piston mode which extracts work from the machine. This concept is analogous to chemical reaction catalysis: Small energy investment by the catalyst (pump) may yield a large increase in heat-to-work conversion. Since it is powered by thermal baths, the catalyzed machine adheres to the Carnot bound, but may strongly enhance its efficiency and power compared with its noncatalyzed counterparts. This enhancement stems from the increased ability of the squeezed piston to store work. Remarkably, the fraction of piston energy that is convertible into work may then approach unity. The present machine and its counterparts powered by squeezed baths share a common feature: Neither is a genuine heat engine. However, a squeezed pump that catalyzes heat-to-work conversion by small investment of work is much more advantageous than a squeezed bath that simply transduces part of the work invested in its squeezing into work performed by the machine.

  15. Ryan King | NREL

    Science.gov Websites

    Research focuses on optimization and machine learning applied to complex energy systems and turbulent flows, including techniques to improve wind plant design and controls and a new data-driven machine learning closure.

  16. Using Phun to Study "Perpetual Motion" Machines

    ERIC Educational Resources Information Center

    Kores, Jaroslav

    2012-01-01

    The concept of "perpetual motion" has a long history. The Indian astronomer and mathematician Bhaskara II (12th century) was the first person to describe a perpetual motion (PM) machine. An example of a 13th-century PM machine is shown in Fig. 1. Although the law of conservation of energy clearly implies the impossibility of PM construction, over…

  17. Design and fabrication of complete dentures using CAD/CAM technology

    PubMed Central

    Han, Weili; Li, Yanfeng; Zhang, Yue; lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi

    2017-01-01

    Abstract The aim of the study was to test the feasibility of using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology including 3Shape Dental System 2013 trial version, WIELAND V2.0.049 and WIELAND ZENOTEC T1 milling machine to design and fabricate complete dentures. The modeling process of full denture available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures designed were exported to CAM software of WIELAND V2.0.049. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated according to the designed virtual complete dentures using milling machine controlled by a CAM software. Bonding physical dentitions to the corresponding baseplates generated the final physical complete dentures. Our study demonstrated that complete dentures could be successfully designed and fabricated by using CAD/CAM. PMID:28072686

  18. Chip formation and surface integrity in high-speed machining of hardened steel

    NASA Astrophysics Data System (ADS)

    Kishawy, Hossam Eldeen A.

    Increasing demands for high production rates as well as cost reduction have emphasized the potential for the industrial application of hard turning technology during the past few years. Machining instead of grinding hardened steel components reduces the machining sequence, the machining time, and the specific cutting energy. Hard turning is characterized by the generation of high temperatures, the formation of saw-toothed chips, and the high ratio of thrust to tangential cutting force components. Although a large volume of literature exists on hard turning, the change in machined surface physical properties represents a major challenge. Thus, a better understanding of the cutting mechanism in hard turning is still required. In particular, the chip formation process and the surface integrity of the machined surface are important issues which require further research. In this thesis, a mechanistic model for saw-toothed chip formation is presented. This model is based on the concept of crack initiation on the free surface of the workpiece. The model presented explains the mechanism of chip formation. In addition, an experimental investigation is conducted in order to study the chip morphology. The effect of process parameters, including edge preparation and tool wear, on the chip morphology is studied using Scanning Electron Microscopy (SEM). The dynamics of chip formation are also investigated. The surface integrity of the machined parts is also investigated. This investigation focuses on residual stresses as well as surface and sub-surface deformation. A three-dimensional thermo-elasto-plastic finite element model is developed to predict the machining residual stresses. The effect of flank wear is introduced during the analysis. Although residual stresses have complicated origins and are introduced by many factors, in this model only the thermal and mechanical factors are considered.
    The finite element analysis demonstrates the significant effect of the heat generated during cutting on the residual stresses. The machined specimens are also examined using the X-ray diffraction technique to clarify the effect of different speeds, feeds, and depths of cut, as well as different edge preparations, on the residual stress distribution beneath the machined surface. A reasonable agreement between the predicted and measured residual stresses is obtained. The results obtained demonstrate the possibility of eliminating high tensile residual stresses in the workpiece surface by selecting the proper cutting conditions. The machined surfaces are examined using SEM to study the effect of different process parameters and edge preparations on the quality of the machined surface. The phenomenon of material side flow is investigated to clarify its mechanism. The effect of process parameters and edge preparations on sub-surface deformation is also investigated.

  19. The development of a control system for a small high speed steam microturbine generator system

    NASA Astrophysics Data System (ADS)

    Alford, A.; Nichol, P.; Saunders, M.; Frisby, B.

    2015-08-01

    Steam is a widely used energy source. In many situations steam is generated at high pressures and then reduced in pressure through control valves before reaching the point of use. An opportunity was identified to convert some of the energy at the point of pressure reduction into electricity. To take advantage of a market identified for small-scale systems, a microturbine generator was designed based on a small high-speed turbomachine. This machine was packaged with the necessary control valves and systems to allow connection of the machine to the grid. Traditional machines vary the speed of the generator to match the grid frequency; this was not possible due to the high speed of this machine. The characteristics of the rotating unit had to be understood to design a control system that could export energy to the grid at the right frequency under the widest possible range of steam conditions. A further goal of the control system was to maximise the efficiency of generation under all conditions. A further complication was providing adequate protection for the rotating unit in the event of the loss of connection to the grid. The system designed to meet these challenges is outlined, along with the solutions employed and tested for this application.

  20. Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality

    NASA Astrophysics Data System (ADS)

    Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.

    2017-12-01

    Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems since these models can be used to complement or even replace physically based-models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we have developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrence Neural Networks) were explored to prepare input- output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
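
The workflow described, running a physically based simulator to build a database and then training a data-driven model on it, can be sketched with a deliberately toy runoff formula and a k-nearest-neighbour regressor standing in for the paper's Neural Network, SVM, and RNN models. Both the simulator formula and the grid are illustrative assumptions:

```python
import math

def toy_simulator(rain_mm, slope, cover):
    """Hypothetical stand-in for a physically based runoff model:
    rain in excess of an infiltration capacity, scaled by slope."""
    infiltration = 20.0 * cover
    return max(rain_mm - infiltration, 0.0) * (0.5 + 0.5 * slope)

# Build a training database from simulator runs over a parameter grid,
# mirroring the paper's "large number of numerical simulations" step.
grid = [(r, s, c) for r in range(0, 101, 10)
                  for s in (0.0, 0.5, 1.0)
                  for c in (0.0, 0.5, 1.0)]
db = [(x, toy_simulator(*x)) for x in grid]

def knn_predict(x, k=3):
    """k-nearest-neighbour surrogate: average the k closest database outputs."""
    nearest = sorted(db, key=lambda rec: math.dist(rec[0], x))[:k]
    return sum(y for _, y in nearest) / k
```

The pattern is simulate once, query the cheap surrogate many times; a trained network or SVM simply replaces `knn_predict`.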

  1. Finite machines, mental procedures, and modern physics.

    PubMed

    Lupacchini, Rossella

    2007-01-01

    A Turing machine provides a mathematical definition of the natural process of calculating. It rests on trust that a procedure of reason can be reproduced mechanically. Turing's analysis of the concept of mechanical procedure in terms of a finite machine convinced Gödel of the validity of the Church thesis. And yet, Gödel's later concern was that, insofar as Turing's work shows that "mental procedure cannot go beyond mechanical procedures", it would imply the same kind of limitation on human mind. He therefore deems Turing's argument to be inconclusive. The question then arises as to which extent a computing machine operating by finite means could provide an adequate model of human intelligence. It is argued that a rigorous answer to this question can be given by developing Turing's considerations on the nature of mental processes. For Turing such processes are the consequence of physical processes and he seems to be led to the conclusion that quantum mechanics could help to find a more comprehensive explanation of them.

  2. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization: using this technique, memory, CPU, and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional costs, and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen, and the Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  3. An efficient annealing in Boltzmann machine in Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Kin, Teoh Yeong; Hasan, Suzanawati Abu; Bulot, Norhisam; Ismail, Mohammad Hafiz

    2012-09-01

    This paper proposes and implements a Boltzmann machine in a Hopfield neural network performing logic programming based on energy minimization. The temperature schedule in the Boltzmann machine enhances the performance of logic programming in the Hopfield network. The best temperature is determined by observing the ratio of global solutions and the final Hamming distance in computer simulations. The study shows that the Boltzmann machine model is more stable and more capable of representing and solving difficult combinatorial problems.
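
The core mechanism, stochastic unit updates under a falling temperature so the network settles into a low-energy state, can be sketched as follows. The weights, thresholds, and geometric schedule are illustrative assumptions, not the paper's logic-programming setup:

```python
import math
import random

def hopfield_energy(s, W, theta):
    """E(s) = -(1/2) s^T W s + theta.s for binary states s,
    assuming symmetric W with zero diagonal."""
    n = len(s)
    quad = sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
    return -0.5 * quad + sum(t * si for t, si in zip(theta, s))

def boltzmann_run(W, theta, t0=4.0, t1=0.1, sweeps=200, seed=1):
    """Boltzmann-machine updates with a geometric temperature schedule:
    p(s_i = 1) = 1 / (1 + exp(-net_i / T)), net_i = sum_j W_ij s_j - theta_i."""
    rng = random.Random(seed)
    n = len(theta)
    s = [rng.randint(0, 1) for _ in range(n)]
    for k in range(sweeps):
        T = t0 * (t1 / t0) ** (k / sweeps)      # anneal from t0 down to t1
        for i in range(n):
            net = sum(W[i][j] * s[j] for j in range(n)) - theta[i]
            s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-net / T)) else 0
    return s, hopfield_energy(s, W, theta)
```

At high T the updates are nearly random (escaping local minima); as T falls they become nearly deterministic, which is why the schedule governs the ratio of global solutions.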

  4. The Simpsons program 6-D phase space tracking with acceleration

    NASA Astrophysics Data System (ADS)

    Machida, S.

    1993-12-01

    A particle tracking code, Simpsons, covering 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. Unlike existing synchrotron tracking codes, which advance a particle element by element, we take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines do. Arbitrary energy ramping and rf voltage curves as functions of time are read from an input file defining the machine cycle. The code is used to study beam dynamics with time-dependent parameters. Examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
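
Tracking with time as the independent variable, with a machine parameter (here the rf voltage) ramped according to an input machine cycle, can be sketched with a toy longitudinal map. All numbers (slip factor, voltage ramp, beam energy) are placeholders, not Simpsons or SSC parameters:

```python
import math

def track(turns=20000, h=1, phi_s=0.0):
    """Toy longitudinal tracker with time as the independent variable.
    The rf voltage follows a 'machine cycle' ramp evaluated at each turn,
    and one particle's (phase, energy error) is advanced turn by turn."""
    def rf_voltage(turn):                  # hypothetical linear ramp, volts
        return 1.0e3 + 4.0e3 * min(turn / 10000.0, 1.0)
    eta = -0.02                            # slip factor, below transition (assumed)
    beta2_E = 1.0e9                        # beta^2 * E in eV (assumed)
    phi, dE = 0.5, 0.0                     # initial phase (rad), energy error (eV)
    out = []
    for turn in range(turns):
        dE += rf_voltage(turn) * (math.sin(phi) - math.sin(phi_s))   # rf kick
        phi += 2.0 * math.pi * h * eta * dE / beta2_E                # phase slip
        out.append((phi, dE))
    return out
```

The particle performs bounded synchrotron oscillations whose bucket deepens as the voltage ramps; a code like Simpsons tracks full 6-D ensembles with the ramp tables read from input files.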

  5. Laser-machined piezoelectric cantilevers for mechanical energy harvesting.

    PubMed

    Kim, HyunUk; Bedekar, Vishwas; Islam, Rashed Adnan; Lee, Woo-Ho; Leo, Don; Priya, Shashank

    2008-09-01

    In this study, we report results on a piezoelectric- material-based mechanical energy-harvesting device that was fabricated by combining laser machining with microelectronics packaging technology. It was found that the laser-machining process did not have significant effect on the electrical properties of piezoelectric material. The fabricated device was tested in the low-frequency regime of 50 to 1000 Hz at constant force of 8 g (where g = 9.8 m/s(2)). The device was found to generate continuous power of 1.13 microW at 870 Hz across a 288.5 kOmega load with a power density of 301.3 microW/cm(3).
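
As a quick consistency check on the reported figures, assuming a purely resistive load, the rms load voltage and active device volume implied by 1.13 microW across 288.5 kOhm at 301.3 microW/cm^3 can be backed out:

```python
def load_power_uw(v_rms, r_ohm):
    """Average power dissipated in a resistive load, in microwatts."""
    return v_rms ** 2 / r_ohm * 1e6

# Quantities implied by the reported figures (resistive-load assumption):
v_rms = (1.13e-6 * 288.5e3) ** 0.5   # ~0.57 V rms across the load
volume_cm3 = 1.13 / 301.3            # ~3.8e-3 cm^3 active volume
```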

  6. Multi-winding homopolar electric machine

    DOEpatents

    Van Neste, Charles W

    2012-10-16

    A multi-winding homopolar electric machine and method for converting between mechanical energy and electrical energy. The electric machine includes a shaft defining an axis of rotation, first and second magnets, a shielding portion, and a conductor. First and second magnets are coaxial with the shaft and include a charged pole surface and an oppositely charged pole surface, the charged pole surfaces facing one another to form a repulsive field therebetween. The shield portion extends between the magnets to confine at least a portion of the repulsive field to between the first and second magnets. The conductor extends between first and second end contacts and is toroidally coiled about the first and second magnets and the shield portion to develop a voltage across the first and second end contacts in response to rotation of the electric machine about the axis of rotation.

  7. Graduate student theses supported by DOE's Environmental Sciences Division

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cushman, Robert M.; Parra, Bobbi M.

    1995-07-01

    This report provides complete bibliographic citations, abstracts, and keywords for 212 doctoral and master's theses supported fully or partly by the U.S. Department of Energy's Environmental Sciences Division (and its predecessors) in the following areas: Atmospheric Sciences; Marine Transport; Terrestrial Transport; Ecosystems Function and Response; Carbon, Climate, and Vegetation; Information; Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP); Atmospheric Radiation Measurement (ARM); Oceans; National Institute for Global Environmental Change (NIGEC); Unmanned Aerial Vehicles (UAV); Integrated Assessment; Graduate Fellowships for Global Change; and Quantitative Links. Information on the major professor, department, principal investigator, and program area is given for each abstract. Indexes are provided for major professor, university, principal investigator, program area, and keywords. This bibliography is also available in various machine-readable formats (ASCII text file, WordPerfect® files, and PAPYRUS™ files).

  8. Particle Laden Turbulence in a Radiation Environment Using a Portable High Performance Solver Based on the Legion Runtime System

    NASA Astrophysics Data System (ADS)

    Torres, Hilario; Iaccarino, Gianluca

    2017-11-01

    Soleil-X is a multi-physics solver being developed at Stanford University as a part of the Predictive Science Academic Alliance Program II. Our goal is to conduct high fidelity simulations of particle laden turbulent flows in a radiation environment for solar energy receiver applications as well as to demonstrate our readiness to effectively utilize next generation Exascale machines. The novel aspect of Soleil-X is that it is built upon the Legion runtime system to enable easy portability to different parallel distributed heterogeneous architectures while also being written entirely in high-level/high-productivity languages (Ebb and Regent). An overview of the Soleil-X software architecture will be given. Results from coupled fluid flow, Lagrangian point particle tracking, and thermal radiation simulations will be presented. Performance diagnostic tools and metrics corresponding to the same cases will also be discussed. US Department of Energy, National Nuclear Security Administration.

  9. Energy landscape analysis of neuroimaging data

    NASA Astrophysics Data System (ADS)

    Ezaki, Takahiro; Watanabe, Takamitsu; Ohzeki, Masayuki; Masuda, Naoki

    2017-05-01

    Computational neuroscience models have been used for understanding neural dynamics in the brain and how they may be altered when physiological or other conditions change. We review and develop a data-driven approach to neuroimaging data called the energy landscape analysis. The methods are rooted in statistical physics theory, in particular the Ising model, also known as the (pairwise) maximum entropy model and Boltzmann machine. The methods have been applied to fitting electrophysiological data in neuroscience for a decade, but their use in neuroimaging data is still in its infancy. We first review the methods and discuss some algorithms and technical aspects. Then, we apply the methods to functional magnetic resonance imaging data recorded from healthy individuals to inspect the relationship between the accuracy of fitting, the size of the brain system to be analysed and the data length. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.
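
The pairwise maximum entropy (Ising) energy that underlies this analysis, and the single-spin-flip local minima that define the landscape's basins, can be sketched directly. The couplings below are illustrative, not fitted fMRI parameters:

```python
import itertools

def ising_energy(sigma, h, J):
    """E(sigma) = -sum_i h_i sigma_i - sum_{i<j} J_ij sigma_i sigma_j
    for sigma_i in {-1, +1}; state probability is proportional to exp(-E)."""
    n = len(sigma)
    e = -sum(hi * si for hi, si in zip(h, sigma))
    e -= sum(J[i][j] * sigma[i] * sigma[j]
             for i in range(n) for j in range(i + 1, n))
    return e

def local_minima(h, J):
    """States whose energy is not lowered by any single spin flip,
    i.e. the basin bottoms of the energy landscape."""
    n = len(h)
    minima = []
    for sigma in itertools.product((-1, 1), repeat=n):
        e = ising_energy(sigma, h, J)
        if all(ising_energy(sigma[:i] + (-sigma[i],) + sigma[i + 1:], h, J) >= e
               for i in range(n)):
            minima.append((e, sigma))
    return sorted(minima)
```

In the neuroimaging application, `h` and `J` are fitted to binarized regional activity so that the model matches the empirical first and second moments; brain states are then assigned to the basins enumerated here.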

  10. Reproducibility of ad libitum energy intake with the use of a computerized vending machine system

    PubMed Central

    Votruba, Susanne B; Franks, Paul W; Krakoff, Jonathan; Salbe, Arline D

    2010-01-01

    Background: Accurate assessment of energy intake is difficult but critical for the evaluation of eating behavior and intervention effects. Consequently, methods to assess ad libitum energy intake under controlled conditions have been developed. Objective: Our objective was to evaluate the reproducibility of ad libitum energy intake with the use of a computerized vending machine system. Design: Twelve individuals (mean ± SD: 36 ± 8 y old; 41 ± 8% body fat) consumed a weight-maintaining diet for 3 d; subsequently, they self-selected all food with the use of a computerized vending machine system for an additional 3 d. Mean daily energy intake was calculated from the actual weight of foods consumed and expressed as a percentage of weight-maintenance energy needs (%WMEN). Subjects repeated the study multiple times during 2 y. The within-person reproducibility of energy intake was determined through the calculation of the intraclass correlation coefficients (ICCs) between visits. Results: Daily energy intake for all subjects was 5020 ± 1753 kcal during visit 1 and 4855 ± 1615 kcal during visit 2. There were no significant associations between energy intake and body weight, body mass index, or percentage body fat while subjects used the vending machines, which indicates that intake was not driven by body size or need. Despite overconsumption (%WMEN = 181 ± 57%), the reproducibility of intake between visits, whether expressed as daily energy intake (ICC = 0.90), %WMEN (ICC = 0.86), weight of food consumed (ICC = 0.87), or fat intake (g/d; ICC = 0.87), was highly significant (P < 0.0001). Conclusion: Although ad libitum energy intake exceeded %WMEN, the within-person reliability of this intake across multiple visits was high, which makes this a reproducible method for the measurement of ad libitum intake in subjects who reside in a research unit. This trial was registered at clinicaltrials.gov as NCT00342732. PMID:19923376
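
The reproducibility statistic used here, the intraclass correlation coefficient, can be computed from a one-way random-effects ANOVA decomposition. This is a minimal sketch; the exact ICC variant used in the paper may differ:

```python
def icc_oneway(data):
    """data: one list per subject, each with k repeated measurements.
    One-way random-effects ICC(1,1) = (MSB - MSW) / (MSB + (k-1) * MSW),
    where MSB/MSW are the between- and within-subject mean squares."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC near 0.9, as reported, means between-subject variance dwarfs within-subject (visit-to-visit) variance, i.e. intake is highly reproducible per person.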

  11. Design of heat exchanger for Ericsson-Brayton piston engine.

    PubMed

    Durcansky, Peter; Papucik, Stefan; Jandacka, Jozef; Holubcik, Michal; Nosek, Radovan

    2014-01-01

    Combined power generation, or cogeneration, is a highly effective technology that produces heat and electricity in one device more efficiently than separate production. Overall effectiveness grows when combined energy-extraction technologies are used, recovering heat from flue gases and machine coolants. Another problem is the dependence of such devices on fossil fuels: combustion turbines mostly burn natural gas or kerosene, while heating power plants mostly burn coal. It is therefore necessary to seek alternatives today. The obvious first steps are to restrict the use of oil and to change the type of energy used in transport; another significant change is the increase in renewable energy, i.e., energy produced from renewable sources. Machines that extract energy in unconventional ways include mainly the steam engine, the Stirling engine, and the Ericsson engine. In these machines energy is obtained by external combustion, and the engine performs work in a medium that receives and transmits energy from the combustion or flue gases indirectly. The paper deals with the principle of hot-air engines, their use in combined heat and electricity production from biomass, and with heat exchangers as the primary energy-transforming element.

  12. Poster - Thurs Eve-21: Experience with the Velocity(TM) pre-commissioning services.

    PubMed

    Scora, D; Sixel, K; Mason, D; Neath, C

    2008-07-01

    As the first Canadian users of the Velocity™ program offered by Siemens, we would like to share our experience with the program. The Velocity program involves the measurement of the commissioning data by an independent Physics consulting company at the factory test cell. The data collected was used to model the treatment beams in our planning system in parallel with the linac delivery and installation. Beam models and a complete data book were generated for two photon energies including Virtual Wedge, physical wedge, and IMRT, and 6 electron energies at 100 and 110 cm SSD. Our final beam models are essentially the Velocity models with some minor modifications to customize the fit to our liking. Our experience with the Velocity program was very positive; the data collection was professional and efficient. It allowed us to proceed with confidence in our beam data and modeling and to spend more time on other aspects of opening a new clinic. With the assistance of the program we were able to open a three-linac clinic with Image-Guided IMRT within 4.5 months of machine delivery. © 2008 American Association of Physicists in Medicine.

  13. Experimental characterization of a small custom-built double-acting gamma-type Stirling engine

    NASA Astrophysics Data System (ADS)

    Intsiful, Peter; Mensah, Francis; Thorpe, Arthur

    This paper investigates the characterization of a small custom-built double-acting gamma-type Stirling engine. The Stirling-cycle engine is a reciprocating energy-conversion machine with working spaces operating under conditions of oscillating pressure and flow. These conditions may be due to compressibility as well as pressure and temperature fluctuations. The standard literature indicates that there is a lack of basic physics to account for the transport phenomena that manifest themselves in the working spaces of reciprocating engines. Previous techniques involve the governing equations of mass, momentum and energy; some authors use engineering thermodynamics. None of these approaches addresses this particular engine. A technique for observing and analyzing the behavior of this engine via parametric spectral profiles has been developed, using laser beams. These profiles enabled the generation of pv-curves and other trajectories for investigating the thermo-physical and thermo-hydrodynamic phenomena that manifest in the exchangers. The engine's performance was examined. The results indicate that with a current load of 35.78 A, electric power of 0.505 kW was generated at a speed of 240 rpm, and an efficiency of 29.50 percent was obtained. NASA grants to Howard University: NASA/HBCU-NHRETU & CSTEA.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Strykowsky, T. Brown, J. Chrzanowski, M. Cole, P. Heitzenroeder, G.H. Neilson, Donald Rej, and M. Viola

    The National Compact Stellarator Experiment (NCSX) was designed to test physics principles of an innovative fusion energy confinement device developed by the Princeton Plasma Physics Laboratory (PPPL) and Oak Ridge National Laboratory (ORNL) under contract from the US Department of Energy. The project was technically very challenging, primarily due to the complex component geometries and tight tolerances that were required. As the project matured these challenges manifested themselves in significant cost overruns through all phases of the project (i.e. design, R&D, fabrication and assembly). The project was subsequently cancelled by the DOE in 2008. Although the project was not completed, several major work packages, comprising about 65% of the total estimated cost (excluding management and contingency), were completed, providing a data base of actual costs that can be analyzed to understand cost drivers. Technical factors that drove costs included the complex geometry, tight tolerances, material requirements, and performance requirements. Management factors included imposed annual funding constraints that throttled project cash flow, staff availability, and inadequate R&D. Understanding how requirements and design decisions drove cost through this top-down forensic cost analysis could provide valuable insight into the configuration and design of future state-of-the-art machines and other devices.

  15. Musical feedback during exercise machine workout enhances mood

    PubMed Central

    Fritz, Thomas H.; Halfpaap, Johanna; Grahl, Sophia; Kirkland, Ambika; Villringer, Arno

    2013-01-01

    Music making has a number of beneficial effects for motor tasks compared to passive music listening. Given that recent research suggests that high energy musical activities elevate positive affect more strongly than low energy musical activities, we here investigated a recent method that combined music making with systematically increasing physiological arousal by exercise machine workout. We compared mood and anxiety after two exercise conditions on non-cyclical exercise machines, one with passive music listening and the other with musical feedback (where participants could make music with the exercise machines). The results showed that agency during exercise machine workout (an activity we previously labeled "jymmin", a cross between "jammin'" and "gym") had an enhancing effect on mood compared to workout with passive music listening. Furthermore, the order in which the conditions were presented mediated the effect of musical agency for this subscale: when participants first listened passively, the difference in mood between the two conditions was greater, suggesting that a stronger increase in hormone levels (e.g., endorphins) during the active condition may have caused the observed effect. Given an enhanced mood after training with musical feedback compared to passively listening to the same type of music during workout, the results suggest that exercise machine workout with musical feedback (jymmin) makes the act of exercise machine training more desirable. PMID:24368905

  16. Hybrid-secondary uncluttered permanent magnet machine and method

    DOEpatents

    Hsu, John S.

    2005-12-20

    An electric machine (40) has a stator (43), a permanent magnet rotor (38) with permanent magnets (39) and a magnetic coupling uncluttered rotor (46) for inducing a slip energy current in secondary coils (47). A dc flux can be produced in the uncluttered rotor when the secondary coils are fed with dc currents. The magnetic coupling uncluttered rotor (46) has magnetic brushes (A, B, C, D) which couple flux in through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments and is applicable to the hybrid electric vehicle. A method of providing a slip energy controller is also disclosed.

  17. 21 CFR 890.1850 - Diagnostic muscle stimulator.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) MEDICAL DEVICES PHYSICAL MEDICINE DEVICES Physical Medicine Diagnostic Devices § 890.1850 Diagnostic... electromyograph machine to initiate muscle activity. It is intended for medical purposes, such as to diagnose...

  18. Plasma Wall interaction in the IGNITOR machine

    NASA Astrophysics Data System (ADS)

    Ferro, C.

    1998-11-01

    One of the critical issues in ignited machines is the management of the heat and particle exhaust without degradation of the plasma quality (pollution and confinement time) and without damage to the material facing the plasma. The IGNITOR machine has been conceived as a "limiter" device, i.e., with the plasma leaning on nearly the entire surface of the first wall. Peak heat loads can easily be maintained at values lower than 1.35 MW/m^2 even considering displacements of the plasma column^1. This "limiter" choice is based on the operational performance of high density, high field machines, which suggests that intrinsic physics processes in the edge of the plasma are effective in spreading heat loads and maintaining the plasma pollution at a low level. The possibility of these operating scenarios has been demonstrated recently by different machines in both limiter and divertor configurations. The basis for the different physical processes that are expected to influence the IGNITOR edge parameters^2 is discussed and a comparison with the latest experimental results is given. ^1 C. Ferro, G. Franzoni, R. Zanino, ENEA Internal Report RT/ERG/FUS/94/14. ^2 C. Ferro, R. Zanino, J. Nucl. Mater. 543, 176 (1990).

  19. First experimental evidence of hydrodynamic tunneling of ultra-relativistic protons in extended solid copper target at the CERN HiRadMat facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, R.; Grenier, D.; Wollmann, D.

    2014-08-15

    A novel experiment has been performed at the CERN HiRadMat test facility to study the impact of the 440 GeV proton beam generated by the Super Proton Synchrotron on extended solid copper cylindrical targets. Substantial hydrodynamic tunneling of the protons in the target material has been observed that leads to significant lengthening of the projectile range, which confirms our previous theoretical predictions [N. A. Tahir et al., Phys. Rev. Spec. Top.-Accel. Beams 15, 051003 (2012)]. Simulation results show very good agreement with the experimental measurements. These results have very important implications for the machine protection design of powerful machines like the Large Hadron Collider (LHC), the future High Luminosity LHC, and the proposed huge 80 km circumference Future Circular Collider, which is currently being discussed at CERN. Another very interesting outcome of this work is that one may also study the field of High Energy Density Physics at this test facility.

  20. Justification of process of loading coal onto face conveyors by auger heads of shearer-loader machines

    NASA Astrophysics Data System (ADS)

    Nguyen, K. L.; Gabov, V. V.; Zadkov, D. A.; Le, T. B.

    2018-03-01

    This paper analyzes the processes of removing coal from the area of its dislodging and loading the disintegrated mass onto face conveyors by the auger heads of shearer-loader machines. The loading process is assumed to consist of four subprocesses: dislodging coal, removal of the disintegrated mass by auger blades from the crushing area, passive transportation of the disintegrated mass, and forming the load flow on the bearing surface of a face conveyor. Each of the considered subprocesses differs in its physical nature and in the number of factors influencing it, and can be complex or multifactor. Possibilities for improving the efficiency of loading coal onto a face conveyor are addressed. The selected criteria of loading efficiency are load rate, specific energy consumption, and coal size reduction. Efficiency is improved by reducing the resistance to movement of the disintegrated mass during loading by increasing the area of the loading window section and the volume of the loading area on the conveyor, as well as by coordinating the intensity of flows related to the considered processes in local areas.

  1. Spindle speed variation technique in turning operations: Modeling and real implementation

    NASA Astrophysics Data System (ADS)

    Urbikain, G.; Olvera, D.; de Lacalle, L. N. López; Elías-Zúñiga, A.

    2016-11-01

    Chatter is still one of the most challenging problems in machining vibrations. Researchers have focused their efforts on preventing, avoiding or reducing chatter vibrations by introducing more accurate predictive physical methods. Among them, techniques based on varying the rotational speed of the spindle (SSV, Spindle Speed Variation) have gained great relevance. However, several problems need to be addressed for technical and practical reasons. On one hand, these techniques can generate harmful overheating of the spindle, especially at high speeds. On the other hand, the machine may be unable to perform the interpolation properly. Moreover, it is not trivial to select the most appropriate tuning parameters. This paper conducts a study of the real implementation of the SSV technique in turning systems. First, a stability model based on perturbation theory was developed for simulation purposes. Second, the procedure to realistically implement the technique in a conventional turning center was developed and tested. The balance between the improved stability margins and acceptable behavior of the spindle is ensured by energy-consumption measurements. The mathematical model shows good agreement with experimental cutting tests.
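    A minimal sketch of the sinusoidal speed profile that SSV techniques typically modulate, using the customary amplitude and frequency tuning ratios RVA and RVF; the numeric values below are illustrative, not taken from the paper:

```python
import numpy as np

def ssv_profile(n0_rpm, rva, rvf, t):
    """Sinusoidal spindle speed variation:
    n(t) = n0 * (1 + RVA * sin(2*pi*f_m*t)),
    where f_m = RVF * n0 / 60 is the modulation frequency (RVF modulation
    cycles per spindle revolution). RVA and RVF are the tuning parameters."""
    f_m = rvf * n0_rpm / 60.0
    return n0_rpm * (1.0 + rva * np.sin(2.0 * np.pi * f_m * t))

t = np.linspace(0.0, 1.0, 10001)      # one second, fine time grid
n = ssv_profile(2000.0, 0.1, 0.5, t)  # 2000 rpm nominal, RVA=0.1, RVF=0.5
```

    The trade-off discussed in the abstract is visible here: larger RVA or RVF widens the stability margins but increases thermal and interpolation load on the spindle drive.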

  2. Tokamak foundation in USSR/Russia 1950-1990

    NASA Astrophysics Data System (ADS)

    Smirnov, V. P.

    2010-01-01

    In the USSR, nuclear fusion research began in 1950 with the work of I.E. Tamm, A.D. Sakharov and colleagues. They formulated the principles of magnetic confinement of high temperature plasmas, that would allow the development of a thermonuclear reactor. Following this, experimental research on plasma initiation and heating in toroidal systems began in 1951 at the Kurchatov Institute. From the very first devices with vessels made of glass, porcelain or metal with insulating inserts, work progressed to the operation of the first tokamak, T-1, in 1958. More machines followed and the first international collaboration in nuclear fusion, on the T-3 tokamak, established the tokamak as a promising option for magnetic confinement. Experiments continued and specialized machines were developed to test separately improvements to the tokamak concept needed for the production of energy. At the same time, research into plasma physics and tokamak theory was being undertaken which provides the basis for modern theoretical work. Since then, the tokamak concept has been refined by a world-wide effort and today we look forward to the successful operation of ITER.

  3. Language extraction from zinc sulfide

    NASA Astrophysics Data System (ADS)

    Varn, Dowman Parks

    2001-09-01

    Recent advances in the analysis of one-dimensional temporal and spatial series allow for detailed characterization of disorder and computation in physical systems. One such system that has defied theoretical understanding since its discovery in 1912 is polytypism. Polytypes are layered compounds, exhibiting crystallinity in two dimensions, yet having complicated stacking sequences in the third direction. They can show both ordered and disordered sequences, sometimes both in the same specimen. We demonstrate a method for extracting two-layer correlation information from ZnS diffraction patterns and employ a novel technique for epsilon-machine reconstruction. We solve a long-standing problem---that of determining structural information for disordered materials from their diffraction patterns---for this special class of disorder. Our solution offers the most complete possible statistical description of the disorder. Furthermore, from our reconstructed epsilon-machines we find the effective range of the interlayer interaction in these materials, as well as the configurational energy of both ordered and disordered specimens. Finally, we can determine the 'language' (in terms of the Chomsky Hierarchy) these small rocks speak, and we find that regular languages are sufficient to describe them.
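    As a rough illustration of the first step of such an analysis, the sketch below tallies empirical next-symbol statistics over length-k stacking histories; grouping histories that share the same predictive distribution into causal states is the essence of epsilon-machine reconstruction. The sequence used is illustrative, not ZnS data:

```python
from collections import Counter, defaultdict

def transition_counts(seq, k=1):
    """Empirical next-symbol counts conditioned on length-k histories.
    Histories with identical (normalized) next-symbol distributions would be
    merged into a single causal state during machine reconstruction."""
    counts = defaultdict(Counter)
    for i in range(len(seq) - k):
        counts[seq[i:i + k]][seq[i + k]] += 1
    return counts
```

    For a perfectly periodic stacking sequence every history predicts its successor deterministically, so each causal state has a single outgoing transition; disorder shows up as spread in these conditional distributions.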

  4. Coordination of peptidoglycan synthesis and outer membrane constriction during Escherichia coli cell division

    PubMed Central

    Gray, Andrew N; Egan, Alexander JF; van't Veer, Inge L; Verheul, Jolanda; Colavin, Alexandre; Koumoutsi, Alexandra; Biboy, Jacob; Altelaar, A F Maarten; Damen, Mirjam J; Huang, Kerwyn Casey; Simorre, Jean-Pierre; Breukink, Eefjan; den Blaauwen, Tanneke; Typas, Athanasios; Gross, Carol A; Vollmer, Waldemar

    2015-01-01

    To maintain cellular structure and integrity during division, Gram-negative bacteria must carefully coordinate constriction of a tripartite cell envelope of inner membrane, peptidoglycan (PG), and outer membrane (OM). It has remained enigmatic how this is accomplished. Here, we show that envelope machines facilitating septal PG synthesis (PBP1B-LpoB complex) and OM constriction (Tol system) are physically and functionally coordinated via YbgF, renamed CpoB (Coordinator of PG synthesis and OM constriction, associated with PBP1B). CpoB localizes to the septum concurrent with PBP1B-LpoB and Tol at the onset of constriction, interacts with both complexes, and regulates PBP1B activity in response to Tol energy state. This coordination links PG synthesis with OM invagination and imparts a unique mode of bifunctional PG synthase regulation by selectively modulating PBP1B cross-linking activity. Coordination of the PBP1B and Tol machines by CpoB contributes to effective PBP1B function in vivo and maintenance of cell envelope integrity during division. DOI: http://dx.doi.org/10.7554/eLife.07118.001 PMID:25951518

  5. DOE-RCT-0003641 Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Edward; Lesster, Ted

    2014-07-30

    This program studied novel concepts for an Axial Flux Reluctance Machine to capture energy from marine hydrokinetic sources and compared their attributes to a Radial Flux Reluctance Machine which was designed under a prior Department of Energy program for the same application. Detailed electromagnetic and mechanical analyses were performed to determine the validity of the concept and to provide a direct comparison with the existing conventional Radial Flux Switched Reluctance Machine designed during the Advanced Wave Energy Conversion Project, DE-EE0003641. The alternate design changed the machine topology so that the flux that is switched flows axially rather than radially and the poles themselves are long radially, as opposed to the radial flux machine that has pole pieces that are long axially. It appeared possible to build an axial flux machine that should be considerably more compact than the radial machine. In an “apples to apples” comparison, the same rules with regard to generating magnetic force and the fundamental limitations of flux density hold, so that at the heart of the machine the same torque equations hold. The differences are in the mechanical configuration that limits or enhances the change of permeance with rotor position, in the amount of permeable iron required to channel the flux via the pole pieces to the air-gaps, and in the sizing and complexity of the electrical winding. Accordingly it was anticipated that the magnetic component weight would be similar but that better use of space would result in a shorter machine with accompanying reduction in housing and support structure. For the comparison the pole count was kept the same at 28 though it was also expected that the radial tapering of the slots between pole pieces would permit a higher pole count machine, enabling the generation of greater power at a given speed in some future design. The baseline Radial Flux Machine design was established during the previous DOE program.
Its characteristics were tabulated for use in comparing to the Axial Flux Machine. Three basic conceptual designs for the Axial Flux Machine were considered: (1) a machine with a single coil at the inner diameter of the machine, (2) a machine with a single coil at the outside diameter of the machine, and (3) a machine with a coil around each tooth. Slight variations of these basic configurations were considered during the study. Analysis was performed on these configurations to determine the best candidate design to advance to preliminary design, based on size, weight, performance, cost and manufacturability. The configuration selected as the most promising was the multi-pole machine with a coil around each tooth. This configuration provided the least complexity with respect to the mechanical configuration and manufacturing, which would yield the highest reliability and lowest cost machine of the three options. A preliminary design was performed on this selected configuration. For this first ever axial design of the multi rotor configuration the 'apples to apples' comparison was based on using the same length of rotor pole as the axial length of rotor pole in the radial machine and making the mean radius of the rotor in the axial machine the same as the air gap radius in the radial machine. The tooth to slot ratio at the mean radius of the axial machine was the same as the tooth to slot ratio of the radial machine. The comparison between the original radial flux machine and the new axial flux machine indicates that for the same torque, the axial flux machine diameter will be 27% greater, but it will have 30% of the length, and 76% of the weight. Based on these results, it is concluded that an axial flux reluctance machine presents a viable option for large generators to be used for the capture of wave energy. 
    In the analysis of Task 4, below, it is pointed out that our selection of dimensional similarity for the 'apples to apples' comparison did not produce an optimum axial flux design. There is torque capability to spare, implying we could reduce the magnetic structure, but the winding area, constrained by the pole separation at the inner pole radius, has a higher resistance than desirable, implying we need more room for copper. The recommendation is to proceed via one cycle of optimization and review to correct this imbalance and then proceed to a detailed design phase to produce manufacturing drawings, followed by the construction of a prototype to test the performance of the machine against predicted results.

  6. Charting the energy landscape of metal/organic interfaces via machine learning

    NASA Astrophysics Data System (ADS)

    Scherbela, Michael; Hörmann, Lukas; Jeindl, Andreas; Obersteiner, Veronika; Hofmann, Oliver T.

    2018-04-01

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. In this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. We demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.
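    The workflow of fitting a surrogate to a small set of DFT-evaluated candidates can be sketched generically. The abstract does not specify the model, so this uses ridge regression on synthetic structure descriptors as a stand-in; the descriptor values and coefficients are fabricated for illustration only:

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    """Ridge-regression surrogate: w = (X^T X + lam*I)^-1 X^T y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def predict(X, w):
    return X @ w

# ~100 "DFT-evaluated" candidate structures (synthetic stand-in data)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))             # structure descriptors
true_w = np.array([0.5, -1.2, 0.0, 0.3, 2.0])   # hidden ground truth
y_train = X_train @ true_w                       # mock formation energies
w = fit_ridge(X_train, y_train)
```

    The point of the approach is that once such a surrogate is fitted on ~100 expensive calculations, formation energies of the remaining polymorph candidates can be screened at negligible cost.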

  7. Charting the energy landscape of metal/organic interfaces via machine learning

    DOE PAGES

    Scherbela, Michael; Hormann, Lukas; Jeindl, Andreas; ...

    2018-04-17

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. In this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. Finally, we demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.

  8. Charting the energy landscape of metal/organic interfaces via machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherbela, Michael; Hormann, Lukas; Jeindl, Andreas

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. In this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. Finally, we demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.

  9. Posture and activity recognition and energy expenditure estimation in a wearable platform.

    PubMed

    Sazonov, Edward; Hegde, Nagaraj; Browning, Raymond C; Melanson, Edward L; Sazonova, Nadezhda A

    2015-07-01

    The use of wearable sensors coupled with the processing power of mobile phones may be an attractive way to provide real-time feedback about physical activity and energy expenditure (EE). Here, we describe the use of a shoe-based wearable sensor system (SmartShoe) with a mobile phone for real-time recognition of various postures/physical activities and the resulting EE. To deal with processing power and memory limitations of the phone, we compare the use of support vector machines (SVM), multinomial logistic discrimination (MLD), and multilayer perceptrons (MLP) for posture and activity classification followed by activity-branched EE estimation. The algorithms were validated using data from 15 subjects who performed up to 15 different activities of daily living during a 4-h stay in a room calorimeter. MLD and MLP demonstrated activity classification accuracy virtually identical to SVM (∼95%) while reducing the running time and the memory requirements by a factor of >10³. Comparison of per-minute EE estimation using activity-branched models resulted in accurate EE prediction (RMSE = 0.78 kcal/min for SVM and MLD activity classification, 0.77 kcal/min for MLP versus RMSE of 0.75 kcal/min for manual annotation). These results suggest that low-power computational algorithms can be successfully used for real-time physical activity monitoring and EE estimation on a wearable platform.
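    The activity-branched scheme described above can be sketched as a two-stage pipeline: classify each epoch into an activity, then apply that activity's own EE model. The nearest-centroid classifier and all coefficients below are hypothetical stand-ins for the paper's SVM/MLD/MLP classifiers and regression models:

```python
import numpy as np

# hypothetical per-activity linear EE models: kcal/min = slope * feature + intercept
EE_MODELS = {"sit": (0.002, 1.0), "walk": (0.004, 2.5)}
# hypothetical class centroids in a 2-D sensor-feature space
CENTROIDS = {"sit": np.array([50.0, 5.0]), "walk": np.array([200.0, 60.0])}

def classify(features):
    # nearest-centroid stand-in for the SVM/MLD/MLP classifiers in the paper
    return min(CENTROIDS, key=lambda a: np.linalg.norm(features - CENTROIDS[a]))

def estimate_ee(features):
    """Activity-branched EE: route the epoch to its activity-specific model."""
    activity = classify(features)
    slope, intercept = EE_MODELS[activity]
    return activity, slope * features[0] + intercept
```

    Branching by activity lets each regression stay simple (and cheap to run on a phone), which is why the low-power classifiers compared in the paper are the critical component.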

  10. High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apollinari, G.; Béjar Alonso, I.; Brüning, O.

    2015-12-17

    The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor of ten. The LHC is already a highly complex and exquisitely optimised machine, so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation, and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.

  11. Integrated Method for Personal Thermal Comfort Assessment and Optimization through Users' Feedback, IoT and Machine Learning: A Case Study †.

    PubMed

    Salamone, Francesco; Belussi, Lorenzo; Currò, Cristian; Danza, Ludovico; Ghellere, Matteo; Guazzi, Giulia; Lenzi, Bruno; Megale, Valentino; Meroni, Italo

    2018-05-17

    Thermal comfort has become a topical issue in building performance assessment as well as energy efficiency. Three methods are mainly recognized for its assessment. Two of them, based on standardized methodologies, face the problem by considering the indoor environment in steady-state conditions (PMV and PPD) and by considering users as active subjects whose thermal perception is influenced by outdoor climatic conditions (the adaptive approach), respectively. The latter method is the starting point for investigating thermal comfort from an overall perspective, considering endogenous variables besides the traditional physical and environmental ones. Following this perspective, the paper describes the results of an in-field investigation of thermal conditions through the use of nearable and wearable solutions, parametric models and machine learning techniques. The aim of the research is to explore the reliability of IoT-based solutions combined with advanced algorithms, in order to create a replicable framework for the assessment and improvement of user thermal satisfaction. For this purpose, an experimental test in real offices was carried out involving eight workers. Parametric models are applied for the assessment of thermal comfort; IoT solutions are used to monitor the environmental variables and the users' parameters; and the machine learning CART method allows prediction of the users' profile and their thermal comfort perception with respect to the indoor environment.
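    A CART regression tree is built by recursively choosing the split that minimizes squared error; a single-split stump shows the core step. The feature (an indoor temperature) and the comfort votes below are hypothetical, purely to exercise the mechanics:

```python
def fit_stump(x, y):
    """One-split regression stump: the elementary building block of CART.
    Returns (threshold, left mean, right mean) minimizing total squared error."""
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue  # a split must leave data on both sides
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((yi - ml) ** 2 for yi in left) + sum((yi - mr) ** 2 for yi in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    return best[1:]
```

    A full CART model repeats this greedy search on each resulting partition, over all monitored variables, until a stopping criterion is met.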

  12. Integrated Method for Personal Thermal Comfort Assessment and Optimization through Users’ Feedback, IoT and Machine Learning: A Case Study †

    PubMed Central

    Currò, Cristian; Danza, Ludovico; Ghellere, Matteo; Guazzi, Giulia; Lenzi, Bruno; Megale, Valentino; Meroni, Italo

    2018-01-01

    Thermal comfort has become a topical issue in building performance assessment as well as energy efficiency. Three methods are mainly recognized for its assessment. Two of them, based on standardized methodologies, face the problem by considering the indoor environment in steady-state conditions (PMV and PPD) and by considering users as active subjects whose thermal perception is influenced by outdoor climatic conditions (the adaptive approach), respectively. The latter method is the starting point for investigating thermal comfort from an overall perspective, considering endogenous variables besides the traditional physical and environmental ones. Following this perspective, the paper describes the results of an in-field investigation of thermal conditions through the use of nearable and wearable solutions, parametric models and machine learning techniques. The aim of the research is to explore the reliability of IoT-based solutions combined with advanced algorithms, in order to create a replicable framework for the assessment and improvement of user thermal satisfaction. For this purpose, an experimental test in real offices was carried out involving eight workers. Parametric models are applied for the assessment of thermal comfort; IoT solutions are used to monitor the environmental variables and the users’ parameters; and the machine learning CART method allows prediction of the users’ profile and their thermal comfort perception with respect to the indoor environment. PMID:29772818

  13. Jacks--A Study of Simple Machines.

    ERIC Educational Resources Information Center

    Parsons, Ralph

    This vocational physics individualized student instructional module on jacks (simple machines used to lift heavy objects) contains student prerequisites and objectives, an introduction, and sections on the ratchet bumper jack, the hydraulic jack, the screw jack, and load limitations. Designed with a laboratory orientation, each section consists of…

  14. A Data-Driven Approach to Develop Physically Sound Predictors: Application to Depth-Averaged Velocities and Drag Coefficients on Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Tinoco, R. O.; Goldstein, E. B.; Coco, G.

    2016-12-01

    We use a machine learning approach to seek accurate, physically sound predictors for estimating two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use published data from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single-element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical but also "hybrid" models, coupling results from machine learning methodologies into physics-based models.

  15. Geologic Carbon Sequestration Leakage Detection: A Physics-Guided Machine Learning Approach

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Harp, D. R.; Chen, B.; Pawar, R.

    2017-12-01

    One of the risks of large-scale geologic carbon sequestration is the potential migration of fluids out of the storage formations. Accurate and fast detection of this fluid migration is both important and challenging, due to the large subsurface uncertainty and complex governing physics. Traditional leakage detection and monitoring techniques rely on geophysical observations, including pressure. However, the accuracy of these methods is limited because the information they provide is indirect and requires expert interpretation, yielding inaccurate estimates of leakage rates and locations. In this work, we develop a novel machine-learning technique based on support vector regression to effectively and efficiently predict leakage locations and leakage rates from a limited number of pressure observations. Compared to conventional data-driven approaches, which can usually be seen as "black box" procedures, we develop a physics-guided machine learning method that incorporates the governing physics into the learning procedure. To validate the performance of our proposed leakage detection method, we apply it to both 2D and 3D synthetic subsurface models. Our novel CO2 leakage detection method shows high detection accuracy in the example problems.
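
    The inverse-problem structure described here (pressure observations in, leakage rate out) can be sketched with support vector regression. The "forward model" below is a made-up analytic stand-in for the reservoir simulations used in the paper, and all numbers are illustrative.

```python
# Illustrative sketch (not the authors' code): support vector regression that
# maps a few pressure observations to a leakage rate. Training data come from
# a toy forward model, standing in for physics-based reservoir simulations.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def forward_model(rate, sensor_dist):
    # toy physics: pressure anomaly decays with distance from the leak
    return rate / (1.0 + sensor_dist)

sensor_dists = np.array([0.5, 1.0, 2.0])         # three monitoring wells
rates = rng.uniform(0.1, 5.0, size=200)          # training leakage rates
X = np.array([forward_model(r, sensor_dists) for r in rates])
X += rng.normal(0.0, 0.01, X.shape)              # observation noise

model = SVR(kernel="rbf", C=10.0).fit(X, rates)
true_rate = 2.0
obs = forward_model(true_rate, sensor_dists)     # "measured" pressures
est = model.predict([obs])[0]
```

    Embedding a physics-based forward model in the training loop, as above, is the sense in which such a regressor is "physics-guided" rather than a pure black box.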

  16. Hidden physics models: Machine learning of nonlinear partial differential equations

    NASA Astrophysics Data System (ADS)

    Raissi, Maziar; Karniadakis, George Em

    2018-03-01

    While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.
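
    The small-data Gaussian-process setting the abstract describes can be illustrated generically (this is a plain sklearn sketch, not the authors' hidden-physics framework): a handful of noise-free observations of a smooth function yield an accurate interpolant with quantified uncertainty.

```python
# Minimal Gaussian-process regression sketch: eight observations of a smooth
# function are enough for a good interpolant plus an uncertainty estimate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)   # only eight training points
y = np.sin(2 * np.pi * X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8).fit(X, y)
mean, std = gp.predict(np.array([[0.5]]), return_std=True)  # posterior at x=0.5
```

    The kernel hyperparameters balance model complexity against data fit, which is the trade-off the abstract attributes to the GP framework.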

  17. Pressure-Letdown Machine for a Coal Reactor

    NASA Technical Reports Server (NTRS)

    Perkins, G. S.; Mabe, W. B.

    1986-01-01

    Pumps operating in reverse generate power. Conceptual pressure-letdown machine for coal-liquefaction system extracts energy from expansion of product fluid. Mud pumps, originally intended for use in oil drilling, operated in reverse so their motors act as generators. Several pumps operated in alternating phase to obtain multiple stages of letdown from inlet pressure to outlet pressure. About 75 percent of work generates inlet pressure recoverable as electrical energy.

  18. Machine-Building for Fuel and Energy Complex: Perspective Forms of Interaction

    NASA Astrophysics Data System (ADS)

    Nikitenko, S. M.; Goosen, E. V.; Pakhomova, E. A.; Rozhkova, O. V.; Mesyats, M. A.

    2017-10-01

    The article is devoted to the study of the existing forms of cooperation between the authorities, business and science in the fuel and energy complex and the machine-building industry at the regional level. The possibilities of applying the concept of the “triple helix” and its multi-helix modifications for the implementation of the import substitution program for high-tech products have been considered.

  19. Biomolecular Dynamics: Order-Disorder Transitions and Energy Landscapes

    PubMed Central

    Whitford, Paul C.; Sanbonmatsu, Karissa Y.; Onuchic, José N.

    2013-01-01

    While the energy landscape theory of protein folding is now a widely accepted view for understanding how relatively weak molecular interactions lead to rapid and cooperative protein folding, such a framework must be extended to describe the large-scale functional motions observed in molecular machines. In this review, we discuss 1) the development of the energy landscape theory of biomolecular folding, 2) recent advances towards establishing a consistent understanding of folding and function, and 3) emerging themes in the functional motions of enzymes, biomolecular motors, and other biomolecular machines. Recent theoretical, computational, and experimental lines of investigation are providing a very dynamic picture of biomolecular motion. In contrast to earlier ideas, where molecular machines were thought to function similarly to macroscopic machines, with rigid components that move along a few degrees of freedom in a deterministic fashion, biomolecular complexes are only marginally stable. Since the stabilizing contribution of each atomic interaction is on the order of the thermal fluctuations in solution, the rigid-body description of molecular function must be revisited. An emerging theme is that functional motions encompass order-disorder transitions, and that structural flexibility provides significant contributions to the free energy. In this review, we describe the biological importance of order-disorder transitions and discuss the statistical-mechanical foundation of theoretical approaches that can characterize such transitions. PMID:22790780

  20. Robust one-step catalytic machine for high fidelity anticloning and W-state generation in a multiqubit system.

    PubMed

    Olaya-Castro, Alexandra; Johnson, Neil F; Quiroga, Luis

    2005-03-25

    We propose a physically realizable machine which can either generate multiparticle W-like states, or implement high-fidelity 1→M (M = 1, 2, ..., ∞) anticloning of an arbitrary qubit state, in a single step. This universal machine acts as a catalyst in that it is unchanged after either procedure, effectively resetting itself for its next operation. It possesses an inherent immunity to decoherence. Most importantly in terms of practical multiparty quantum communication, the machine's robustness in the presence of decoherence actually increases as the number of qubits M increases.

  1. Crabbing System for an Electron-Ion Collider

    NASA Astrophysics Data System (ADS)

    Castilla, Alejandro

    As high energy and nuclear physicists continue to push the boundaries of knowledge using colliders, there is an imperative need not only to increase the colliding beams' energies, but also to improve the accuracy of the experiments and to collect a large quantity of events with good statistical sensitivity. To achieve the latter, it is necessary to collect more data by increasing the rate at which these processes are produced and detected in the machine. This rate of events depends directly on the machine's luminosity. The luminosity itself is proportional to the frequency at which the beams are delivered and to the number of particles in each beam, and inversely proportional to the cross-sectional size of the colliding beams. There are several approaches other than increasing the luminosity that can be considered to increase the event statistics in a collider, such as running the experiments for a longer time. However, this also raises operating expenses, while increasing the frequency at which the beams are delivered implies substantial physical changes along the accelerator and the detectors. It is therefore preferred to increase the beam intensities and reduce the beams' cross-sectional areas to achieve these higher luminosities. When the goal is to push the limits, sometimes even beyond the machine's design parameters, one must develop a detailed high-luminosity scheme. Any high-luminosity scheme on a modern collider considers, in one of its versions, the use of crab cavities to correct the geometric reduction of the luminosity due to the beams' crossing angle. In this dissertation, we present the design and testing of a proof-of-principle compact superconducting crab cavity, at 750 MHz, for the future electron-ion collider currently under design at Jefferson Lab.
In addition to the design and validation of the cavity prototype, we present an analysis of the first-order beam dynamics and the integration of the crabbing systems into the interaction region. Following this, we propose the concept of twin crab cavities to allow machines with variable beam transverse coupling in the interaction region to achieve full crabbing in only the desired plane. Finally, we present recommendations for extending this work to other frequencies.
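
    The luminosity dependence described in this abstract is conventionally summarized, for head-on collisions of Gaussian bunches, by the textbook relation (a standard formula, not one specific to this dissertation):

```latex
L = \frac{f \, n_b \, N_1 N_2}{4 \pi \, \sigma_x \sigma_y}
% f: revolution (or repetition) frequency, n_b: number of bunches per beam,
% N_1, N_2: particles per bunch in each beam,
% sigma_x, sigma_y: transverse RMS beam sizes at the interaction point.
```

    A nonzero crossing angle multiplies this expression by a geometric reduction factor smaller than one, which is the loss that crab cavities are designed to recover.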

  2. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy-efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industrial sector, advancement in the efficiency of the electric drive system is of vital importance. An adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency hydraulic and mechanical auxiliary systems. With the vast penetration of ASDS, fault tolerant operation capability is increasingly recognized as an important feature of drive performance, especially for aerospace, automotive, and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low-cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults, such as converter faults, winding shorts, eccentricity, and sensor faults (including position sensors), are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on the transient and steady-state performance of SRM is developed via simulation and experimental study, providing the necessary knowledge for fault detection and post-fault management. Lumped-parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for fast and reliable fault diagnosis.
In order to improve SRM power and torque capacity under faults, maximum-torque-per-ampere excitation is conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same root mean square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for combined physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single- and multiple-phase faults is proposed and validated through experiments. Continuous position sensorless operation with seamless transitions between various numbers of faulted phases is achieved.

  3. Wind at Work.

    ERIC Educational Resources Information Center

    Adams, Stephen

    1998-01-01

    Describes a project in which students create wind machines to harness the wind's power and do mechanical work. Demonstrates kinetic and potential energy conversions and makes work and power calculations meaningful. Students conduct hands-on investigations with their machines. (DDR)
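
    The work and power calculations the activity makes meaningful can be as simple as the following sketch; the numbers (a small weight lifted by a model windmill) are illustrative, not from the article.

```python
# Worked example: a wind machine lifts a mass, so the work done and the power
# delivered follow from W = m*g*h and P = W / t. All values are assumed.
g = 9.81            # gravitational acceleration, m/s^2
mass = 0.05         # kg, a small weight lifted by the model windmill
height = 0.30       # m, lifting height
lift_time = 4.0     # s, time taken to lift the weight

work = mass * g * height        # joules of potential energy gained
power = work / lift_time        # watts delivered by the wind machine
```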

  4. Energy Landscapes: From Protein Folding to Molecular Assembly

    Science.gov Websites

    been used, for example, in DNA origami, in which artificial structures and machines are built in a mechanical processes and eventually to reproduce these in artificial machines. This conference will provide

  5. SUPAR: Smartphone as a ubiquitous physical activity recognizer for u-healthcare services.

    PubMed

    Fahim, Muhammad; Lee, Sungyoung; Yoon, Yongik

    2014-01-01

    The current generation of smartphones can be seen as among the most ubiquitous devices for physical activity recognition. In this paper we propose a physical activity recognizer that provides u-healthcare services in a cost-effective manner by utilizing cloud computing infrastructure. Our model comprises the smartphone's embedded triaxial accelerometer, which senses body movements, and a cloud server that stores and processes the sensory data for numerous kinds of services. We compute time- and frequency-domain features over the raw signals and evaluate different machine learning algorithms to identify an accurate activity recognition model for four kinds of physical activities (i.e., walking, running, cycling and hopping). In our experiments, the Support Vector Machine (SVM) algorithm outperformed its counterparts for the aforementioned physical activities. Furthermore, we also explain how the smartphone application and the cloud server communicate with each other.
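
    The pipeline sketched in the abstract (accelerometer windows, time- and frequency-domain features, SVM classification) might look roughly like the following; the synthetic signals, sampling rate, and feature set are placeholders, not the paper's implementation.

```python
# Hedged sketch of an accelerometer activity recognizer: time-domain features
# (mean, std per axis) plus a frequency-domain feature (dominant FFT magnitude
# per axis), classified with an SVM. Signals and labels are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def features(window):
    fft_mag = np.abs(np.fft.rfft(window, axis=0))[1:]   # drop the DC bin
    return np.concatenate([window.mean(0), window.std(0), fft_mag.max(0)])

def synth(freq, amp, n=64):
    # fake triaxial signal: a sinusoid at the activity's dominant frequency
    t = np.arange(n) / 32.0                             # 32 Hz sampling
    base = amp * np.sin(2 * np.pi * freq * t)
    return np.stack([base, 0.5 * base, 0.25 * base], 1) + rng.normal(0, 0.05, (n, 3))

# four activity classes with distinct (invented) motion signatures
specs = {"walking": (1.0, 1.0), "running": (3.0, 2.0),
         "cycling": (2.0, 0.5), "hopping": (4.0, 3.0)}
X, y = [], []
for label, (f, a) in specs.items():
    for _ in range(20):
        X.append(features(synth(f, a)))
        y.append(label)

clf = SVC(kernel="rbf", C=10.0).fit(X, y)
pred = clf.predict([features(synth(3.0, 2.0))])
```

    In the paper's architecture the feature extraction would run on the phone and the classification on the cloud server; the split is a design choice this sketch does not model.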

  6. Using Machine Learning and Data Analysis to Improve Customer Acquisition and Marketing in Residential Solar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigrin, Benjamin O

    High customer acquisition costs remain a persistent challenge in the U.S. residential solar industry. Effective customer acquisition in the residential solar market is increasingly achieved with the help of data analysis and machine learning, whether that means more targeted advertising, understanding customer motivations, or responding to competitors. New research by the National Renewable Energy Laboratory, Sandia National Laboratories, Vanderbilt University, University of Pennsylvania, and the California Center for Sustainable Energy and funded through the U.S. Department of Energy's Solar Energy Evolution and Diffusion (SEEDS) program demonstrates novel computational methods that can help drive down costs in the residential solar industry.

  7. An Energy-Efficient Multi-Tier Architecture for Fall Detection Using Smartphones.

    PubMed

    Guvensan, M Amac; Kansiz, A Oguz; Camgoz, N Cihan; Turkmen, H Irem; Yavuz, A Gokhan; Karsligil, M Elif

    2017-06-23

    Automatic detection of fall events is vital to providing fast medical assistance to the casualty, particularly when the injury causes loss of consciousness. Optimization of the energy consumption of mobile applications, especially those which run 24/7 in the background, is essential for longer use of smartphones. In order to improve energy efficiency without compromising fall detection performance, we propose a novel 3-tier architecture that combines simple thresholding methods with machine learning algorithms. The proposed method is implemented in a mobile application, called uSurvive, for Android smartphones. It runs as a background service, monitors the activities of a person in daily life, and automatically sends a notification to the appropriate authorities and/or user-defined contacts when it detects a fall. The performance of the proposed method was evaluated in terms of fall detection performance and energy consumption. Real-life performance tests conducted on two different models of smartphone demonstrate that our 3-tier architecture with feature reduction could save up to 62% of energy compared to machine-learning-only solutions. In addition to this energy saving, the hybrid method has an accuracy of 93%, which is superior to thresholding methods and better than machine-learning-only solutions.
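
    The tiering idea (a cheap threshold gates the signal so the costlier classifier only runs on candidate events) can be sketched as follows; the thresholds, synthetic signals, and classifier choice are all invented for illustration and are not uSurvive's actual code.

```python
# Illustrative 3-tier sketch: tier 1 is a cheap acceleration-magnitude
# threshold; only windows that pass it reach the trained classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

GRAVITY = 9.81
IMPACT_THRESHOLD = 2.5 * GRAVITY          # assumed tier-1 gate

def features(mag):
    return [mag.max(), mag.min(), mag.std()]

rng = np.random.default_rng(0)

def fall_window():
    mag = np.abs(rng.normal(GRAVITY, 1.0, 50))
    mag[rng.integers(50)] += rng.uniform(20.0, 40.0)   # impact spike
    return mag

def walk_window():
    return np.abs(rng.normal(GRAVITY, 1.0, 50))

X = ([features(fall_window()) for _ in range(30)]
     + [features(walk_window()) for _ in range(30)])
y = [1] * 30 + [0] * 30
clf = LogisticRegression().fit(X, y)      # tiers 2-3: trained classifier

def detect_fall(mag):
    if mag.max() <= IMPACT_THRESHOLD:     # cheap check first saves energy
        return False
    return bool(clf.predict([features(mag)])[0])
```

    Because the vast majority of daily-life windows fail the tier-1 check, the expensive feature extraction and classification run only rarely, which is the source of the energy savings the abstract reports.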

  8. Hybrid Power Management for Office Equipment

    NASA Astrophysics Data System (ADS)

    Gingade, Ganesh P.

    Office machines (such as printers, scanners, fax machines, and copiers) can consume significant amounts of power. Few studies have been devoted to power management of office equipment. Most office machines have sleep modes to save power. Power management of these machines is usually timeout-based: a machine sleeps after being idle long enough. Setting the timeout duration can be difficult: if it is too long, the machine wastes power during idleness; if it is too short, the machine sleeps too soon and too often, and the wakeup delay can significantly degrade productivity. Thus, power management is a tradeoff between saving energy and keeping response times short. Many power management policies have been published, and one policy may outperform another in some scenarios; there is no definite conclusion about which policy is always better. This thesis describes two methods for office equipment power management. The first method adaptively reduces power subject to a constraint on the wakeup delay. The second method is a hybrid with multiple candidate policies that selects the most appropriate power management policy. Using six months of request traces from 18 different offices, we demonstrate that the hybrid policy outperforms individual policies. We also discover that power management based on business hours does not produce consistent energy savings.
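
    The timeout trade-off and the hybrid idea can be made concrete with a toy model: score each candidate timeout on a trace of idle periods, then pick the cheapest. The power figures and trace below are assumed, not taken from the thesis.

```python
# Toy timeout-policy model: for each idle period, the machine either stays
# idle (short periods) or idles for `timeout` seconds, sleeps, and pays a
# wakeup cost (long periods). A hybrid scheme evaluates candidate timeouts
# on recent history and picks the best one. All numbers are illustrative.
IDLE_POWER, SLEEP_POWER = 30.0, 2.0      # watts (assumed)
WAKEUP_ENERGY = 100.0                    # joules per wakeup (assumed)

def cost(idle_periods, timeout):
    energy, delayed = 0.0, 0
    for t in idle_periods:
        if t <= timeout:                 # machine never sleeps
            energy += IDLE_POWER * t
        else:                            # idles, then sleeps, then wakes
            energy += IDLE_POWER * timeout + SLEEP_POWER * (t - timeout)
            energy += WAKEUP_ENERGY
            delayed += 1                 # this request sees a wakeup delay
    return energy, delayed

def pick_policy(idle_periods, candidates):
    # hybrid idea: score every candidate timeout on observed history
    return min(candidates, key=lambda to: cost(idle_periods, to)[0])

trace = [5, 120, 30, 600, 15, 900, 45]   # seconds of idleness between jobs
best = pick_policy(trace, candidates=[10, 60, 300])
```

    A real hybrid policy would also weigh the `delayed` count against the energy figure, since minimizing energy alone ignores the productivity cost of wakeup delays.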

  9. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability.
The accuracy of machine-learning methods in L-TERRA was highly dependent on the input variables and training dataset used, suggesting that machine learning may not be the best technique for reducing lidar turbulence intensity (TI) error. Future work will include the use of a lidar simulator to better understand how different factors affect lidar turbulence error and to determine how these errors can be reduced using information from a stand-alone lidar.

  10. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    DOE PAGES

    Newman, Jennifer F.; Clifton, Andrew

    2017-02-10

    Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability.
The accuracy of machine-learning methods in L-TERRA was highly dependent on the input variables and training dataset used, suggesting that machine learning may not be the best technique for reducing lidar turbulence intensity (TI) error. Future work will include the use of a lidar simulator to better understand how different factors affect lidar turbulence error and to determine how these errors can be reduced using information from a stand-alone lidar.

  11. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    NASA Astrophysics Data System (ADS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, no static partitioning of the cluster into a physical and a virtualized segment is required in this hybrid setup. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine.
This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept applicable to other cluster operators as well. This contribution reports on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, is described.

  12. The impact of the availability of school vending machines on eating behavior during lunch: the Youth Physical Activity and Nutrition Survey.

    PubMed

    Park, Sohyun; Sappenfield, William M; Huang, Youjie; Sherry, Bettylou; Bensyl, Diana M

    2010-10-01

    Childhood obesity is a major public health concern and is associated with substantial morbidities. Access to less-healthy foods might facilitate dietary behaviors that contribute to obesity. However, less-healthy foods are usually available in school vending machines. This cross-sectional study examined the prevalence of students buying snacks or beverages from school vending machines instead of buying school lunch and predictors of this behavior. Analyses were based on the 2003 Florida Youth Physical Activity and Nutrition Survey using a representative sample of 4,322 students in grades six through eight in 73 Florida public middle schools. Analyses included χ2 tests and logistic regression. The outcome measure was buying a snack or beverage from vending machines 2 or more days during the previous 5 days instead of buying lunch. The survey response rate was 72%. Eighteen percent of respondents reported purchasing a snack or beverage from a vending machine 2 or more days during the previous 5 school days instead of buying school lunch. Although healthier options were available, the most commonly purchased vending machine items were chips, pretzels/crackers, candy bars, soda, and sport drinks. More students chose snacks or beverages instead of lunch in schools where beverage vending machines were also available than did students in schools where beverage vending machines were unavailable: 19% and 7%, respectively (P≤0.05). The strongest risk factor for buying snacks or beverages from vending machines instead of buying school lunch was availability of beverage vending machines in schools (adjusted odds ratio=3.5; 95% confidence interval, 2.2 to 5.7). Other statistically significant risk factors were smoking, non-Hispanic black race/ethnicity, Hispanic ethnicity, and older age. Although healthier choices were available, the most common choices were the less-healthy foods. 
Schools should consider developing policies to reduce the availability of less-healthy choices in vending machines and to reduce access to beverage vending machines. Copyright © 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
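
    The adjusted odds ratio quoted above is the exponential of a logistic-regression coefficient. A tiny worked example (with the coefficient back-derived from the reported OR of 3.5, not fitted to the study's data) shows how an odds ratio translates into probabilities:

```python
# Worked example: converting a logistic-regression coefficient to an odds
# ratio, and an odds ratio to an implied probability. The baseline probability
# is an assumed illustration, not a fitted quantity from the study.
import math

beta_beverage_machines = math.log(3.5)   # coefficient implying OR = exp(beta) = 3.5
odds_ratio = math.exp(beta_beverage_machines)

p0 = 0.07                                # assumed baseline probability (no machines)
odds1 = (p0 / (1 - p0)) * odds_ratio     # odds scaled by the odds ratio
p1 = odds1 / (1 + odds1)                 # implied probability with machines
```

    With a 7% baseline, an odds ratio of 3.5 implies a probability of roughly 21%, which is consistent in magnitude with the 19% versus 7% contrast reported in the abstract.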

  13. The new ATLAS/LUCID detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruschi, Marco

    The new ATLAS luminosity monitor implements many innovative features. Its photomultiplier tubes are used as detector elements, exploiting the Cherenkov light produced by above-threshold charged particles crossing the quartz windows. The analog shaping of the readout chain has been improved in order to cope with the 25 ns bunch spacing of the LHC machine. The main readout card is a fairly general processing unit based on a 12-bit, 500 MS/s Flash ADC and on FPGAs, delivering the processed data to 1.3 Gb/s optical links. The article describes all these aspects and outlines future perspectives of the card for next-generation high energy physics experiments. (authors)

  14. [Biochemical evaluation of metabolic disorders in the tissues of the locomotor system in patients with occupational diseases caused by physical stress].

    PubMed

    Shatskaia, N N; Tarasova, A A; Fedorova, V I; Shardakova, E F; Selezneva, A I; Fedosova, N F

    1991-01-01

    A group of patients with occupational diseases and female sewing-machine operators were medically examined with a broad set of biochemical techniques aimed at the detection of metabolic disorders in the tissues of the locomotor system. Noninflammatory dystrophic changes were found. The muscular component dominated over the osseous one in the genesis of the degenerative-dystrophic processes manifested in the clinical course. Laboratory findings related to lowered energy supply and oxygenation of the skeletal muscles were revealed in patients with neuromuscular and osteomuscular syndromes. The metabolic disorders were diagnosed at the early stages of myalgia.

  15. Method of Individual Forecasting of Technical State of Logging Machines

    NASA Astrophysics Data System (ADS)

    Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.

    2018-03-01

Developing a model that evaluates the possibility of failure requires knowledge of the regularities with which the technical-condition parameters of machines change during use. Studying these regularities called for stochastic models that take into account the physical nature of the destruction processes of the machines' structural elements, their production technology and degradation, the stochastic properties of the technical-state parameters, and the conditions and modes of operation.

  16. Predicting Solar Activity Using Machine-Learning Methods

    NASA Astrophysics Data System (ADS)

    Bobra, M.

    2017-12-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to [1] empirically determine the signatures of this mechanism in solar image data and [2] use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine-learning (e.g. Ahmed et al., 2013; Nishizuka et al. 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.
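
    The classification setup described above can be illustrated with a minimal, self-contained sketch. A plain perceptron stands in for the Support Vector Machine, and the two "active-region features" and their flare labels are entirely synthetic placeholders, not SDO data.

```python
# Minimal linear classifier (perceptron) as a stand-in for the SVM:
# each sample is a vector of active-region features (synthetic values),
# labelled 1 if the region flared and 0 otherwise.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                 # 0 when correct, +/-1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features, e.g. (total unsigned flux, current helicity):
flaring = [(0.9, 0.8), (0.8, 0.9), (0.95, 0.7)]
quiet   = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25)]
X = flaring + quiet
y = [1, 1, 1, 0, 0, 0]
w, b = train_perceptron(X, y)
```

    A real study would replace the perceptron with a max-margin SVM and derive the features from vector magnetograms rather than hand-picked numbers.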

  17. 14 CFR 382.3 - What do the terms in this rule mean?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and places between which those flights are performed. CPAP machine means a continuous positive airway pressure machine. Department or DOT means the United States Department of Transportation. Direct threat... learning disabilities. The term physical or mental impairment includes, but is not limited to, such...

  18. 14 CFR 382.3 - What do the terms in this rule mean?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and places between which those flights are performed. CPAP machine means a continuous positive airway pressure machine. Department or DOT means the United States Department of Transportation. Direct threat... learning disabilities. The term physical or mental impairment includes, but is not limited to, such...

  19. 14 CFR 382.3 - What do the terms in this rule mean?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and places between which those flights are performed. CPAP machine means a continuous positive airway pressure machine. Department or DOT means the United States Department of Transportation. Direct threat... learning disabilities. The term physical or mental impairment includes, but is not limited to, such...

  20. Discussion on ``Foundations of the Second Law''

    NASA Astrophysics Data System (ADS)

    Silbey, Robert; Ao, Ping; Beretta, Gian Paolo; Cengel, Yunus; Foley, Andrew; Freedman, Steven; Graeff, Roderich; Keck, James C.; Lloyd, Seth; Maroney, Owen; Nieuwenhuizen, Theodorus M.; Weissman, Michael

    2008-08-01

This article reports an open discussion that took place during the Keenan Symposium "Meeting the Entropy Challenge" (held in Cambridge, Massachusetts, on October 4, 2007) following the short presentations—each reported as a separate article in the present volume—by Seth Lloyd, Owen Maroney, Silviu Guiasu, Ping Ao, Jochen Gemmer, Bernard Guy, Gian Paolo Beretta, Speranta Gheorghiu-Svirschevski, and Dorion Sagan. All panelists and the audience were asked to address the following questions: • Why is the second law true? Is it an inviolable law of nature? If not, is it possible to develop a perpetual motion machine of the second kind? • Are second law limitations objective or subjective, real or apparent, due to the nature of physical states or the representation and manipulation of information? Is entropy a physical property in the same sense as energy is universally understood to be an intrinsic property of matter? • Does the second law conflict with quantum mechanics? Are the differences between mechanical and thermodynamic descriptions of physical phenomena reconcilable? Do the reversible laws of motion of Hamiltonian mechanics and quantum mechanics conflict with the empirical observation of irreversible phenomena?

  1. Machine Learning Force Field Parameters from Ab Initio Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ying; Li, Hui; Pickard, Frank C.

Machine learning (ML) techniques with a genetic algorithm (GA) have been applied to determine polarizable force field parameters using only ab initio data from quantum mechanics (QM) calculations of molecular clusters at the MP2/6-31G(d,p), DFMP2(fc)/jul-cc-pVDZ, and DFMP2(fc)/jul-cc-pVTZ levels to predict experimental condensed-phase properties (i.e., density and heat of vaporization). The performance of this ML/GA approach is demonstrated on 4943 dimer electrostatic potentials and 1250 cluster interaction energies for methanol. Excellent agreement between the QM training data set and the optimized force field model was achieved. The results were further improved by introducing an offset factor during the machine learning process to compensate for the discrepancy between the QM-calculated energy and the energy reproduced by the optimized force field, while maintaining the local "shape" of the QM energy surface. Throughout the machine learning process, experimental observables were not involved in the objective function but were used only for model validation. The best model, optimized from the QM data at the DFMP2(fc)/jul-cc-pVTZ level, appears to perform even better than the original AMOEBA force field (amoeba09.prm), which was optimized empirically to match liquid properties. The present effort shows the possibility of using machine learning techniques to develop descriptive polarizable force fields using only QM data. The ML/GA strategy for optimizing force field parameters described here could easily be extended to other molecular systems.
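
    As a hedged illustration of the ML/GA idea, the toy sketch below fits the two parameters of a Lennard-Jones pair potential to synthetic "ab initio" energies with a simple genetic algorithm. The potential form, parameter ranges, and GA settings are all assumptions for demonstration; they are not the polarizable force field or the GA configuration of the study.

```python
import random

# Toy stand-in: fit Lennard-Jones parameters (epsilon, sigma) to
# synthetic "QM" pair energies generated from known target values (1, 1).

def lj(r, eps, sig):
    x = (sig / r) ** 6
    return 4.0 * eps * (x * x - x)

rs = [0.9, 1.0, 1.1, 1.2, 1.4, 1.8]
qm = [lj(r, 1.0, 1.0) for r in rs]          # "ab initio" reference data

def fitness(ind):
    eps, sig = ind
    return -sum((lj(r, eps, sig) - e) ** 2 for r, e in zip(rs, qm))

random.seed(0)
pop = [(random.uniform(0.5, 2.0), random.uniform(0.5, 2.0)) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # elitism: keep the best 10
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)     # crossover + Gaussian mutation
        children.append((random.choice((a[0], b[0])) + random.gauss(0, 0.02),
                         random.choice((a[1], b[1])) + random.gauss(0, 0.02)))
    pop = parents + children
best = max(pop, key=fitness)
```

    In the study, the objective function compares force-field energies against thousands of QM cluster energies and electrostatic potentials instead of six synthetic points.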

  2. Measured impacts of high efficiency domestic clothes washers in a community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, J.; Rizy, T.

    1998-07-01

The US market for domestic clothes washers is currently dominated by conventional vertical-axis washers that typically require approximately 40 gallons of water for each wash load. Although the current market for high-efficiency clothes washers that use much less water and energy is quite small, it is growing slowly as manufacturers make machines based on tumble-action, horizontal-axis designs available and as information about the performance and benefits of such machines is developed and made available to consumers. To help build awareness of these benefits and to accelerate markets for high-efficiency washers, the Department of Energy (DOE), under its ENERGY STAR® Program and in cooperation with a major manufacturer of high-efficiency washers, conducted a field evaluation of high-efficiency washers using Bern, Kansas as a test bed. Baseline washing-machine performance data as well as consumer washing behavior were obtained from data collected on the existing machines of more than 100 participants in this instrumented study. Following a 2-month initial study period, all conventional machines were replaced by high-efficiency, tumble-action washers, and the study continued for 3 months. Based on measured data from over 20,000 loads of laundry, the impacts of the washer replacement on (1) individual customers' energy and water consumption, (2) customers' laundry habits and perceptions, and (3) the community's water supply and wastewater systems were determined. The study, its findings, and how information from the experiment was used to improve national awareness of high-efficiency clothes washer benefits are described in this paper.

  3. Energy Saving Melting and Revert Reduction Technology: Aging of Graphitic Cast Irons and Machinability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Von L.

    2012-09-19

The objective of this task was to determine whether ductile iron and compacted graphite iron exhibit age strengthening to a statistically significant extent. Further, this effort identified the mechanism by which gray iron age strengthens and the mechanism by which age strengthening improves the machinability of gray cast iron. These results were then used to determine whether age strengthening improves the machinability of ductile iron and compacted graphite iron alloys, in order to develop a predictive model of alloy factor effects on age strengthening. The results of this work will lead to reduced section sizes and corresponding weight and energy savings. Improved machinability will reduce scrap and enhance casting marketability. Technical conclusions: Age strengthening was demonstrated to occur in gray iron, ductile iron, and compacted graphite iron. Machinability was demonstrated to improve with age strengthening when free ferrite was present in the microstructure, but not in a fully pearlitic microstructure. Age strengthening occurs only when there is residual nitrogen in solid solution in the ferrite, whether free ferrite or the ferrite lamellae within pearlite. Age strengthening can be accelerated by Mn at about 0.5% in excess of the Mn/S balance. Estimated energy savings over ten years is 13.05 trillion BTU, based primarily on yield improvement and size reduction of castings for equivalent service. It is also estimated that the heavy-truck end use of lighter castings for equivalent service will result in a diesel fuel energy savings of 131 trillion BTU over ten years.

  4. Design of Heat Exchanger for Ericsson-Brayton Piston Engine

    PubMed Central

    Durcansky, Peter; Papucik, Stefan; Jandacka, Jozef

    2014-01-01

Combined power generation, or cogeneration, is a highly effective technology that produces heat and electricity in one device more efficiently than separate production. Overall effectiveness grows with the use of combined energy-extraction technologies, taking heat from the flue gases and coolants of machines. Another problem is the dependence of such devices on fossil fuels: combustion turbines mostly burn natural gas or kerosene, while heating power plants mostly burn coal. It is therefore necessary to seek alternatives today. At first glance, the obvious efforts are to restrict the use of oil and to change the type of energy used in transport. Another significant change is the increase in renewable energy, that is, energy produced from renewable sources. Machines that extract energy in unconventional ways include chiefly the steam engine, the Stirling engine, and the Ericsson engine. In these machines, energy is obtained by external combustion, and the engine performs work in a medium that receives and transmits energy from combustion or flue gases indirectly. The paper deals with the principle of hot-air engines and their use in combined heat and electricity production from biomass, with heat exchangers as the primary energy-transforming element.

  5. Preliminary Investigation on Life Cycle Inventory of Powder Bed Fusion of Stainless Steel

    NASA Astrophysics Data System (ADS)

    Nyamekye, Patricia; Piili, Heidi; Leino, Maija; Salminen, Antti

Manufacturing of workpieces from stainless steel with laser additive manufacturing, also known as laser sintering or 3D printing, may increase energy and material efficiency. The use of powder bed fusion offers advantages in making lightweight, near-net-shape parts for dynamic applications. Due to these advantages, among others, PBF may also reduce emissions and operational cost in various applications. However, only a few life cycle assessment studies examine this subject despite its prospects as a business opportunity. The application of Life Cycle Inventory (LCI) to Powder Bed Fusion (PBF) provides a distinct evaluation of material and energy consumption, and LCI offers a possibility to improve knowledge of process efficiency. This study investigates process sustainability in terms of raw material, energy, and time consumption for PBF and CNC machining. The results of the experimental study indicated lower energy efficiency for the PBF production process. The study also revealed that specific energy consumption in PBF decreases when several components are built simultaneously rather than individually, because the energy consumption per part is lower. In contrast, for CNC machining the energy needed to machine one part is lower when parts are made separately.
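
    The batching effect on specific energy consumption can be sketched with a toy amortisation model; the fixed-overhead and per-part energy figures below are invented for illustration and are not measurements from the study.

```python
# Hedged sketch of the batching effect: a fixed per-build overhead
# (chamber heating, inert-gas purge, recoating idle time) is amortised
# over the number of parts built simultaneously. Numbers are illustrative.

def specific_energy(n_parts, overhead_kwh=5.0, per_part_kwh=2.0):
    """Energy per part for one PBF build of n_parts identical parts."""
    total = overhead_kwh + per_part_kwh * n_parts
    return total / n_parts

single = specific_energy(1)   # all overhead charged to one part
batch4 = specific_energy(4)   # overhead shared across four parts
```

    Under this assumed model the specific energy falls monotonically toward the per-part machining energy as the batch grows, which is the qualitative trend the study reports for PBF.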

  6. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there can be no physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task, a bound similar to the "encoding" bound governing how much the algorithmic information complexity of a Turing machine calculation can differ for two reference universal Turing machines. Finally, it is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity), in that there is one and only one version of it that can be applicable throughout our universe.

  7. Energy landscapes for a machine learning application to series data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Andrew J.; Stevenson, Jacob D.; Das, Ritankar

    2016-03-28

Methods developed to explore and characterise potential energy landscapes are applied to the corresponding landscapes obtained from optimisation of a cost function in machine learning. We consider neural network predictions for the outcome of local geometry optimisation in a triatomic cluster, where four distinct local minima exist. The accuracy of the predictions is compared for fits using data from single and multiple points in the series of atomic configurations resulting from local geometry optimisation and for alternative neural networks. The machine learning solution landscapes are visualised using disconnectivity graphs, and signatures in the effective heat capacity are analysed in terms of distributions of local minima and their properties.

  8. Superconductivity for Electromagnetic Guns

    DTIC Science & Technology

    1984-03-01

    greater than that for a pulsed homopolar machine when the time constant is less than 0.1 sec (ref 32) (See fig. 18). Since the energy density in a...transferred from the capacitor to the induct- or. If the capacitor is replaced by a homopolar machine, then, as is well-known, the kinetic energy of the...rotor plays the role of an "electrical" capacitance and the two arrangements (capacitance and homopolar ) are functionally equivalent. Group 3. In

  9. Performance analysis of single stage libr-water absorption machine operated by waste thermal energy of internal combustion engine: Case study

    NASA Astrophysics Data System (ADS)

    Sharif, Hafiz Zafar; Leman, A. M.; Muthuraman, S.; Salleh, Mohd Najib Mohd; Zakaria, Supaat

    2017-09-01

Combined heating, cooling, and power is also known as trigeneration. A trigeneration system can provide power, hot water, space heating, and air conditioning from a single source of energy. The objective of this study is to propose a method to evaluate the characteristics and performance of a single-stage lithium bromide-water (LiBr-H2O) absorption machine operated with the waste thermal energy of an internal combustion engine, which is an integral part of a trigeneration system. Correlations for computer sensitivity analysis were developed in data fit software for the (P-T-X) and (H-T-X) relations and for the saturated-liquid (water), saturated-vapor, saturation-pressure, and crystallization-temperature curves of the LiBr-H2O solution. A number of equations were developed in the data fit software and exported into an Excel worksheet to evaluate parameters concerned with the performance of the vapor absorption machine, such as the coefficient of performance, solution concentration, mass flow rate, and the heat-exchanger sizes of the unit in relation to the generator, condenser, absorber, and evaporator temperatures. The sizing of a vapor absorption machine within its crystallization limits, for cooling and heating by waste energy recovered from the exhaust gas and jacket water of an internal combustion engine, is also presented, to save time and cost for facilities managers interested in utilizing the waste thermal energy of their buildings or premises for heating and air conditioning applications.
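
    Two of the performance quantities mentioned above can be sketched in a few self-contained lines; the heat duties used here are assumed example values, not results of the study's correlations.

```python
# Illustrative energy-balance sketch for a single-stage LiBr-H2O
# absorption chiller. All duties (kW) are hypothetical placeholders.

def cop_cooling(q_evaporator_kw, q_generator_kw, w_pump_kw=0.0):
    """Cooling COP: refrigeration effect over driving heat (+ pump work)."""
    return q_evaporator_kw / (q_generator_kw + w_pump_kw)

def energy_balance_ok(q_gen, q_evap, q_cond, q_abs, tol=1e-6):
    """First-law check: heat in (generator + evaporator) = heat out
    (condenser + absorber), neglecting pump work and losses."""
    return abs((q_gen + q_evap) - (q_cond + q_abs)) < tol

cop = cop_cooling(q_evaporator_kw=70.0, q_generator_kw=100.0)
```

    Single-effect LiBr-H2O machines typically reach cooling COPs around 0.7, which is why the toy duties above were chosen in that ratio.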

  10. Research on intelligent machine self-perception method based on LSTM

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Cheng, Tao

    2018-05-01

In this paper, we exploit the advantages of LSTM in feature extraction and in processing high-dimensional, complex nonlinear data, and apply it to the autonomous perception of intelligent machines. Compared with a traditional multi-layer neural network, this model has memory and can handle time-series information of any length. Since the multi-physical-domain signals of processing machines have a certain timing relationship, and there is a contextual relationship between successive states, using this deep learning method to realize the self-perception of intelligent processing machines offers strong versatility and adaptability. The experimental results show that the proposed method can markedly improve sensing accuracy under various working conditions of the intelligent machine, and that the algorithm can well support an intelligent processing machine in realizing self-perception.
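
    For concreteness, a single LSTM cell step can be written out in pure Python. The scalar weights and the short "sensor sequence" below are arbitrary stand-ins, not the paper's trained self-perception model.

```python
import math

# One LSTM cell forward step with scalar input/state for clarity.
# A real model would use weight matrices learned from multi-sensor
# time series; here every weight is set to an arbitrary 0.5.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])    # forget gate
    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])    # input gate
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])  # candidate
    c = f * c_prev + i * g                                   # cell state (memory)
    h = o * math.tanh(c)                                     # hidden state
    return h, c

W = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [0.1, 0.4, 0.9, 0.3]:        # a short synthetic sensor sequence
    h, c = lstm_step(x, h, c, W)
```

    The cell state `c` is what gives the model the memory across time steps that the abstract contrasts with a plain multi-layer network.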

  11. State machine analysis of sensor data from dynamic processes

    DOEpatents

    Cook, William R.; Brabson, John M.; Deland, Sharon M.

    2003-12-23

A state machine model analyzes sensor data from dynamic processes at a facility to identify the actual processes that were performed at the facility during a period of interest, for the purpose of remote facility inspection. An inspector can further input the expected operations into the state machine model and compare the expected, or declared, processes to the actual processes to identify undeclared processes at the facility. The state machine analysis enables the generation of knowledge about the state of the facility at all levels, from the location of physical objects to complex operational concepts. Therefore, the state machine method and apparatus may benefit any agency or business with sensor-equipped facilities that store or manipulate expensive, dangerous, or controlled materials or information.
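
    A minimal sketch of the declared-versus-actual comparison follows; the states, sensor events, and transition table are hypothetical placeholders, not those of the patented apparatus.

```python
# Transition table mapping (state, sensor event) -> next inferred state.
# All names are invented for illustration.
TRANSITIONS = {
    ("idle",       "door_open"):   "loading",
    ("loading",    "door_closed"): "staged",
    ("staged",     "power_draw"):  "processing",
    ("processing", "power_off"):   "idle",
}

def infer_processes(events, start="idle"):
    """Replay the sensor event stream through the state machine."""
    state, visited = start, [start]
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # ignore unknown events
        visited.append(state)
    return visited

declared = ["idle", "loading", "staged", "idle"]       # inspector's declaration
actual = infer_processes(["door_open", "door_closed", "power_draw", "power_off"])
undeclared = [s for s in actual if s not in declared]  # flags "processing"
```

    The comparison step is where an inspector would spot an operation the facility performed but did not declare.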

  12. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1990

    1990-01-01

    Reviewed are seven computer software packages for IBM and/or Apple Computers. Included are "Windows on Science: Volume 1--Physical Science"; "Science Probe--Physical Science"; "Wildlife Adventures--Grizzly Bears"; "Science Skills--Development Programs"; "The Clean Machine"; "Rock Doctor";…

  13. A comparative study on performance of CBN inserts when turning steel under dry and wet conditions

    NASA Astrophysics Data System (ADS)

    Abdullah Bagaber, Salem; Razlan Yusoff, Ahmad

    2017-10-01

Cutting fluids are among the least sustainable components of machining processes, negatively impacting the environment and requiring additional energy. Due to its high strength and corrosion resistance, the machinability of stainless steel has attracted considerable interest. This study aims to evaluate the performance of cubic boron nitride (CBN) inserts in terms of machining parameters including power consumption and surface roughness. Because of the high cost per cutting edge of CBN, this performance is of significant importance for hard finish turning. The present work also provides a comparative study of power consumption and surface roughness under dry and flood conditions. Turning of stainless steel 316 was performed, and a response surface methodology based on a Box-Behnken design (BBD) was utilized for statistical analysis. The optimum process parameters were determined using an overall performance index, and dry and wet stainless-steel cutting were compared in terms of minimum energy and surface roughness. The results show that stainless steel can be machined under dry conditions with an 18.57% improvement in power consumption and acceptable quality compared to wet cutting. CBN tools for dry cutting of stainless steel can thus reduce environmental impact, by eliminating cutting fluid and requiring less energy, which benefits machining productivity and profit.

  14. Variable cross-section windings for efficiency improvement of electric machines

    NASA Astrophysics Data System (ADS)

    Grachev, P. Yu; Bazarov, A. A.; Tabachinskiy, A. S.

    2018-02-01

Implementation of energy-saving technologies in industry is impossible without improving the efficiency of electric machines. The article considers ways of improving the efficiency and reducing the mass and dimensions of electric machines with electronic control. Features of compact winding designs for stators and armatures are described, and the influence of compact windings on thermal and electrical processes is given. The finite element method was used in computer simulation.

  15. What is the machine learning?

    NASA Astrophysics Data System (ADS)

    Chang, Spencer; Cohen, Timothy; Ostdiek, Bryan

    2018-03-01

    Applications of machine learning tools to problems of physical interest are often criticized for producing sensitivity at the expense of transparency. To address this concern, we explore a data planing procedure for identifying combinations of variables—aided by physical intuition—that can discriminate signal from background. Weights are introduced to smooth away the features in a given variable(s). New networks are then trained on this modified data. Observed decreases in sensitivity diagnose the variable's discriminating power. Planing also allows the investigation of the linear versus nonlinear nature of the boundaries between signal and background. We demonstrate the efficacy of this approach using a toy example, followed by an application to an idealized heavy resonance scenario at the Large Hadron Collider. By unpacking the information being utilized by these algorithms, this method puts in context what it means for a machine to learn.
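
    The planing weights can be sketched in a few lines: each event is weighted by the inverse population of its bin in the chosen variable, so the reweighted distribution of that variable is flat. The bin width and event values below are synthetic, not collider data.

```python
from collections import Counter

# Data planing sketch: weight each event by the inverse of its bin's
# population in the chosen variable, flattening that variable's
# distribution before retraining the network.

def planing_weights(values, bin_width=1.0):
    bins = [int(v // bin_width) for v in values]
    counts = Counter(bins)
    return [1.0 / counts[b] for b in bins]

values = [0.2, 0.7, 0.9, 1.3, 2.5]      # synthetic stand-in for a variable
weights = planing_weights(values)

# After reweighting, every occupied bin carries equal total weight:
binned = Counter()
for v, w in zip(values, weights):
    binned[int(v // 1.0)] += w
```

    Retraining on the reweighted data and observing how much sensitivity drops is what diagnoses how much discriminating power the planed variable carried.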

  16. The practical Pomeron for high energy proton collimation

    NASA Astrophysics Data System (ADS)

    Appleby, R. B.; Barlow, R. J.; Molson, J. G.; Serluca, M.; Toader, A.

    2016-10-01

    We present a model which describes proton scattering data from ISR to Tevatron energies, and which can be applied to collimation in high energy accelerators, such as the LHC and FCC. Collimators remove beam halo particles, so that they do not impinge on vulnerable regions of the machine, such as the superconducting magnets and the experimental areas. In simulating the effect of the collimator jaws it is crucial to model the scattering of protons at small momentum transfer t, as these protons can subsequently survive several turns of the ring before being lost. At high energies these soft processes are well described by Pomeron exchange models. We study the behaviour of elastic and single-diffractive dissociation cross sections over a wide range of energy, and show that the model can be used as a global description of the wide variety of high energy elastic and diffractive data presently available. In particular it models low mass diffraction dissociation, where a rich resonance structure is present, and thus predicts the differential and integrated cross sections in the kinematical range appropriate to the LHC. We incorporate the physics of this model into the beam tracking code MERLIN and use it to simulate the resulting loss maps of the beam halo lost in the collimators in the LHC.

  17. Development of an Eco-Friendly Electrical Discharge Machine (E-EDM) Using TRIZ Approach

    NASA Astrophysics Data System (ADS)

    Sreebalaji, V. S.; Saravanan, R.

Electrical Discharge Machining (EDM) is one of the non-traditional machining processes. The EDM process is based on thermoelectric energy between the workpiece and an electrode: a pulse discharge occurs in a small gap between the workpiece and the electrode and removes unwanted material from the parent metal through melting and vaporization. Both the electrode and the workpiece must be electrically conductive in order to generate the spark, and the dielectric fluid acts as a spark conductor, concentrating the energy into a very narrow region. Various products can be produced and finished using EDM, such as moulds, dies, and aerodynamic, automotive, and surgical components. This research work shows how an eco-friendly EDM (E-EDM) can be modelled by replacing the dielectric fluid with ozonised oxygen, eliminating the harmful effects generated while machining with a dielectric and making the machining environment pollution-free, through a new E-EDM design using the TRIZ (a Russian acronym for the Theory of Inventive Problem Solving) approach, since eco-friendly design is the need of the hour.

  18. Aggregation of Electric Current Consumption Features to Extract Maintenance KPIs

    NASA Astrophysics Data System (ADS)

    Simon, Victor; Johansson, Carl-Anders; Galar, Diego

    2017-09-01

All electrically powered machines offer the possibility of extracting information and calculating Key Performance Indicators (KPIs) from the electric current signal. Depending on the time window, sampling frequency, and type of analysis, different indicators from the micro to the macro level can be calculated for aspects such as maintenance, production, and energy consumption. On the micro level, the indicators are generally used for condition monitoring and diagnostics and are normally based on a short time window and a high sampling frequency. The macro indicators are normally based on a longer time window with a slower sampling frequency and serve as indicators of overall performance, cost, or consumption. The indicators can be calculated directly from the current signal but can also combine information from the current signal with operational data such as rpm and position. One or several of these indicators can be used for prediction and prognostics of a machine's future behavior. This paper uses this technique to calculate indicators for maintenance and energy optimization in electrically powered machines and fleets of machines, especially machine tools.
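
    The micro/macro split described above can be sketched with a synthetic current signal: per-window RMS current serves as a micro-level indicator, and its mean over a longer period as a macro-level one. The sample values and window size are invented for illustration.

```python
import math

# Micro KPI: RMS current per short window.
# Macro KPI: mean of the window RMS values over a whole shift.
# The signal values (amperes) are synthetic placeholders.

def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

def window_kpis(signal, window_size):
    return [rms(signal[i:i + window_size])
            for i in range(0, len(signal) - window_size + 1, window_size)]

signal = [3.0, 4.0, 3.0, 4.0, 6.0, 8.0, 6.0, 8.0]
micro = window_kpis(signal, 4)     # one RMS value per window
macro = sum(micro) / len(micro)    # shift-level aggregate
```

    In practice the windows would be aligned with operations (using rpm, position, and similar operational data) rather than taken at fixed offsets.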

  19. 78 FR 73883 - Notice Pursuant to the National Cooperative Research and Production Act of 1993; Members of SGIP...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-09

    ... Utilities System, Lafayette, LA; Machine-to- Machine Intelligence Corporation (M2Mi), Moffett Field, CA; Inman Technology, Cambridge, MA; Kkrish Energy LLC, Colorado Springs, CO; Smarthome Laboratories, Ltd...

  20. Advanced Machine Learning Emulators of Radiative Transfer Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has very recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, while providing an estimate of uncertainty and estimates of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the careful design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the capabilities of our emulators in toy examples, in leaf- and canopy-level PROSPECT and PROSAIL RTMs, and in the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
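
    As a toy illustration of GP emulation, the sketch below interpolates a cheap stand-in function from a handful of "precomputed runs". A real emulator would target an RTM such as PROSAIL and tune the kernel hyperparameters; the length scale and jitter here are arbitrary assumptions.

```python
import math

# Minimal 1-D Gaussian-process mean predictor with an RBF kernel.
# The "expensive model" is a stand-in; the GP reproduces it from a
# few precomputed evaluations, mimicking RTM emulation.

def rbf(a, b, length=0.5):
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, y):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_new, noise=1e-8):
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)                       # alpha = K^-1 y
    return sum(ai * rbf(xi, x_new) for ai, xi in zip(alpha, xs))

expensive_model = math.sin                     # stand-in for the RTM
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [expensive_model(x) for x in xs]          # "precomputed RTM runs"
approx = gp_predict(xs, ys, 1.25)              # fast emulator call
```

    AGAPE's extra ingredient, not shown here, is the acquisition function that decides where to run the expensive model next so the training set stays small.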

  1. Exploring the Function Space of Deep-Learning Machines

    NASA Astrophysics Data System (ADS)

    Li, Bo; Saad, David

    2018-06-01

    The function space of deep-learning machines is investigated by studying the growth in the entropy of functions at a given error with respect to a reference function realized by a deep-learning machine. Using physics-inspired methods, we study both sparsely and densely connected architectures. We discover a layerwise convergence of candidate functions, marked by a corresponding reduction in entropy as the reference function is approached; gain insight into the importance of having a large number of layers; and observe phase transitions as the error increases.

  2. Mi Quinto Libro de Maquinas Simples: El Plano Inclinado. Escuela Intermedia Grados 7, 8 y 9 (My Fifth Book of Simple Machines: The Inclined Plane. Intermediate School Grades 7, 8, and 9).

    ERIC Educational Resources Information Center

    Alvarado, Patricio R.; Montalvo, Luis

    This is the fifth book in a five-book physical science series on simple machines. The books are designed for Spanish-speaking junior high school students. This volume explains the principles and some of the uses of inclined planes, as they appear in simple machines, by suggesting experiments and posing questions concerning drawings in the book…

  3. Teach students Semiconductor Lasers according to their natural ability

    NASA Astrophysics Data System (ADS)

    Liu, Ken; Guo, Chu Cai; Zhang, Jian Fa

    2017-08-01

    Physics explains the world through strict rules, and with these rules modern machines and electronic devices that operate in exact, predetermined ways have been developed. Human beings, however, exceed these machines in possessing self-awareness. Should we treat self-aware students as machines made to learn strict rules, or teach each student according to their aptitude? We choose the latter, because the former would cause students to lose their individual thoughts and natural abilities. In this paper we describe the individualized teaching of "semiconductor lasers".

  4. Energy-efficient algorithm for classification of states of wireless sensor network using machine learning methods

    NASA Astrophysics Data System (ADS)

    Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.

    2018-05-01

    This paper focuses on the development of an energy-efficient algorithm for classifying the states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by (1) eliminating the monitoring of parameters that do not affect the state of the sensor network and (2) reducing the number of communication sessions over the network (data are transmitted only if their values can affect the state of the sensor network). Our studies show that, at a classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.
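
    The communication-reduction idea can be sketched as a node that transmits only when its classified state changes; the threshold classifier and the readings below are hypothetical stand-ins for the trained model, not details from the paper.

```python
def classify(reading, threshold=30.0):
    # stand-in for the trained classifier: "alarm" vs "normal"
    return "alarm" if reading > threshold else "normal"

def run_node(readings):
    # transmit only when the classified state changes,
    # instead of reporting every sample
    sent = []
    last_state = None
    for t, r in enumerate(readings):
        state = classify(r)
        if state != last_state:
            sent.append((t, state))   # one communication session
            last_state = state
    return sent

msgs = run_node([20, 21, 22, 35, 36, 37, 25, 24])
# 8 samples, but only the initial report plus two state changes are sent
```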

  5. Machine learning prediction for classification of outcomes in local minimisation

    NASA Astrophysics Data System (ADS)

    Das, Ritankar; Wales, David J.

    2017-01-01

    Machine learning schemes are employed to predict which local minimum will result from local energy minimisation of random starting configurations for a triatomic cluster. The input data consist of structural information at one or more of the configurations in optimisation sequences that converge to one of four distinct local minima. The ability to make reliable predictions, in terms of the energy or other properties of interest, could save significant computational resources in sampling procedures that involve systematic geometry optimisation. Results are compared for two energy minimisation schemes, and for neural-network and quadratic functions of the inputs.
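
    A minimal illustration of the prediction task, on a 1D double-well potential rather than the triatomic cluster studied in the paper: label each random start by the minimum that steepest-descent minimisation reaches, then check that a simple function of the starting configuration (here the sign of x, exact by symmetry) predicts the outcome without running the minimisation.

```python
import numpy as np

def grad(x):
    # gradient of the double-well potential V(x) = (x^2 - 1)^2,
    # which has local minima at x = -1 and x = +1
    return 4 * x * (x**2 - 1)

def minimise(x, lr=0.01, steps=2000):
    # plain steepest descent
    for _ in range(steps):
        x -= lr * grad(x)
    return x

rng = np.random.default_rng(0)
starts = rng.uniform(-2, 2, 200)
# label each start by the minimum it converges to (-1 or +1)
labels = np.array([round(minimise(x)) for x in starts])

# predictor using only the starting configuration: the sign of x
pred = np.sign(starts).astype(int)
accuracy = (pred == labels).mean()
```

    In a real landscape the basin boundaries are not known in closed form, which is where a learned classifier earns its keep by skipping full geometry optimisations.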

  6. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT participates in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are left unused by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services such as storage and virtual machine management. This contribution reports on the incorporation of the institute's desktop machines into a private OpenStack cloud. The additional compute resources provisioned via the virtual machines have been used for Monte Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows is presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences are discussed.

  7. Multiparticle Solutions in 2+1 Gravity and Time Machines

    NASA Astrophysics Data System (ADS)

    Steif, Alan R.

    Multiparticle solutions for sources moving at the speed of light and corresponding to superpositions of single-particle plane-wave solutions are constructed in 2+1 gravity. It is shown that the two-particle spacetimes admit closed timelike curves provided the center-of-momentum energy exceeds a certain critical value. This occurs, however, at the cost of unphysical boundary conditions which are analogous to those affecting Gott’s time machine. As the energy exceeds the critical value, the closed timelike curves first occur at spatial infinity, then migrate inward as the energy is further increased. The total mass of the system also becomes imaginary for particle energies greater than the critical value.

  8. Laser-fusion targets for reactors

    DOEpatents

    Nuckolls, John H.; Thiessen, Albert R.

    1987-01-01

    A laser target comprising a thermonuclear fuel capsule composed of a centrally located quantity of fuel surrounded by one or more layers or shells of material for forming an atmosphere around the capsule by a low-energy laser prepulse. The fuel may be formed as a solid core or a hollow shell, and, in certain applications, a pusher layer or shell is located intermediate the fuel and the atmosphere-forming material. The fuel is ignited by symmetrical implosion via energy produced by a laser, or by other energy sources such as an electron-beam or ion-beam machine, whereby thermonuclear burn of the fuel capsule creates energy for applications such as the generation of electricity via a laser fusion reactor.

  9. Turning the LHC ring into a new physics search machine

    NASA Astrophysics Data System (ADS)

    Orava, Risto

    2017-03-01

    The LHC Collider Ring is proposed to be turned into an ultimate automatic search engine for new physics in four consecutive phases: (1) Searches for heavy particles produced in Central Exclusive Process (CEP): pp → p + X + p based on the existing Beam Loss Monitoring (BLM) system of the LHC; (2) Feasibility study of using the LHC Ring as a gravitation wave antenna; (3) Extensions to the current BLM system to facilitate precise registration of the selected CEP proton exit points from the LHC beam vacuum chamber; (4) Integration of the BLM based event tagging system together with the trigger/data acquisition systems of the LHC experiments to facilitate an on-line automatic search machine for the physics of tomorrow.

  10. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

    This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to discriminate between trained and sedentary rabbit hearts. The use of these classifiers in combination with a wrapper feature selection algorithm allows knowledge to be extracted about the most relevant features in the problem. The results obtained show that neural models outperform linear classifiers (better performance indices and better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.
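
    Wrapper feature selection of the kind described can be sketched as greedy forward selection that scores each candidate feature set by the held-out accuracy of the classifier itself; the synthetic data and the nearest-centroid classifier below are assumptions for illustration, not the paper's data or models.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic two-class data: only features 0 and 2 carry signal
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 5))
X[:, 0] += 2.0 * y
X[:, 2] -= 1.5 * y

idx = np.arange(n)
train, test = idx[:150], idx[150:]

def accuracy(cols):
    # held-out accuracy of a nearest-centroid classifier on a feature subset
    Xs = X[:, cols]
    c0 = Xs[train][y[train] == 0].mean(axis=0)
    c1 = Xs[train][y[train] == 1].mean(axis=0)
    d0 = ((Xs[test] - c0)**2).sum(axis=1)
    d1 = ((Xs[test] - c1)**2).sum(axis=1)
    pred = (d1 < d0).astype(int)
    return (pred == y[test]).mean()

# greedy forward wrapper: add the feature that most improves the classifier
selected, best = [], 0.0
while len(selected) < X.shape[1]:
    scores = {j: accuracy(selected + [j])
              for j in range(X.shape[1]) if j not in selected}
    j, s = max(scores.items(), key=lambda kv: kv[1])
    if s <= best:
        break               # no remaining candidate improves held-out accuracy
    selected.append(j)
    best = s
```

    Because the selection criterion is the classifier's own performance, the surviving features are by construction the ones most relevant to the discrimination task, which is how the paper extracts interpretable knowledge from the classifiers.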

  11. Reproducing an Early-20th-Century Wave Machine

    ERIC Educational Resources Information Center

    Daffron, John A.; Greenslade, Thomas B., Jr.

    2016-01-01

    Physics students often have problems understanding waves. Over the years numerous mechanical devices have been devised to show the propagation of both transverse and longitudinal waves (Ref. 1). In this article an updated version of an early-20th-century transverse wave machine is discussed. The original, Fig. 1, is at Creighton University in…

  12. Cybernetic anthropomorphic machine systems

    NASA Technical Reports Server (NTRS)

    Gray, W. E.

    1974-01-01

    Functional descriptions are provided for a number of cybernetic man machine systems that augment the capacity of normal human beings in the areas of strength, reach or physical size, and environmental interaction, and that are also applicable to aiding the neurologically handicapped. Teleoperators, computer control, exoskeletal devices, quadruped vehicles, space maintenance systems, and communications equipment are considered.

  13. Financial heat machine

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2005-05-01

    We consider the dynamics of financial markets as a dynamics of expectations and discuss it from the point of view of phenomenological thermodynamics. We describe a financial Carnot cycle and the financial analog of a heat machine. We observe that, while in physics a perpetuum mobile is absolutely impossible, in economics such a machine may exist under certain conditions.

  14. A SYSTEMS APPROACH UTILIZING GENERAL-PURPOSE AND SPECIAL-PURPOSE TEACHING MACHINES.

    ERIC Educational Resources Information Center

    SILVERN, LEONARD C.

    In order to improve the employee training-evaluation method, teaching machines and performance aids must be physically and operationally integrated into the system, thus returning training to the actual job environment. Given these conditions, training can be measured, calibrated, and controlled with respect to actual job performance standards and…

  15. Optical HMI with biomechanical energy harvesters integrated in textile supports

    NASA Astrophysics Data System (ADS)

    De Pasquale, G.; Kim, SG; De Pasquale, D.

    2015-12-01

    This paper reports the design, prototyping and experimental validation of a human-machine interface (HMI), named GoldFinger, integrated into a glove with energy harvesting from finger motion. The device is intended for medical applications, design tools, the virtual reality field, and industrial applications where interaction with machines is restricted by safety procedures. The HMI prototype includes four piezoelectric transducers applied to the backs of the fingers at the PIP (proximal inter-phalangeal) joints, electric wires embedded in the fabric connecting the transducers, an aluminum case for the electronics, a wearable switch made of conductive fabric to turn the communication channel on and off, and an LED. The electronic circuit used to manage the power and control the light emitter includes a diode bridge, leveling capacitors, a storage battery, and the conductive-fabric switch. Communication with the machine is managed by dedicated software, which includes the user interface, the optical tracking, and the continuous updating of the machine microcontroller. The energy harvester's benefit to battery lifetime is inversely proportional to the activation time of the optical emitter. In most applications, the optical port is active for 1 to 5% of the time, corresponding to a battery lifetime increase of between about 14% and 70%.

  16. Electric converters of electromagnetic strike machine with capacitor supply

    NASA Astrophysics Data System (ADS)

    Usanov, K. M.; Volgin, A. V.; Kargin, V. A.; Moiseev, A. P.; Chetverikov, E. A.

    2018-03-01

    Pulsed linear electromagnetic engines are quite effective in low-power strike machines (impact energy 0.01...1.0 kJ) whose characteristic operating mode is infrequent strikes (e.g., pulsed seismic vibrators and arch-breaking devices for bins of bulk materials). At the same time, the technical and economic performance of such machines is largely determined by the ability of the power source to deliver supply pulses of large instantaneous power to the winding of the linear electromagnetic motor. The use of intermediate energy storage in the power systems of such rare-strike machines makes it possible to obtain large instantaneous power easily, to force the energy conversion, and to increase the performance of the machine. A capacitor power supply for a pulsed source of seismic waves is proposed for the exploration of shallow depths. The sections of the capacitor storage (CS) are connected to the winding of the linear electromagnetic motor by thyristor dischargers, whose activation sequence is determined by a control device. The capacitors are charged to the required voltage directly from a battery source, or through a converter from a battery source with a smaller number of batteries.
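
    For orientation, the stored energy of a capacitor section is E = CV²/2, and discharging it into the motor winding over a few milliseconds yields a large mean power; the capacitance, voltage, and pulse duration below are illustrative assumptions, not values from the paper.

```python
def stored_energy(C, V):
    # energy in joules of a capacitor of C farads charged to V volts
    return 0.5 * C * V**2

def mean_discharge_power(C, V, t_pulse):
    # average power in watts if the stored energy is released over t_pulse seconds
    return stored_energy(C, V) / t_pulse

E = stored_energy(C=0.01, V=300.0)            # 10 mF at 300 V -> 450 J
P = mean_discharge_power(0.01, 300.0, 5e-3)   # 450 J over 5 ms -> 90 kW
```

    This is why a modest battery plus a capacitor bank suffices: the battery supplies average power slowly between strikes, while the capacitors deliver the large instantaneous power of each pulse.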

  17. A Multi-TeV Linear Collider Based on CLIC Technology : CLIC Conceptual Design Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aicheler, M; Burrows, P.; Draper, M.

    This report describes the accelerator studies for a future multi-TeV e+e- collider based on the Compact Linear Collider (CLIC) technology. The CLIC concept as described in the report is based on high-gradient normal-conducting accelerating structures, where the RF power for the acceleration of the colliding beams is extracted from a high-current Drive Beam that runs parallel to the main linac. The focus of CLIC R&D over the last years has been on addressing a set of key feasibility issues that are essential for proving the fundamental validity of the CLIC concept. The status of these feasibility studies is described and summarized. The report also includes a technical description of the accelerator components and of the R&D to develop the most important parts and methods, as well as a description of the civil engineering and technical services associated with the installation. Several larger system tests have been performed to validate the two-beam scheme; of particular importance are the results from the CLIC test facility at CERN (CTF3). Both the machine and detector/physics studies for CLIC have primarily focused on the 3 TeV implementation of CLIC as a benchmark for the CLIC feasibility. This report also includes specific studies for an initial 500 GeV machine and some discussion of possible intermediate energy stages. The performance and operation issues related to operation at reduced energy compared to the nominal, and considerations of a staged construction program, are included in the final part of the report. The CLIC accelerator study is organized as an international collaboration with 43 partners in 22 countries. An associated report describes the physics potential and experiments at CLIC, and a shorter report in preparation will focus on the CLIC implementation strategy, together with a plan for the CLIC R&D studies 2012–2016. Critical and important implementation issues such as cost, power and schedule will be addressed there.

  18. Comparative study of state-of-the-art myoelectric controllers for multigrasp prosthetic hands.

    PubMed

    Segil, Jacob L; Controzzi, Marco; Weir, Richard F ff; Cipriani, Christian

    2014-01-01

    A myoelectric controller should provide an intuitive and effective human-machine interface that deciphers user intent in real-time and is robust enough to operate in daily life. Many myoelectric control architectures have been developed, including pattern recognition systems, finite state machines, and more recently, postural control schemes. Here, we present a comparative study of two types of finite state machines and a postural control scheme using both virtual and physical assessment procedures with seven nondisabled subjects. The Southampton Hand Assessment Procedure (SHAP) was used in order to compare the effectiveness of the controllers during activities of daily living using a multigrasp artificial hand. Also, a virtual hand posture matching task was used to compare the controllers when reproducing six target postures. The performance when using the postural control scheme was significantly better (p < 0.05) than the finite state machines during the physical assessment when comparing within-subject averages using the SHAP percent difference metric. The virtual assessment results described significantly greater completion rates (97% and 99%) for the finite state machines, but the movement time tended to be faster (2.7 s) for the postural control scheme. Our results substantiate that postural control schemes rival other state-of-the-art myoelectric controllers.
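
    One common finite-state-machine pattern for multigrasp hands (a generic sketch, not one of the specific controllers compared in the study) cycles through grasp states on a muscle co-contraction event and routes open/close commands to the currently selected grasp:

```python
GRASPS = ["power", "precision", "lateral", "tripod"]

class GraspFSM:
    """Toy grasp-selection state machine for a multigrasp hand."""

    def __init__(self):
        self.state = 0          # index into GRASPS

    def handle(self, event):
        if event == "co-contract":
            # co-contraction pulse advances to the next grasp state
            self.state = (self.state + 1) % len(GRASPS)
        elif event in ("open", "close"):
            # open/close commands act within the current grasp
            return f"{event} {GRASPS[self.state]}"
        return GRASPS[self.state]

fsm = GraspFSM()
fsm.handle("co-contract")       # power -> precision
action = fsm.handle("close")    # drive the precision grasp closed
```

    Postural control schemes replace this discrete event sequence with a continuous mapping from muscle activity onto a low-dimensional posture space, which is one way to interpret the faster movement times reported for that approach.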

  19. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine

    PubMed Central

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-01-01

    In this paper, we propose a three-dimensional design and evaluation framework and process, based on a probabilistic motion synthesis algorithm and a biomechanical analysis system, for the design of Smith machines and squat training programs. We implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model together with a squat motion synthesis system and a biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between the human body and the machine or the ground are modeled as joints with constraints at the contact points. Next, we generated Smith squat motion using the motion synthesis program, based on a Gaussian process regression algorithm, with a set of given values for the independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and the squat motion. We validated the model and algorithm through physical experiments measuring electromyography (EMG) signals, ground forces, and squat motions, as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184

  20. Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules

    PubMed Central

    Chowdhury, Debashish

    2013-01-01

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The toolbox includes (1) nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes, and (2) statistical-inference methods for reverse-engineering a functional machine from empirical data. The cell is often likened to a microfactory in which the machineries are organized in a modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505
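
    The stochastic kinetics in the modeling toolbox are typically simulated with the Gillespie algorithm; below is a minimal sketch for a hypothetical motor taking forward and backward steps at assumed rates, with exponentially distributed waiting times between events.

```python
import random

def gillespie_motor(k_fwd=10.0, k_back=2.0, t_max=100.0, seed=42):
    # Gillespie simulation of a motor taking +1/-1 steps along a track
    rng = random.Random(seed)
    t, x = 0.0, 0
    while t < t_max:
        k_tot = k_fwd + k_back
        t += rng.expovariate(k_tot)        # waiting time to the next event
        if rng.random() < k_fwd / k_tot:   # choose which transition fires
            x += 1
        else:
            x -= 1
    return x / t                           # mean stepping velocity

v = gillespie_motor()
# the drift should fluctuate around k_fwd - k_back = 8 steps per unit time
```

    Real motor models add internal chemical states (e.g., ATP binding and hydrolysis) as extra transitions, but the event-by-event sampling logic is the same.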
