Insights: Simple Models for Teaching Equilibrium and Le Chatelier's Principle.
ERIC Educational Resources Information Center
Russell, Joan M.
1988-01-01
Presents three models that have been effective for teaching chemical equilibrium and Le Chatelier's principle: (1) the liquid transfer model, (2) the fish model, and (3) the teeter-totter model. Explains each model and its relation to Le Chatelier's principle. (MVL)
The NextGen Model Atmosphere Grid for 3000 K ≤ T_eff ≤ 10,000 K
Hauschildt, P.H.; Allard, F.; Baron, E.
1999-02-01
We present our NextGen Model Atmosphere grid for low-mass stars with effective temperatures above 3000 K. These LTE models are calculated with the same basic model assumptions and input physics as the VLMS part of the NextGen grid, so that the complete grid can be used, e.g., for consistent stellar evolution calculations and for internally consistent analysis of cool star spectra. This grid is also the starting point for a large grid of detailed NLTE model atmospheres for dwarfs and giants. The models were calculated from 3000 to 10,000 K (in steps of 200 K) for 3.5 ≤ log g ≤ 5.5 (in steps of 0.5) and metallicities of −4.0 ≤ [M/H] ≤ 0.0. We discuss the results of the model calculations and compare our results to the Kurucz grid. Some comparisons to standard stars like Vega and the Sun are presented and compared with detailed NLTE calculations. © 1999 The American Astronomical Society.
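As a quick illustrative sketch (not from the paper itself), the grid coverage quoted in this abstract can be enumerated directly. Note that the abstract does not state the metallicity step, so the 0.5 dex spacing below is an assumption:

```python
# Enumerate the NextGen grid coverage quoted in the abstract.
# The [M/H] step is NOT given there; the 0.5 dex spacing is an assumption.
teff_grid = list(range(3000, 10_001, 200))       # 3000..10000 K in 200 K steps
logg_grid = [3.5 + 0.5 * i for i in range(5)]    # log g = 3.5..5.5 in 0.5 steps
mh_grid = [-4.0 + 0.5 * i for i in range(9)]     # [M/H] = -4.0..0.0 (assumed step)

# Full Cartesian product of model parameters.
grid = [(t, g, m) for t in teff_grid for g in logg_grid for m in mh_grid]
print(len(teff_grid), len(logg_grid), len(mh_grid), len(grid))  # 36 5 9 1620
```

Even under the assumed metallicity spacing, this shows the scale of the computation: over a thousand individual model atmospheres.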
Shears, Tara
2012-02-28
The Standard Model is the theory used to describe the interactions between fundamental particles and fundamental forces. It is remarkably successful at predicting the outcome of particle physics experiments. However, the theory has not yet been completely verified. In particular, one of its most vital constituents, the Higgs boson, has not yet been observed. This paper describes the Standard Model, the experimental tests of the theory that have led to its acceptance, and its shortcomings. PMID:22253237
The standard cosmological model
NASA Astrophysics Data System (ADS)
Scott, D.
2006-06-01
The Standard Model of Particle Physics (SMPP) is an enormously successful description of high-energy physics, driving ever more precise measurements to find "physics beyond the standard model", as well as providing motivation for developing more fundamental ideas that might explain the values of its parameters. Simultaneously, a description of the entire three-dimensional structure of the present-day Universe is being built up painstakingly. Most of the structure is stochastic in nature, being merely the result of the particular realization of the "initial conditions" within our observable Universe patch. However, governing this structure is the Standard Model of Cosmology (SMC), which appears to require only about a dozen parameters. Cosmologists are now determining the values of these quantities with increasing precision to search for "physics beyond the standard model", as well as trying to develop an understanding of the more fundamental ideas that might explain the values of its parameters. Although it is natural to see analogies between the two Standard Models, some intrinsic differences also exist, which are discussed here. Nevertheless, a truly fundamental theory will have to explain both the SMPP and SMC, and this must include an appreciation of which elements are deterministic and which are accidental. Considering different levels of stochasticity within cosmology may make it easier to accept that physical parameters in general might have a nondeterministic aspect.
Peskin, M.E.
1997-05-01
These lectures constitute a short course in "Beyond the Standard Model" for students of experimental particle physics. The author discusses the general ideas which guide the construction of models of physics beyond the Standard Model. The central principle, the one which most directly motivates the search for new physics, is the search for the mechanism of the spontaneous symmetry breaking observed in the theory of weak interactions. To illustrate models of weak-interaction symmetry breaking, the author gives a detailed discussion of the idea of supersymmetry and that of new strong interactions at the TeV energy scale. He discusses experiments that will probe the details of these models at future pp and e⁺e⁻ colliders.
Calpas, Betty Constante
2010-06-11
This thesis is organized around three main ideas: the first presents the theoretical and experimental framework, as well as the physics objects used in the analysis; the second covers the service work I performed on the calorimeter; and the third is the search for the Higgs boson in the channel ZH → e⁺e⁻bb̄. The thesis has the following structure: Chapter 1 is an introduction to the standard model of particle physics and the Higgs mechanism; Chapter 2 is an overview of the Tevatron accelerator complex at Fermilab and the DØ detector; Chapter 3 is an introduction to the physics objects used in this thesis; Chapter 4 presents the study made on correcting the energy measured in the calorimeter; Chapter 5 describes the study of the certification of electrons in the calorimeter; Chapter 6 describes the study of the certification of electrons in the intercryostat region of the calorimeter; Chapter 7 details the analysis of the search for Higgs production in the channel ZH → e⁺e⁻bb̄; and Chapter 8 presents the final results of the calculations of upper limits on the Higgs boson production cross section over a range of low masses.
Lykken, Joseph D.; /Fermilab
2010-05-01
'BSM physics' is a phrase used in several ways. It can refer to physical phenomena established experimentally but not accommodated by the Standard Model, in particular dark matter and neutrino oscillations (technically also anything that has to do with gravity, since gravity is not part of the Standard Model). 'Beyond the Standard Model' can also refer to possible deeper explanations of phenomena that are accommodated by the Standard Model but only with ad hoc parameterizations, such as Yukawa couplings and the strong CP angle. More generally, BSM can be taken to refer to any possible extension of the Standard Model, whether or not the extension solves any particular set of puzzles left unresolved in the SM. In this general sense one sees reference to the BSM 'theory space' of all possible SM extensions, this being a parameter space of coupling constants for new interactions, new charges or other quantum numbers, and parameters describing possible new degrees of freedom or new symmetries. Despite decades of model-building it seems unlikely that we have mapped out most of, or even the most interesting parts of, this theory space. Indeed we do not even know what is the dimensionality of this parameter space, or what fraction of it is already ruled out by experiment. Since Nature is only implementing at most one point in this BSM theory space (at least in our neighborhood of space and time), it might seem an impossible task to map back from a finite number of experimental discoveries and measurements to a unique BSM explanation. Fortunately for theorists the inevitable limitations of experiments themselves, in terms of resolutions, rates, and energy scales, means that in practice there are only a finite number of BSM model 'equivalence classes' competing at any given time to explain any given set of results. BSM phenomenology is a two-way street: not only do experimental results test or constrain BSM models, they also suggest - to those who get close enough to listen
Marciano, W.J.
1994-03-01
In these lectures, my aim is to provide a survey of the standard model with emphasis on its renormalizability and electroweak radiative corrections. Since this is a school, I will try to be somewhat pedagogical by providing examples of loop calculations. In that way, I hope to illustrate some of the commonly employed tools of particle physics. With those goals in mind, I have organized my presentations as follows: In Section 2, renormalization is discussed from an applied perspective. The technique of dimensional regularization is described and used to define running couplings and masses. The utility of the renormalization group for computing leading logs is illustrated for the muon anomalous magnetic moment. In Section 3 electroweak radiative corrections are discussed. Standard model predictions are surveyed and used to constrain the top quark mass. The S, T, and U parameters are introduced and employed to probe for "new physics". The effect of Z′ bosons on low energy phenomenology is described. In Section 4, a detailed illustration of electroweak radiative corrections is given for atomic parity violation. Finally, in Section 5, I conclude with an outlook for the future.
The Supersymmetric Standard Model
NASA Astrophysics Data System (ADS)
Fayet, Pierre
2016-10-01
The Standard Model may be included within a supersymmetric theory, postulating new sparticles that differ by half a unit of spin from their standard model partners, and by a new quantum number called R-parity. The lightest one, usually a neutralino, is expected to be stable and a possible candidate for dark matter. The electroweak breaking requires two doublets, leading to several charged and neutral Brout-Englert-Higgs bosons. This also leads to gauge/Higgs unification by providing extra spin-0 partners for the spin-1 W± and Z. It offers the possibility to view, up to a mixing angle, the new 125 GeV boson as the spin-0 partner of the Z under two supersymmetry transformations, i.e. as a Z that would be deprived of its spin. Supersymmetry then relates two existing particles of different spins, in spite of their different gauge symmetry properties, through supersymmetry transformations acting on physical fields in a non-polynomial way. We also discuss how the compactification of extra dimensions, relying on R-parity and other discrete symmetries, may determine both the supersymmetry-breaking and grand-unification scales.
MODeLeR: A Virtual Constructivist Learning Environment and Methodology for Object-Oriented Design
ERIC Educational Resources Information Center
Coffey, John W.; Koonce, Robert
2008-01-01
This article contains a description of the organization and method of use of an active learning environment named MODeLeR, (Multimedia Object Design Learning Resource), a tool designed to facilitate the learning of concepts pertaining to object modeling with the Unified Modeling Language (UML). MODeLeR was created to provide an authentic,…
Phenomenology beyond the standard model
Lykken, Joseph D.; /Fermilab
2005-03-01
An elementary review of models and phenomenology for physics beyond the Standard Model (excluding supersymmetry). The emphasis is on LHC physics. Based upon a talk given at the "Physics at LHC" conference, Vienna, 13-17 July 2004.
Bellantoni, L.
2009-11-01
There are many recent results from searches for fundamental new physics using the Tevatron, the SLAC B-factory and HERA. This talk quickly reviewed searches for pair-produced stop, for gauge-mediated SUSY breaking, for Higgs bosons in the MSSM and NMSSM models, for leptoquarks, and for v-hadrons. There is a SUSY model which accommodates the recent astrophysical experimental results that suggest that dark matter annihilation is occurring in the center of our galaxy, and a relevant experimental result. Finally, model-independent searches at D0, CDF, and H1 are discussed.
Reference and Standard Atmosphere Models
NASA Technical Reports Server (NTRS)
Johnson, Dale L.; Roberts, Barry C.; Vaughan, William W.; Parker, Nelson C. (Technical Monitor)
2002-01-01
This paper describes the development of standard and reference atmosphere models along with the history of their origin and use since the mid-19th century. The first "Standard Atmospheres" were established by international agreement in the 1920s. Later some countries, notably the United States, also developed and published "Standard Atmospheres". The term "Reference Atmospheres" is used to identify atmosphere models for specific geographical locations. Range Reference Atmosphere Models, first developed during the 1960s, are examples of these descriptions of the atmosphere. This paper discusses the various models, their scope, applications, and limitations relative to use in aerospace industry activities.
CoLeMo: A Collaborative Learning Environment for UML Modelling
ERIC Educational Resources Information Center
Chen, Weiqin; Pedersen, Roger Heggernes; Pettersen, Oystein
2006-01-01
This paper presents the design, implementation, and evaluation of a distributed collaborative UML modelling environment, CoLeMo. CoLeMo is designed for students studying UML modelling. It can also be used as a platform for collaborative design of software. We conducted formative evaluations and a summative evaluation to improve the environment and…
Colorado Model Content Standards: Science
ERIC Educational Resources Information Center
Colorado Department of Education, 2007
2007-01-01
The Colorado Model Content Standards for Science specify what all students should know and be able to do in science as a result of their school studies. Specific expectations are given for students completing grades K-2, 3-5, 6-8, and 9-12. Five standards outline the essential level of science knowledge and skills needed by Colorado citizens to…
Revisiting the standard solar model
Turck-Chieze, S.; Cahen, S.; Casse, M.; Doom, C.
1988-12-01
The mutual consistency between standard solar models is studied based on the recent Los Alamos opacity tables. Satisfactory agreement is found among these models concerning the helium content and the neutrino capture rates. The reference model leads to a solar helium content of 0.276 ± 0.012 by mass fraction. 75 references.
UPWT check standard model test
NASA Technical Reports Server (NTRS)
2000-01-01
Installation of the check standard model in test section 2 of the Unitary Plan Wind Tunnel (UPWT). Testing was conducted as part of a Data Quality Control assessment in the Research Facilities Branch/Aerodynamics Aerothermodynamics Acoustics Competency.
Dynamics of the standard model
Donoghue, J.F.; Golowich, E.; Holstein, B.R.
1992-01-01
Given the remarkable successes of the standard model, it is appropriate that books in the field no longer dwell on the development of our current understanding of high-energy physics but rather present the world as we now know it. Dynamics of the Standard Model by Donoghue, Golowich, and Holstein takes just this approach. Instead of showing the confusion of the 60s and 70s, the authors present the enlightenment of the 80s. They start by describing the basic features and structure of the standard model and then concentrate on the techniques whereby the model can be applied to the physical world, connecting the theory to the experimental results that are the source of its success. Because they do not dwell on ancient (pre-1980) history, the authors of this book are able to go into much more depth in describing how the model can be tied to experiment, and much of the information presented has been accessible previously only in journal articles in a highly technical form. Though all of the authors are card-carrying theorists they go out of their way to stress applications and phenomenology and to show the reader how real-life calculations of use to experimentalists are done and can be applied to physical situations: what assumptions are made in doing them and how well they work. This is of great value both to the experimentalist seeking a deeper understanding of how the standard model can be connected to data and to the theorist wanting to see how detailed the phenomenological predictions of the standard model are and how well the model works. Furthermore, the authors constantly go beyond the lowest-order predictions of the standard model to discuss the corrections to it, as well as higher-order processes, some of which are now experimentally accessible and others of which will take well into the decade to uncover.
Standard model of knowledge representation
NASA Astrophysics Data System (ADS)
Yin, Wensheng
2016-03-01
Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic link between the various knowledge representation methods, a unified knowledge representation model is necessary. Drawing on ontology, system theory, and control theory, a standard model of knowledge representation that reflects change in the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms, can express process knowledge, and at the same time has a strong problem-solving ability. In addition, the standard model of knowledge representation provides a way to handle imprecise and inconsistent knowledge.
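A minimal, hypothetical sketch of the input/processing/output structure the abstract describes; the class and field names below are illustrative, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical rendering of the input -> processing -> output structure
# described in the abstract; names here are illustrative only.
@dataclass
class KnowledgeUnit:
    inputs: Dict[str, Any]                    # facts fed into the unit
    process: Callable[[Dict[str, Any]], Any]  # transformation applied to them

    def output(self) -> Any:
        """Produce the unit's output by applying the processing step."""
        return self.process(self.inputs)

# Example: a unit whose "knowledge" is how to combine two quantities.
adder = KnowledgeUnit({"a": 2, "b": 3}, lambda d: d["a"] + d["b"])
print(adder.output())  # 5
```

The point of the sketch is only that the same three-part shell can hold very different representations (rules, functions, lookups) behind a uniform interface.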
Marciano, W.J.
1989-05-01
In these lectures, my aim is to present a status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows. I survey the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also commented on. In addition, I have included an appendix on dimensional regularization and a simple example which employs that technique. I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, extra Z′ bosons, and compositeness are discussed. An overview of the physics of tau decays is also included. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, implications for grand unified theories (GUTs), extra Z′ gauge bosons, and atomic parity violation. The potential for further experimental progress is also commented on. Finally, I depart from the narrowest version of the standard model and discuss effects of neutrino masses, mixings, and electromagnetic moments. 32 refs., 3 figs., 5 tabs.
Consistency Across Standards or Standards in a New Business Model
NASA Technical Reports Server (NTRS)
Russo, Dane M.
2010-01-01
Presentation topics include: standards in a changing business model, the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, the danger of over-prescriptive standards, the need for a balance between prescriptive and general standards, enabling versus inhibiting, characteristics of success-oriented standards, and conclusions. Additional slides include: NASA Procedural Requirements 8705.2B, which identifies human rating standards and requirements; draft health and medical standards for human rating; what has been done; government oversight models; examples of consistency from anthropometry; examples of inconsistency from air quality; and appendices of government and non-governmental human factors standards.
Composite-technicolor standard model
NASA Astrophysics Data System (ADS)
Sekhar Chivukula, B.; Georgi, Howard
1987-04-01
We characterize a class of composite models in which the quarks and leptons and technifermions are built from fermions (preons) bound by strong gauge interactions. We argue that if the preon dynamics has an [SU(3) × U(1)]⁵ flavor symmetry that is explicitly broken only by preon mass terms proportional to the quark and lepton mass matrices, then the composite-technicolor theory has a GIM mechanism that suppresses dangerous flavor-changing neutral current effects. We show that the compositeness scale must be between ≈1 TeV and ≈2.5 TeV, giving rise to observable deviations from the standard electroweak interactions, and that B–B̄ mixing and CP violation in K mesons can differ significantly from the standard model predictions. The lepton flavor symmetries may be observable in the near future in the comparison of the compositeness effects in e⁺e⁻ → μ⁺μ⁻ with those in e⁺e⁻ → e⁺e⁻.
Gaillard, M.K.
1989-05-01
The field of elementary particle, or high energy, physics seeks to identify the most elementary constituents of nature and to study the forces that govern their interactions. Increasing the energy of a probe in a laboratory experiment increases its power as an effective microscope for discerning increasingly smaller structures of matter. Thus we have learned that matter is composed of molecules that are in turn composed of atoms, that the atom consists of a nucleus surrounded by a cloud of electrons, and that the atomic nucleus is a collection of protons and neutrons. The more powerful probes provided by high energy particle accelerators have taught us that a nucleon is itself made of objects called quarks. The forces among quarks and electrons are understood within a general theoretical framework called the "standard model," which accounts for all interactions observed in high energy laboratory experiments to date. These are commonly categorized as the "strong," "weak," and "electromagnetic" interactions. In this lecture I will describe the standard model, and point out some of its limitations. Probing for deeper structures in quarks and electrons defines the present frontier of particle physics. I will discuss some speculative ideas about extensions of the standard model and/or yet more fundamental forces that may underlie our present picture. 11 figs., 1 tab.
Physics beyond the standard model
Womersley, J.
2000-01-24
The author briefly summarizes the prospects for extending the understanding of physics beyond the standard model within the next five years. He interprets "beyond the standard model" to mean the physics of electroweak symmetry breaking, including the standard model Higgs boson. The nature of this TeV-scale new physics is perhaps the most crucial question facing high-energy physics, but one should recall (neutrino oscillations) that there is ample evidence for interesting physics in the flavour sector too. In the next five years, before the LHC starts operations, the facilities available will be LEP2, HERA and the Fermilab Tevatron. He devotes a bit more time to the Tevatron as it is a new initiative for United Kingdom institutions. The Tevatron schedule now calls for data taking in Run II, using two upgraded detectors, to begin on March 1, 2001, with 2 fb⁻¹ accumulated in the first two years. A nine-month shutdown will follow, to allow new silicon detector layers to be installed, and then running will resume with a goal of accumulating 15 fb⁻¹ (or more) by 2006.
Neutrinos beyond the Standard Model
Valle, J.W.F.
1989-08-01
I review some basic aspects of neutrino physics beyond the Standard Model, such as neutrino mixing and neutrino non-orthogonality, universality and CP violation in the lepton sector, and total lepton number and lepton flavor violation. These may lead to neutrino decays and oscillations, exotic weak decay processes, neutrinoless double-β decay, etc. Particle physics models are discussed where some of these processes can be sizable even in the absence of measurable neutrino masses. These may also substantially affect the propagation properties of solar and astrophysical neutrinos. 39 refs., 4 figs.
NASA Astrophysics Data System (ADS)
Gunion, John F.; Han, Tao; Ohnemus, James
1995-08-01
The Table of Contents for the book is as follows: * Preface * Organizing and Advisory Committees * PLENARY SESSIONS * Looking Beyond the Standard Model from LEP1 and LEP2 * Virtual Effects of Physics Beyond the Standard Model * Extended Gauge Sectors * CLEO's Views Beyond the Standard Model * On Estimating Perturbative Coefficients in Quantum Field Theory and Statistical Physics * Perturbative Corrections to Inclusive Heavy Hadron Decay * Some Recent Developments in Sphalerons * Searching for New Matter Particles at Future Colliders * Issues in Dynamical Supersymmetry Breaking * Present Status of Fermilab Collider Accelerator Upgrades * The Extraordinary Scientific Opportunities from Upgrading Fermilab's Luminosity ≥ 10³³ cm⁻² s⁻¹ * Applications of Effective Lagrangians * Collider Phenomenology for Strongly Interacting Electroweak Sector * Physics of Self-Interacting Electroweak Bosons * Particle Physics at a TeV-Scale e⁺e⁻ Linear Collider * Physics at γγ and eγ Colliders * Challenges for Non-Minimal Higgs Searchers at Future Colliders * Physics Potential and Development of μ⁺μ⁻ Colliders * Beyond Standard Quantum Chromodynamics * Extracting Predictions from Supergravity/Superstrings for the Effective Theory Below the Planck Scale * Non-Universal SUSY Breaking, Hierarchy and Squark Degeneracy * Supersymmetric Phenomenology in the Light of Grand Unification * A Survey of Phenomenological Constraints on Supergravity Models * Precision Tests of the MSSM * The Search for Supersymmetry * Neutrino Physics * Neutrino Mass: Oscillations and Hot Dark Matter * Dark Matter and Large-Scale Structure * Electroweak Baryogenesis * Progress in Searches for Non-Baryonic Dark Matter * Big Bang Nucleosynthesis * Flavor Tests of Quark-Lepton * Where are We Coming from? What are We? Where are We Going? * Summary, Perspectives * PARALLEL SESSIONS * SUSY Phenomenology I * Is Rb Telling us that Superpartners will soon be Discovered? * Dark Matter in Constrained Minimal
Gaillard, M.K.
1983-04-01
Focussing on the standard electroweak model, we examine physics issues which may be addressed with the help of intense beams of strange particles. I have collected a miscellany of issues, starting with some philosophical remarks on how things stand and where we should go from here. I will then focus on a case study: the decay K⁺ → π⁺ + nothing observable, which provides a nice illustration of the type of physics that can be probed through rare decays. Other topics I will mention are CP violation in K decays, hyperon and anti-hyperon physics, and a few random comments on other relevant phenomena.
Standard for Models and Simulations
NASA Technical Reports Server (NTRS)
Steele, Martin J.
2016-01-01
This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure that essential requirements are applied to the design, development, and use of models and simulations (M&S), while ensuring that acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which M&S may be developed, accepted, and used in support of NASA activities. As the M&S disciplines employed and application areas involved are broad, the common aspects of M&S across all NASA activities are addressed. The discipline-specific details of a given M&S should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with M&S-influenced decisions by ensuring the complete communication of the credibility of M&S results.
Standard model with gravity couplings
NASA Astrophysics Data System (ADS)
Chang, Lay Nam; Soo, Chopin
1996-05-01
In this paper we examine the coupling of matter fields to gravity within the framework of the standard model of particle physics. The coupling is described in terms of Weyl fermions of a definite chirality, and employs only (anti-)self-dual or left-handed spin connection fields. We review the general framework for introducing the coupling using these fields, and show that conditions ensuring the cancellation of perturbative chiral gauge anomalies are not disturbed. We also explore a global anomaly associated with the theory, and argue that its removal requires that the number of fundamental fermions in the theory must be multiples of 16. In addition, we investigate the behavior of the theory under discrete transformations P, C, and T, and discuss possible violations of these discrete symmetries, including CPT, in the presence of instantons and the Adler-Bell-Jackiw anomaly.
From Interactive Open Learner Modelling to Intelligent Mentoring: STyLE-OLM and Beyond
ERIC Educational Resources Information Center
Dimitrova, Vania; Brna, Paul
2016-01-01
STyLE-OLM (Dimitrova 2003 "International Journal of Artificial Intelligence in Education," 13, 35-78) presented a framework for interactive open learner modelling which entails the development of the means by which learners can "inspect," "discuss" and "alter" the learner model that has been jointly…
Experiments beyond the standard model
Perl, M.L.
1984-09-01
This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology, I call these Experimental Needs. 92 references.
Beyond the cosmological standard model
NASA Astrophysics Data System (ADS)
Joyce, Austin; Jain, Bhuvnesh; Khoury, Justin; Trodden, Mark
2015-03-01
After a decade and a half of research motivated by the accelerating universe, theory and experiment have reached a certain level of maturity. The development of theoretical models beyond Λ or smooth dark energy, often called modified gravity, has led to broader insights into a path forward, and a host of observational and experimental tests have been developed. In this review we present the current state of the field and describe a framework for anticipating developments in the next decade. We identify the guiding principles for rigorous and consistent modifications of the standard model, and discuss the prospects for empirical tests. We begin by reviewing recent attempts to consistently modify Einstein gravity in the infrared, focusing on the notion that additional degrees of freedom introduced by the modification must "screen" themselves from local tests of gravity. We categorize screening mechanisms into three broad classes: mechanisms which become active in regions of high Newtonian potential, those in which first derivatives of the field become important, and those for which second derivatives of the field are important. Examples of the first class, such as f(R) gravity, employ the familiar chameleon or symmetron mechanisms, whereas examples of the last class are galileon and massive gravity theories, employing the Vainshtein mechanism. In each case, we describe the theories as effective theories and discuss prospects for completion in a more fundamental theory. We describe experimental tests of each class of theories, summarizing laboratory and solar system tests and describing in some detail astrophysical and cosmological tests. Finally, we discuss prospects for future tests which will be sensitive to different signatures of new physics in the gravitational sector. The review is structured so that those parts that are more relevant to theorists vs. observers/experimentalists are clearly indicated, in the hope that this will serve as a useful reference for
Conductivity in the two-dimensional Hubbard model at weak coupling
NASA Astrophysics Data System (ADS)
Bergeron, Dominic
The two-dimensional (2D) Hubbard model is often regarded as the minimal model for the copper-oxide-based high-temperature superconductors (cuprates). On a square lattice, this model exhibits the phases common to all cuprates: the antiferromagnetic phase, the superconducting phase, and the so-called pseudogap phase. It has no exact solution; however, several approximate methods allow its properties to be studied numerically. Optical and transport properties are well characterized in the cuprates and are therefore good candidates for validating a theoretical model and for better understanding the physics of these materials. This thesis concerns the calculation of these properties for the 2D Hubbard model at weak to intermediate coupling. The calculation method used is the two-particle self-consistent (TPSC) approach, which is non-perturbative and includes the effect of spin and charge fluctuations at all wavelengths. The complete derivation of the expression for the conductivity in the TPSC approach is presented. This expression contains what are known as vertex corrections, which account for correlations between quasiparticles. To make the numerical evaluation of these corrections feasible, algorithms using, among other things, fast Fourier transforms and cubic splines are developed. The calculations are performed for the square lattice with nearest-neighbor hopping around the antiferromagnetic critical point. At dopings below the critical point, the optical conductivity shows a mid-infrared bump at low temperature, as observed in several cuprates. In the resistivity as a function of temperature, insulating behavior is found in the pseudogap regime when vertex corrections are neglected, and metallic behavior when they are included. Near the critical point, the resistivity is linear in T at low temperature and becomes
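The abstract above notes that fast Fourier transforms (together with cubic splines) make the vertex-correction integrals tractable. As an illustrative sketch only (not the thesis's actual TPSC code), the key FFT trick is that a circular convolution, the costly O(N²) operation in such response-function calculations, becomes an elementwise product in Fourier space:

```python
import numpy as np

def circular_convolution_fft(a, b):
    """Convolution theorem: a circular convolution is a pointwise product
    in Fourier space, turning an O(N^2) sum into O(N log N) FFTs."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)
# Direct O(N^2) evaluation of the same circular convolution, for comparison.
direct = np.array([sum(a[m] * b[(n - m) % 8] for m in range(8))
                   for n in range(8)])
print(np.allclose(direct, circular_convolution_fft(a, b)))  # True
```

The same idea, applied to Matsubara-frequency and momentum sums, is what makes such lattice response calculations affordable at large system sizes.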
The two-dimensional Hubbard model at weak coupling: thermodynamics and critical phenomena
NASA Astrophysics Data System (ADS)
Roy, Sebastien
A systematic study of the two-dimensional Hubbard model at weak coupling, using the two-particle self-consistent (TPSC) theory across the temperature-doping-interaction-hopping diagram, highlights the influence of magnetic fluctuations on the thermodynamic properties of the electron system on a lattice. The renormalized classical regime at finite temperature near zero doping is marked by the spin correlation length exceeding the thermal de Broglie wavelength and is characterized by a drastic growth of the spin correlation length. This exponential growth at zero doping signals the presence of a peak in the specific heat as a function of temperature at low temperature. A crossover temperature is then associated with the temperature at which the spin correlation length equals the thermal de Broglie wavelength. It is at this characteristic temperature, where the opening of the pseudogap in the spectral weight is observed, that the maximum of the specific-heat peak occurs. The presence of this peak has consequences for the evolution of the chemical potential with doping when thermodynamic consistency is enforced. The constraints imposed by the laws of thermodynamics make the evolution of the chemical potential with doping nontrivial. Among other results, the chemical potential is shown to be proportional to the double occupancy, which is related to the local moment. Furthermore, a derivation of the scaling function of the zero-frequency spin susceptibility in the vicinity of a critical point unambiguously signals the presence of a quantum critical point in doping for a given value of the interaction. This critical point, associated with a magnetic phase transition as a function of doping at zero temperature, induces nontrivial behavior in the physical properties of the system at finite temperature. The quantitative TPSC approach makes it possible to
SCaLeM: A Framework for Characterizing and Analyzing Execution Models
Chavarría-Miranda, Daniel; Manzano Franco, Joseph B.; Krishnamoorthy, Sriram; Vishnu, Abhinav; Barker, Kevin J.; Hoisie, Adolfy
2014-10-13
As scalable parallel systems evolve towards more complex nodes with many-core architectures and larger trans-petascale and upcoming exascale deployments, there is a need to understand, characterize and quantify the underlying execution models being used on such systems. Execution models are a conceptual layer between applications and algorithms on one side and, on the other, the underlying parallel hardware and systems software on which those applications run. This paper presents the SCaLeM (Synchronization, Concurrency, Locality, Memory) framework for characterizing and analyzing execution models. SCaLeM consists of three basic elements: attributes, compositions, and the mapping of these compositions to abstract parallel systems. The fundamental Synchronization, Concurrency, Locality and Memory attributes are used to characterize each execution model, while combinations of these attributes in the form of compositions are used to describe the primitive operations of the execution model. The mapping of the execution model's primitive operations, described by compositions, to an underlying abstract parallel system can be evaluated quantitatively to determine its effectiveness. Finally, SCaLeM also enables the representation and analysis of applications in terms of execution models, for the purpose of evaluating the effectiveness of such a mapping.
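To make the framework's vocabulary concrete, here is a hypothetical Python rendering of its elements. The four attribute names come from the abstract; the composition names (`barrier`, `one_sided_put`) and the tallying function are illustrative assumptions, not the paper's actual API:

```python
from dataclasses import dataclass

# The four fundamental SCaLeM attributes named in the abstract.
ATTRIBUTES = ("synchronization", "concurrency", "locality", "memory")

@dataclass(frozen=True)
class Composition:
    """A primitive operation of an execution model, described as a
    combination of the fundamental attributes (hypothetical encoding)."""
    name: str
    attributes: frozenset

def characterize(primitives):
    """Tally which SCaLeM attributes each primitive operation exercises."""
    counts = {a: 0 for a in ATTRIBUTES}
    for comp in primitives:
        for a in comp.attributes:
            counts[a] += 1
    return counts

# Two invented example primitives of a PGAS-like execution model.
barrier = Composition("barrier", frozenset({"synchronization", "concurrency"}))
put = Composition("one_sided_put", frozenset({"locality", "memory"}))
print(characterize([barrier, put]))
```

A real characterization would additionally score the mapping of each composition onto an abstract parallel system, as the paper describes.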
Modeling in the Common Core State Standards
ERIC Educational Resources Information Center
Tam, Kai Chung
2011-01-01
The inclusion of modeling and applications into the mathematics curriculum has proven to be a challenging task over the last fifty years. The Common Core State Standards (CCSS) has made mathematical modeling both one of its Standards for Mathematical Practice and one of its Conceptual Categories. This article discusses the need for mathematical…
A standard satellite control reference model
NASA Technical Reports Server (NTRS)
Golden, Constance
1994-01-01
This paper describes a Satellite Control Reference Model that provides the basis for an approach to identify where standards would be beneficial in supporting space operations functions. The background and context for the development of the model and the approach are described. A process for using this reference model to trace top level interoperability directives to specific sets of engineering interface standards that must be implemented to meet these directives is discussed. Issues in developing a 'universal' reference model are also identified.
Beyond the supersymmetric standard model
Hall, L.J.
1988-02-01
The possibility of baryon number violation at the weak scale and an alternative primordial nucleosynthesis scheme arising from the decay of gravitinos are discussed. The minimal low energy supergravity model is defined and a few of its features are described. Renormalization group scaling and flavor physics are mentioned.
Wisconsin's Model Academic Standards for Visual Arts.
ERIC Educational Resources Information Center
Nikolay, Pauli; Grady, Susan; Stefonek, Thomas
To assist parents and educators in preparing students for the 21st century, Wisconsin citizens have become involved in the development of challenging academic standards in 12 curricular areas. Having clear standards for students and teachers makes it possible to develop rigorous local curricula and valid, reliable assessments. This model of…
Less minimal supersymmetric standard model
de Gouvea, Andre; Friedland, Alexander; Murayama, Hitoshi
1998-03-28
Most of the phenomenological studies of supersymmetry have been carried out using the so-called minimal supergravity scenario, where one assumes a universal scalar mass, gaugino mass, and trilinear coupling at M{sub GUT}. Even though this is a useful simplifying assumption for phenomenological analyses, it is rather too restrictive to accommodate a large variety of phenomenological possibilities. It predicts, among other things, that the lightest supersymmetric particle (LSP) is an almost pure B-ino, and that the {mu}-parameter is larger than the masses of the SU(2){sub L} and U(1){sub Y} gauginos. We extend the minimal supergravity framework by introducing one extra parameter: the Fayet-Iliopoulos D-term for the hypercharge U(1), D{sub Y}. Allowing for this extra parameter, we find a much more diverse phenomenology, where the LSP is {tilde {nu}}{sub {tau}}, {tilde {tau}} or a neutralino with a large higgsino content. We discuss the relevance of the different possibilities to collider signatures. The same type of extension can be done to models with the gauge mediation of supersymmetry breaking. We argue that it is not wise to impose cosmological constraints on the parameter space.
An alternative to the standard model
Baek, Seungwon; Ko, Pyungwon; Park, Wan-Il
2014-06-24
We present an extension of the standard model to a dark sector with an unbroken local dark U(1){sub X} symmetry. Including various singlet portal interactions provided by the standard model Higgs, right-handed neutrinos and kinetic mixing, we show that the model can address most of the phenomenological issues (inflation, neutrino mass and mixing, baryon number asymmetry, dark matter, direct/indirect dark matter searches, some small-scale puzzles of standard collisionless cold dark matter, vacuum stability of the standard model Higgs potential, dark radiation) and can be regarded as an alternative to the standard model. The Higgs signal strength is equal to one as in the standard model for the unbroken U(1){sub X} case with scalar dark matter, but it could be less than one, independent of decay channels, if the dark matter is a dark sector fermion or if U(1){sub X} is spontaneously broken, because of a mixing with a new neutral scalar boson in the models.
The Higgs boson in the Standard Model
NASA Astrophysics Data System (ADS)
Djouadi, Abdelhak; Grazzini, Massimiliano
2016-10-01
The major goal of the Large Hadron Collider is to probe the electroweak symmetry breaking mechanism and the generation of the elementary particle masses. In the Standard Model this mechanism leads to the existence of a scalar Higgs boson with unique properties. We review the physics of the Standard Model Higgs boson, discuss its main search channels at hadron colliders and the corresponding theoretical predictions. We also summarize the strategies to study its basic properties.
Exploring the Standard Model of Particles
ERIC Educational Resources Information Center
Johansson, K. E.; Watkins, P. M.
2013-01-01
With the recent discovery of a new particle at the CERN Large Hadron Collider (LHC), the Higgs boson could be about to be discovered. This paper provides a brief summary of the standard model of particle physics and the importance of the Higgs boson and field in that model for non-specialists. The role of Feynman diagrams in making predictions for…
Exploring the Standard Model at the LHC
NASA Astrophysics Data System (ADS)
Vachon, Brigitte
2016-08-01
The ATLAS and CMS collaborations have performed studies of a wide range of Standard Model processes using data collected at the Large Hadron Collider at center-of-mass energies of 7, 8 and 13 TeV. These measurements are used to explore the Standard Model in a new kinematic regime, perform precision tests of the model, determine some of its fundamental parameters, constrain the proton parton distribution functions, and study new rare processes observed for the first time. Examples of recent Standard Model measurements performed by the ATLAS and CMS collaborations are summarized in this report. The measurements presented span a wide range of event final states including jets, photons, W/Z bosons, top quarks, and Higgs bosons.
Neutrino in standard model and beyond
NASA Astrophysics Data System (ADS)
Bilenky, S. M.
2015-07-01
After the discovery of the Higgs boson at CERN, the Standard Model acquired the status of the theory of the elementary particles in the electroweak range (up to about 300 GeV). What general conclusions can be inferred from the Standard Model? It appears that the Standard Model teaches us that, in the framework of such general principles as local gauge symmetry, unification of weak and electromagnetic interactions, and Brout-Englert-Higgs spontaneous breaking of the electroweak symmetry, nature chooses the simplest possibilities. Two-component left-handed massless neutrino fields play a crucial role in the determination of the charged-current structure of the Standard Model. The absence of right-handed neutrino fields in the Standard Model is the simplest, most economical possibility. In such a scenario the Majorana mass term is the only possibility for neutrinos to be massive and mixed. Such a mass term is generated by the lepton-number-violating Weinberg effective Lagrangian. In this approach, the three Majorana neutrino masses are suppressed with respect to the masses of the other fundamental fermions by the ratio of the electroweak scale to the scale of lepton-number-violating physics. The discovery of neutrinoless double β-decay and the absence of transitions of flavor neutrinos into sterile states would be evidence in favor of the minimal scenario we advocate here.
ERIC Educational Resources Information Center
Rostad, John
1997-01-01
Describes the production of news broadcasts on video by a high school class in Le Center, Minnesota. Topics include software for Apple computers, equipment used, student responsibilities, class curriculum, group work, communication among the production crew, administrative and staff support, and future improvements. (LRW)
Toward a midisuperspace quantization of Lemaître-Tolman-Bondi collapse models
Vaz, Cenalo; Witten, Louis; Singh, T. P.
2001-05-15
Lemaître-Tolman-Bondi models of spherical dust collapse have been used and continue to be used extensively to study various stellar collapse scenarios. It is by now well known that these models lead to the formation of black holes and naked singularities from regular initial data. The final outcome of the collapse, particularly in the event of naked singularity formation, depends very heavily on quantum effects during the final stages. These quantum effects cannot generally be treated semiclassically as quantum fluctuations of the gravitational field are expected to dominate before the final state is reached. We present a canonical reduction of Lemaître-Tolman-Bondi space-times describing the marginally bound collapse of inhomogeneous dust, in which the physical radius R, the proper time of the collapsing dust {tau}, and the mass function F are the canonical coordinates R(r), {tau}(r) and F(r) on the phase space. Dirac's constraint quantization leads to a simple functional (Wheeler-DeWitt) equation. The equation is solved and the solution can be employed to study some of the effects of quantum gravity during gravitational collapse with different initial conditions.
Models of the Primordial Standard Clock
NASA Astrophysics Data System (ADS)
Chen, Xingang; Namjoo, Mohammad Hossein; Wang, Yi
2015-02-01
Oscillating massive fields in the primordial universe can be used as Standard Clocks. The ticks of these oscillations induce features in the density perturbations, which directly record the time evolution of the scale factor of the primordial universe and thus, if detected, provide direct evidence for the inflation scenario or its alternatives. In this paper, we construct a full inflationary model of the primordial Standard Clock and study its predictions for the density perturbations. This model provides a full realization of several key features proposed previously. We compare the theoretical predictions from inflation and alternative scenarios with the Planck 2013 temperature data on the Cosmic Microwave Background (CMB), and identify a statistically marginal but interesting candidate. We discuss how future CMB temperature and polarization data, non-Gaussianity analysis and Large Scale Structure data may be used to further test or constrain the Standard Clock signals.
Inclusive Standard Model Higgs searches with ATLAS
Polci, Francesco
2008-11-23
The update of the discovery potential for a Standard Model Higgs boson through the inclusive searches H{yields}{gamma}{gamma}, H{yields}ZZ* and H{yields}WW with the ATLAS detector is reported. The analyses are based on the most recent available simulations of the signal and backgrounds, as well as of the detector response.
Preon Prophecies by the Standard Model
NASA Astrophysics Data System (ADS)
Fredriksson, Sverker
The Standard Model of quarks and leptons is, at first sight, nothing but a set of ad hoc rules, with no connections, and no clues to their true background. At a closer look, however, there are many inherent prophecies that point in the same direction: compositeness in terms of three stable preons.
Inflation in the standard cosmological model
NASA Astrophysics Data System (ADS)
Uzan, Jean-Philippe
2015-12-01
The inflationary paradigm is now part of the standard cosmological model as a description of its primordial phase. While its original motivation was to solve the standard problems of the hot big bang model, it was soon understood that it offers a natural theory for the origin of the large-scale structure of the universe. Most models rely on a slow-rolling scalar field and enjoy very generic predictions. Besides, all the matter of the universe is produced by the decay of the inflaton field at the end of inflation during a phase of reheating. These predictions can be (and are) tested through their imprint on the large-scale structure and in particular the cosmic microwave background. Inflation stands as a window in physics where both general relativity and quantum field theory are at work and which can be observationally studied. It connects cosmology with high-energy physics. Today most models are constructed within extensions of the standard model, such as supersymmetry or string theory. Inflation also disrupts our vision of the universe, in particular with the ideas of chaotic inflation and eternal inflation that tend to promote the image of a very inhomogeneous universe with fractal structure on a large scale. This idea is also at the heart of further speculations, such as the multiverse. This introduction summarizes the connections between inflation and the hot big bang model and details the basics of its dynamics and predictions.
Standard Model Higgs Searches at the Tevatron
Knoepfel, Kyle J.
2012-06-01
We present results from the search for a standard model Higgs boson using up to 10 fb{sup -1} of proton-antiproton collision data produced by the Fermilab Tevatron at a center-of-mass energy of 1.96 TeV. The data were recorded by the CDF and D0 detectors between March 2001 and September 2011. A broad excess is observed in the range 105 < m{sub H} < 145 GeV/c{sup 2} with a global significance of 2.2 standard deviations relative to the background-only hypothesis.
SLq(2) extension of the standard model
NASA Astrophysics Data System (ADS)
Finkelstein, Robert J.
2014-06-01
We examine a quantum group extension of the standard model. The field operators of the extended theory are obtained by replacing the field operators Ψ of the standard model by Ψ D^j_{mm′}, where the D^j_{mm′} are elements of a representation of the quantum algebra SLq(2), which is also the knot algebra. The D^j_{mm′} lie in this algebra and carry the new degrees of freedom of the field quanta. The D^j_{mm′} are restricted jointly by empirical constraints and by a postulated correspondence with classical knots. The elementary fermions are described by elements of the trefoil (j = 3/2) representation and the weak vector bosons by elements of the ditrefoil (j = 3) representation. The adjoint (j = 1) and fundamental (j = 1/2) representations define hypothetical bosonic and fermionic preons. All particles described by higher representations may be regarded as composed of the fermionic preons. This preon model unexpectedly agrees in important detail with the Harari-Shupe model. The new Lagrangian, which is invariant under gauge transformations of the SLq(2) algebra, fixes the relative masses of the elementary fermions within the same family. It also introduces form factors that modify the electroweak couplings and provide a parametrization of the Cabibbo-Kobayashi-Maskawa matrix. It is additionally postulated that the preons carry gluon charge and that the fermions, which are three-preon systems, are in agreement with the color assignments of the standard model.
Standard model bosons as composite particles
Kahana, D.E. (Continuous Electron Beam Accelerator Facility); Kahana, S.H.
1990-01-01
The Standard model of electroweak interactions is derived from a Nambu-Jona-Lasinio-type four-fermion interaction, which is assumed to result from a more basic theory valid above a very high scale {Lambda}. The masses of the gauge bosons and the Higgs are then produced by dynamical symmetry breaking of the Nambu model at an intermediate scale {mu}, and are evolved back to experimental energies via the renormalisation group equations of the Standard model. The weak angle sin{sup 2}({theta}{sub W}) is predicted to be 3/8 at the scale {mu}, as in grand unified theories, and is evolved back to the experimental value at scale M{sub W}, thus determining {mu} {approximately} 10{sup 13} GeV. Predictions for the ratios of the masses of the gauge and Higgs bosons to the top quark mass, at experimental energies, are also obtained.
Esen, Alparslan; Isik, Kubilay; Saglam, Haci; Ozdemir, Yusuf Bugra; Dolanmaz, Dogan
2016-09-01
We compared the stability of three different titanium plate-and-screw fixation systems after Le Fort I osteotomy in polyurethane models of unilateral clefts. Thirty-six models were divided into three groups. In the first group, we adapted standard plates 1 mm thick with 2.0 mm screws and placed them bilaterally on the zygomatic buttress and the piriform rim. In the second group, we did the same and added plates 0.6 mm thick with 1.6 mm screws between the standard 2.0 mm miniplates on both sides. In the last group, we placed plates 1.4 mm thick with 2.0 mm screws bilaterally on the maxillary zygomatic buttress and piriform rim. Each group was tested in the inferosuperior (IS) and anteroposterior (AP) directions with a servo-hydraulic testing unit. In the IS direction, displacement values were not significantly different up to 80 N, but between 80 and 210 N those of the 2×1.4 mm group were better. In the AP direction, displacement values were not significantly different up to 40 N, but between 40 and 180 N they were better in the group with added 1.6×0.6 mm plates and in the 2×1.4 mm group. When normal biting forces (90-260 N) in the postoperative period are considered, the greatest resistance to occlusal loads was seen in the 2×1.4 mm group. In the others, the biomechanical properties were better in the AP direction. PMID:27182011
Imperfect mirror copies of the standard model
NASA Astrophysics Data System (ADS)
Berryman, Jeffrey M.; de Gouvêa, André; Hernández, Daniel; Kelly, Kevin J.
2016-08-01
Inspired by the standard model of particle physics, we discuss a mechanism for constructing chiral, anomaly-free gauge theories. The gauge symmetries and particle content of such theories are identified using subgroups and complex representations of simple anomaly-free Lie groups, such as SO(10) or E6. We explore, using mostly SO(10) and the 16 representation, several of these "imperfect copies" of the standard model, including U(1)^N theories, SU(5)⊗U(1) theories, SU(4)⊗U(1)^2 theories with 4-plets and 6-plets, and chiral SU(3)⊗SU(2)⊗U(1). A few general properties of such theories are discussed, as is how they might shed light on nonzero neutrino masses, the dark matter puzzle, and other phenomenologically relevant questions.
The Standard Model of Nuclear Physics
NASA Astrophysics Data System (ADS)
Detmold, William
2015-04-01
At its core, nuclear physics, which describes the properties and interactions of hadrons, such as protons and neutrons, and atomic nuclei, arises from the Standard Model of particle physics. However, the complexities of nuclei result in severe computational difficulties that have historically prevented the calculation of central quantities in nuclear physics directly from this underlying theory. The availability of petascale (and prospect of exascale) high performance computing is changing this situation by enabling us to extend the numerical techniques of lattice Quantum Chromodynamics (LQCD), applied successfully in particle physics, to the more intricate dynamics of nuclear physics. In this talk, I will discuss this revolution and the emerging understanding of hadrons and nuclei within the Standard Model.
Beyond the standard model in many directions
Chris Quigg
2004-04-28
These four lectures constitute a gentle introduction to what may lie beyond the standard model of quarks and leptons interacting through SU(3){sub c} {direct_product} SU(2){sub L} {direct_product} U(1){sub Y} gauge bosons, prepared for an audience of graduate students in experimental particle physics. In the first lecture, I introduce a novel graphical representation of the particles and interactions, the double simplex, to elicit questions that motivate our interest in physics beyond the standard model, without recourse to equations and formalism. Lecture 2 is devoted to a short review of the current status of the standard model, especially the electroweak theory, which serves as the point of departure for our explorations. The third lecture is concerned with unified theories of the strong, weak, and electromagnetic interactions. In the fourth lecture, I survey some attempts to extend and complete the electroweak theory, emphasizing some of the promise and challenges of supersymmetry. A short concluding section looks forward.
Indoorgml - a Standard for Indoor Spatial Modeling
NASA Astrophysics Data System (ADS)
Li, Ki-Joune
2016-06-01
With recent progress in mobile devices and indoor positioning technologies, it has become possible to provide location-based services in indoor space as well as outdoor space, either seamlessly across both or independently for indoor space alone. However, we cannot simply apply spatial models developed for outdoor space to indoor space because of their differences. For example, coordinate reference systems are employed to indicate a specific position in outdoor space, while a location in indoor space is instead specified by a cell identifier such as a room number. Unlike outdoor space, the distance between two points in indoor space is determined not by the length of the straight line between them but by the constraints imposed by indoor components such as walls, stairs, and doors. For this reason, we need to establish a new framework for indoor space, from its fundamental theoretical basis to indoor spatial data models and information systems to store, manage, and analyse indoor spatial data. To provide this framework, an international standard called IndoorGML has been developed and published by OGC (Open Geospatial Consortium). This standard is based on a cellular notion of space, which considers an indoor space as a set of non-overlapping cells. It consists of two types of modules: a core module and extension modules. While the core module consists of four basic conceptual and implementation modeling components (a geometric model for cells, topology between cells, a semantic model of cells, and a multi-layered space model), extension modules may be defined on top of the core module to support a particular application area. As the first version of the standard, we provide an extension for indoor navigation.
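The cellular model described above lends itself to a small sketch: if an indoor space is a set of non-overlapping cells plus topology between them, indoor navigation reduces to graph search over cell adjacency rather than straight-line distance. The cell names and connectivity below are invented for illustration and are not part of the actual IndoorGML schema:

```python
from collections import deque

# Hypothetical cells (rooms) and their connections through doors --
# the "topology between cells" component of the core module.
doors = {
    "R101": ["corridor"],
    "R102": ["corridor"],
    "corridor": ["R101", "R102", "stairs"],
    "stairs": ["corridor", "exit"],
    "exit": ["stairs"],
}

def indoor_route(start, goal):
    """BFS over the cell adjacency graph: indoor 'distance' here counts
    cell transitions, reflecting the constraints of walls and doors."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in doors.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(indoor_route("R101", "exit"))  # ['R101', 'corridor', 'stairs', 'exit']
```

A real IndoorGML dataset would attach geometry and semantics to each cell; the navigation extension builds routing on exactly this kind of cell-connectivity graph.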
Beyond standard model calculations with Sherpa
Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; Siegert, Frank
2015-03-24
We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.
Experimentally testing the standard cosmological model
Schramm, D.N. Fermi National Accelerator Lab., Batavia, IL )
1990-11-01
The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high-precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since it relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, {Omega}{sub b}, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that {Omega}{sub b} {approximately} 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming {Omega}{sub total} = 1) and the need for dark baryonic matter, since {Omega}{sub visible} < {Omega}{sub b}. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M{sub x} {approx gt} 20 GeV and an interaction weaker than the Z{sup 0} coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for {nu}-masses may imply that the {nu}{sub {tau}} is a good hot dark matter candidate. 73 refs., 5 figs.
42 CFR 403.210 - NAIC model standards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 2 2012-10-01 2012-10-01 false NAIC model standards. 403.210 Section 403.210... model standards. (a) NAIC model standards means the National Association of Insurance Commissioners (NAIC) “Model Regulation to Implement the Individual Accident and Insurance Minimum Standards Act”...
42 CFR 403.210 - NAIC model standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 2 2013-10-01 2013-10-01 false NAIC model standards. 403.210 Section 403.210... model standards. (a) NAIC model standards means the National Association of Insurance Commissioners (NAIC) “Model Regulation to Implement the Individual Accident and Insurance Minimum Standards Act”...
42 CFR 403.210 - NAIC model standards.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false NAIC model standards. 403.210 Section 403.210... model standards. (a) NAIC model standards means the National Association of Insurance Commissioners (NAIC) “Model Regulation to Implement the Individual Accident and Insurance Minimum Standards Act”...
42 CFR 403.210 - NAIC model standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 2 2014-10-01 2014-10-01 false NAIC model standards. 403.210 Section 403.210... model standards. (a) NAIC model standards means the National Association of Insurance Commissioners (NAIC) “Model Regulation to Implement the Individual Accident and Insurance Minimum Standards Act”...
42 CFR 403.210 - NAIC model standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
... model standards. (a) NAIC model standards means the National Association of Insurance Commissioners (NAIC) “Model Regulation to Implement the Individual Accident and Insurance Minimum Standards Act” (as... of the NAIC model standards can be purchased from the National Association of Insurance...
Statistical model with a standard Γ distribution
NASA Astrophysics Data System (ADS)
Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo
2004-07-01
We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law depending on a parameter λ. We focus on the equilibrium or stationary distribution of the entity exchanged and verify, through numerical fitting of the simulation data, that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ), where particles exchange energy in a space with an effective dimension D(λ).
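The exchange rule behind this model class can be sketched in a few lines. The update below (each agent keeps a saved fraction λ and trades the rest with a random partner) is an assumption based on the abstract's description of saving-propensity exchange models, not the authors' exact code; the function name and parameters are illustrative.

```python
import random

def simulate(N=500, steps=200_000, lam=0.5, seed=1):
    """Kinetic money-exchange sketch with saving propensity lam.
    Each trade pools the non-saved fraction of two agents' money and
    splits it at random; total money is conserved by construction."""
    rng = random.Random(seed)
    m = [1.0] * N  # every agent starts with one unit
    for _ in range(steps):
        i, j = rng.randrange(N), rng.randrange(N)
        if i == j:
            continue
        eps = rng.random()
        pool = (1.0 - lam) * (m[i] + m[j])  # the part put up for trade
        m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1.0 - eps) * pool
    return m

money = simulate()
# The histogram of `money` relaxes toward a Gamma-like shape whose
# width narrows as lam grows; the total stays at N units throughout.
```

For λ = 0 this reduces to pure random sharing (exponential equilibrium); a nonzero λ is what produces the Gamma-distributed stationary state described in the abstract.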
The Hypergeometrical Universe: Cosmology and Standard Model
NASA Astrophysics Data System (ADS)
Pereira, Marco A.
2010-12-01
This paper presents a simple and purely geometrical Grand Unification Theory. Quantum Gravity, Electrostatic and Magnetic interactions are shown in a unified framework. Newton's, Gauss' and Biot-Savart's Laws are derived from first principles. Unification symmetry is defined for all the existing forces. This alternative model does not require Strong and Electroweak forces. A 4D Shock-Wave Hyperspherical topology is proposed for the Universe which, together with a Quantum Lagrangian Principle and a Dilator-based model for matter, results in a quantized stepwise expansion for the whole Universe along a radial direction within a 4D spatial manifold. The Hypergeometrical Standard Model for matter, Universe Topology and a new Law of Gravitation are presented.
The Hypergeometrical Universe: Cosmology and Standard Model
Pereira, Marco A.
2010-12-22
This paper presents a simple and purely geometrical Grand Unification Theory. Quantum Gravity, Electrostatic and Magnetic interactions are shown in a unified framework. Newton's, Gauss' and Biot-Savart's Laws are derived from first principles. Unification symmetry is defined for all the existing forces. This alternative model does not require Strong and Electroweak forces. A 4D Shock-Wave Hyperspherical topology is proposed for the Universe which, together with a Quantum Lagrangian Principle and a Dilator-based model for matter, results in a quantized stepwise expansion for the whole Universe along a radial direction within a 4D spatial manifold. The Hypergeometrical Standard Model for matter, Universe Topology and a new Law of Gravitation are presented.
Probing beyond the Standard Model with Muons
Hisano, Junji
2008-02-21
Muon properties are the most precisely studied among unstable particles. After the discovery of the muon in the 1940s, studies of muons contributed to the construction and establishment of the standard model of particle physics. Now, as we enter the LHC era, the precision frontier remains important in particle physics. In this article, we review the roles of muon physics in particle physics. The muon g-2, lepton flavor violation (LFV) in muon decay, and the electric dipole moment (EDM) of the muon are mainly discussed.
Beyond the standard model with the LHC
NASA Astrophysics Data System (ADS)
Ellis, John
2007-07-01
Whether or not the Large Hadron Collider reveals the long-awaited Higgs particle, it is likely to lead to discoveries that add to, or challenge, the standard model of particle physics. Data produced will be pored over for any evidence of supersymmetric partners for the existing denizens of the particle 'zoo' and for the curled-up extra dimensions demanded by string theory. There might also be clues as to why matter dominates over antimatter in the Universe, and as to the nature of the Universe's dark matter.
Phenomenology of the utilitarian supersymmetric standard model
NASA Astrophysics Data System (ADS)
Fraser, Sean; Kownacki, Corey; Ma, Ernest; Pollard, Nicholas; Popov, Oleg; Zakeri, Mohammadreza
2016-08-01
We study the 2010 specific version of the 2002 proposed U(1)_X extension of the supersymmetric standard model, which has no μ term and conserves baryon number and lepton number separately and automatically. We consider in detail the scalar sector as well as the extra Z_X gauge boson, and their interactions with the necessary extra color-triplet particles of this model, which behave as leptoquarks. We show how the diphoton excess at 750 GeV, recently observed at the LHC, may be explained within this context. We identify a new fermion dark-matter candidate and discuss its properties. An important byproduct of this study is the discovery of relaxed supersymmetric constraints on the Higgs boson's mass of 125 GeV.
Radiative effects in the standard model extension
NASA Astrophysics Data System (ADS)
Zhukovsky, V. Ch.; Lobanov, A. E.; Murchikova, E. M.
2006-03-01
The possibility of radiative effects induced by the Lorentz and CPT noninvariant interaction term for fermions in the standard model extension is investigated. In particular, electron-positron photoproduction and photon emission by electrons and positrons are studied. The rates of these processes are calculated in the Furry picture. It is demonstrated that the rates obtained in the framework of the model adopted strongly depend on the polarization states of the particles involved. As a result, ultrarelativistic particles produced should occupy states with a preferred spin orientation, i.e., photons have the sign of polarization opposite to the sign of the effective potential, while charged particles are preferably in the state with the helicity coinciding with the sign of the effective potential. This leads to evident spatial asymmetries which may have certain consequences observable at high energy accelerators, and in astrophysical and cosmological studies.
Sphaleron rate in the minimal standard model.
D'Onofrio, Michela; Rummukainen, Kari; Tranberg, Anders
2014-10-01
We use large-scale lattice simulations to compute the rate of baryon number violating processes (the sphaleron rate), the Higgs field expectation value, and the critical temperature in the standard model across the electroweak phase transition temperature. While there is no true phase transition between the high-temperature symmetric phase and the low-temperature broken phase, the crossover is sharp and located at temperature T_c = (159.5 ± 1.5) GeV. The sphaleron rate in the symmetric phase (T > T_c) is Γ/T⁴ = (18 ± 3)α_W⁵, and in the broken phase in the physically interesting temperature range 130 GeV < T < T_c it can be parametrized as log(Γ/T⁴) = (0.83 ± 0.01)T/GeV − (147.7 ± 1.9). The freeze-out temperature in the early Universe, where the Hubble rate wins over the baryon number violation rate, is T* = (131.7 ± 2.3) GeV. These values, beyond being intrinsic properties of the standard model, are relevant for, e.g., low-scale leptogenesis scenarios. PMID:25325629
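As a quick sanity check, the broken-phase parametrization quoted above can be evaluated directly. Central values only, the natural logarithm is assumed for "log", and the helper name is ours:

```python
import math

def sphaleron_rate_over_T4(T_GeV):
    """Central-value broken-phase fit quoted in the abstract:
    log(Gamma/T^4) = 0.83 * T/GeV - 147.7, for 130 GeV < T < T_c."""
    return math.exp(0.83 * T_GeV - 147.7)

# The rate drops steeply as the Universe cools through the crossover,
# which is why baryon number violation freezes out near T* ~ 132 GeV:
for T in (155.0, 145.0, 135.0):
    print(f"T = {T} GeV, Gamma/T^4 = {sphaleron_rate_over_T4(T):.3e}")
```

Each 10 GeV of cooling suppresses the rate by a factor of about e^8.3, illustrating how sharply the Hubble rate overtakes baryon number violation.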
Supersymmetric extensions of the standard model
NASA Astrophysics Data System (ADS)
Zwirner, Fabio
These four lectures are meant as an elementary introduction to the physics of realistic supersymmetric models. In the first lecture, after reviewing the motivations for low-energy supersymmetry and the recipe for the construction of supersymmetric lagrangians, we introduce the Minimal Supersymmetric extension of the Standard Model and comment on possible alternatives. In the second lecture, we discuss what can be learnt by looking at such a model as the low-energy limit of some unified theory, with emphasis on the implications of its renormalization group equations and on the possibility of a supersymmetric Grand Unification. The third lecture is devoted to the problem of supersymmetry breaking: we review some general features of the spontaneous breaking of global and local supersymmetry, and we compare the supergravity models with heavy and light gravitino. In the fourth lecture, we conclude with an overview of supersymmetric phenomenology: indirect effects of supersymmetric particles in electroweak precision tests and in flavour physics, as well as direct searches for the superpartners of ordinary particles.
Quantum gravity and the standard model
NASA Astrophysics Data System (ADS)
Bilson-Thompson, Sundance O.; Markopoulou, Fotini; Smolin, Lee
2007-08-01
We show that a class of background-independent models of quantum spacetime have local excitations that can be mapped to the first-generation fermions of the standard model of particle physics. These states propagate coherently as they can be shown to be noiseless subsystems of the microscopic quantum dynamics (Kribs and Markopoulou 2005 Preprint gr-qc/0510052, Markopoulou and Poulin unpublished). These are identified in terms of certain patterns of braiding of graphs, thus giving a quantum gravitational foundation for the topological preon model proposed by Bilson-Thompson (2005 Preprint hep-ph/0503213). These results apply to a large class of theories in which the Hilbert space has a basis of states given by ribbon graphs embedded in a three-dimensional manifold up to diffeomorphisms, and the dynamics is given by local moves on the graphs, such as arise in the representation theory of quantum groups. For such models, matter appears to be already included in the microscopic kinematics and dynamics.
Beyond the standard model of particle physics.
Virdee, T S
2016-08-28
The Large Hadron Collider (LHC) at CERN and its experiments were conceived to tackle open questions in particle physics. The mechanism of the generation of mass of fundamental particles has been elucidated with the discovery of the Higgs boson. It is clear that the standard model is not the final theory. The open questions still awaiting clues or answers, from the LHC and other experiments, include: What is the composition of dark matter and of dark energy? Why is there more matter than anti-matter? Are there more space dimensions than the familiar three? What is the path to the unification of all the fundamental forces? This talk will discuss the status of, and prospects for, the search for new particles, symmetries and forces in order to address the open questions. This article is part of the themed issue 'Unifying physics and technology in light of Maxwell's equations'. PMID:27458261
Outstanding questions: physics beyond the Standard Model.
Ellis, John
2012-02-28
The Standard Model of particle physics agrees very well with experiment, but many important questions remain unanswered, among them are the following. What is the origin of particle masses and are they due to a Higgs boson? How does one understand the number of species of matter particles and how do they mix? What is the origin of the difference between matter and antimatter, and is it related to the origin of the matter in the Universe? What is the nature of the astrophysical dark matter? How does one unify the fundamental interactions? How does one quantize gravity? In this article, I introduce these questions and discuss how they may be addressed by experiments at the Large Hadron Collider, with particular attention to the search for the Higgs boson and supersymmetry. PMID:22253238
Sequestering the standard model vacuum energy.
Kaloper, Nemanja; Padilla, Antonio
2014-03-01
We propose a very simple reformulation of general relativity, which completely sequesters from gravity all of the vacuum energy from a matter sector, including all loop corrections and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the Universe. For them to be nonzero, the Universe should be finite in spacetime. If this matter is the standard model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the Universe. The mechanism is consistent with the large hierarchy between the Planck scale, electroweak scale, and curvature scale, and early Universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero, but very small, that w_DE ≃ −1 is a transient, and that the Universe will collapse in the future. PMID:24655240
Beyond the standard model of particle physics.
Virdee, T S
2016-08-28
The Large Hadron Collider (LHC) at CERN and its experiments were conceived to tackle open questions in particle physics. The mechanism of the generation of mass of fundamental particles has been elucidated with the discovery of the Higgs boson. It is clear that the standard model is not the final theory. The open questions still awaiting clues or answers, from the LHC and other experiments, include: What is the composition of dark matter and of dark energy? Why is there more matter than anti-matter? Are there more space dimensions than the familiar three? What is the path to the unification of all the fundamental forces? This talk will discuss the status of, and prospects for, the search for new particles, symmetries and forces in order to address the open questions. This article is part of the themed issue 'Unifying physics and technology in light of Maxwell's equations'.
The spherically symmetric Standard Model with gravity
NASA Astrophysics Data System (ADS)
Balasin, H.; Böhmer, C. G.; Grumiller, D.
2005-08-01
Spherical reduction of generic four-dimensional theories is revisited. Three different notions of "spherical symmetry" are defined. The following sectors are investigated: Einstein-Cartan theory, spinors, (non-)abelian gauge fields and scalar fields. In each sector a different formalism seems to be most convenient: the Cartan formulation of gravity works best in the purely gravitational sector, the Einstein formulation is convenient for the Yang-Mills sector and for reducing scalar fields, and the Newman-Penrose formalism seems to be the most transparent one in the fermionic sector. Combining them the spherically reduced Standard Model of particle physics together with the usually omitted gravity part can be presented as a two-dimensional (dilaton gravity) theory.
Radiative Effects in the Standard Model Extension
NASA Astrophysics Data System (ADS)
Zhukovsky, V. Ch.; Lobanov, A. E.; Murchikova, E. M.
2006-10-01
The possibility of radiative effects that are due to interaction of fermions with the constant axial-vector background in the standard model extension is investigated. Electron-positron photoproduction and photon emission by electrons and positrons were studied. The rates of these processes were calculated in the Furry picture. It was demonstrated that the rates obtained strongly depend on the polarization states of the particles involved. As a consequence, ultra-relativistic particles should occupy states with a preferred spin orientation, i.e., photons have the sign of polarization opposite to the sign of the effective potential, while charged particles are preferably in the state with the helicity coinciding with the sign of the effective potential. This leads to evident spatial asymmetries.
The Standard Solar Model and beyond
NASA Astrophysics Data System (ADS)
Turck-Chièze, S.
2016-01-01
The Standard Solar Model (SSM) is an important reference in astrophysics, as the Sun remains the most observed star. The model is used to predict internal observables like neutrino fluxes and oscillation frequencies, and consequently to validate its assumptions for generalization to other stars. The model outputs result from the solution of the classical stellar equations and from knowledge of fundamental physics such as nuclear reaction rates, screening, photon interaction, and plasma physics. The plasma conditions long remained unmeasurable in the laboratory due to the high temperature and high density of the solar interior. Today, neutrino detections and helioseismology aboard SoHO have largely revealed the solar interior, in particular the nuclear solar core, so one can estimate the reliability of the SSM and its coherence with the different indicators and between them. This has been possible thanks to a Seismic Solar Model (SeSM), which in addition takes into account the observed sound-speed profile. Seismology also quantifies some internal dynamical processes that need to be properly introduced in the description of stars. This review describes the different steps in building the SSM, its predictions, and the comparisons with observations. It discusses the accuracy of this model compared to the accuracy of the SeSM. The noted differences and observational constraints put limits on other possible processes, like dark matter, magnetic fields or waves, and determine the directions of progress for the near future, which will come from precisely measured emitted neutrino fluxes. High-density laser facilities also promise unprecedented checks of energy transfer by photons and of nuclear reaction rates.
Geometrical Standard Model Enhancements to the Standard Model of Particle Physics
NASA Astrophysics Data System (ADS)
Strickland, Ken; Duvernois, Michael
2011-10-01
The Standard Model (SM) is the triumph of our age. As experimentation at the LHC tracks particles for the Higgs phenomena, theoreticians and experimentalists struggle to close in on a cohesive theory. Both suffer greatly as expectation wavers between those who seek to move beyond the SM and those who cannot do without it. When it seems there are no more good ideas, enter Rate Change Graph Technology (RCGT). From the science of the rate change graph, a Geometrical Standard Model (GSM) is available for comprehensive modeling, giving rich new sources of data and pathways to the ultimate answers we strive to achieve. As a new addition to science, the GSM is a tool that provides a structured discovery and analysis environment. By eliminating value and size, RCGT operates with the rules of RCGT mechanics, creating solutions derived from geometry. The GSM rate change graph could be the ultimate validation of the Standard Model yet. In its own right, the GSM is created from geometrical intersections and comes with RCGT mechanics, yet parallels the SM to offer critical enhancements. The Higgs Objects, along with a host of new objects, are introduced to the SM, and their positions are revealed in this proposed modification to the SM.
Experimental tests of the standard model.
Nodulman, L.
1998-11-11
The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is, flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes, including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the ''Universal Fermi Interaction'' description of beta decay processes or, in more modern parlance, weak charged-current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabibbo mixing, now generalized so that all charged-current decays of any flavor are covered.
D-oscillons in the standard model extension
NASA Astrophysics Data System (ADS)
Correa, R. A. C.; da Rocha, Roldão; de Souza Dutra, A.
2015-06-01
In this work we investigate the consequences of Lorentz symmetry violation (LV) on extremely long-lived, time-dependent, and spatially localized field configurations, named oscillons. This is accomplished for two interacting scalar field theories in (D+1) dimensions in the context of the so-called standard model extension. We show that D-dimensional scalar field lumps can present a typical size R_min ≪ R_KK, where R_KK is the extent of extra dimensions in Kaluza-Klein theories. The size R_min is shown to strongly depend upon the terms that control the LV of the theory. This implies either contraction or dilation of the average radius R_min, and likewise a new rule for its composition. Moreover, we show that the spatial dimensions for the existence of oscillating lumps have an upper limit, opening new possibilities to probe the existence of D-dimensional oscillons at the TeV energy scale. In addition, in a cosmological scenario with Lorentz symmetry breaking, we show that in the early Universe, with an extremely high energy density and a strong LV, the typical size R_min was highly dilated. As the Universe expanded and cooled down, it then passed through a phase transition toward Lorentz symmetry, wherein R_min tends to be compact.
Augmented standard model and the simplest scenario
NASA Astrophysics Data System (ADS)
Wu, Tai Tsun; Wu, Sau Lan
2015-11-01
The experimental discovery of the Higgs particle in 2012 by the ATLAS Collaboration and the CMS Collaboration at CERN ushers in a new era of particle physics. On the basis of these data, scalar quarks and scalar leptons are added to each generation of quarks and leptons. The resulting augmented standard model has fermion-boson symmetry for each of the three generations, but only one Higgs doublet giving masses to all the elementary particles. A specific special case, the simplest scenario, is studied in detail. In this case there are twenty-six quadratic divergences, and all these divergences are cancelled provided that one single relation between the masses is satisfied. This mass relation contains a great deal of information; in particular, it determines the masses of all the right-handed scalar quarks and scalar leptons, while giving relations for the masses of the left-handed ones. An alternative procedure is also given, with a different starting point and less reliance on the experimental data. The result is of course the same.
Standard Model thermodynamics across the electroweak crossover
Laine, M.; Meyer, M.
2015-07-22
Even though the Standard Model with a Higgs mass m_H = 125 GeV possesses no bulk phase transition, its thermodynamics still experiences a “soft point” at temperatures around T = 160 GeV, with a deviation from ideal gas thermodynamics. Such a deviation may have an effect on precision computations of weakly interacting dark matter relic abundances if their mass is in the few TeV range, or on leptogenesis scenarios operating in this temperature range. By making use of results from lattice simulations based on a dimensionally reduced effective field theory, we estimate the relevant thermodynamic functions across the crossover. The results are tabulated in a numerical form permitting their insertion as a background equation of state into cosmological particle production/decoupling codes. We find that Higgs dynamics induces a non-trivial “structure” visible e.g. in the heat capacity, but that in general the largest radiative corrections originate from QCD effects, reducing the energy density by a couple of percent from the free value even at T > 160 GeV.
Standard Model thermodynamics across the electroweak crossover
Laine, M.; Meyer, M. E-mail: meyer@itp.unibe.ch
2015-07-01
Even though the Standard Model with a Higgs mass m_H = 125 GeV possesses no bulk phase transition, its thermodynamics still experiences a 'soft point' at temperatures around T = 160 GeV, with a deviation from ideal gas thermodynamics. Such a deviation may have an effect on precision computations of weakly interacting dark matter relic abundances if their mass is in the few TeV range, or on leptogenesis scenarios operating in this temperature range. By making use of results from lattice simulations based on a dimensionally reduced effective field theory, we estimate the relevant thermodynamic functions across the crossover. The results are tabulated in a numerical form permitting their insertion as a background equation of state into cosmological particle production/decoupling codes. We find that Higgs dynamics induces a non-trivial 'structure' visible e.g. in the heat capacity, but that in general the largest radiative corrections originate from QCD effects, reducing the energy density by a couple of percent from the free value even at T > 160 GeV.
Geometrical basis for the Standard Model
Potter, F. )
1994-02-01
The robust character of the Standard Model is confirmed. Examination of its geometrical basis in three equivalent internal symmetry spaces - the unitary plane C², the quaternion space Q, and the real space R⁴ - as well as the real space R³ uncovers mathematical properties that predict the physical properties of leptons and quarks. The finite rotational subgroups of the gauge group SU(2)_L × U(1)_Y generate exactly three lepton families and four quark families and reveal how quarks and leptons are related. Among the physical properties explained are the mass ratios of the six leptons and eight quarks, the origin of the left-handed preference by the weak interaction, the geometrical source of color symmetry, and the zero neutrino masses. The (u,d) and (c,s) quark families team together to satisfy the triangle anomaly cancellation with the electron family, while the other families pair one-to-one for cancellation. The spontaneously broken symmetry is discrete and needs no Higgs mechanism. Predictions include all massless neutrinos, the top quark at 160 GeV/c², the b′ quark at 80 GeV/c², and the t′ quark at 2600 GeV/c².
Cosmological perturbations from the Standard Model Higgs
Simone, Andrea De; Riotto, Antonio E-mail: antonio.riotto@unige.ch
2013-02-01
We propose that the Standard Model (SM) Higgs is responsible for generating the cosmological perturbations of the universe by acting as an isocurvature mode during a de Sitter inflationary stage. In view of the recent ATLAS and CMS results for the Higgs mass, this can happen if the Hubble rate during inflation is in the range (10¹⁰-10¹⁴) GeV (depending on the SM parameters). Implications for the detection of primordial tensor perturbations through the B-mode of CMB polarization via the PLANCK satellite are discussed. For example, if the Higgs mass value is confirmed to be m_h = 125.5 GeV and m_t, α_s are at their central values, our mechanism predicts tensor perturbations too small to be detected in the near future. On the other hand, if tensor perturbations will be detected by PLANCK through the B-mode of CMB, then there is a definite relation between the Higgs and top masses, making the mechanism predictive and falsifiable.
Ellipsoidal geometry in asteroid thermal models - The standard radiometric model
NASA Technical Reports Server (NTRS)
Brown, R. H.
1985-01-01
The major consequences of ellipsoidal geometry in an otherwise standard radiometric model for asteroids are explored. It is shown that for small deviations from spherical shape, a spherical model of the same projected area gives a reasonable approximation to the thermal flux from an ellipsoidal body. It is suggested that large departures from spherical shape require that some correction be made for geometry. Systematic differences in the radii of asteroids derived radiometrically at 10 and 20 microns may result partly from nonspherical geometry. It is also suggested that extrapolations of the rotational variation of thermal flux from a nonspherical body based solely on the change in cross-sectional area are in error.
The General Linear Model and Direct Standardization: A Comparison.
ERIC Educational Resources Information Center
Little, Roderick J. A.; Pullum, Thomas W.
1979-01-01
Two methods of analyzing nonorthogonal (uneven cell sizes) cross-classified data sets are compared. The methods are direct standardization and the general linear model. The authors illustrate when direct standardization may be a desirable method of analysis. (JKS)
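Direct standardization itself is a one-line computation: stratum-specific rates from each group are averaged with the weights of a common standard population, so that differences in group composition drop out of the comparison. The function name and the two-stratum numbers below are hypothetical, for illustration only.

```python
def direct_standardized_rate(stratum_rates, standard_weights):
    """Directly standardized rate: weight each stratum-specific rate
    by a common (standard) population distribution summing to 1."""
    assert abs(sum(standard_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(stratum_rates, standard_weights))

# Hypothetical example: two groups with different age mixes are made
# comparable by weighting both with the same standard age distribution.
standard = [0.6, 0.4]        # standard population shares per stratum
group_a = [0.10, 0.30]       # stratum-specific rates, group A
group_b = [0.12, 0.28]       # stratum-specific rates, group B
print(round(direct_standardized_rate(group_a, standard), 4))  # → 0.18
print(round(direct_standardized_rate(group_b, standard), 4))  # → 0.184
```

The general linear model reaches adjusted comparisons by fitting effects instead; standardization fixes the weights up front, which is why the two approaches can disagree on nonorthogonal (uneven cell size) tables.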
Neutrinos: in and out of the standard model
Parke, Stephen; /Fermilab
2006-07-01
The particle physics Standard Model has been tremendously successful in predicting the outcome of a large number of experiments. In this model, neutrinos are massless. Yet recent evidence points to the fact that neutrinos are massive particles with tiny masses compared to the other particles in the Standard Model. These tiny masses allow the neutrinos to change flavor and oscillate. In this series of lectures, I will review the properties of neutrinos in the Standard Model and then discuss the physics of neutrinos beyond the Standard Model. Topics to be covered include neutrino flavor transformations and oscillations, Majorana versus Dirac neutrino masses, the seesaw mechanism, and leptogenesis.
The Standard Solar Model versus Experimental Observations
NASA Astrophysics Data System (ADS)
Manuel, O.
2000-12-01
The standard solar model (ssm) assumes that the Sun formed as a homogeneous body, that its interior consists mostly of hydrogen, and that its radiant energy comes from H-fusion in its core. Two sets of measurements indicate the ssm is wrong: 1. Analyses of material in the planetary system show that (a) Fe, O, Ni, Si, Mg, S and Ca have high nuclear stability and comprise 98+% of ordinary meteorites that formed at the birth of the solar system; (b) the cores of the inner planets formed in a central region consisting mostly of heavy elements like Fe, Ni and S; (c) the outer planets formed mostly from elements like H, He and C; and (d) isotopic heterogeneities accompanied these chemical gradients in debris of the supernova that exploded here 5 billion years ago to produce the solar system (see Origin of the Elements at http://www.umr.edu/õm/). 2. Analyses of material coming from the Sun show that (a) there are not enough neutrinos for H-fusion to be its main source of energy; (b) light-weight isotopes (mass = L) of He, Ne, Ar, Kr and Xe in the solar wind are enriched relative to heavy isotopes (mass = H) by a factor f, where log f = 4.56 log(H/L) (Eq. 1); (c) solar flares by-pass 3.4 of these 9 stages of diffusion and deplete the light-weight isotopes of He, Ne, Mg and Ar by a factor f*, where log f* = -1.7 log(H/L) (Eq. 2); (d) proton capture on N-14 increased N-15 in the solar wind over geologic time; and (e) solar flares dredge up nitrogen with less N-15 from this H-fusion reaction. Each observation above is unexplained by the ssm. After correcting photospheric abundances for diffusion [Observation 2(b)], the most abundant elements in the bulk Sun are Fe, Ni, O, Si, S, Mg and Ca, the same elements that comprise ordinary meteorites [Observation 1(a)]. The probability that Eq. (1) would randomly select these elements from the photosphere, i.e., the likelihood of a meaningless agreement between observations 2(b) and 1(a), is < 2.0E-33. Thus, the ssm does not describe the
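Taking the abstract's empirical fractionation law at face value, Eq. (1) is straightforward to evaluate for any isotope pair: log f = 4.56 log(H/L) is just the power law f = (H/L)^4.56, independent of the logarithm base. The helper below is illustrative only and does not endorse the abstract's interpretation.

```python
def enrichment_factor(heavy_mass, light_mass, slope=4.56):
    """Eq. (1) of the abstract, log f = 4.56 log(H/L), rewritten as the
    equivalent power law f = (H/L)**4.56 in the isotope masses."""
    return (heavy_mass / light_mass) ** slope

# He-4 relative to He-3 in the solar wind, per the claimed law:
print(round(enrichment_factor(4, 3), 2))  # → 3.71
```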
ERIC Educational Resources Information Center
Lee, Jaekyung; Liu, Xiaoyan; Amo, Laura Casey; Wang, Weichun Leilani
2014-01-01
Drawing on national and state assessment datasets in reading and math, this study tested "external" versus "internal" standards-based education models. The goal was to understand whether and how student performance standards work in multilayered school systems under No Child Left Behind Act of 2001 (NCLB). Under the…
Primordial lithium and the standard model(s)
NASA Technical Reports Server (NTRS)
Deliyannis, Constantine P.; Demarque, Pierre; Kawaler, Steven D.; Romanelli, Paul; Krauss, Lawrence M.
1989-01-01
The results of new theoretical work on surface Li-7 and Li-6 evolution in the oldest halo stars are presented, along with a new and refined analysis of the predicted primordial Li abundance resulting from big-bang nucleosynthesis. This makes it possible to determine the constraints which can be imposed on cosmology using primordial Li and both standard big-bang and stellar-evolution models. This leads to limits on the baryon density today of 0.0044-0.025 (where the Hubble constant is 100h km/s/Mpc) and imposes limitations on alternative nucleosynthesis scenarios.
Mathematics Teacher TPACK Standards and Development Model
ERIC Educational Resources Information Center
Niess, Margaret L.; Ronau, Robert N.; Shafer, Kathryn G.; Driskell, Shannon O.; Harper, Suzanne R.; Johnston, Christopher; Browning, Christine; Ozgun-Koca, S. Asli; Kersaint, Gladis
2009-01-01
What knowledge is needed to teach mathematics with digital technologies? The overarching construct, called technology, pedagogy, and content knowledge (TPACK), has been proposed as the interconnection and intersection of technology, pedagogy, and content knowledge. Mathematics Teacher TPACK Standards offer guidelines for thinking about this…
Template and Model Driven Development of Standardized Electronic Health Records.
Kropf, Stefan; Chalopin, Claire; Denecke, Kerstin
2015-01-01
Digital patient modeling targets the integration of distributed patient data into one overarching model. For this integration process, both a theoretical standard-based model and information structures combined with concrete instructions, in the form of a lightweight development process for single standardized Electronic Health Records (EHRs), are needed. In this paper, we introduce such a process alongside a standard-based architecture. It allows the modeling and implementation of EHRs in a lightweight Electronic Health Record System (EHRS) core. The approach is demonstrated and tested by a prototype implementation. The results show that the suggested approach is useful and facilitates the development of standardized EHRSs. PMID:26262004
Particle Physics Primer: Explaining the Standard Model of Matter.
ERIC Educational Resources Information Center
Vondracek, Mark
2002-01-01
Describes the Standard Model, a basic model of the universe that describes the electromagnetic force, the weak nuclear force responsible for radioactivity, and the strong nuclear force responsible for holding particles within the nucleus together. (YDS)
Creating Better School-Age Care Jobs: Model Work Standards.
ERIC Educational Resources Information Center
Haack, Peggy
Built on the premise that good school-age care jobs are the cornerstone of high-quality services for school-age youth and their families, this guide presents model work standards for school-age care providers. The guide begins with a description of the strengths and challenges of the school-age care profession. The model work standards are…
Energy standards and model codes development, adoption, implementation, and enforcement
Conover, D.R.
1994-08-01
This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
Big bang nucleosynthesis - The standard model and alternatives
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
The standard homogeneous-isotropic calculation of the big bang cosmological model is reviewed, and alternate models are discussed. The standard model is shown to agree with the light element abundances for He-4, H-2, He-3, and Li-7 that are available. Improved observational data from recent LEP collider and SLC results are discussed. The data agree with the standard model in terms of the number of neutrinos, and provide improved information regarding neutron lifetimes. Alternate models are reviewed which describe different scenarios for decaying matter or quark-hadron induced inhomogeneities. The baryonic density relative to the critical density in the alternate models is similar to that of the standard model when they are made to fit the abundances. This reinforces the conclusion that the baryonic density relative to critical density is about 0.06, and also reinforces the need for both nonbaryonic dark matter and dark baryonic matter.
Improved time-domain accuracy standards for model gravitational waveforms
Lindblom, Lee; Baker, John G.
2010-10-15
Model gravitational waveforms must be accurate enough to be useful for detection of signals and measurement of their parameters, so appropriate accuracy standards are needed. Yet these standards should not be unnecessarily restrictive, making them impractical for the numerical and analytical modelers to meet. The work of Lindblom, Owen, and Brown [Phys. Rev. D 78, 124020 (2008)] is extended by deriving new waveform accuracy standards which are significantly less restrictive while still ensuring the quality needed for gravitational-wave data analysis. These new standards are formulated as bounds on certain norms of the time-domain waveform errors, which makes it possible to enforce them in situations where frequency-domain errors may be difficult or impossible to estimate reliably. These standards are less restrictive by about a factor of 20 than the previously published time-domain standards for detection, and up to a factor of 60 for measurement. These new standards should therefore be much easier to use effectively.
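The standards described here bound norms of the time-domain difference between a model waveform and a reference. As a rough illustration (not the paper's exact weighted norms), a relative L2 error over a sampled waveform can be computed like this:

```python
import numpy as np

def relative_waveform_error(h_model, h_ref, dt):
    """Relative time-domain L2 norm ||h_model - h_ref|| / ||h_ref||,
    the kind of quantity on which time-domain accuracy standards
    place bounds (an illustrative, unweighted version)."""
    diff = h_model - h_ref
    num = np.sqrt(np.sum(diff ** 2) * dt)
    den = np.sqrt(np.sum(h_ref ** 2) * dt)
    return num / den

# Toy chirp-like reference waveform plus a small model error.
t = np.linspace(0.0, 1.0, 4096)
h_ref = np.sin(2 * np.pi * 30 * t ** 2)
h_model = h_ref + 1e-3 * np.cos(2 * np.pi * 40 * t)
err = relative_waveform_error(h_model, h_ref, t[1] - t[0])
```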
Issues in standard model symmetry breaking
Golden, M.
1988-04-01
This work discusses the symmetry breaking sector of the SU(2) x U(1) electroweak model. The first two chapters discuss Higgs masses in two simple Higgs models. The author proves low-energy theorems for the symmetry breaking sector: the threshold behavior of gauge-boson scattering is completely determined whenever the symmetry breaking sector meets certain simple conditions. The author uses these theorems to derive event rates for the Superconducting Super Collider (SSC). The author shows that the SSC may be able to determine whether the interactions of the symmetry breaking sector are strong or weak. 54 refs.
ERIC Educational Resources Information Center
Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka
2015-01-01
The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…
Standardization of A Physiologic Hypoparathyroidism Animal Model
Jung, Soo Yeon; Kim, Ha Yeong; Park, Hae Sang; Yin, Xiang Yun; Chung, Sung Min; Kim, Han Su
2016-01-01
Ideal hypoparathyroidism animal models are a prerequisite to developing new treatment modalities for this disorder. The purpose of this study was to evaluate the feasibility of a model whereby rats were parathyroidectomized (PTX) using a fluorescent-identification method and the ideal calcium content of the diet was determined. Thirty male rats were divided into surgical sham (SHAM, n = 5) and PTX plus 0, 0.5, and 2% calcium diet groups (PTX-FC (n = 5), PTX-NC (n = 10), and PTX-HC (n = 10), respectively). Serum parathyroid hormone levels decreased to non-detectable levels in all PTX groups. All animals in the PTX-FC group died within 4 days after the operation. All animals survived when supplied calcium in the diet. However, serum calcium levels were higher in the PTX-HC than the SHAM group. The PTX-NC group provided the most representative model of primary hypoparathyroidism. Serum calcium levels decreased and phosphorus levels increased, and bone volume was increased. All animals survived without further treatment and did not show nephrotoxicity, including calcium deposits. These findings demonstrate that PTX animal models produced using the fluorescent-identification method, and fed a 0.5% calcium diet, are appropriate for hypoparathyroidism treatment studies. PMID:27695051
Vertex displacements for acausal particles: testing the Lee-Wick standard model at the LHC
NASA Astrophysics Data System (ADS)
Álvarez, Ezequiel; Da Rold, Leandro; Schat, Carlos; Szynkman, Alejandro
2009-10-01
We propose to search for wrong displaced vertices, where decay products of the secondary vertex move towards the primary vertex instead of away from it, as a signature for microscopic violation of causality. We analyze in detail the leptonic sector of the recently proposed Lee-Wick Standard Model, which provides a well motivated framework to study acausal effects. We find that, assuming Minimal Flavor Violation, the Lee-Wick partners of the electron, l̃_e and ẽ, can produce measurable wrong vertices at the LHC, the most promising channel being q q̄ → l̃_e l̃_e → e+ e- jjjj. A Monte Carlo simulation using MadGraph/MadEvent suggests that for M ≲ 450 GeV the measurement of these acausal vertex displacements should be accessible in the LHC era.
Ji, Danfeng; Xi, Beidou; Su, Jing; Huo, Shouliang; He, Li; Liu, Hongliang; Yang, Queping
2013-09-01
Lake eutrophication (LE) has become an increasingly severe environmental problem recently. However, there has been no nutrient standard established for LE control in many developing countries such as China. This study proposes a structural equation model to assist in the establishment of a lake nutrient standard for drinking water sources in the Yunnan-Guizhou Plateau Ecoregion (Yungui Ecoregion), China. The modeling results indicate that the most predictive indicator for designated use-attainment is total phosphorus (TP) (total effect = -0.43), and chlorophyll a (Chl-a) is recommended as the second most important indicator (total effect = -0.41). The model is further used for estimating the probability of use-attainment associated with lake water as a drinking water source and various levels of candidate criteria (based on the reference conditions and the current environmental quality standards for surface water). It is found that these candidate criteria cannot satisfy the designated 100% use-attainment. To achieve the short-term target (85% attainment of the designated use), TP and Chl-a values ought to be less than 0.02 mg/L and 1.4 μg/L, respectively. When used as a long-term target (90% or greater attainment of the designated use), the TP and Chl-a values are suggested to be less than 0.018 mg/L and 1 μg/L, respectively.
Physics Beyond the Standard Model: Supersymmetry
Nojiri, M.M.; Plehn, T.; Polesello, G.; Alexander, John M.; Allanach, B.C.; Barr, Alan J.; Benakli, K.; Boudjema, F.; Freitas, A.; Gwenlan, C.; Jager, S.; /CERN /LPSC, Grenoble
2008-02-01
This collection of studies on new physics at the LHC constitutes the report of the supersymmetry working group at the Workshop 'Physics at TeV Colliders', Les Houches, France, 2007. They cover the wide spectrum of phenomenology in the LHC era, from alternative models and signatures to the extraction of relevant observables, the study of the MSSM parameter space and finally to the interplay of LHC observations with additional data expected on a similar time scale. The special feature of this collection is that while not each of the studies is explicitly performed together by theoretical and experimental LHC physicists, all of them were inspired by and discussed in this particular environment.
Comparison of cosmological models using standard rulers and candles
NASA Astrophysics Data System (ADS)
Li, Xiao-Lei; Cao, Shuo; Zheng, Xiao-Gang; Li, Song; Biesiada, Marek
2016-05-01
In this paper, we used standard rulers and standard candles (separately and jointly) to explore five popular dark energy models under the assumption of the spatial flatness of the Universe. As standard rulers, we used a data set comprised of 118 galactic scale strong lensing systems (individual standard rulers if properly calibrated for the mass density profile) combined with BAO diagnostics (statistical standard ruler). Type Ia supernovae served as standard candles. Unlike most previous statistical studies involving strong lensing systems, we relaxed the assumption of a singular isothermal sphere (SIS) in favor of its generalization: the power-law mass density profile. Therefore, along with cosmological model parameters, we fitted the power law index and its first derivative with respect to the redshift (thus allowing for mass density profile evolution). It turned out that the best fitted γ parameters are in agreement with each other, irrespective of the cosmological model considered. This demonstrates that galactic strong lensing systems may provide a complementary probe to test the properties of dark energy. The fits for cosmological model parameters which we obtained are in agreement with alternative studies performed by other researchers. Because standard rulers and standard candles have different parameter degeneracies, a combination of standard rulers and standard candles gives much more restrictive results for cosmological parameters. Finally, we attempted an analysis based on model selection using information theoretic criteria (AIC and BIC). Our results support the claim that the cosmological constant model is still best and there is no (at least statistical) reason to prefer any other more complex model.
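The information-theoretic criteria mentioned at the end (AIC and BIC) penalize a model's best-fit chi-square by its number of free parameters; lower values indicate the preferred model. A small sketch with hypothetical chi-square values (the numbers are illustrative, not the paper's fits):

```python
import math

def aic(chi2_min, k):
    """Akaike Information Criterion for a fit with k free parameters."""
    return chi2_min + 2 * k

def bic(chi2_min, k, n):
    """Bayesian Information Criterion; n is the number of data points."""
    return chi2_min + k * math.log(n)

# Hypothetical comparison: a one-parameter cosmological-constant model
# versus a two-parameter dark-energy model fitted to n = 118 systems.
n = 118
aic_lcdm = aic(chi2_min=120.0, k=1)   # 122.0
aic_wcdm = aic(chi2_min=119.5, k=2)   # 123.5
delta = aic_wcdm - aic_lcdm           # positive: the simpler model wins
```

BIC penalizes extra parameters more strongly than AIC once n exceeds about 8 points, which is why the two criteria can rank borderline models differently.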
Preparation of tool mark standards with jewelry modeling waxes.
Petraco, Nicholas; Petraco, Nicholas D K; Faber, Lisa; Pizzola, Peter A
2009-03-01
This paper presents how jewelry modeling waxes are used in the preparation of tool mark standards from exemplar tools. We have previously found that jewelry modeling waxes are ideal for preparing test tool marks from exemplar tools. In this study, simple methods and techniques are offered for the replication of accurate, highly detailed tool mark standards with jewelry modeling waxes. The techniques described here demonstrate the conditioning and proper use of jewelry modeling wax in the production of tool mark standards. The application of each test tool's working surface to a piece of the appropriate wax in a manner consistent with the tool's design is clearly illustrated. The resulting tool mark standards are exact, highly detailed, 1:1, negative impressions of the exemplar tool's working surface. These wax models have a long shelf life and are suitable for use in microscopic examination comparison of questioned and known tool marks. PMID:19187458
Standardized verification of fuel cycle modeling
Feng, B.; Dixon, B.; Sunny, E.; Cuadra, A.; Jacobson, J.; Brown, N. R.; Powers, J.; Worrall, A.; Passerini, S.; Gregg, R.
2016-04-05
A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis codes matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.
Standards, databases, and modeling tools in systems biology.
Kohl, Michael
2011-01-01
Modeling is a means for integrating the results from Genomics, Transcriptomics, Proteomics, and Metabolomics experiments and for gaining insights into the interaction of the constituents of biological systems. However, sharing such large amounts of frequently heterogeneous and distributed experimental data needs both standard data formats and public repositories. Standardization and a public storage system are also important for modeling due to the possibility of sharing models irrespective of the used software tools. Furthermore, rapid model development strongly benefits from available software packages that relieve the modeler of recurring tasks like numerical integration of rate equations or parameter estimation. In this chapter, the most common standard formats used for model encoding and some of the major public databases in this scientific field are presented. The main features of currently available modeling software are discussed and proposals for the application of such tools are given.
Standard model status (in search of "new physics")
Marciano, W.J.
1993-03-01
A perspective on successes and shortcomings of the standard model is given. The complementarity between direct high energy probes of new physics and lower energy searches via precision measurements and rare reactions is described. Several illustrative examples are discussed.
Enhancements to ASHRAE Standard 90.1 Prototype Building Models
Goel, Supriya; Athalye, Rahul A.; Wang, Weimin; Zhang, Jian; Rosenberg, Michael I.; Xie, YuLong; Hart, Philip R.; Mendon, Vrushali V.
2014-04-16
This report focuses on enhancements to prototype building models used to determine the energy impact of various versions of ANSI/ASHRAE/IES Standard 90.1. Since the last publication of the prototype building models, PNNL has made numerous enhancements to the original prototype models compliant with the 2004, 2007, and 2010 editions of Standard 90.1. Those enhancements are described here and were made for several reasons: (1) to change or improve prototype design assumptions; (2) to improve the simulation accuracy; (3) to improve the simulation infrastructure; and (4) to add additional detail to the models needed to capture certain energy impacts from Standard 90.1 improvements. These enhancements impact simulated prototype energy use, and consequently impact the savings estimated from edition to edition of Standard 90.1.
NASA Standard for Models and Simulations: Philosophy and Requirements Overview
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.
2013-01-01
Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.
NASA Standard for Models and Simulations: Philosophy and Requirements Overview
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.
2009-01-01
Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.
Standard Model of Particle Physics--a health physics perspective.
Bevelacqua, J J
2010-11-01
The Standard Model of Particle Physics is reviewed with an emphasis on its relationship to the physics supporting the health physics profession. Concepts important to health physics are emphasized and specific applications are presented. The capability of the Standard Model to provide health physics relevant information is illustrated with application of conservation laws to neutron and muon decay and in the calculation of the neutron mean lifetime.
Animal Models of Tourette Syndrome—From Proliferation to Standardization
Yael, Dorin; Israelashvili, Michal; Bar-Gad, Izhar
2016-01-01
Tourette syndrome (TS) is a childhood onset disorder characterized by motor and vocal tics and associated with multiple comorbid symptoms. Over the last decade, the accumulation of findings from TS patients and the emergence of new technologies have led to the development of novel animal models with high construct validity. In addition, animal models which were previously associated with other disorders were recently attributed to TS. The proliferation of TS animal models has accelerated TS research and provided a better understanding of the mechanism underlying the disorder. This newfound success generates novel challenges, since the conclusions that can be drawn from TS animal model studies are constrained by the considerable variation across models. Typically, each animal model examines a specific subset of deficits and centers on one field of research (physiology/genetics/pharmacology/etc.). Moreover, different studies do not use a standard lexicon to characterize different properties of the model. These factors hinder the evaluation of individual model validity as well as the comparison across models, leading to the formation of a fuzzy, segregated landscape of TS pathophysiology. Here, we call for a standardization process in the study of TS animal models as the next logical step. We believe that a generation of standard examination criteria will improve the utility of these models and enable their consolidation into a general framework. This should lead to a better understanding of these models and their relationship to TS, thereby improving the research of the mechanism underlying this disorder and aiding the development of new treatments. PMID:27065791
A Standard Kinematic Model for Flight Simulation at NASA Ames
NASA Technical Reports Server (NTRS)
Mcfarland, R. E.
1975-01-01
A standard kinematic model for aircraft simulation exists at NASA-Ames on a variety of computer systems, one of which is used to control the flight simulator for advanced aircraft (FSAA). The derivation of the kinematic model is given and various mathematical relationships are presented as a guide. These include descriptions of standardized simulation subsystems such as the atmospheric turbulence model and the generalized six-degrees-of-freedom trim routine, as well as an introduction to the emulative batch-processing system which enables this facility to optimize its real-time environment.
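The kinematic core of such a model relates body-axis angular rates to Euler-angle rates. A textbook sketch of that standard relation (an illustration, not the FSAA implementation itself):

```python
import math

def euler_rates(phi, theta, p, q, r):
    """Map body-axis angular rates (p, q, r) to Euler-angle rates for
    roll (phi), pitch (theta), and yaw; singular at theta = +/-90 deg."""
    phi_dot = p + (q * math.sin(phi) + r * math.cos(phi)) * math.tan(theta)
    theta_dot = q * math.cos(phi) - r * math.sin(phi)
    psi_dot = (q * math.sin(phi) + r * math.cos(phi)) / math.cos(theta)
    return phi_dot, theta_dot, psi_dot

# At level attitude, body rates pass straight through to Euler rates.
rates = euler_rates(0.0, 0.0, 0.1, 0.2, 0.3)   # (0.1, 0.2, 0.3)
```

In a real-time simulation these rates are numerically integrated each frame to update the aircraft attitude.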
Standard model Higgs boson-inflaton and dark matter
Clark, T. E.; Liu Boyang; Love, S. T.; Veldhuis, T. ter
2009-10-01
The standard model Higgs boson can serve as the inflaton field of slow roll inflationary models provided it exhibits a large nonminimal coupling with the gravitational scalar curvature. The Higgs boson self interactions and its couplings with a standard model singlet scalar serving as the source of dark matter are then subject to cosmological constraints. These bounds, which can be more stringent than those arising from vacuum stability and perturbative triviality alone, still allow values for the Higgs boson mass which should be accessible at the LHC. As the Higgs boson coupling to the dark matter strengthens, lower values of the Higgs boson mass consistent with the cosmological data are allowed.
Non-standard models and the sociology of cosmology
NASA Astrophysics Data System (ADS)
López-Corredoira, Martín
2014-05-01
I review some theoretical ideas in cosmology different from the standard "Big Bang": the quasi-steady state model, the plasma cosmology model, non-cosmological redshifts, alternatives to non-baryonic dark matter and/or dark energy, and others. Cosmologists do not usually work within the framework of alternative cosmologies because they feel that these are not at present as competitive as the standard model. Certainly, they are not so developed, and they are not so developed because cosmologists do not work on them. It is a vicious circle. The fact that most cosmologists do not pay them any attention and only dedicate their research time to the standard model is to a great extent due to a sociological phenomenon (the "snowball effect" or "groupthink"). We might well wonder whether cosmology, our knowledge of the Universe as a whole, is a science like other fields of physics or a predominant ideology.
GIS-based RUSLE modelling of Leça River Basin, Northern Portugal, in two different grid scales
NASA Astrophysics Data System (ADS)
Petan, S.; Barbosa, J. L. P.; Mikoš, M.; Pinto, F. T.
2009-04-01
Soil erosion is the mechanical degradation of soil caused by natural forces, and it is also influenced by human activities. The biggest threats are the related loss of fertile soil for food production and disturbances of aquatic ecosystems, which could unbalance the environment over a wider area. Thus, precise predictions of soil erosion processes are of major importance for preventing environmental degradation. Spatial GIS modelling and erosion maps greatly support policymaking for land planning and environmental management. The Leça River Basin, with a surface of 187 km², is located in the northern part of Portugal and was chosen for testing the RUSLE methodology for soil loss prediction and for identifying areas with high potential erosion. The model involves daily rainfall data for rainfall erosivity estimation, topographic data for slope length and steepness factor calculation, soil type data, and CORINE land cover and land use data. The raster layer model was structured at two different scales: with grid cell sizes of 10 and 30 metres. The similarities and differences between the model results at both scales were evaluated.
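The RUSLE estimate itself is a per-cell product of the factors listed above, A = R·K·LS·C·P; in a GIS setting it is applied to co-registered rasters. A minimal sketch with toy grids (values are illustrative, not from the Leça basin):

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """Cell-by-cell RUSLE estimate A = R * K * LS * C * P, where the
    inputs are co-registered rasters of rainfall erosivity (R), soil
    erodibility (K), slope length/steepness (LS), cover (C), and
    support practice (P) factors."""
    return R * K * LS * C * P

# Toy 2x2 grids standing in for 10 m (or 30 m) raster layers.
R = np.full((2, 2), 700.0)
K = np.array([[0.03, 0.04], [0.02, 0.05]])
LS = np.array([[1.2, 0.8], [2.5, 0.3]])
C = np.full((2, 2), 0.1)
P = np.ones((2, 2))
A = rusle_soil_loss(R, K, LS, C, P)   # soil loss per cell
```

Because the factors multiply elementwise, resampling any one raster to a different grid size (10 m vs. 30 m) changes the spatial averaging of A, which is what the two-scale comparison in the study probes.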
Conformal Loop quantization of gravity coupled to the standard model
NASA Astrophysics Data System (ADS)
Pullin, Jorge; Gambini, Rodolfo
2016-03-01
We consider a local conformal invariant coupling of the standard model to gravity free of any dimensional parameter. The theory is formulated in order to have a quantized version that admits a spin network description at the kinematical level like that of loop quantum gravity. The Gauss constraint, the diffeomorphism constraint and the conformal constraint are automatically satisfied and the standard inner product of the spin-network basis still holds. The resulting theory has resemblances with the Bars-Steinhardt-Turok local conformal theory, except it admits a canonical quantization in terms of loops. By considering a gauge fixed version of the theory we show that the Standard model coupled to gravity is recovered and the Higgs boson acquires mass. This in turn induces via the standard mechanism masses for massive bosons, baryons and leptons.
Higgs phenomenology in the standard model and beyond
NASA Astrophysics Data System (ADS)
Field, Bryan Jonathan
2005-07-01
The way in which the electroweak symmetry is broken in nature is currently unknown. The electroweak symmetry is theoretically broken in the Standard Model by the Higgs mechanism, which generates masses for the particle content and introduces a single scalar to the particle spectrum, the Higgs boson. This particle has not yet been observed, and the value of its mass is a free parameter in the Standard Model. The observation of one (or more) Higgs bosons would confirm our understanding of the Standard Model. In this thesis, we study the phenomenology of the Standard Model Higgs boson and compare its production observables to those of the pseudoscalar Higgs boson and the lightest scalar Higgs boson of the Minimal Supersymmetric Standard Model. We study production at both the Fermilab Tevatron and the future CERN Large Hadron Collider (LHC). In the first part of the thesis, we present the results of our calculations in the framework of perturbative QCD. In the second part, we present our resummed calculations.
Explore Physics Beyond the Standard Model with GLAST
Lionetto, A. M.
2007-07-12
We give an overview of GLAST's potential to explore theories beyond the Standard Model of particle physics. Among the wide taxonomy of such theories, we focus in particular on low-scale supersymmetry and theories with extra space-time dimensions. These theories provide a suitable dark matter candidate whose interactions and composition can be studied using a gamma-ray probe. We show that GLAST can disentangle such exotic signals from the standard production background.
NASA Standard for Models and Simulations: Credibility Assessment Scale
NASA Technical Reports Server (NTRS)
Babula, Maria; Bertch, William J.; Green, Lawrence L.; Hale, Joseph P.; Mosier, Gary E.; Steele, Martin J.; Woods, Jody
2009-01-01
As one of its many responses to the 2003 Space Shuttle Columbia accident, NASA decided to develop a formal standard for models and simulations (M&S). Work commenced in May 2005. An interim version was issued in late 2006. This interim version underwent considerable revision following an extensive Agency-wide review in 2007, along with some additional revisions as a result of the review by the NASA Engineering Management Board (EMB) in the first half of 2008. Issuance of the revised, permanent version, hereafter referred to as the M&S Standard or just the Standard, occurred in July 2008. Bertch, Zang, and Steele provided a summary review of the development process of this standard up through the start of the review by the EMB. A thorough recount of the entire development process, major issues, key decisions, and all review processes is available in Ref. v. This is the second of a pair of papers providing a summary of the final version of the Standard. Its focus is the Credibility Assessment Scale, a key feature of the Standard, including an example of its application to a real-world M&S problem for the James Webb Space Telescope. The companion paper summarizes the overall philosophy of the Standard and gives an overview of the requirements. Verbatim quotes from the Standard are integrated into the text of this paper and are indicated by quotation marks.
Messages on Flavour Physics Beyond the Standard Model
NASA Astrophysics Data System (ADS)
Buras, Andrzej J.
2008-12-01
We present a brief summary of the main results on flavour physics beyond the Standard Model that were obtained in 2008 by my collaborators and myself in my group at TUM. In particular, we list the main messages coming from our analyses of flavour- and CP-violating processes in supersymmetry, the Littlest Higgs model with T-parity, and a warped extra-dimension model with custodial protection for the flavour-diagonal and non-diagonal Z boson couplings.
Peer Review of NRC Standardized Plant Analysis Risk Models
Anthony Koonce; James Knudsen; Robert Buell
2011-03-01
The Nuclear Regulatory Commission (NRC) Standardized Plant Analysis Risk (SPAR) models underwent a peer review using the ASME PRA standard (Addendum C) as endorsed by the NRC in Regulatory Guide (RG) 1.200. The review was performed by a mix of industry probabilistic risk analysis (PRA) experts and NRC PRA experts. Representative SPAR models, one PWR and one BWR, were reviewed against Capability Category I of the ASME PRA standard. Capability Category I was selected as the basis for review due to the specific uses/applications of the SPAR models. The BWR SPAR model was reviewed against 331 ASME PRA standard supporting requirements; however, based on the Capability Category I level of review and the absence of internal flooding and containment performance (LERF) logic, only 216 requirements were determined to be applicable. Based on the review, the BWR SPAR model met 139 of the 216 supporting requirements. The review also generated 200 findings or suggestions: 142 findings and 58 suggestions. The PWR SPAR model was evaluated against the same 331 supporting requirements. Of these, only 215 were deemed appropriate for the review (for the same reason as noted for the BWR). The PWR review determined that 125 of the 215 supporting requirements met Capability Category I or greater. The review identified 101 findings or suggestions (76 findings and 25 suggestions). These findings or suggestions were developed to identify areas where the SPAR models could be enhanced. A process to prioritize and incorporate the findings/suggestions into the SPAR models is being developed. The prioritization process focuses on those findings that will enhance the accuracy, completeness, and usability of the SPAR models.
Cosmological Signatures of a UV-Conformal Standard Model
NASA Astrophysics Data System (ADS)
Dorsch, Glauber C.; Huber, Stephan J.; No, Jose Miguel
2014-09-01
Quantum scale invariance in the UV has recently been advocated as an attractive way of solving the gauge hierarchy problem arising in the standard model. We explore the cosmological signatures at the electroweak scale when the breaking of scale invariance originates from a hidden sector and is mediated to the standard model by gauge interactions (gauge mediation). These scenarios, while hard to distinguish from the standard model at the LHC, can give rise to a strong electroweak phase transition leading to the generation of a large stochastic gravitational-wave signal possibly within reach of future space-based detectors such as eLISA and BBO. This relic would be the cosmological imprint of the breaking of scale invariance in nature.
Test of a Power Transfer Model for Standardized Electrofishing
Miranda, L.E.; Dolan, C.R.
2003-01-01
Standardization of electrofishing in waters with differing conductivities is critical when monitoring temporal and spatial differences in fish assemblages. We tested a model that can help improve the consistency of electrofishing by allowing control over the amount of power that is transferred to the fish. The primary objective was to verify, under controlled laboratory conditions, whether the model adequately described fish immobilization responses elicited with various electrical settings over a range of water conductivities. We found that the model accurately described empirical observations over conductivities ranging from 12 to 1,030 μS/cm for DC and various pulsed-DC settings. Because the model requires knowledge of a fish's effective conductivity, an attribute that is likely to vary according to species, size, temperature, and other variables, a second objective was to gather available estimates of the effective conductivity of fish to examine the magnitude of variation and to assess whether in practical applications a standard effective conductivity value for fish may be assumed. We found that applying a standard fish effective conductivity of 115 μS/cm introduced relatively little error into the estimation of the peak power density required to immobilize fish with electrofishing. However, this standard was derived from few estimates of fish effective conductivity and a limited number of species; more estimates are needed to validate our working standard.
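The power-transfer idea in this abstract can be illustrated with a small sketch. This is a hypothetical illustration assuming a Kolz-style mismatch factor 4·Cf·Cw/(Cf + Cw)², which peaks when fish and water conductivities match; the function names and the 100 μW/cm³ immobilization threshold are invented for the example, not taken from the paper.

```python
def transfer_efficiency(c_fish, c_water):
    """Fraction of the ambient power density transferred to the fish,
    assuming a Kolz-style mismatch factor (peaks when c_fish == c_water)."""
    return 4.0 * c_fish * c_water / (c_fish + c_water) ** 2

def required_water_power_density(d_fish_threshold, c_fish, c_water):
    """Ambient power density needed so the fish receives its
    immobilization threshold d_fish_threshold (same units)."""
    return d_fish_threshold / transfer_efficiency(c_fish, c_water)

# Using the paper's working standard of 115 uS/cm for fish conductivity,
# sweep the range of water conductivities tested (12 to 1,030 uS/cm).
for c_water in (12.0, 115.0, 1030.0):
    d = required_water_power_density(100.0, 115.0, c_water)
    print(c_water, round(d, 1))
```

The sketch shows why standardization matters: away from the matched point the applied power must rise sharply to deliver the same dose to the fish.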
Loop Corrections to Standard Model fields in inflation
NASA Astrophysics Data System (ADS)
Chen, Xingang; Wang, Yi; Xianyu, Zhong-Zhi
2016-08-01
We calculate one-loop corrections to the Schwinger-Keldysh propagators of Standard-Model-like fields of spin 0, 1/2, and 1, with all renormalizable interactions, during inflation. We pay special attention to the late-time divergences of loop corrections and show that the divergences can be resummed into finite results in the late-time limit using the dynamical renormalization group method. This is our first step toward studying both the Standard Model and new physics in the primordial universe.
Non-Gaussianities from the Standard Model Higgs
Simone, Andrea De; Perrier, Hideki; Riotto, Antonio
2013-01-01
We have recently proposed that the Standard Model Higgs might be responsible for generating the cosmological perturbations of the universe by acting as an isocurvature mode during a de Sitter inflationary stage. In this paper we study the level of non-Gaussianity in the cosmological perturbations which are inevitably generated due to the non-linearities of the Standard Model Higgs potential. In particular, for the current central value of the top mass, we find that a future detection of non-Gaussianity would exclude the detection of tensor modes by the PLANCK satellite.
NASA Astrophysics Data System (ADS)
Mitchell, N. A.; Gran, K. B.; Cho, S. J.; Dalzell, B. J.; Kumarasamy, K.
2015-12-01
A combination of factors, including climate change, land clearing, and artificial drainage, has increased many agricultural regions' stream flows and the rates at which channel banks and bluffs are eroded. Increasing erosion rates within the Minnesota River Basin have contributed to higher sediment-loading rates, excess turbidity levels, and increased sedimentation rates in Lake Pepin further downstream. Water storage sites (e.g., wetlands) have been discussed as a means to address these issues. This study uses the Soil and Water Assessment Tool (SWAT) to assess a range of water retention site (WRS) implementation scenarios in the Le Sueur watershed in south-central Minnesota, a subwatershed of the Minnesota River Basin. Sediment loading from bluffs was assessed through an empirical relationship developed from gauging data. Sites were delineated as topographic depressions with specific land uses, minimum areas (3000 m²), and high compound topographic index values. Contributing areas for the WRS were manually measured and used with different site characteristics to create 210 initial WRS scenarios. A generalized relationship between WRS area and contributing area was identified from measurements, and this relationship was used with different site characteristics (e.g., depth, hydraulic conductivity (K), and placement) to create 225 generalized WRS scenarios. Reductions in peak flow volumes and sediment-loading rates are generally maximized by placing sites with high K values in the upper half of the watershed. High K values allow sites to lose more water through seepage, emptying their storage between precipitation events and preventing frequent overflowing. Reductions in peak flow volumes and sediment-loading rates also level off at high WRS extents due to the decreasing frequency of high-magnitude events. The generalized WRS scenarios were also used to create a simplified empirical model capable of generating peak flows and sediment-loading rates from near
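The seepage behavior described above (high-K sites emptying between events and therefore overflowing less) can be sketched with a toy daily water balance. This is a hypothetical single-bucket illustration, not the SWAT representation of a water-retention site; all numbers are invented.

```python
def total_overflow(daily_inflows_m3, area_m2, depth_m, k_m_per_day):
    """Toy water balance for one water-retention site: storage fills
    from inflow, drains by seepage (K * area), and spills when full.
    Returns the total overflow volume in m^3."""
    capacity = area_m2 * depth_m
    seepage = k_m_per_day * area_m2   # daily volume lost to infiltration
    storage, overflow = 0.0, 0.0
    for inflow in daily_inflows_m3:
        storage = max(0.0, storage - seepage) + inflow
        if storage > capacity:
            overflow += storage - capacity
            storage = capacity
    return overflow

# A 3000 m^2 site (the study's minimum area) under ten equally wet days:
low_k = total_overflow([500.0] * 10, 3000.0, 1.0, 0.01)
high_k = total_overflow([500.0] * 10, 3000.0, 1.0, 0.20)
```

With the higher hydraulic conductivity the bucket drains between days and never spills, mirroring the abstract's finding that high-K sites reduce peak flows most effectively.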
A standard telemental health evaluation model: the time is now.
Kramer, Greg M; Shore, Jay H; Mishkind, Matt C; Friedl, Karl E; Poropatich, Ronald K; Gahm, Gregory A
2012-05-01
The telehealth field has advanced historic promises to improve access, cost, and quality of care. However, the extent to which it is delivering on its promises is unclear, as the scientific evidence needed to justify success is still emerging. Many have identified the need to advance the scientific knowledge base to better quantify success. One method for advancing that knowledge base is a standard telemental health evaluation model. Telemental health is defined here as the provision of mental health services using live, interactive video-teleconferencing technology. Evaluation in the telemental health field largely consists of descriptive and small pilot studies, is often defined by the individual goals of the specific programs, and is typically focused on only one outcome. The field should adopt new evaluation methods that consider the co-adaptive interaction between users (patients and providers), healthcare costs and savings, and the rapid evolution in communication technologies. Acceptance of a standard evaluation model will improve perceptions of telemental health as an established field, promote development of a sounder empirical base, promote interagency collaboration, and provide a framework for more multidisciplinary research that integrates measuring the impact of the technology and the overall healthcare aspect. We suggest that consideration of a standard model is timely given telemental health's current stage of scientific progress. We broadly recommend some elements that such a standard evaluation model might include for telemental health and suggest a way forward for adopting such a model.
Status of the AIAA Modeling and Simulation Format Standard
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Hildreth, Bruce L.
2008-01-01
The current draft AIAA Standard for flight simulation models represents an on-going effort to improve the productivity of practitioners of the art of digital flight simulation (one of the original digital computer applications). This initial release provides the capability for the efficient representation and exchange of an aerodynamic model in full fidelity; the DAVE-ML format can be easily imported (with development of site-specific import tools) in an unambiguous way with automatic verification. An attractive feature of the standard is the ability to coexist with existing legacy software or tools. The draft Standard is currently limited in scope to static elements of dynamic flight simulations; however, these static elements represent the bulk of typical flight simulation mathematical models. It is already seeing application within U.S. and Australian government agencies in an effort to improve productivity and reduce model rehosting overhead. An existing tool allows import of DAVE-ML models into a popular simulation modeling and analysis tool, and other community-contributed tools and libraries can simplify the use of DAVE-ML compliant models at compile- or run-time of high-fidelity flight simulation.
Constraints on new physics beyond the Standard Model
NASA Astrophysics Data System (ADS)
Su, Yumian
1997-08-01
In this thesis, I consider theoretical and experimental constraints on three classes of theories beyond the Standard Model of particle physics. First, I consider two decays, B → X_s μ⁺μ⁻ and Z → bb̄, in the technicolor with scalars model. In this model, there exists an extra charged spin-0 particle, which makes it possible for the above decays to take place through more channels as compared to the Standard Model. I find that the model is consistent with the experimental upper bound on the B → X_s μ⁺μ⁻ decay rate. Furthermore, the decay rate can be enhanced relative to the Standard Model prediction by as much as 60%. This will make the model testable in the next few years by the CDF and CLEO experiments. In the case of Z → bb̄, the correction is small and the prediction of the model lies within 2 error bars of the current experimentally measured value. Then, I explore the constraints from Z → bb̄ on the minimal U(1)_R-symmetric supersymmetric extension to the Standard Model. In this extended model, all the Standard Model particles are paired with corresponding 'superpartner' particles, and a global U(1)_R symmetry is imposed to prevent immediate proton decay which would be caused by the presence of these superpartners. I find that the experimentally measured Z → bb̄ decay rate requires the lighter top squark, one of the superpartners of the top quark, to have a small mass. Furthermore, the lower bound on the mass of the lighter top squark is found to be 88 GeV. Such a top squark might be accessible to searches by experiments at Fermilab and CERN LEP. Finally, I consider some of the issues associated with the presence of an extra Z′ boson in topcolor-assisted technicolor models. In this class of models, the strong interaction of the Z′ boson with ordinary matter particles can affect the production of these matter particles in proton-antiproton collisions. By comparing with the results of experiments at the Fermilab
Search for the standard model Higgs boson in $l\
Li, Dikai
2013-01-01
Humans have always attempted to understand the mysteries of Nature, and more recently physicists have established theories to describe the observed phenomena. The most recent such theory is a gauge quantum field theory framework called the Standard Model (SM), which describes elementary matter particles and the interaction particles that carry the fundamental forces in the most unified way. The Standard Model contains the internal symmetries of the unitary product group SU(3)_C × SU(2)_L × U(1)_Y and describes the electromagnetic, weak, and strong interactions; the model also describes how quarks interact with each other through all three of these interactions, how leptons interact with each other through the electromagnetic and weak forces, and how force carriers mediate the fundamental interactions.
Kwan, Joyce L Y; Chan, Wai
2011-09-01
We propose a two-stage method for comparing standardized coefficients in structural equation modeling (SEM). At stage 1, we transform the original model of interest into the standardized model by model reparameterization, so that the model parameters appearing in the standardized model are equivalent to the standardized parameters of the original model. At stage 2, we impose appropriate linear equality constraints on the standardized model and use a likelihood ratio test to make statistical inferences about the equality of standardized coefficients. Unlike other existing methods for comparing standardized coefficients, the proposed method does not require specific modeling features (e.g., specification of nonlinear constraints), which are available only in certain SEM software programs. Moreover, this method allows researchers to compare two or more standardized coefficients simultaneously in a standard and convenient way. Three real examples are given to illustrate the proposed method, using EQS, a popular SEM software program. Results show that the proposed method performs satisfactorily for testing the equality of standardized coefficients.
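The stage-2 comparison described above reduces to a chi-square difference test between the freely estimated standardized model and the model with equality constraints imposed. The sketch below is a generic likelihood-ratio test for a single equality constraint (df = 1), not the authors' EQS implementation; the log-likelihood values are hypothetical.

```python
import math

def lr_test_df1(loglik_free, loglik_constrained):
    """Likelihood-ratio test for nested models differing by one
    equality constraint. For df = 1 the chi-square survival
    function reduces to erfc(sqrt(stat / 2))."""
    stat = 2.0 * (loglik_free - loglik_constrained)
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical fits: constraining two standardized coefficients to be
# equal costs 2.7 log-likelihood units.
stat, p = lr_test_df1(loglik_free=-1234.5, loglik_constrained=-1237.2)
```

A small p-value would reject the hypothesis that the two standardized coefficients are equal; more constraints simply raise the degrees of freedom of the reference chi-square distribution.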
Radiation therapy: model standards for determination of need
Lagasse, L.G.; Devins, T.B.
1982-03-01
Contents: Health planning process; Health care requirements (model for projecting need for megavoltage radiation therapy); Operational objectives (manpower, megavoltage therapy and treatment planning equipment, support services, management and evaluation of patient care, organization and administration); Compliance with other standards imposed by law; Financial feasibility and capability; Reasonableness of expenditures and costs; Relative merit; Environmental impact.
Mathematical Modeling, Sense Making, and the Common Core State Standards
ERIC Educational Resources Information Center
Schoenfeld, Alan H.
2013-01-01
On October 14, 2013 the Mathematics Education Department at Teachers College hosted a full-day conference focused on the Common Core Standards Mathematical Modeling requirements to be implemented in September 2014 and in honor of Professor Henry Pollak's 25 years of service to the school. This article is adapted from my talk at this conference…
Teacher Leader Model Standards: Implications for Preparation, Policy, and Practice
ERIC Educational Resources Information Center
Berg, Jill Harrison; Carver, Cynthia L.; Mangin, Melinda M.
2014-01-01
Teacher leadership is increasingly recognized as a resource for instructional improvement. Consequently, teacher leader initiatives have expanded rapidly despite limited knowledge about how to prepare and support teacher leaders. In this context, the "Teacher Leader Model Standards" represent an important development in the field. In…
Searches for Standard Model Higgs at the Tevatron
Cortavitarte, Rocio Vilar; /Cantabria Inst. of Phys.
2007-11-01
A summary of the latest results of Standard Model Higgs boson searches from CDF and D0 presented at the DIS 2007 conference is reported in this paper. All analyses presented use 1 fb{sup -1} of Tevatron data. The strategy of the different analyses is determined by the Higgs production mechanism and decay channel.
Searches for standard model Higgs at the Tevatron
Vilar Cortabitarte, Rocio; /Cantabria U., Santander
2007-04-01
A summary of the latest results of Standard Model Higgs boson searches from CDF and D0 presented at the DIS 2007 conference is reported in this paper. All analyses presented use 1 fb{sup -1} of Tevatron data. The strategy of the different analyses is determined by the Higgs production mechanism and decay channel.
View of a five inch standard Mark III model 1 ...
View of a five inch standard Mark III model 1 #39, manufactured in 1916 at the Naval Gun Factory, Watervliet, NY; this is the only gun remaining on Olympia dating from the period when it was in commission; note ammunition lift at left side of photograph. (p36) - USS Olympia, Penn's Landing, 211 South Columbus Boulevard, Philadelphia, Philadelphia County, PA
Radiative breaking of conformal symmetry in the Standard Model
NASA Astrophysics Data System (ADS)
Arbuzov, A. B.; Nazmitdinov, R. G.; Pavlov, A. E.; Pervushin, V. N.; Zakharov, A. F.
2016-02-01
A radiative mechanism of conformal symmetry breaking in a conformal-invariant version of the Standard Model is considered. The Coleman-Weinberg mechanism of dimensional transmutation in this system gives rise to finite vacuum expectation values and, consequently, masses of scalar and spinor fields. A natural bootstrap between the energy scales of the top quark and the Higgs boson is suggested.
An Exercise in Modelling Using the US Standard Atmosphere
ERIC Educational Resources Information Center
LoPresto, Michael C.; Jacobs, Diane A.
2007-01-01
In this exercise the US Standard Atmosphere is used as "data" that a student is asked to model by deriving equations to reproduce it with the help of spreadsheet and graphing software. The exercise can be used as a laboratory or an independent study for a student of introductory physics to provide an introduction to scientific research methods…
Home Economics Education Career Path Guide and Model Curriculum Standards.
ERIC Educational Resources Information Center
California State Univ., Northridge.
This curriculum guide developed in California and organized in 10 chapters, provides a home economics education career path guide and model curriculum standards for high school home economics programs. The first chapter contains information on the following: home economics education in California, home economics careers for the future, home…
NASA Astrophysics Data System (ADS)
Merino, Andres; Guerrero-Higueras, Angel Manuel; López, Laura; Gascón, Estibaliz; Sánchez, José Luis; Lorente, José Manuel; Marcos, José Luis; Matía, Pedro; Ortiz de Galisteo, José Pablo; Nafría, David; Fernández-González, Sergio; Weigand, Roberto; Hermida, Lucía; García-Ortega, Eduardo
2014-05-01
The integration of various public and private observation networks into the Observation Network of Castile-León (ONet_CyL), Spain, allows us to monitor risks in real time. One of the most frequent risks in this region is severe precipitation. The data from the network allow us to determine the area where precipitation was registered and to identify the areas with precipitation in real time. The observation network is managed with a Linux system. The observation platform makes it possible to consult the observation data at a specific point in the region, or to see the spatial distribution of the precipitation in a user-defined area and time interval. In this study, we compared several rainfall estimation models, based on satellite data for Castile-León, with precipitation data from the meteorological observation network. The rainfall estimation models obtained from the meteorological satellite data provide us with a precipitation field covering a wide area, although their operational use requires a prior evaluation using ground-truth data. The aim is to develop a real-time evaluation tool for rainfall estimation models that allows us to monitor the accuracy of their forecasts. This tool makes it possible to visualise different skill scores (Probability of Detection, False Alarm Ratio, and others) for each rainfall estimation model in real time, thereby not only allowing us to know the areas where the rainfall models indicate precipitation, but also to validate each model in real time for each specific meteorological situation. Acknowledgements: The authors would like to thank the Regional Government of Castile-León for its financial support through the project LE220A11-2. This study was supported by the following grants: GRANIMETRO (CGL2010-15930); MICROMETEO (IPT-310000-2010-22).
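The skill scores named above come from a 2×2 rain/no-rain contingency table comparing each satellite estimate against the gauge network. The sketch below is a generic illustration with invented counts, not the project's own tool.

```python
def skill_scores(hits, false_alarms, misses):
    """Categorical verification scores from a 2x2 contingency table:
    hits         = model rain, gauge rain
    false_alarms = model rain, gauge dry
    misses       = model dry,  gauge rain"""
    pod = hits / (hits + misses)                  # Probability of Detection
    far = false_alarms / (hits + false_alarms)    # False Alarm Ratio
    csi = hits / (hits + misses + false_alarms)   # Critical Success Index
    return pod, far, csi

# Hypothetical day: 40 stations verified, 10 false alarms, 20 misses.
pod, far, csi = skill_scores(hits=40, false_alarms=10, misses=20)
```

Recomputing these scores as each observation arrives is what lets such a tool validate every rainfall model per meteorological situation in real time.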
Progress Toward a Format Standard for Flight Dynamics Models
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Hildreth, Bruce L.
2006-01-01
In the beginning, there was FORTRAN, and it was... not so good. But it was universal, and all flight simulator equations of motion were coded with it. Then came ACSL, C, Ada, C++, C#, Java, FORTRAN-90, Matlab/Simulink, and a number of other programming languages. Since the halcyon punch card days of 1968, models of aircraft flight dynamics have proliferated in training devices, desktop engineering and development computers, and control design textbooks. With the rise of industry teaming and increased reliance on simulation for procurement decisions, aircraft and missile simulation models are created, updated, and exchanged with increasing frequency. However, there is no real lingua franca to facilitate the exchange of models from one simulation user to another. The current state of the art is such that several staff-months if not staff-years are required to 'rehost' each release of a flight dynamics model from one simulation environment to another. If a standard data package or exchange format were universally adopted, the cost and time of sharing and updating aerodynamics, control laws, mass and inertia, and other flight dynamic components of the equations of motion of an aircraft or spacecraft simulation could be drastically reduced. A 2002 paper estimated over $6 million in savings could be realized for one military aircraft type alone. This paper describes the efforts of the American Institute of Aeronautics and Astronautics (AIAA) to develop a flight dynamics model exchange standard based on XML and HDF-5 data formats.
Using geodetic VLBI to test Standard-Model Extension
NASA Astrophysics Data System (ADS)
Hees, Aurélien; Lambert, Sébastien; Le Poncin-Lafitte, Christophe
2016-04-01
The modeling of the relativistic delay in geodetic techniques is essential for producing accurate geodetic products. Conversely, geodetic techniques can be used to measure the relativistic delay and constrain parameters describing the theory of relativity. The effective field theory framework called the Standard-Model Extension (SME) has been developed to systematically parametrize hypothetical violations of Lorentz symmetry (in the Standard Model and in the gravitational sector). For light deflection by a massive body like the Sun, one can expect a dependence on the elongation angle different from that of GR. In this communication, we use geodetic VLBI observations of quasars made in the frame of the permanent geodetic VLBI monitoring program to constrain the first SME coefficient. Our results do not show any deviation from GR, and they improve current constraints on both GR and SME parameters.
Higgs decays in gauge extensions of the standard model
NASA Astrophysics Data System (ADS)
Bunk, Don; Hubisz, Jay; Jain, Bithika
2014-02-01
We explore the phenomenology of virtual spin-1 contributions to the h→γγ and h→Zγ decay rates in gauge extensions of the standard model. We consider generic Lorentz and gauge-invariant vector self-interactions, which can have nontrivial structure after diagonalizing the quadratic part of the action. Such features are phenomenologically relevant in models where the electroweak gauge bosons mix with additional spin-1 fields, such as occurs in little Higgs models, extra dimensional models, strongly coupled variants of electroweak symmetry breaking, and other gauge extensions of the standard model. In models where nonrenormalizable operators mix field strengths of gauge groups, the one-loop Higgs decay amplitudes can be logarithmically divergent, and we provide power counting for the size of the relevant counterterm. We provide an example calculation in a four-site moose model that contains degrees of freedom that model the effects of vector and axial-vector resonances arising from TeV scale strong dynamics.
Towards realistic standard model from D-brane configurations
Leontaris, G. K.; Tracas, N. D.; Korakianitis, O.; Vlachos, N. D.
2007-12-01
Effective low-energy models arising in the context of D-brane configurations with standard model (SM) gauge symmetry extended by several gauged Abelian factors are discussed. The models are classified according to their hypercharge embeddings consistent with the SM spectrum hypercharge assignment. Particular cases are analyzed according to their prospects and viability as low-energy effective field theory candidates. The resulting string scale is determined by means of a two-loop renormalization group calculation. Their implications for Yukawa couplings, neutrinos, and flavor-changing processes are also presented.
Extending the Standard Model with Confining and Conformal Dynamics
NASA Astrophysics Data System (ADS)
McRaven, John Emory
This dissertation provides a survey of models that extend the standard model with confining and conformal dynamics. We study a series of models, describe them in detail, outline their phenomenology, and provide some search strategies for finding them. The Gaugephobic Higgs model provides an interpolation between three different models of electroweak symmetry breaking: Higgsless models, Randall-Sundrum models, and the Standard Model. At parameter points between the extremes, Standard Model Higgs signals are present at reduced rates, and Higgsless Kaluza-Klein excitations are present with shifted masses and couplings, as well as signals from the exotic quarks necessary to protect the Zbb coupling. Using a new implementation of the model in SHERPA, we show the LHC signals which differentiate the generic Gaugephobic Higgs model from its limiting cases. These are all signals involving a Higgs coupling to a Kaluza-Klein gauge boson or quark. We identify the clean signal pp → W^(i) → WH mediated by a Kaluza-Klein W, which can be present at large rates and is enhanced for even Kaluza-Klein numbers. Due to the very hard lepton coming from the W± decay, this signature has little background and provides a better discovery channel for the Higgs than any of the Standard Model modes, over its entire mass range. A Higgs radiated from new heavy quarks also has large rates, but is much less promising due to very high multiplicity final states. The AdS/CFT correspondence conjectures a relation between extra-dimensional models in AdS5 space, such as the Gaugephobic Higgs model, and 4D conformal field theories. The notion of conformality has found its way into several phenomenological models for TeV-scale physics extending the standard model. We proceed to explore the phenomenology of a new heavy quark that transforms under a hidden strongly coupled conformal gauge group in addition to transforming under QCD. This object would form states similar to R-hadrons. The heavy state
E-health stakeholders' experiences with clinical modelling and standardizations.
Gøeg, Kirstine Rosenbeck; Elberg, Pia Britt; Højen, Anne Randorff
2015-01-01
Stakeholders in e-health, such as governance officials, health IT implementers, and vendors, have to co-operate to achieve the goal of a future-proof interoperable e-health infrastructure. Co-operation requires knowledge of the responsibilities and competences of stakeholder groups. To increase awareness of clinical modeling and standardization, we conducted a workshop for Danish and a few Norwegian e-health stakeholders and made them discuss their views on different aspects of clinical modeling, using a theoretical model as a point of departure. Based on the model, we traced stakeholders' experiences. Our results showed a tendency for stakeholders to be more familiar with e-health requirements than with design methods, clinical information models, and clinical terminology as they are described in the scientific literature. The workshop made it possible for stakeholders to discuss their roles and expectations of each other.
Direct search for the Standard Model Higgs boson
NASA Astrophysics Data System (ADS)
Janot, Patrick; Kado, Marumi
2002-11-01
For twelve years, LEP revolutionized the knowledge of electroweak symmetry breaking within the standard model, and the direct discovery of the Higgs boson would have been the crowning achievement. Searches at the Z resonance and above the W+W− threshold allowed an unambiguous lower limit on the mass of the standard model Higgs boson to be set at 114.1 GeV·c−2. After years of efforts to push the LEP performance far beyond the design limits, hints of what could be the first signs of the existence of a 115 GeV·c−2 Higgs boson appeared in June 2000, were confirmed in September, and were confirmed again in November. An additional six-month period of LEP operation would have been enough to provide a definite answer, with an opportunity to make a fundamental discovery of prime importance. To cite this article: P. Janot, M. Kado, C. R. Physique 3 (2002) 1193-1202.
Precision Electroweak Measurements and Constraints on the Standard Model
Not Available
2011-11-11
This note presents constraints on Standard Model parameters using published and preliminary precision electroweak results measured at the electron-positron colliders LEP and SLC. The results are compared with precise electroweak measurements from other experiments, notably CDF and D0 at the Tevatron. Constraints on the input parameters of the Standard Model are derived from the results obtained in high-Q² interactions, and used to predict results in low-Q² experiments, such as atomic parity violation, Møller scattering, and neutrino-nucleon scattering. The main changes with respect to the experimental results presented in 2007 are new combinations of results on the W-boson mass and width and the mass of the top quark.
Precision electroweak measurements and constraints on the Standard Model
Not Available
2010-12-01
This note presents constraints on Standard Model parameters using published and preliminary precision electroweak results obtained at the electron-positron colliders LEP and SLC. The results are compared with precise electroweak measurements from other experiments, notably CDF and D0 at the Tevatron. Constraints on the input parameters of the Standard Model are derived from the combined set of results obtained in high-Q² interactions, and used to predict results in low-Q² experiments, such as atomic parity violation, Møller scattering, and neutrino-nucleon scattering. The main changes with respect to the experimental results presented in 2009 are new combinations of results on the width of the W boson and the mass of the top quark.
Precision Electroweak Measurements and Constraints on the Standard Model
None, None
2009-11-01
This note presents constraints on Standard Model parameters using published and preliminary precision electroweak results measured at the electron-positron colliders LEP and SLC. The results are compared with precise electroweak measurements from other experiments, notably CDF and D0 at the Tevatron. Constraints on the input parameters of the Standard Model are derived from the combined set of results obtained in high-Q² interactions, and used to predict results in low-Q² experiments, such as atomic parity violation, Møller scattering, and neutrino-nucleon scattering. The main changes with respect to the experimental results presented in 2008 are new combinations of results on the W-boson mass and the mass of the top quark.
Challenges to the standard model of Big Bang nucleosynthesis.
Steigman, G
1993-06-01
Big Bang nucleosynthesis provides a unique probe of the early evolution of the Universe and a crucial test of the consistency of the standard hot Big Bang cosmological model. Although the primordial abundances of 2H, 3He, 4He, and 7Li inferred from current observational data are in agreement with those predicted by Big Bang nucleosynthesis, recent analysis has severely restricted the consistent range for the nucleon-to-photon ratio: 3.7 standard model and suggest that no new light particles may be allowed (N(BBN)nu
An Introduction to the Standard Model of Particle Physics
NASA Astrophysics Data System (ADS)
Cottingham, W. Noel; Greenwood, Derek A.
1999-01-01
This graduate textbook provides a concise, accessible introduction to the Standard Model of particle physics. Theoretical concepts are developed clearly and carefully throughout the book--from the electromagnetic and weak interactions of leptons and quarks to the strong interactions of quarks. Chapters developing the theory are interspersed with chapters describing some of the wealth of experimental data supporting the model. The book assumes only the standard mathematics taught in an undergraduate physics course; more sophisticated mathematical ideas are developed in the text and in appendices. For graduate students in particle physics and physicists working in other fields who are interested in the current understanding of the ultimate constituents of matter, this textbook provides a lucid and up-to-date introduction.
Search for Beyond the Standard Model Physics at D0
Kraus, James
2011-08-01
The standard model (SM) of particle physics has been remarkably successful at predicting the outcomes of particle physics experiments, but there are reasons to expect new physics at the electroweak scale. Over the last several years, there have been a number of searches for beyond-the-standard-model (BSM) physics at D0. Here, we limit our focus to three: searches for diphoton events with large missing transverse energy (E_T), searches for leptonic jets and E_T, and searches for single vector-like quarks. We have discussed three recent searches at D0; there are many more, including limits on a heavy neutral gauge boson in the ee channel, a search for scalar top quarks, a search for quirks, and limits on a new resonance decaying to WW or WZ.
Development of a standard documentation protocol for communicating exposure models.
Ciffroy, P; Altenpohl, A; Fait, G; Fransman, W; Paini, A; Radovnikovic, A; Simon-Cornu, M; Suciu, N; Verdonck, F
2016-10-15
An important step in building a computational model is its documentation; comprehensive, structured documentation can improve a model's applicability and transparency in science and research and for regulatory purposes. This is particularly crucial, and challenging, for environmental and/or human exposure models that aim to establish quantitative relationships between personal exposure levels and their determinants. Exposure models simulate the transport and fate of a contaminant from the source to the receptor and may involve a large set of entities (e.g. all the media the contaminants may pass through). Such complex models are difficult to describe in a comprehensive, unambiguous, and accessible way. Poor communication of assumptions, theory, structure, and/or parameterization can undermine users' confidence and be a source of errors. The goal of this paper is to propose a standard documentation protocol (SDP) for exposure models, i.e. a generic format and a standard structure by which all exposure models could be documented. For this purpose, a CEN (European Committee for Standardisation) workshop was set up with the objective of agreeing on minimum requirements for the amount and type of information to be provided in exposure model documentation, along with guidelines for the structure and presentation of the information. The resulting CEN workshop agreement (CWA) was expected to facilitate a more rigorous formulation of exposure model descriptions and better understanding by users. This paper describes the process followed for defining the SDP, the standardisation approach, as well as the main components of the SDP resulting from a wide consultation of interested stakeholders. The main outcome is a CEN CWA which establishes terms and definitions for exposure models and their elements, specifies minimum requirements for the amount and type of information to be documented, and proposes a structure for communicating the documentation to different
Aspects of Particle Physics Beyond the Standard Model
NASA Astrophysics Data System (ADS)
Lu, Xiaochuan
This dissertation describes a few aspects of particle physics beyond the Standard Model, with a focus on the questions that remain after the discovery of a Standard Model-like Higgs boson. Specifically, three topics are discussed in sequence: neutrino mass and baryon asymmetry, the naturalness problem of the Higgs mass, and placing constraints on theoretical models from precision measurements. First, the consequences of neutrino mass anarchy for cosmology are studied, with particular attention to the total mass of neutrinos and to the baryon asymmetry through leptogenesis. With the assumption of independence among mass-matrix entries, in addition to basis independence, the Gaussian measure is the only choice. On top of the Gaussian measure, a simple approximate U(1) flavor symmetry makes leptogenesis highly successful. Correlations between the baryon asymmetry and the light-neutrino quantities are investigated. Also discussed are possible implications of the large total neutrino mass recently suggested by the SDSS/BOSS data. Second, the Higgs mass implies fine-tuning for minimal theories of weak-scale supersymmetry (SUSY). Non-decoupling effects can boost the Higgs mass when new states interact with the Higgs, but new sources of SUSY breaking that accompany such extensions threaten naturalness. I show that two singlets with a Dirac mass can increase the Higgs mass while maintaining naturalness in the presence of large SUSY breaking in the singlet sector. The modified Higgs phenomenology of this scenario, termed the "Dirac NMSSM", is also studied. Finally, the sensitivities of future precision measurements in probing physics beyond the Standard Model are studied. A practical three-step procedure is presented for using the Standard Model effective field theory (SM EFT) to connect ultraviolet (UV) models of new physics with weak-scale precision observables. With this procedure, one can interpret precision measurements as constraints on the UV model concerned. A detailed explanation is
Qweak, N → Δ, and physics beyond the standard model
NASA Astrophysics Data System (ADS)
Leacock, J.
2014-01-01
The data-taking phase of the Qweak experiment ended in May 2012 at the Thomas Jefferson National Accelerator Facility. Qweak aims to measure the weak charge of the proton, Q_W^p, via parity-violating elastic electron-proton scattering. The expected value of Q_W^p is fortuitously suppressed, which leads to an increased sensitivity to physics beyond the Standard Model.
Charged Neutrinos and Atoms in the Standard Model
NASA Astrophysics Data System (ADS)
Takasugi, E.; Tanaka, M.
1992-03-01
The possibility of charge quantization in the standard model is examined in the absence of the "generation as copies" rule. It is shown that neutrinos and atoms can have mini-charges, while the neutron remains neutral. If a triplet Higgs boson is introduced, neutrinos acquire masses. Two neutrinos form a Konopinski-Mahmoud Dirac particle and the other becomes a Majorana particle, due to the hidden anomaly-free local U(1) symmetry.
Beyond-standard-model tensor interaction and hadron phenomenology
Courtoy, Aurore; Baessler, Stefan; Gonzalez-Alonso, Martin; Liuti, Simonetta
2015-10-15
Here, we evaluate the impact of recent developments in hadron phenomenology on extracting possible fundamental tensor interactions beyond the standard model. We show that a novel class of observables, including the chiral-odd generalized parton distributions and the transversity parton distribution function, can contribute to the constraints on this quantity. Experimental extractions of the tensor hadronic matrix elements, if sufficiently precise, will provide a so-far-absent testing ground for lattice QCD calculations.
Gravity, CPT, and the standard-model extension
NASA Astrophysics Data System (ADS)
Tasson, Jay D.
2015-08-01
Exotic atoms provide unique opportunities to search for new physics. The search for CPT and Lorentz violation in the context of the general field-theory based framework of the gravitational Standard-Model Extension (SME) is one such opportunity. This work summarizes the implications of Lorentz and CPT violation for gravitational experiments with antiatoms and atoms containing higher-generation matter as well as recent nongravitational proposals to test CPT and Lorentz symmetry with muons and muonic systems.
ERIC Educational Resources Information Center
Wisconsin Department of Public Instruction, 2011
2011-01-01
Wisconsin's adoption of the Common Core State Standards provides an excellent opportunity for Wisconsin school districts and communities to define expectations from birth through preparation for college and work. By aligning the existing Wisconsin Model Early Learning Standards with the Wisconsin Common Core State Standards, expectations can be…
Impersonating the Standard Model Higgs boson: Alignment without decoupling
Carena, Marcela; Low, Ian; Shah, Nausheen R.; Wagner, Carlos E. M.
2014-04-03
In models with an extended Higgs sector there exists an alignment limit, in which the lightest CP-even Higgs boson mimics the Standard Model Higgs. The alignment limit is commonly associated with the decoupling limit, where all non-standard scalars are significantly heavier than the Z boson. However, alignment can occur irrespective of the mass scale of the rest of the Higgs sector. In this work we discuss the general conditions that lead to “alignment without decoupling”, therefore allowing for the existence of additional non-standard Higgs bosons at the weak scale. The values of tan β for which this happens are derived in terms of the effective Higgs quartic couplings in general two-Higgs-doublet models as well as in supersymmetric theories, including the MSSM and the NMSSM. In addition, we study the information encoded in the variations of the SM Higgs-fermion couplings to explore regions in the m_{A} – tan β parameter space.
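The alignment condition described in this abstract can be stated compactly in standard two-Higgs-doublet notation (a textbook relation, not specific to this paper): the light CP-even state h couples to vector bosons as

```latex
g_{hVV} \;=\; \sin(\beta-\alpha)\, g_{hVV}^{\mathrm{SM}},
\qquad \text{alignment limit: } \cos(\beta-\alpha)\to 0 ,
```

so h becomes SM-like whenever cos(β − α) vanishes, whether or not the remaining scalars are heavy; decoupling (m_A → ∞) is only one route to this limit.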
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models.
King, Zachary A; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A; Ebrahim, Ali; Palsson, Bernhard O; Lewis, Nathan E
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.
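As a sketch of what programmatic access to such a repository looks like, the snippet below builds a request URL for the service's REST interface and extracts a few standardized fields from a model record. Only the base URL comes from the text; the `/api/v2` path and the JSON field names are assumptions for illustration, not a documented schema.

```python
import json

# Base URL is taken from the text; the "/api/v2" path and the JSON field
# names below are assumptions for illustration, not a documented schema.
BIGG_BASE = "http://bigg.ucsd.edu/api/v2"

def model_url(model_id: str) -> str:
    """Build the (assumed) REST URL for one genome-scale model record."""
    return f"{BIGG_BASE}/models/{model_id}"

def summarize_model(payload: str) -> dict:
    """Pick a few standardized fields out of a model's JSON record."""
    record = json.loads(payload)
    return {
        "id": record.get("bigg_id"),
        "organism": record.get("organism"),
        "reactions": record.get("reaction_count"),
        "metabolites": record.get("metabolite_count"),
    }

# Hypothetical record, shaped like the standardized identifiers the
# repository exposes (field names are assumptions).
sample = ('{"bigg_id": "e_coli_core", "organism": "Escherichia coli", '
          '"reaction_count": 95, "metabolite_count": 72}')
print(summarize_model(sample)["id"])  # → e_coli_core
```

Because reaction and metabolite identifiers are standardized across models, a summary like this is directly comparable between any two records retrieved this way.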
Search for physics beyond the Standard Model using jet observables
NASA Astrophysics Data System (ADS)
Kousouris, Konstantinos
2015-11-01
Jet observables have been exploited extensively during the LHC Run 1 to search for physics beyond the Standard Model. In this article, the most recent results from the ATLAS and CMS collaborations are summarized. Data from proton-proton collisions at 7 and 8 TeV center-of-mass energy have been analyzed to study monojet, dijet, and multijet final states, searching for a variety of new physics signals that include colored resonances, contact interactions, extra dimensions, and supersymmetric particles. The exhaustive searches with jets in Run 1 did not reveal any signal, and the results were used to put stringent exclusion limits on the new physics models.
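Dijet resonance searches of the kind summarized above scan the invariant-mass spectrum of the two leading jets for a localized excess. A minimal sketch of the underlying kinematics, with illustrative (not experimental) jet values:

```python
import math

def four_vector(pt, eta, phi, m=0.0):
    """Convert collider coordinates (pT, eta, phi, mass) to (E, px, py, pz)."""
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px ** 2 + py ** 2 + pz ** 2 + m ** 2)
    return (e, px, py, pz)

def invariant_mass(j1, j2):
    """Dijet invariant mass m = sqrt((E1+E2)^2 - |p1+p2|^2)."""
    e = j1[0] + j2[0]
    px = j1[1] + j2[1]
    py = j1[2] + j2[2]
    pz = j1[3] + j2[3]
    return math.sqrt(max(e ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

# Two back-to-back 500 GeV jets at central rapidity: a ~1 TeV resonance candidate.
jet1 = four_vector(500.0, 0.1, 0.0)
jet2 = four_vector(500.0, -0.1, math.pi)
print(invariant_mass(jet1, jet2))
```

A resonance search histograms this quantity over many events and compares the spectrum to the smoothly falling QCD expectation.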
Towards ISO Standard Earth Ionosphere and Plasmasphere Model
NASA Astrophysics Data System (ADS)
Bilitza, Dieter; Gulyaeva, Tamara
2012-07-01
Space exploration has been identified by several governments as a priority for their space agencies and commercial industry. Good knowledge and specification of the ionosphere and plasmasphere are key elements for achieving this goal in the design and operation of space vehicles, remote sensing, and reliable communication and navigation. The International Organization for Standardization (ISO) recommends the International Reference Ionosphere (IRI) for the specification of ionospheric plasma densities and temperatures, and lists several plasmasphere models for extending IRI to plasmaspheric altitudes, as described in the ISO Technical Specification ISO/TS 16457:2009. IRI is an international project sponsored jointly by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI). The buildup of the IRI electron density profile in the bottomside and topside ionosphere, and its extension to the plasmasphere, are discussed in the paper. We report on the current status of the ISO standardization process for IRI. A Draft International Standard (DIS) document was prepared and circulated widely; feedback and comments led to the latest revision of the document. We also present a brief review of IRI-related activities and model status.
Standardization of Thermo-Fluid Modeling in Modelica.Fluid
Franke, Rüdiger; Casella, Francesco; Sielemann, Michael; Prölß, Katrin; Otter, Martin; Wetter, Michael
2009-09-01
This article discusses the Modelica.Fluid library that has been included in the Modelica Standard Library 3.1. Modelica.Fluid provides interfaces and basic components for the device-oriented modeling of one-dimensional thermo-fluid flow in networks containing vessels, pipes, fluid machines, valves, and fittings. A unique feature of Modelica.Fluid is that the component equations, the media models, and the pressure-loss and heat-transfer correlations are decoupled from each other. All components are implemented such that they can be used with media from the Modelica.Media library. This means that an incompressible or compressible medium, or a single- or multiple-substance medium with one or more phases, can be used with one and the same component model, as long as the modeling assumptions hold. Furthermore, trace substances are supported. Modeling assumptions can be configured globally in an outer System object. This covers in particular the initialization, uni- or bi-directional flow, and the dynamic or steady-state formulation of the mass, energy, and momentum balances. All assumptions can be refined locally for every component. While Modelica.Fluid contains a reasonable set of component models, the goal of the library is not to provide a comprehensive set of models, but rather to provide interfaces and best practices for the treatment of issues such as connector design and the implementation of energy, mass, and momentum balances. Applications from various domains are presented.
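The decoupling of component equations from media models described above is a general design principle; the Python sketch below (hypothetical class names, not Modelica code) illustrates it by writing a pipe component once and supplying different medium models to it.

```python
# Sketch of the component/medium decoupling described in the text:
# the Pipe's equations are written once and delegate all fluid
# properties to whatever medium model is supplied. Class names here
# are hypothetical, chosen for this illustration.

class IdealGas:
    """Minimal medium model: density from p and T via the ideal-gas law."""
    R = 287.0  # J/(kg K), specific gas constant for air

    def density(self, p, T):
        return p / (self.R * T)

class IncompressibleLiquid:
    """Minimal medium model with a fixed density."""
    def __init__(self, rho):
        self.rho = rho

    def density(self, p, T):
        return self.rho  # independent of p and T

class Pipe:
    """Component equations are independent of the medium supplied."""
    def __init__(self, medium, area):
        self.medium = medium
        self.area = area

    def mass_flow(self, p, T, velocity):
        # m_dot = rho * A * v, with rho delegated to the medium model
        return self.medium.density(p, T) * self.area * velocity

air_pipe = Pipe(IdealGas(), area=0.01)
water_pipe = Pipe(IncompressibleLiquid(1000.0), area=0.01)
print(air_pipe.mass_flow(101325.0, 293.15, 2.0))
print(water_pipe.mass_flow(101325.0, 293.15, 2.0))
```

The same separation is what lets a Modelica.Fluid component accept any Modelica.Media medium, as long as the component's modeling assumptions hold for that medium.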
On a radiative origin of the Standard Model from trinification
NASA Astrophysics Data System (ADS)
Camargo-Molina, José Eliel; Morais, António P.; Pasechnik, Roman; Wessén, Jonas
2016-09-01
In this work, we present a trinification-based grand unified theory incorporating a global SU(3) family symmetry that, after spontaneous breaking, leads to a left-right symmetric model. Already at the classical level, this model can accommodate the matter content and the quark Cabibbo mixing of the Standard Model (SM) with only one Yukawa coupling at the unification scale. Considering the minimal low-energy scenario with the least amount of light states, we show that the resulting effective theory enables dynamical breaking of its gauge group down to that of the SM by means of radiative corrections, accounted for by the renormalisation-group evolution at one loop. This result paves the way for a consistent explanation of the SM breaking scale and the fermion mass hierarchies.
Light fermion masses in superstring derived standard-like models
NASA Astrophysics Data System (ADS)
Faraggi, Alon E.
1994-06-01
I discuss the suppression of the lightest-generation fermion mass terms in realistic superstring standard-like models in the free-fermionic formulation. The suppression of the mass terms is a consequence of horizontal symmetries that arise due to the Z_2 × Z_2 orbifold compactification. In a specific toy model, I investigate the possibility of resolving the strong CP puzzle by a highly suppressed up-quark mass. In some scenarios the up-quark mass may be as small as 10^-8 MeV. I show that in the specific model the suppression of the up-quark mass is incompatible with the requirement of a nonvanishing electron mass, and I discuss how this situation may be remedied.
The Beyond the standard model working group: Summary report
G. Azuelos et al.
2004-03-18
In this working group we have investigated a number of aspects of searches for new physics beyond the Standard Model (SM) at the running or planned TeV-scale colliders. For the most part, we have considered hadron colliders, as they will define particle physics at the energy frontier for the next ten years at least. The variety of models for Beyond the Standard Model (BSM) physics has grown immensely. It is clear that only future experiments can provide the needed direction to clarify the correct theory. Thus, our focus has been on exploring the extent to which hadron colliders can discover and study BSM physics in various models. We have placed special emphasis on scenarios in which the new signal might be difficult to find or of a very unexpected nature. For example, in the context of supersymmetry (SUSY), we have considered: how to make fully precise predictions for the Higgs bosons as well as the superparticles of the Minimal Supersymmetric Standard Model (MSSM) (parts III and IV); MSSM scenarios in which most or all SUSY particles have rather large masses (parts V and VI); the ability to sort out the many parameters of the MSSM using a variety of signals and study channels (part VII); whether the no-lose theorem for MSSM Higgs discovery can be extended to the next-to-minimal Supersymmetric Standard Model (NMSSM) in which an additional singlet superfield is added to the minimal collection of superfields, potentially providing a natural explanation of the electroweak value of the parameter μ (part VIII); sorting out the effects of CP violation using Higgs plus squark associate production (part IX); the impact of lepton flavor violation of various kinds (part X); experimental possibilities for the gravitino and its sgoldstino partner (part XI); what the implications for SUSY would be if the NuTeV signal for di-muon events were interpreted as a sign of R-parity violation (part XII). Our other main focus was on the phenomenological implications of extra
Electroweak precision data and the Lee-Wick standard model
Underwood, Thomas E. J.; Zwicky, Roman
2009-02-01
We investigate the electroweak precision constraints on the recently proposed Lee-Wick standard model at tree level. We analyze low-energy, Z-pole (LEP1/SLC) and LEP2 data separately. We derive the exact tree-level low-energy and Z-pole effective Lagrangians from both the auxiliary field and higher derivative formulations of the theory. For the LEP2 data we use the fact that the Lee-Wick standard model belongs to the class of models that assume a so-called 'universal' form, which can be described by seven oblique parameters at leading order in m_W^2/M_{1,2}^2. At tree level we find that Y = -m_W^2/M_1^2 and W = -m_W^2/M_2^2, where the negative sign is due to the presence of the negative-norm states. All other oblique parameters (S, X) and (T, U, V) are found to be zero. In the addendum we show how our results differ from previous investigations, where contact terms, which are found to be of leading order, had been neglected. The LEP1/SLC constraints are slightly stronger than LEP2 and much stronger than the low-energy ones. The LEP1/SLC results exclude gauge boson masses of M_1 ≈ M_2 ≈ 3 TeV at the 99% confidence level. Somewhat lower masses are possible when one of the masses assumes a large value. Loop corrections to the electroweak observables are suppressed by the standard ~1/(4π)^2 factor and are therefore not expected to change the constraints on M_1 and M_2. This assertion is most transparent in the higher derivative formulation of the theory.
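The tree-level relations quoted above lend themselves to a quick numerical check. The sketch below evaluates Y and W for a few illustrative Lee-Wick masses; the mass values and m_W = 80.4 GeV are assumptions chosen for illustration, not fits from the paper.

```python
# Tree-level Lee-Wick oblique parameters, as quoted in the abstract:
#   Y = -m_W^2 / M_1^2,   W = -m_W^2 / M_2^2
# (the negative signs reflect the negative-norm Lee-Wick states).
# The Lee-Wick masses chosen below are illustrative assumptions.

M_W = 80.4  # W boson mass in GeV (approximate)

def oblique_Y(m1_gev):
    """Y oblique parameter for Lee-Wick hypercharge partner mass M_1 (GeV)."""
    return -(M_W / m1_gev) ** 2

def oblique_W(m2_gev):
    """W oblique parameter for Lee-Wick SU(2) partner mass M_2 (GeV)."""
    return -(M_W / m2_gev) ** 2

if __name__ == "__main__":
    for m in (1000.0, 3000.0, 10000.0):
        print(f"M = {m/1000:.0f} TeV: Y (or W) = {oblique_Y(m):.2e}")
```

Note how the parameters fall off as 1/M^2, which is why the ~3 TeV exclusion quoted above corresponds to |Y|, |W| of order 10^-3.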
A theorem on the Higgs sector of the Standard Model
NASA Astrophysics Data System (ADS)
Frasca, Marco
2016-06-01
We provide the solution of the classical theory for the Higgs sector of the Standard Model, obtaining the exact Green's function for the broken phase. Solving the Dyson-Schwinger equations for the Higgs field, we show that the propagator coincides with that of the classical theory, confirming the spectrum at the quantum level as well. In this way we obtain a proof of triviality using the Källén-Lehmann representation. As a consequence, higher excited states must exist for the Higgs particle, representing an internal spectrum for it. These higher excited states have exponentially smaller amplitudes, so their production rates are strongly suppressed.
Quantum corrections in Higgs inflation: the Standard Model case
NASA Astrophysics Data System (ADS)
George, Damien P.; Mooij, Sander; Postma, Marieke
2016-04-01
We compute the one-loop renormalization group equations for Standard Model Higgs inflation. The calculation is done in the Einstein frame, using a covariant formalism for the multi-field system. All counterterms, and thus the beta functions, can be extracted from the radiative corrections to the two-point functions; the calculation of higher n-point functions then serves as a consistency check of the approach. We find that the theory is renormalizable in the effective field theory sense in the small-, mid- and large-field regimes. In the large-field regime our results differ slightly from those found in the literature, due to a different treatment of the Goldstone bosons.
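As a flat-space point of reference for the running discussed above, one can evaluate the standard one-loop SM beta function for the Higgs quartic coupling. The expression and the rough electroweak-scale input values below are textbook flat-space assumptions, not the Einstein-frame results derived in the paper.

```python
import math

# One-loop flat-space SM beta function for the Higgs quartic coupling λ
# (standard textbook form; g and g' are the SU(2) and U(1) gauge couplings,
# y_t the top Yukawa). Input values are rough EW-scale numbers (assumed).

def beta_lambda(lam, yt, g, gp):
    """dλ/d(ln μ) at one loop in the flat-space Standard Model."""
    b = (24 * lam**2 + 12 * lam * yt**2 - 6 * yt**4
         - 3 * lam * (3 * g**2 + gp**2)
         + (3.0 / 8.0) * (2 * g**4 + (g**2 + gp**2) ** 2))
    return b / (16 * math.pi**2)

if __name__ == "__main__":
    b = beta_lambda(lam=0.13, yt=0.94, g=0.65, gp=0.36)
    print(f"beta_lambda ~ {b:.4f}")  # negative: top loops drive λ downward
```

The negative sign at the electroweak scale (the -6 y_t^4 term dominates) is what drives the well-known question of Higgs vacuum stability at high scales.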
What is Air? A Standard Model for Combustion Simulations
Cloutman, L D
2001-08-01
Most combustion devices use air as the oxidizer. Thus, reactive flow simulations of these devices require the specification of the composition of air as part of the physicochemical input. A mixture of only oxygen and nitrogen is often used, although in reality air is a more complex mixture of somewhat variable composition. We summarize some useful parameters describing a standard model of dry air. We then consider modifications to include water vapor at the desired level of humidity. The 'minor' constituents of air, especially argon and water vapor, can affect the composition by as much as about 5 percent in the mole fractions.
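The kind of "standard air" specification described above can be sketched as follows. The mole fractions are commonly quoted values for dry air (assumed here, not taken from the report itself), and humidity is added by diluting the dry mixture with a water-vapor mole fraction.

```python
# Illustrative sketch of a standard dry-air composition and its humid
# modification. Mole fractions are commonly quoted reference values
# (assumptions for illustration, not values from the report above).

DRY_AIR = {          # species: mole fraction in dry air
    "N2": 0.78084,
    "O2": 0.20946,
    "Ar": 0.00934,
    "CO2": 0.00036,
}

MOLAR_MASS = {"N2": 28.013, "O2": 31.999, "Ar": 39.948,
              "CO2": 44.010, "H2O": 18.015}  # g/mol

def humidify(dry, x_h2o):
    """Dilute dry-air mole fractions with a water-vapor mole fraction x_h2o."""
    humid = {sp: x * (1.0 - x_h2o) for sp, x in dry.items()}
    humid["H2O"] = x_h2o
    return humid

def mean_molar_mass(mix):
    """Mean molar mass (g/mol) of a mixture given as mole fractions."""
    return sum(x * MOLAR_MASS[sp] for sp, x in mix.items())

if __name__ == "__main__":
    print(f"dry air:   M = {mean_molar_mass(DRY_AIR):.3f} g/mol")
    wet = humidify(DRY_AIR, 0.02)  # ~2% water vapor by moles
    print(f"humid air: M = {mean_molar_mass(wet):.3f} g/mol")
```

Because water is much lighter than N2 or O2, even a few percent of vapor visibly lowers the mean molar mass, which is one way the "minor" constituents affect the thermochemistry.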
Standard model and supersymmetric Higgs searches at CDF
Kilminster, Ben; /Ohio State U.
2005-10-01
We present results of searches for SM and MSSM Higgs boson production in proton-antiproton collisions at √s = 1.96 TeV with the CDF detector. The Higgs bosons are searched for in various production and decay channels, with data samples corresponding to 400 pb^-1. Using these measurements, we set an upper limit on the production cross section times branching fraction for the Standard Model Higgs as a function of the Higgs mass, and we obtain exclusion regions in the tan β vs. mass plane for the neutral MSSM Higgs and in the branching fraction vs. mass plane for the charged Higgs.
Strong CP problem with 10^32 standard model copies.
Dvali, Gia; Farrar, Glennys R
2008-07-01
We show that a recently proposed solution to the hierarchy problem simultaneously solves the strong CP problem, without requiring an axion or any further new physics. Consistency of black hole physics implies a nontrivial relation between the number of particle species and particle masses, so that with approximately 10^32 copies of the standard model, the TeV scale is naturally explained. At the same time, as shown here, this setup predicts a typical expected value of the strong-CP parameter in QCD of θ ~ 10^-9. This strongly motivates a more sensitive measurement of the neutron electric dipole moment. PMID:18764102
Future high precision experiments and new physics beyond Standard Model
Luo, Mingxing
1993-04-01
High precision (< 1%) electroweak experiments that have been done, or are likely to be done in this decade, are examined on the basis of Standard Model (SM) predictions of fourteen weak neutral current observables and fifteen W and Z properties at the one-loop level. The implications of the corresponding experimental measurements for various types of possible new physics that enter at the tree or loop level are investigated. Certain experiments appear to have special promise as probes of the new physics considered here.
Electroweak baryogenesis in the exceptional supersymmetric standard model
Chao, Wei
2015-08-28
We study electroweak baryogenesis in the E_6-inspired exceptional supersymmetric standard model (E_6SSM). The relaxation coefficients driven by singlinos and the new gaugino, as well as the transport equation of the Higgs supermultiplet number density in the E_6SSM, are calculated. Our numerical simulation shows that the CP-violating source terms from the singlinos and from the new gaugino can each, on their own, give rise to the correct baryon asymmetry of the Universe via the electroweak baryogenesis mechanism.
Symmetries and the search for physics beyond the standard model
Anthony W. Thomas
2010-11-01
Beginning with an introduction to its significance, we briefly review the status of our knowledge of the strangeness content of the nucleon, both experimental and theoretical. We then recall how the success of the corresponding experimental program at JLab led to an unanticipated improvement in the precision with which we constrain the possible existence of parity-violating lepton-quark interactions beyond the Standard Model. This leads naturally to the consideration of the major outstanding discrepancy in the evolution of sin²θ_W. In particular, we explain how the nuclear modification of parton distributions, combined with charge symmetry violation, eliminates this discrepancy.
Dark Matter and Color Octets Beyond the Standard Model
Krnjaic, Gordan Zdenko
2012-07-01
Although the Standard Model (SM) of particles and interactions has survived forty years of experimental tests, it does not provide a complete description of nature. From cosmological and astrophysical observations, it is now clear that the majority of matter in the universe is not baryonic and interacts very weakly (if at all) via non-gravitational forces. The SM does not provide a dark matter candidate, so new particles must be introduced. Furthermore, recent Tevatron results suggest that SM predictions for benchmark collider observables are in tension with experimental observations. In this thesis, we will propose extensions to the SM that address each of these issues.
Searches for the standard model Higgs boson at the Tevatron
Dorigo, Tommaso; /Padua U.
2005-05-01
The CDF and D0 experiments at the Tevatron have searched for the Standard Model Higgs boson in data collected between 2001 and 2004. Upper limits have been placed on the production cross section times branching ratio to b b-bar pairs or W+W- pairs as a function of the Higgs boson mass. Projections indicate that the Tevatron experiments have a chance of discovering a M_H = 115 GeV Higgs with the total dataset foreseen by 2009, or excluding it at 95% C.L. up to a mass of 135 GeV.
NONGRAVITATIONAL FORCES ON COMETS: AN EXTENSION OF THE STANDARD MODEL
Aksnes, K.; Mysen, E.
2011-09-15
The accuracy of comet orbit computations is limited by uncertain knowledge of the recoil force due to outgassing from the nuclei. The standard model assumes an exponential dependence of the force on distance from the Sun. This variable force, times the constants A_1, A_2, and A_3, represents the radial, transverse, and normal components of the net force. Orbit solutions show that the A_i often vary considerably over a few apparitions of the comets. In this paper, we allow for time variations of the A_i, and we show that for several comets this improves the orbit accuracy considerably.
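The distance dependence in the standard (Marsden-style) nongravitational force model is usually written as a dimensionless function g(r), normalized to roughly 1 at 1 AU, that multiplies each A_i. The sketch below uses the widely quoted water-ice constants; they are an assumption here, not values taken from this paper.

```python
import math

# Marsden-style g(r) for cometary nongravitational acceleration:
#   a_i(r) = A_i * g(r),  i = 1, 2, 3 (radial, transverse, normal).
# Constants below are the commonly quoted water-ice values (assumed,
# not taken from the abstract above); g is normalized so g(1 AU) ~ 1.
ALPHA, R0, M, N, K = 0.1113, 2.808, 2.15, 5.093, 4.6142

def g(r_au):
    """Dimensionless nongravitational force law at heliocentric distance r (AU)."""
    x = r_au / R0
    return ALPHA * x**(-M) * (1.0 + x**N)**(-K)

if __name__ == "__main__":
    for r in (0.5, 1.0, 2.0, 4.0):
        print(f"g({r} AU) = {g(r):.4g}")
```

The steep falloff beyond r0 ≈ 2.8 AU encodes the shutdown of water-ice sublimation, which is why outgassing effects are concentrated near perihelion.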
Toward Standardizing a Lexicon of Infectious Disease Modeling Terms
Milwid, Rachael; Steriu, Andreea; Arino, Julien; Heffernan, Jane; Hyder, Ayaz; Schanzer, Dena; Gardner, Emma; Haworth-Brockman, Margaret; Isfeld-Kiely, Harpa; Langley, Joanne M.; Moghadas, Seyed M.
2016-01-01
Disease modeling is increasingly being used to evaluate the effect of health intervention strategies, particularly for infectious diseases. However, the utility and application of such models are hampered by the inconsistent use of infectious disease modeling terms between and within disciplines. We sought to standardize the lexicon of infectious disease modeling terms and develop a glossary of terms commonly used in describing models’ assumptions, parameters, variables, and outcomes. We combined a comprehensive literature review of relevant terms with an online forum discussion in a virtual community of practice, mod4PH (Modeling for Public Health). Using a convergent discussion process and consensus amongst the members of mod4PH, a glossary of terms was developed as an online resource. We anticipate that the glossary will improve inter- and intradisciplinary communication and will result in a greater uptake and understanding of disease modeling outcomes in health policy decision-making. We highlight the role of the mod4PH community of practice and the methodologies used in this endeavor to link theory, policy, and practice in the public health domain. PMID:27734014
Big bang nucleosynthesis: The standard model and alternatives
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
Big bang nucleosynthesis provides (with the microwave background radiation) one of the two quantitative experimental tests of the big bang cosmological model. This paper reviews the standard homogeneous-isotropic calculation and shows how it fits the light element abundances, ranging from He-4 at 24% by mass, through H-2 and He-3 at parts in 10^5, down to Li-7 at parts in 10^10. Furthermore, the recent Large Electron-Positron collider (LEP) (and Stanford Linear Collider, SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard scenario. Discussion is presented on the improved observational data as well as the improved neutron lifetime data. Alternate scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b.
Colorado Model Content Standards for Science: Suggested Grade Level Expectations.
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver.
This document outlines the content standards for science in the state of Colorado. The document is organized into six standards, each of which is subdivided into a set of guiding questions exemplifying the standard and a series of lists defining what is expected of students at each grade level within the standard. The standards are that students…
Electroweak symmetry breaking and beyond the standard model
Barklow, T.; Dawson, S.; Haber, H.E.; Siegrist, J.
1995-05-01
The development of the Standard Model of particle physics is a remarkable success story. Its many facets have been tested at present day accelerators; no significant unambiguous deviations have yet been found. In some cases, the model has been verified at an accuracy of better than one part in a thousand. This state of affairs presents our field with a challenge. Where do we go from here? What is our vision for future developments in particle physics? Are particle physicists' recent successes a signal of the field's impending demise, or do real long-term prospects exist for further progress? We assert that the long-term health and intellectual vitality of particle physics depends crucially on the development of a new generation of particle colliders that push the energy frontier by an order of magnitude beyond present capabilities. In this report, we address the scientific issues underlying this assertion.
Angular correlations in top quark decays in standard model extensions
Batebi, S.; Etesami, S. M.; Mohammadi-Najafabadi, M.
2011-03-01
The CMS Collaboration at the CERN LHC has searched for t-channel single top quark production using the spin correlation of the t-channel. The signal extraction and cross section measurement rely on the angular distribution of the charged lepton in top quark decays, i.e. the angle between the charged lepton momentum and the top spin in the top rest frame. This angular distribution has a distinct slope for the t-channel single top (signal), while it is flat for the backgrounds. In this Brief Report, we investigate the contributions which this spin correlation may receive from a two-Higgs-doublet model, a top-color-assisted technicolor model (TC2), and the noncommutative extension of the standard model.
Alive and well: A short review about standard solar models
NASA Astrophysics Data System (ADS)
Serenelli, Aldo
2016-04-01
Standard solar models (SSMs) provide a reference framework across a number of research fields, with solar and stellar modeling, solar neutrinos, and particle physics among the most conspicuous. The accuracy of the physical description of the global properties of the Sun that SSMs provide has been challenged in the last decade by a number of developments in stellar spectroscopic techniques. Over the same period, solar neutrino experiments, Borexino in particular, have measured the four solar neutrino fluxes from the pp-chains, which are associated with 99% of the nuclear energy generated in the Sun. Borexino has also set the most stringent limit on CNO energy generation, only ~40% larger than predicted by SSMs. More recently, and for the first time, radiative opacity experiments have been performed at conditions that closely resemble those at the base of the solar convective envelope. In this article, we review these developments and discuss the current status of SSMs, including their intrinsic limitations.
Z' boson detection in the minimal quiver standard model
Berenstein, D.; Martinez, R.; Ochoa, F.; Pinansky, S.
2009-05-01
We undertake a phenomenological study of the extra neutral Z' boson in the minimal quiver standard model and discuss limits on the model's parameters from previous precision electroweak experiments, as well as detection prospects at the Large Hadron Collider at CERN. We find that masses below around 700 GeV are excluded by the Z-pole data from the CERN LEP collider, and below 620 GeV by experimental data from di-electron events at the Fermilab Tevatron collider. We also find that at a mass of 1 TeV the LHC cross section would show a small peak in the di-lepton and top-pair channels.
How to use the Standard Model effective field theory
NASA Astrophysics Data System (ADS)
Henning, Brian; Lu, Xiaochuan; Murayama, Hitoshi
2016-01-01
We present a practical three-step procedure of using the Standard Model effective field theory (SM EFT) to connect ultraviolet (UV) models of new physics with weak scale precision observables. With this procedure, one can interpret precision measurements as constraints on a given UV model. We give a detailed explanation for calculating the effective action up to one-loop order in a manifestly gauge covariant fashion. This covariant derivative expansion method dramatically simplifies the process of matching a UV model with the SM EFT, and also makes available a universal formalism that is easy to use for a variety of UV models. A few general aspects of RG running effects and choosing operator bases are discussed. Finally, we provide mapping results between the bosonic sector of the SM EFT and a complete set of precision electroweak and Higgs observables to which present and near future experiments are sensitive. Many results and tools which should prove useful to those wishing to use the SM EFT are detailed in several appendices.
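Schematically, the covariant derivative expansion step referred to above evaluates the one-loop effective action in a manifestly gauge-covariant form. The expression below is the generic textbook form of this construction, reproduced as an illustration rather than from the paper itself:

```latex
% One-loop effective action from integrating out a heavy field of mass m,
% kept manifestly gauge covariant. Here P_\mu \equiv i D_\mu, U(x) collects
% the field-dependent mass terms, and c_s is a spin/statistics factor
% (e.g. 1/2 for a real scalar, -1 for a Dirac fermion).
\Gamma_{\text{1-loop}} = i\, c_s \,\mathrm{Tr}\,\ln\!\left(-P^2 + m^2 + U(x)\right)
% Expanding the logarithm in powers of P_\mu/m and U/m generates the local
% operators of the SM EFT, with coefficients suppressed by powers of 1/m.
```

Because P_μ and U are kept covariant throughout, each term of the expansion is automatically a gauge-invariant operator, which is the simplification the abstract emphasizes.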
Long-term archiving and data access: modelling and standardization
NASA Technical Reports Server (NTRS)
Hoc, Claude; Levoir, Thierry; Nonon-Latapie, Michel
1996-01-01
This paper reports on the multiple difficulties inherent in the long-term archiving of digital data, and in particular on the different possible causes of definitive data loss. It defines the basic principles which must be respected when creating long-term archives. Such principles concern both the archival systems and the data. The archival systems should have two primary qualities: independence of architecture with respect to technological evolution, and genericity, i.e., the capability of ensuring identical service for heterogeneous data. These characteristics are implicit in the Reference Model for Archival Services, currently being designed within an ISO-CCSDS framework. A system prototype has been developed at the French Space Agency (CNES) in conformance with these principles, and its main characteristics will be discussed in this paper. Moreover, the data archived should be capable of abstract representation regardless of the technology used, and should, to the extent possible, be organized, structured and described with the help of existing standards. The immediate advantage of standardization is illustrated by several concrete examples. Both the positive facets and the limitations of this approach are analyzed. The advantages of developing an object-oriented data model within this context are then examined.
Penguin-like diagrams from the standard model
NASA Astrophysics Data System (ADS)
Ping, Chia Swee
2015-04-01
The Standard Model is highly successful in describing the interactions of leptons and quarks. There are, however, rare processes that involve higher-order effects in electroweak interactions. One specific class of processes is the penguin-like diagram. This class of diagrams involves the neutral change of quark flavours accompanied by the emission of a gluon (gluon penguin), a photon (photon penguin), a gluon and a photon (gluon-photon penguin), a Z boson (Z penguin), or a Higgs boson (Higgs penguin). Such diagrams do not arise at tree level in the Standard Model. They are, however, induced by one-loop effects. In this paper, we present an exact calculation of the penguin diagram vertices in the 't Hooft-Feynman gauge. Renormalization of the vertex is effected by a prescription due to Chia and Chong, which gives an expression for the counterterm identical to that obtained by employing the Ward-Takahashi identity. The on-shell vertex functions for the penguin diagram vertices are obtained. The various penguin diagram vertex functions are related to one another via the Ward-Takahashi identity. From these, a set of relations is obtained connecting the vertex form factors of the various penguin diagrams. Explicit expressions for the gluon-photon penguin vertex form factors are obtained, and their contributions to flavor-changing processes estimated.
Supersymmetry and String Theory: Beyond the Standard Model
NASA Astrophysics Data System (ADS)
Dine, Michael
2007-01-01
The past decade has witnessed dramatic developments in the field of theoretical physics. This book is a comprehensive introduction to these recent developments. It contains a review of the Standard Model, covering non-perturbative topics, and a discussion of grand unified theories and magnetic monopoles. It introduces the basics of supersymmetry and its phenomenology, and includes dynamics, dynamical supersymmetry breaking, and electric-magnetic duality. The book then covers general relativity and the big bang theory, and the basic issues in inflationary cosmologies, before discussing the spectra of known string theories and the features of their interactions. The book also includes brief introductions to technicolor, large extra dimensions, and the Randall-Sundrum theory of warped spaces. It will be of great interest to graduates and researchers in the fields of particle theory, string theory, astrophysics and cosmology. The book contains several exercises and problems; password-protected solutions are available to lecturers at www.cambridge.org/9780521858410.
Gravitational mass-shift effect in the standard model
NASA Astrophysics Data System (ADS)
Kazinski, P. O.
2012-02-01
The gravitational mass-shift effect is investigated in the framework of the standard model with the energy cutoff regularization, both for stationary and nonstationary backgrounds, at the one-loop level. The problem of the singularity of the effective potential of the Higgs field on the horizon of a black hole, which was reported earlier, is resolved. The equations characterizing the properties of a vacuum state are derived and solved in a certain approximation for the Schwarzschild black hole. The gravitational mass-shift effect is completely described in this case. The behavior of the masses of the massive particles of the standard model depends on the value of the Higgs boson mass in flat spacetime. If the Higgs boson mass in flat spacetime is less than 263.6 GeV, then the mass of any massive particle approaching a gravitating object grows. If the Higgs boson mass in flat spacetime is greater than or equal to 278.2 GeV, the masses of all the massive particles decrease in a strong gravitational field. Higgs boson masses lying between these two values prove to lead to instability, at least at the one-loop level, and so they are excluded. It turns out that the vacuum possesses the same properties as an ultrarelativistic fluid in a certain approximation. Expressions for the entropy and enthalpy densities and for the pressure of this fluid are obtained. The sound speed in this fluid is also derived.
TYPE II SUPERNOVAE: MODEL LIGHT CURVES AND STANDARD CANDLE RELATIONSHIPS
Kasen, Daniel; Woosley, S. E.
2009-10-01
A survey of Type II supernova explosion models has been carried out to determine how their light curves and spectra vary with mass, metallicity, and explosion energy. The presupernova models are taken from a recent survey of massive stellar evolution at solar metallicity, supplemented by new calculations at subsolar metallicity. Explosions are simulated by the motion of a piston near the edge of the iron core, and the resulting light curves and spectra are calculated using full multi-wavelength radiation transport. Formulae are developed that describe approximately how the model observables (light curve luminosity and duration) scale with the progenitor mass, explosion energy, and radioactive nucleosynthesis. Comparison with observational data shows that the explosion energy of typical supernovae (as measured by kinetic energy at infinity) varies by nearly an order of magnitude, from 0.5 to 4.0 × 10^51 erg, with a typical value of ~0.9 × 10^51 erg. Despite the large variation, the models exhibit a tight relationship between luminosity and expansion velocity, similar to that previously employed empirically to make SNe IIP standardized candles. This relation is explained by the simple behavior of hydrogen recombination in the supernova envelope, but we find a sensitivity to progenitor metallicity and mass that could lead to systematic errors. Additional correlations between light curve luminosity, duration, and color might enable the use of SNe IIP to obtain distances accurate to ~20% using only photometric data.
ERIC Educational Resources Information Center
Marshall, Victor W.; Haldemann, Verena
1995-01-01
Marshall reviews four types of social models of aging: allocation, construction of life course, personality and socialization, and negotiation, concluding that the life course perspective dominates. Haldemann comments (in French) that broader research is needed to question this dominance; Marshall responds that his goal was to describe, not to…
Application of standards and models in body composition analysis.
Müller, Manfred J; Braun, Wiebke; Pourhassan, Maryam; Geisler, Corinna; Bosy-Westphal, Anja
2016-05-01
The aim of this review is to extend present concepts of body composition and to integrate it into physiology. In vivo body composition analysis (BCA) has a sound theoretical and methodological basis. Present methods used for BCA are reliable and valid. Individual data on body components, organs and tissues are included into different models, e.g. a 2-, 3-, 4- or multi-component model. Today the so-called 4-compartment model as well as whole-body MRI (or computed tomography) scans are considered the gold standards of BCA. In practice the use of the appropriate method depends on the question of interest and the accuracy needed to address it. Body composition data are descriptive and used for normative analyses (e.g. generating normal values, centiles and cut-offs). Advanced models of BCA go beyond description and normative approaches. The concept of functional body composition (FBC) takes into account the relationships between individual body components, organs and tissues and related metabolic and physical functions. FBC can be further extended to the model of healthy body composition (HBC), based on horizontal (i.e. structural) and vertical (e.g. metabolism and its neuroendocrine control) relationships between individual components as well as between components and body functions, using mathematical modelling with a hierarchical multi-level, multi-scale approach at the software level. HBC integrates into whole body systems of cardiovascular, respiratory, hepatic and renal functions. To conclude, BCA is a prerequisite for detailed phenotyping of individuals, providing a sound basis for in-depth biomedical research and clinical decision making.
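As a concrete illustration of a 4-compartment calculation, the sketch below estimates fat mass from body volume, total body water, and bone mineral content. The coefficients follow one published variant of the 4C equation (a Fuller et al.-style formulation) and are illustrative assumptions, not values from this review.

```python
# Hypothetical 4-compartment (4C) body composition sketch.
# Fat mass (kg) is estimated from body volume BV (L), total body water
# TBW (kg), bone mineral content BMC (kg) and body mass BM (kg).
# The coefficients follow one published 4C variant and are assumptions here.

def fat_mass_4c(bv_l, tbw_kg, bmc_kg, bm_kg):
    """Fat mass (kg) from a 4-compartment equation (illustrative coefficients)."""
    return 2.747 * bv_l - 0.710 * tbw_kg + 1.460 * bmc_kg - 2.050 * bm_kg

def fat_fraction(bv_l, tbw_kg, bmc_kg, bm_kg):
    """Fat mass as a fraction of body mass."""
    return fat_mass_4c(bv_l, tbw_kg, bmc_kg, bm_kg) / bm_kg

if __name__ == "__main__":
    # Plausible values for a 75 kg adult (illustrative only):
    # BV from densitometry, TBW from dilution, BMC from DXA.
    fm = fat_mass_4c(bv_l=71.0, tbw_kg=42.0, bmc_kg=3.0, bm_kg=75.0)
    print(f"fat mass ~ {fm:.1f} kg ({100 * fm / 75.0:.0f}% of body mass)")
```

The point of the 4C approach is visible in the structure of the equation: each independent measurement (volume, water, mineral) removes one assumption that a 2-component density model would otherwise have to make.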
Wisconsin's Model Academic Standards for Art and Design Education.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
This Wisconsin academic standards guide for art and design explains what is meant by academic standards. The guide declares that academic standards specify what students should know and be able to do; what students might be asked to do to give evidence of standards; how well students must perform; and that content, performance, and proficiency…
Evolution of Climate Science Modelling Language within international standards frameworks
NASA Astrophysics Data System (ADS)
Lowe, Dominic; Woolf, Andrew
2010-05-01
The Climate Science Modelling Language (CSML) was originally developed as part of the NERC Data Grid (NDG) project in the UK. It was one of the first Geography Markup Language (GML) application schemas describing complex feature types for the metocean domain. CSML feature types can be used to describe typical climate products such as model runs or atmospheric profiles. CSML has been successfully used within NDG to provide harmonised access to a number of different data sources. For example, meteorological observations held in heterogeneous databases by the British Atmospheric Data Centre (BADC) and Centre for Ecology and Hydrology (CEH) were served uniformly as CSML features via Web Feature Service. CSML has now been substantially revised to harmonise it with the latest developments in OGC and ISO conceptual modelling for geographic information. In particular, CSML is now aligned with the near-final ISO 19156 Observations & Measurements (O&M) standard. CSML combines the O&M concept of 'sampling features' together with an observation result based on the coverage model (ISO 19123). This general pattern is specialised for particular data types of interest, classified on the basis of sampling geometry and topology. In parallel work, the OGC Met Ocean Domain Working Group has established a conceptual modelling activity. This is a cross-organisational effort aimed at reaching consensus on a common core data model that could be re-used in a number of met-related application areas: operational meteorology, aviation meteorology, climate studies, and the research community. It is significant to note that this group has also identified sampling geometry and topology as a key classification axis for data types. Using the Model Driven Architecture (MDA) approach as adopted by INSPIRE we demonstrate how the CSML application schema is derived from a formal UML conceptual model based on the ISO TC211 framework. By employing MDA tools which map consistently between UML and GML we
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
NASA Astrophysics Data System (ADS)
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increasing number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira (Greece) and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate the evolution of the tsunami wave from the deep ocean to its target site, numerically. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have been validated themselves with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory measurements and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes
New extended standard model, dark matters and relativity theory
NASA Astrophysics Data System (ADS)
Hwang, Jae-Kwang
2016-03-01
Three-dimensional quantized space model is newly introduced as the extended standard model. Four three-dimensional quantized spaces with total 12 dimensions are used to explain the universes including ours. Electric (EC), lepton (LC) and color (CC) charges are defined to be the charges of the x1x2x3, x4x5x6 and x7x8x9 warped spaces, respectively. Then, the lepton is the xi(EC)-xj(LC) correlated state, which makes 3×3 = 9 leptons, and the quark is the xi(EC)-xj(LC)-xk(CC) correlated state, which makes 3×3×3 = 27 quarks. Three new bastons with the xi(EC) state are proposed as the dark matters seen in the x1x2x3 space, too. The matter universe question, three generations of the leptons and quarks, dark matter and dark energy, hadronization, the big bang, quantum entanglement, quantum mechanics and general relativity are briefly discussed in terms of this new model. The details can be found in the article titled ``Journey into the universe; three-dimensional quantized spaces, elementary particles and quantum mechanics'' at https://www.researchgate.net/profile/J_Hwang2.
Mathematical, Theoretical and Phenomenological Challenges Beyond the Standard Model
NASA Astrophysics Data System (ADS)
Djordjević, G.; Nešić, L.; Wess, Julius
2005-03-01
Integrable structures in the gauge/string correspondence -- Fluxes in M-theory on 7-manifolds: G2-, SU(3)- and SU(2)-structures -- Noncommutative quantum field theory: review and its latest achievements -- Shadows of quantum black holes -- Yukawa quasi-unification and inflation -- Supersymmetric grand unification: the quest for the theory -- Spin foam models of quantum gravity -- Riemann-Cartan space-time in stringy geometry -- Can black hole relax unitarily? -- Deformed coordinate spaces: derivatives -- Deformed coherent state solution to multiparticle stochastic processes -- Non-commutative GUTs, standard model and C, P, T properties from Seiberg-Witten map -- Seesaw, SUSY and SO(10) -- On the dynamics of BMN operators of finite size and the model of string bits -- Divergencies in θ-expanded noncommutative SU(2) Yang-Mills theory -- Heterotic string compactifications with fluxes -- Symmetries and supersymmetries of the Dirac-type operators on Euclidean Taub-NUT space -- Real and p-adic aspects of quantization of tachyons -- Skew-symmetric Lax polynomial matrices and integrable rigid body systems -- Supersymmetric quantum field theories -- Parastatistics algebras and combinatorics -- Noncommutative D-branes on group manifolds -- High-energy bounds on the scattering amplitude in noncommutative quantum field theory -- Many faces of D-branes: from flat space, via AdS to pp-waves.
CosPA 2015 and the Standard Model
NASA Astrophysics Data System (ADS)
Pauchy Hwang, W.-Y.
2016-07-01
In this keynote speech, I describe briefly “The Universe”, a journal/newsletter launched by APCosPA Organization, and my lifetime research on the Standard Model of particle physics. In this 21st Century, we should declare that we live in the quantum 4-dimensional Minkowski space-time with the force-fields gauge-group structure SUc(3) × SUL(2) × U(1) × SUf(3) built-in from the very beginning. This background can see the lepton world, of atomic sizes, and offers us the eyes to see other things. It also can see the quark world, of the Fermi sizes, and this fact makes this entire world much more interesting.
Baryon number dissipation at finite temperature in the standard model
Mottola, E.; Raby, S. (Dept. of Physics); Starkman, G. (Dept. of Astronomy)
1990-01-01
We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate, γ, is given in terms of real time correlation functions of the operator E·B, and is directly proportional to the sphaleron transition rate, Γ: γ ≲ n_f Γ/T³. Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs.
On the standard model group in F-theory
NASA Astrophysics Data System (ADS)
Choi, Kang-Sin
2014-06-01
We analyze the standard model gauge group constructed in F-theory. The non-Abelian part is described by a surface singularity of Kodaira type. Blow-up analysis shows that the non-Abelian part is distinguished from the naïve product of SU(3) and SU(2), but that it should be a rank three group along the chain of groups, because it has non-generic gauge symmetry enhancement structure responsible for desirable matter curves. The Abelian part is constructed from a globally valid two-form with the desired gauge quantum numbers, using a method similar to the decomposition (factorization) method of the spectral cover. This technique makes use of an extra section in the elliptic fiber of the Calabi-Yau manifold on which F-theory is compactified. Conventional gauge coupling unification is achieved, without requiring a threshold correction from the flux along the hypercharge direction.
Triple neutral gauge boson couplings in noncommutative Standard Model
NASA Astrophysics Data System (ADS)
Deshpande, N. G.; He, Xiao-Gang
2002-05-01
It has been shown recently that the triple neutral gauge boson couplings are not uniquely determined in noncommutative extension of the Standard Model (NCSM). Depending on specific schemes used, the couplings are different and may even be zero. To distinguish different realizations of the NCSM, additional information either from theoretical or experimental considerations is needed. In this Letter we show that these couplings can be uniquely determined from considerations of unification of electroweak and strong interactions. Using SU(5) as the underlying theory and integrating out the heavy degrees of freedom, we obtain unique non-zero new triple γγγ, γγZ, γZZ, ZZZ, γGG, ZGG and GGG couplings at the leading order in the NCSM. We also briefly discuss experimental implications.
Image contrast enhancement based on a local standard deviation model
Chang, Dah-Chung; Wu, Wen-Rong
1996-12-31
The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method which needs a contrast gain to adjust the high frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. Both choices cause problems in practical applications: noise overenhancement and ringing artifacts. In this paper a new gain is developed, based on Hunt's Gaussian image model, to prevent these two defects. The new gain is a nonlinear function of LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
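The ACE structure described above (local mean plus a gain times the local detail) can be sketched as follows. This is a minimal numpy-only illustration, not the paper's algorithm: the specific nonlinear gain law here (a bounded function that is ≈1 in flat regions, peaks at a moderate LSD, and decays for strong edges) and all parameter names are our assumptions.

```python
import numpy as np

def _box_mean(a, win):
    # Local mean via padded sliding windows (numpy-only box filter).
    p = win // 2
    ap = np.pad(a, p, mode="reflect")
    v = np.lib.stride_tricks.sliding_window_view(ap, (win, win))
    return v.mean(axis=(-2, -1))

def ace_enhance(img, peak_gain=3.0, s=10.0, win=7):
    """Adaptive contrast enhancement with an LSD-dependent gain (sketch).

    Output = local_mean + g(LSD) * (pixel - local_mean).
    The gain g is 1 where LSD = 0 (no noise amplification in flat
    regions), peaks at LSD = s (detail regions), and falls off for
    large LSD (less ringing at strong edges).
    """
    img = np.asarray(img, dtype=float)
    m = _box_mean(img, win)
    lsd = np.sqrt(np.maximum(_box_mean(img * img, win) - m * m, 0.0))
    # Bounded nonlinear gain: 1 + (peak_gain-1) * 2*s*lsd / (lsd^2 + s^2)
    g = 1.0 + (peak_gain - 1.0) * (2.0 * s * lsd) / (lsd * lsd + s * s)
    return m + g * (img - m)
```

A useful sanity check of the design: on a perfectly flat image the LSD is zero everywhere, the gain reduces to 1, and the image passes through unchanged, which is exactly the behavior a 1/LSD gain fails to deliver.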
Consistent constraints on the Standard Model Effective Field Theory
NASA Astrophysics Data System (ADS)
Berthier, Laure; Trott, Michael
2016-02-01
We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I and LEP II, as well as low energy precision data. We fit one hundred and three observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level, or beyond, unless the cutoff scale is assumed to be large, Λ ≳ 3 TeV. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an S, T analysis is modified by the theory errors we include, as an illustrative example.
Mass of the Higgs boson in the standard electroweak model
Erler, Jens
2010-03-01
An updated global analysis within the standard model (SM) of all relevant electroweak precision and Higgs boson search data is presented with special emphasis on the implications for the Higgs boson mass, M_H. Included are, in particular, the most recent results on the top quark and W boson masses, updated and significantly shifted constraints on the strong coupling constant, α_s, from τ decays and other low-energy measurements such as from atomic parity violation and neutrino deep inelastic scattering. The latest results from searches for Higgs production and decay at the Tevatron are incorporated together with the older constraints from LEP 2. I find a trimodal probability distribution for M_H with a fairly narrow preferred 90% C.L. window, 115 GeV ≤ M_H ≤ 148 GeV.
Standard model CP violation and cold electroweak baryogenesis
Tranberg, Anders
2011-10-15
Using large-scale real-time lattice simulations, we calculate the baryon asymmetry generated at a fast, cold electroweak symmetry breaking transition. CP-violation is provided by the leading effective bosonic term resulting from integrating out the fermions in the Minimal Standard Model at zero-temperature, and performing a covariant gradient expansion [A. Hernandez, T. Konstandin, and M. G. Schmidt, Nucl. Phys. B812, 290 (2009).]. This is an extension of the work presented in [A. Tranberg, A. Hernandez, T. Konstandin, and M. G. Schmidt, Phys. Lett. B 690, 207 (2010).]. The numerical implementation is described in detail, and we address issues specifically related to using this CP-violating term in the context of Cold Electroweak Baryogenesis.
Quantum Gravity and Lorentz Invariance Violation in the Standard Model
Alfaro, Jorge
2005-06-10
The most important problem of fundamental physics is the quantization of the gravitational field. A main difficulty is the lack of available experimental tests that discriminate among the theories proposed to quantize gravity. Recently, Lorentz invariance violation by quantum gravity (QG) has been the source of growing interest. However, the predictions depend on an ad hoc hypothesis and too many arbitrary parameters. Here we show that the standard model itself contains tiny Lorentz invariance violation terms coming from QG. All terms depend on one arbitrary parameter α that sets the scale of QG effects. This parameter can be estimated using data from the ultrahigh energy cosmic ray spectrum to be |α| ≲ 10⁻²²-10⁻²³.
Physics beyond the Standard Model from hydrogen spectroscopy
NASA Astrophysics Data System (ADS)
Ubachs, W.; Koelemeij, J. C. J.; Eikema, K. S. E.; Salumbides, E. J.
2016-02-01
Spectroscopy of hydrogen can be used for a search into physics beyond the Standard Model. Differences between the absorption spectra of the Lyman and Werner bands of H2 as observed at high redshift and those measured in the laboratory can be interpreted in terms of possible variations of the proton-electron mass ratio μ = mp/me over cosmological history. Investigation of ten such absorbers in the redshift range z = 2.0-4.2 yields a constraint of |Δμ/μ| < 5×10⁻⁶ at 3σ. Observation of H2 from the photospheres of white dwarf stars inside our Galaxy delivers a constraint of similar magnitude on a dependence of μ on a gravitational potential 10⁴ times as strong as that on the Earth's surface. While such astronomical studies aim at finding quintessence in an indirect manner, laboratory precision measurements target such additional quantum fields in a direct manner. Laser-based precision measurements of dissociation energies, vibrational splittings and rotational level energies in H2 molecules and their deuterated isotopomers HD and D2 produce values for the rovibrational binding energies fully consistent with quantum ab initio calculations including relativistic and quantum electrodynamical (QED) effects. Similarly, precision measurements of high-overtone vibrational transitions of HD+ ions, captured in ion traps and sympathetically cooled to mK temperatures, also result in transition frequencies fully consistent with calculations including QED corrections. Precision measurements of inter-Rydberg transitions in H2 can be extrapolated to yield accurate values for level splittings in the H2+ ion. These comprehensive results of laboratory precision measurements on neutral and ionic hydrogen molecules can be interpreted to set bounds on the existence of possible fifth forces and of higher dimensions, phenomena describing physics beyond the Standard Model.
Physics Beyond the Standard Model from Molecular Hydrogen Spectroscopy
NASA Astrophysics Data System (ADS)
Ubachs, Wim; Salumbides, Edcel John; Bagdonaite, Julija
2015-06-01
The spectrum of molecular hydrogen can be measured in the laboratory to very high precision using advanced laser and molecular beam techniques, as well as frequency-comb based calibration [1,2]. The quantum level structure of this smallest neutral molecule can now be calculated to very high precision, based on a very accurate (10⁻¹⁵ precision) Born-Oppenheimer potential [3] and including subtle non-adiabatic, relativistic and quantum electrodynamic effects [4]. Comparison between theory and experiment yields a test of QED, and in fact of the Standard Model of physics, since the weak, strong and gravitational forces have a negligible effect. Even fifth forces beyond the Standard Model can be searched for [5]. Astronomical observation of molecular hydrogen spectra, using the largest telescopes on Earth and in space, may reveal possible variations of fundamental constants on a cosmological time scale [6]. A study has been performed at a 'look-back' time of 12.5 billion years [7]. In addition, the possible dependence of a fundamental constant on a gravitational field has been investigated from observation of molecular hydrogen in the photospheres of white dwarfs [8]. The latter involves a test of Einstein's equivalence principle. [1] E.J. Salumbides et al., Phys. Rev. Lett. 107, 143005 (2011). [2] G. Dickenson et al., Phys. Rev. Lett. 110, 193601 (2013). [3] K. Pachucki, Phys. Rev. A 82, 032509 (2010). [4] J. Komasa et al., J. Chem. Theory Comp. 7, 3105 (2011). [5] E.J. Salumbides et al., Phys. Rev. D 87, 112008 (2013). [6] F. van Weerdenburg et al., Phys. Rev. Lett. 106, 180802 (2011). [7] J. Bagdonaite et al., Phys. Rev. Lett. 114, 071301 (2015). [8] J. Bagdonaite et al., Phys. Rev. Lett. 113, 123002 (2014).
Standard Model in multiscale theories and observational constraints
NASA Astrophysics Data System (ADS)
Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David
2016-08-01
We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t*, ℓ* and E*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting the absolute upper bound t* < 10⁻²³ s. For the natural choice α₀ = 1/2 of the fractional exponent in the measure, this bound is strengthened to t* < 10⁻²⁹ s, corresponding to ℓ* < 10⁻²⁰ m and E* > 28 TeV. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain the independent absolute upper bounds t* < 10⁻¹³ s and E* > 35 MeV. For α₀ = 1/2, the Lamb shift alone yields t* < 10⁻²⁷ s, ℓ* < 10⁻¹⁹ m and E* > 450 GeV.
The Standard Model as a 2T-Physics Theory
NASA Astrophysics Data System (ADS)
Bars, I.
2007-04-01
New developments in 2T-physics, that connect 2T-physics field theory directly to the real world, are reported in this talk. An action is proposed in field theory in 4+2 dimensions which correctly reproduces the Standard Model (SM) in 3+1 dimensions (and no junk). Everything that is known to work in the SM still works in the emergent 3+1 theory, but some of the problems of the SM get resolved. The resolution is due to new restrictions on interactions inherited from 4+2 dimensions that lead to some interesting physics and new points of view not discussed before in 3+1 dimensions. In particular the strong CP violation problem is resolved without an axion, and the electro-weak symmetry breakdown that generates masses requires the participation of the dilaton, thus relating the electro-weak phase transition to other phase transitions (such as evolution of the universe, vacuum selection in string theory, etc.) that also require the participation of the dilaton. The underlying principle of 2T-physics is the local symmetry Sp(2,R) under which position and momentum become indistinguishable at any instant. This principle inevitably leads to deep consequences, one of which is the two-time structure of spacetime in which ordinary 1-time spacetime is embedded. The proposed action for the Standard Model in 4+2 dimensions follows from new gauge symmetries in field theory related to the fundamental principles of Sp(2,R). These gauge symmetries thin out the degrees of freedom from 4+2 to 3+1 dimensions without any Kaluza-Klein modes.
Presymmetry in the Standard Model with adulterated Dirac neutrinos
NASA Astrophysics Data System (ADS)
Matute, Ernesto A.
2015-08-01
Recently we proposed a model for light Dirac neutrinos in which two right-handed (RH) neutrinos per generation are added to the particles of the Standard Model (SM), implemented with the symmetry of fermionic contents. The ordinary one is decoupled via the high scale type-I seesaw mechanism, while the extra one pairs off with its left-handed (LH) partner. The symmetry of lepton and quark contents was merely used as a guideline to the choice of parameters because it is not a proper symmetry. Here we argue that the underlying symmetry to take for this correspondence is presymmetry, the hidden electroweak symmetry of the SM extended with RH neutrinos, defined by transformations which exchange lepton and quark bare states with the same electroweak charges and no Majorana mass terms in the underlying Lagrangian. It gives a topological character to fractional charges, relates the number of families to the number of quark colors, and now guarantees the great disparity between the couplings of the two RH neutrinos. Thus, Dirac neutrinos with extremely small masses appear as natural predictions of presymmetry, satisfying 't Hooft's naturalness conditions in the extended seesaw, where the extra RH neutrinos serve to adulterate the mass properties in the low scale effective theory, which retains without extensions the gauge and Higgs sectors of the SM. However, the high energy threshold for the seesaw implies new physics to stabilize the quantum corrections to the Higgs boson mass in agreement with the naturalness requirement.
Critical Differences of Asymmetric Magnetic Reconnection from Standard Models
NASA Astrophysics Data System (ADS)
Nitta, S.; Wada, T.; Fuchida, T.; Kondoh, K.
2016-09-01
We have clarified the structure of asymmetric magnetic reconnection in detail as the result of the spontaneous evolutionary process. The asymmetry is imposed as the ratio k of the magnetic field strength on the two sides of the initial current sheet (CS) in the isothermal equilibrium. The MHD simulation is carried out by the HLLD code for the long-term temporal evolution with very high spatial resolution. The resultant structure is drastically different from the symmetric case (e.g., the Petschek model) even for slight asymmetry k = 2. (1) The velocity distribution in the reconnection jet clearly shows a two-layered structure, i.e., the high-speed sub-layer in which the flow is almost field aligned and the acceleration sub-layer. (2) Higher beta side (HBS) plasma is caught in a lower beta side plasmoid. This suggests a new plasma mixing process in reconnection events. (3) A new large strong fast shock forms in front of the plasmoid in the HBS. This can be a new particle acceleration site in the reconnection system. These critical properties, which have not been reported in previous works, contribute to a better and more detailed knowledge of reconnection beyond the standard model for the symmetric magnetic reconnection system.
Structure of relativistic accretion disk with non-standard model
NASA Astrophysics Data System (ADS)
Khesali, A. R.; Salahshoor, K.
2016-07-01
The structure of a stationary, axisymmetric advection-dominated accretion disk (ADAF) around a rotating black hole, using a non-standard model, was examined. In this model, the transport efficiency of the angular momentum α was dependent on the magnetic Prandtl number, α ∝ Pm^δ. The full relativistic shear stress, recently obtained in a new manner, was used. By considering the black hole spin and the Prandtl number simultaneously, the structure of ADAFs was changed in the inner and outer regions of the disk. It was discovered that the accretion flow was denser and hotter in the inner region, due to the black hole spin, and in the outer region, due to the presence of the Prandtl parameter. Inasmuch as the rotation of the black hole affected the transport efficiency of angular momentum in parts of the disk very close to the event horizon, in these regions the viscosity depended on the rotation of the black hole. Also, it was discovered that the effect of the black hole spin on the structure of the disk was related to the presence of the Prandtl parameter.
Wisconsin's Model Academic Standards for Business. Bulletin No. 9004.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
This document contains standards for the academic content of the Wisconsin K-12 curriculum in the area of business. Developed by task forces of educators, parents, board of education members, and employers and employees, the standards cover content, performance, and proficiency areas. They are cross-referenced to the state standards for English…
Wisconsin's Model Academic Standards for Environmental Education. Bulletin No. 9001.
ERIC Educational Resources Information Center
Fortier, John D.; Grady, Susan M.; Lee, Shelley A.; Marinac, Patricia A.
This guide to Wisconsin's academic standards for environmental education describes the process and development of state environmental standards. Designed for administrators, school board members, and teachers, the guide explains the purpose and goals of creating standards and contains a brief history of environmental education in Wisconsin. The…
The Risk GP Model: the standard model of prediction in medicine.
Fuller, Jonathan; Flores, Luis J
2015-12-01
With the ascent of modern epidemiology in the twentieth century came a new standard model of prediction in public health and clinical medicine. In this article, we describe the structure of the model. The standard model uses epidemiological measures (most commonly, risk measures) to predict outcomes (prognosis) and effect sizes (treatment) in a patient population that can then be transformed into probabilities for individual patients. In the first step, a risk measure in a study population is generalized or extrapolated to a target population. In the second step, the risk measure is particularized or transformed to yield probabilistic information relevant to a patient from the target population. Hence, we call the approach the Risk Generalization-Particularization (Risk GP) Model. There are serious problems at both stages, especially with the extent to which the required assumptions will hold and the extent to which we have evidence for the assumptions. Given that there are other models of prediction that use different assumptions, we should not inflexibly commit ourselves to one standard model. Instead, model pluralism should be standard in medical prediction.
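The two-step structure described in this abstract can be made concrete with a deliberately minimal sketch. This is our toy formalization, not the paper's: the function name, the use of a risk ratio as the transported measure, and the simple multiplicative particularization are all illustrative assumptions, and the point of the paper is precisely that each step hides substantive assumptions.

```python
def risk_gp(study_risk_ratio, target_baseline_risk):
    """Toy Risk GP sketch (hypothetical formalization).

    Step 1 (generalization): assume the study-population risk ratio
    transports unchanged to the target population (a strong
    transportability assumption).
    Step 2 (particularization): assign each patient the resulting
    population-level probability (assumes the patient is exchangeable
    with the target population).
    """
    generalized_rr = study_risk_ratio                  # step 1
    individual_prob = target_baseline_risk * generalized_rr  # step 2
    return min(individual_prob, 1.0)  # probabilities are capped at 1

# e.g. a study risk ratio of 2.0 and a 10% target baseline risk
# yield an individual probability of 0.2 under these assumptions
```

The cap at 1.0 already illustrates one of the model's fragilities: a risk ratio that is perfectly sensible in a low-risk study population can produce an impossible probability in a high-risk target population, which is a symptom of the generalization assumption failing.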
Aspects of Cosmology from particle physics beyond the Standard Model
NASA Astrophysics Data System (ADS)
Shuhmaher, Natalia
The interface of cosmology and high energy physics is a forefront area of research which is constantly undergoing development. This thesis makes various contributions to this endeavor. String-inspired cosmology is the subject of the first part of the thesis, where we propose both a new inflationary and a new alternative cosmological model. The second part of the thesis concentrates on the problems of integrating cosmology with particle physics beyond the Standard Model. Inspired by new opportunities due to stringy degrees of freedom, we propose a non-inflationary resolution of the entropy and horizon problems. In this string-inspired scenario, 'our' dimensions expand while the extra dimensions first expand and then contract, before eventually stabilizing. The equation of state of the bulk matter (which consists of branes) is negative. Hence, there is a net gain in the total energy of the universe during the pre-stabilization phase. At the end of this phase, the energy stored in the branes is converted into radiation. The result is a large and dense 3-dimensional universe. Making use of similar ideas, we propose a brane inflation model free of fine-tuning. In this scenario the brane separation, playing the role of the inflaton, is the same as the overall volume modulus. The bulk matter provides an initial expansion phase which drives the inflaton up its potential, so that the conditions for inflation are realized. The specific choice of the inflationary potential nicely fits the cosmological observations. Another aspect of this research concentrates on the cosmological moduli problem: namely, the existence of weakly coupled particles whose decay is late enough to interfere with Big Bang nucleosynthesis. As a solution, we suggest parametric and tachyonic resonances to shorten the decay time. Even heavy moduli are dangerous for cosmology if they cause the overproduction of gravitinos. We find that tachyonic decay channels help to transfer most of the energy of these
Dark matter and color octets beyond the Standard Model
NASA Astrophysics Data System (ADS)
Krnjaic, Gordan Z.
Although the Standard Model (SM) of particles and interactions has survived forty years of experimental tests, it does not provide a complete description of nature. From cosmological and astrophysical observations, it is now clear that the majority of matter in the universe is not baryonic and interacts very weakly (if at all) via non-gravitational forces. The SM does not provide a dark matter candidate, so new particles must be introduced. Furthermore, recent Tevatron results suggest that SM predictions for benchmark collider observables are in tension with experimental observations. In this thesis, we will propose extensions to the SM that address each of these issues. Although there is abundant indirect evidence for the existence of dark matter, terrestrial efforts to observe its interactions have yielded conflicting results. We address this situation with a simple model of dark matter that features hydrogen-like bound states that scatter off SM nuclei by undergoing inelastic hyperfine transitions. We explore the available parameter space that results from demanding that DM self-interactions satisfy experimental bounds and ameliorate the tension between positive and null signals at the DAMA and CDMS experiments respectively. However, this simple model does not explain the cosmological abundance of dark matter and also encounters a Landau pole at a low energy scale. We, therefore, extend the field content and gauge group of the dark sector to resolve these issues with a renormalizable UV completion. We also explore the galactic dynamics of unbound dark matter and find that "dark ions" settle into a diffuse isothermal halo that differs from that of the bound states. This suppresses the local dark-ion density and expands the model's viable parameter space. We also consider the > 3σ excess in W plus dijet events recently observed at the Tevatron collider. We show that decays of a color-octet, electroweak-triplet scalar particle ("octo-triplet") can yield the
The pion: an enigma within the Standard Model
NASA Astrophysics Data System (ADS)
Horn, Tanja; Roberts, Craig D.
2016-07-01
Quantum chromodynamics (QCD) is the strongly interacting part of the Standard Model. It is supposed to describe all of nuclear physics; and yet, almost 50 years after the discovery of gluons and quarks, we are only just beginning to understand how QCD builds the basic bricks for nuclei: neutrons and protons, and the pions that bind them together. QCD is characterised by two emergent phenomena: confinement and dynamical chiral symmetry breaking (DCSB). They have far-reaching consequences, expressed with great force in the character of the pion; and pion properties, in turn, suggest that confinement and DCSB are intimately connected. Indeed, since the pion is both a Nambu–Goldstone boson and a quark–antiquark bound-state, it holds a unique position in nature and, consequently, developing an understanding of its properties is critical to revealing some very basic features of the Standard Model. We describe experimental progress toward meeting this challenge that has been made using electromagnetic probes, highlighting both dramatic improvements in the precision of charged-pion form factor data that have been achieved in the past decade and new results on the neutral-pion transition form factor, both of which challenge existing notions of pion structure. We also provide a theoretical context for these empirical advances, which begins with an explanation of how DCSB works to guarantee that the pion is unnaturally light; but also, nevertheless, ensures that the pion is the best object to study in order to reveal the mechanisms that generate nearly all the mass of hadrons. In canvassing advances in these areas, our discussion unifies many aspects of pion structure and interactions, connecting the charged-pion elastic form factor, the neutral-pion transition form factor and the pion's leading-twist parton distribution amplitude. It also sketches novel ways in which experimental and theoretical studies of the charged-kaon electromagnetic form factor can provide
Effects of the Noncommutative Standard Model in WW Scattering
Conley, John A.; Hewett, JoAnne L.
2008-12-02
We examine W pair production in the Noncommutative Standard Model constructed with the Seiberg-Witten map. Consideration of partial wave unitarity in the reactions WW → WW and e⁺e⁻ → WW shows that the latter process is more sensitive and that tree-level unitarity is violated when scattering energies are of order a TeV and the noncommutative scale is below about a TeV. We find that WW production at the LHC is not sensitive to scales above the unitarity bounds. WW production in e⁺e⁻ annihilation, however, provides a good probe of such effects, with noncommutative scales below 300-400 GeV being excluded at LEP-II, and the ILC being sensitive to scales up to 10-20 TeV. In addition, we find that the ability to measure the helicity states of the final state W bosons at the ILC provides a diagnostic tool to determine and disentangle the different possible noncommutative contributions.
Gravitational wave background from Standard Model physics: qualitative features
NASA Astrophysics Data System (ADS)
Ghiglieri, J.; Laine, M.
2015-07-01
Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.
Affine group formulation of the Standard Model coupled to gravity
Chou, Ching-Yi; Ita, Eyo; Soo, Chopin
2014-04-15
In this work we apply the affine group formalism for four dimensional gravity of Lorentzian signature, which is based on Klauder's affine algebraic program, to the formulation of the Hamiltonian constraint of the interaction of matter and all forces, including gravity with non-vanishing cosmological constant Λ, as an affine Lie algebra. We use the hermitian action of fermions coupled to gravitation and Yang–Mills theory to find the density weight one fermionic super-Hamiltonian constraint. This term, combined with the Yang–Mills and Higgs energy densities, is composed with York's integrated time functional. The result, when combined with the imaginary part of the Chern–Simons functional Q, forms the affine commutation relation with the volume element V(x). Affine algebraic quantization of gravitation and matter on equal footing implies a fundamental uncertainty relation which is predicated upon a non-vanishing cosmological constant.
Highlights:
• Wheeler–DeWitt equation (WDW) quantized as affine algebra, realizing Klauder's program.
• WDW formulated for interaction of matter and all forces, including gravity, as affine algebra.
• WDW features Hermitian generators in spite of fermionic content: Standard Model addressed.
• Constructed a family of physical states for the full, coupled theory via affine coherent states.
• Fundamental uncertainty relation, predicated on non-vanishing cosmological constant.
CP violation outside the standard model phenomenology for pedestrians
Lipkin, H. J.
1993-09-23
So far the only experimental evidence for CP violation is the 1964 discovery of K_L → 2π, where the two mass eigenstates produced by neutral meson mixing both decay into the same CP eigenstate. This result is described by two parameters ε and ε′. Today ε ≈ its 1964 value, ε′ data are still inconclusive and there is no new evidence for CP violation. One might expect to observe similar phenomena in other systems and also direct CP violation as charge asymmetries between decays of charge conjugate hadrons H± → f±. Why is it so hard to find CP violation? How can B physics help? Does CP lead beyond the standard model? The author presents a pedestrian symmetry approach which exhibits the difficulties and future possibilities of these two types of CP-violation experiments, neutral meson mixing and direct charge asymmetry: what may work, what doesn't work and why.
On push-forward representations in the standard gyrokinetic model
Miyato, N.; Yagi, M.; Scott, B. D.
2015-01-15
Two representations of fluid moments in terms of a gyro-center distribution function and gyro-center coordinates, which are called push-forward representations, are compared in the standard electrostatic gyrokinetic model. In the representation conventionally used to derive the gyrokinetic Poisson equation, the pull-back transformation of the gyro-center distribution function contains effects of the gyro-center transformation and therefore electrostatic potential fluctuations, which is described by the Poisson brackets between the distribution function and scalar functions generating the gyro-center transformation. Usually, only the lowest order solution of the generating function at first order is considered to explicitly derive the gyrokinetic Poisson equation. This is true in explicitly deriving representations of scalar fluid moments with polarization terms. One also recovers the particle diamagnetic flux at this order because it is associated with the guiding-center transformation. However, higher-order solutions are needed to derive finite Larmor radius terms of particle flux including the polarization drift flux from the conventional representation. On the other hand, the lowest order solution is sufficient for the other representation, in which the gyro-center transformation part is combined with the guiding-center one and the pull-back transformation of the distribution function does not appear.
On the fate of the Standard Model at finite temperature
NASA Astrophysics Data System (ADS)
Rose, Luigi Delle; Marzo, Carlo; Urbano, Alfredo
2016-05-01
In this paper we revisit and update the computation of thermal corrections to the stability of the electroweak vacuum in the Standard Model. At zero temperature, we make use of the full two-loop effective potential, improved by three-loop beta functions with two-loop matching conditions. At finite temperature, we include one-loop thermal corrections together with resummation of daisy diagrams. We solve numerically — both at zero and finite temperature — the bounce equation, thus providing an accurate description of the thermal tunneling. Assuming a maximum temperature in the early Universe of the order of 10^18 GeV, we find that the instability bound excludes values of the top mass M_t ≳ 173.6 GeV, with M_h ≃ 125 GeV and including uncertainties on the strong coupling. We discuss the validity and temperature-dependence of this bound in the early Universe, with a special focus on the reheating phase after inflation.
Standard Model with a real singlet scalar and inflation
Enqvist, Kari; Nurmi, Sami; Tenkanen, Tommi; Tuominen, Kimmo E-mail: sami.nurmi@helsinki.fi E-mail: kimmo.i.tuominen@helsinki.fi
2014-08-01
We study the post-inflationary dynamics of the Standard Model Higgs and a real singlet scalar s, coupled together through a renormalizable coupling λ_sh h²s², in a Z₂-symmetric model that may explain the observed dark matter abundance and/or the origin of the baryon asymmetry. The initial values for the Higgs and s condensates are given by inflationary fluctuations, and we follow their dissipation and relaxation to the low energy vacua. We find that both the lowest order perturbative and the non-perturbative decays are blocked by thermal effects and large background fields, and that the condensates decay by two-loop thermal effects. Assuming instant reheating at T = 10^16 GeV, the characteristic temperature for Higgs condensate thermalization is found to be T_h ∼ 10^14 GeV, whereas s thermalizes typically around T_s ∼ 10^6 GeV. By that time, the amplitude of the singlet is driven very close to the vacuum value by the expansion of the universe, unless the portal coupling takes a value λ_sh ≲ 10^-7, in which case the singlet s never thermalizes. With these values of the coupling, it is possible to slowly produce a sizeable fraction of the observed dark matter abundance via singlet condensate fragmentation and thermal Higgs scattering. Physics below the electroweak scale can therefore also be affected by the non-vacuum initial conditions generated by inflation.
Custodial isospin violation in the Lee-Wick standard model
Chivukula, R. Sekhar; Farzinnia, Arsham; Foadi, Roshan; Simmons, Elizabeth H.
2010-05-01
We analyze the tension between naturalness and isospin violation in the Lee-Wick standard model (LW SM) by computing tree-level and fermionic one-loop contributions to the post-LEP electroweak parameters (Ŝ, T̂, W, and Y) and the Zb_L b̄_L coupling. The model is most natural when the LW partners of the gauge bosons and fermions are light, but small partner masses can lead to large isospin violation. The post-LEP parameters yield a simple picture in the LW SM: the gauge sector contributes to Y and W only, with leading contributions arising at tree level, while the fermion sector contributes to Ŝ and T̂ only, with leading corrections arising at one loop. Hence, W and Y constrain the masses of the LW gauge bosons to satisfy M_1, M_2 ≳ 2.4 TeV at 95% C.L. Likewise, experimental limits on T̂ reveal that the masses of the LW fermions must satisfy M_q, M_t ≳ 1.6 TeV at 95% C.L. if the Higgs mass is light, and tend to exclude the LW SM for any LW fermion masses if the Higgs mass is heavy. Contributions from the top-quark sector to the Zb_L b̄_L coupling can be even more stringent, placing a lower bound of 4 TeV on the LW fermion masses at 95% C.L.
Use and Abuse of the Model Waveform Accuracy Standards
NASA Astrophysics Data System (ADS)
Lindblom, Lee
2010-02-01
Accuracy standards have been developed to ensure that the waveforms used for gravitational-wave data analysis are good enough to serve their intended purposes. These standards place constraints on certain norms of the frequency-domain representations of the waveform errors. Examples will be presented of possible misinterpretations and misapplications of these standards, whose effect could be to vitiate the quality control they were intended to enforce. Suggestions will be given for ways to avoid these problems.
Use and abuse of the model waveform accuracy standards
NASA Astrophysics Data System (ADS)
Lindblom, Lee
2009-09-01
Accuracy standards have been developed to ensure that the waveforms used for gravitational-wave data analysis are good enough to serve their intended purposes. These standards place constraints on certain norms of the frequency-domain representations of the waveform errors. Examples are given here of possible misinterpretations and misapplications of these standards, whose effect could be to vitiate the quality control they were intended to enforce. Suggestions are given for ways to avoid these problems.
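To make the constrained quantities concrete, here is a minimal numerical sketch (not from these papers) of one such norm: the L2 norm of the frequency-domain waveform error, relative to the exact waveform. The function name is hypothetical, and the flat (unweighted) inner product is an assumption; the actual accuracy standards weight the norm by the detector noise spectral density, which is omitted here for simplicity.

```python
import numpy as np

def relative_error_norm(h_model, h_exact, dt):
    """Unweighted L2 norm of the frequency-domain waveform error,
    divided by the norm of the exact waveform (illustrative only)."""
    dh = np.fft.rfft(h_model - h_exact) * dt   # error in the frequency domain
    h = np.fft.rfft(h_exact) * dt              # exact waveform, same transform
    return np.sqrt(np.sum(np.abs(dh) ** 2) / np.sum(np.abs(h) ** 2))

# Example: a model waveform with a small constant phase error.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
h_exact = np.sin(2 * np.pi * 50 * t)
h_model = np.sin(2 * np.pi * 50 * t + 0.01)    # 0.01 rad phase error
err = relative_error_norm(h_model, h_exact, t[1] - t[0])
```

By Parseval's theorem this matches the time-domain relative error, so a 0.01 rad phase error yields a relative norm of about 0.01.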
ERIC Educational Resources Information Center
Li, Yuan H.; Schafer, William D.
An empirical study of the Yen (W. Yen, 1997) analytic formula for the standard error of a percent-above-cut [SE(PAC)] was conducted. This formula was derived from variance component information gathered in the context of generalizability theory. SE(PAC)s were estimated by different methods of estimating variance components (e.g., W. Yen's…
Judgmental Standard Setting Using a Cognitive Components Model.
ERIC Educational Resources Information Center
McGinty, Dixie; Neel, John H.
A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
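The paper's picture translates directly into a few lines of code. The sketch below (an illustration, not the paper's own material) computes the population variance as the area of the "average square" built from the squared deviations, and the standard deviation as that square's side length:

```python
# Each squared deviation is pictured as a square whose side is the
# deviation from the mean; the variance is the area of the average
# square, and the standard deviation is the side of that square.
def variance(data):
    m = sum(data) / len(data)
    areas = [(x - m) ** 2 for x in data]   # one square per data point
    return sum(areas) / len(areas)         # area of the "average square"

def std_dev(data):
    return variance(data) ** 0.5           # side length of the average square

data = [2, 4, 4, 4, 5, 5, 7, 9]
# mean = 5; squared deviations: 9, 1, 1, 1, 0, 0, 4, 16
# variance = 32 / 8 = 4.0, standard deviation = 2.0
```

Note this is the population (divide-by-n) form, which matches the "average square" picture; the sample variance divides by n − 1 instead.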
Peer responses to a team's weakest link: a test and extension of LePine and Van Dyne's model.
Jackson, Christine L; LePine, Jeffrey A
2003-06-01
This article reports research intended to assess and extend a recent theory of peer responses to low-performing team members (J. A. LePine & L. Van Dyne, 2001a). An instrument that assesses 4 types of peer responses to low performers (compensating for, training, motivating, and rejecting) was developed and then cross-validated in a subsequent study. Results of the study supported the validity of the peer responses measure and were generally consistent with the attributional theory of peer responses. Low-performer characteristics influenced the peer responses. These effects were mediated in part by peer attributions, affect, and cognitions, which explained variance in the peer responses over and above the variance explained by respondents' personality characteristics (i.e., the Big Five).
NASA Astrophysics Data System (ADS)
Cheng, Michael
2012-03-01
The Standard Model provides an elegant mechanism for electroweak symmetry breaking (EWSB) via the introduction of a scalar Higgs field. However, the Standard Model Higgs mechanism is not the only way to explain EWSB. A class of models, broadly known as Technicolor, postulates the existence of a new strongly-interacting gauge sector at the TeV scale, coupled to the Standard Model through technifermions charged under the electroweak interactions. In technicolor, the spontaneous breaking of chiral symmetry triggers EWSB, with the resulting Goldstone bosons ``eaten'' by the massive W, Z gauge bosons. Because technicolor theories are strongly coupled and inherently non-perturbative, numerical lattice gauge theory provides an ideal arena in which they can be explored. The maturation of lattice methods and the availability of sufficient computing power have spurred the investigation of technicolor using lattice gauge theory techniques, in particular one variant known as ``walking'' technicolor. A technicolor model that resembles QCD is problematic in that it does not satisfy the constraints of precision electroweak observables, most notably those encapsulated by the Peskin-Takeuchi parameters, as well as the constraints on flavor-changing neutral currents. Walking technicolor is a class of models where the theory is near-conformal, i.e. the gauge coupling runs very slowly (``walks'') over some large range of energy scales. This walking behavior produces a large separation of scales between the natural cut-off for the theory and the EWSB scale, allowing one to naturally generate fermion masses without violating constraints on flavor-changing neutral currents. The dynamics of walking theories may also allow them to satisfy the bounds on the Peskin-Takeuchi parameters. We discuss the results of recent lattice calculations that explore the properties of walking technicolor models and their implications for possible physics beyond the Standard Model.
NASA Technical Reports Server (NTRS)
Maddox, M.; Rastatter, L.; Hesse, M.
2005-01-01
The disparate nature of space weather model output presents many challenges with regard to the portability and reuse of not only the data itself, but also any tools that are developed for analysis and visualization. We are developing and implementing a comprehensive data format standardization methodology that allows heterogeneous model output data to be stored uniformly in any common science data format. We will discuss our approach to identifying core metadata elements that can be used to supplement raw model output data, thus creating self-descriptive files. The metadata should also contain information describing the simulation grid. This will ultimately assist in the development of efficient data access tools capable of extracting data at any given point and time. We will also discuss our experiences standardizing the output of two global magnetospheric models, and how we plan to apply similar procedures when standardizing the output of the solar, heliospheric, and ionospheric models that are also currently hosted at the Community Coordinated Modeling Center.
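The idea of supplementing raw output with core metadata can be sketched in a few lines. The element names and structure below are illustrative assumptions, not the CCMC's actual schema; JSON stands in for whatever common science data format the file actually uses.

```python
import json

def make_self_descriptive(raw_values, var_name, units, grid):
    """Bundle raw model output with core metadata so the file describes itself.
    The field names here are hypothetical, chosen for illustration."""
    return {
        "metadata": {
            "variable": var_name,
            "units": units,
            "grid": grid,   # grid description enables point/time extraction
        },
        "data": raw_values,
    }

# A tiny 3x2 uniform grid of density values (made-up numbers).
grid = {"type": "uniform", "x": [0.0, 1.0, 2.0], "y": [0.0, 1.0]}
record = make_self_descriptive([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
                               "density", "kg/m^3", grid)

# Any generic reader can now interpret the file without model-specific code.
serialized = json.dumps(record)
```

The point of the design is that a data access tool needs only the metadata block, never the producing model's source, to locate a value at a given grid point.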
Revisiting B→ϕπ decays in the standard model
NASA Astrophysics Data System (ADS)
Li, Ying; Lü, Cai-Dian; Wang, Wei
2009-07-01
In the standard model, we reinvestigate the rare decay B→ϕπ, which is viewed as an ideal probe to detect new physics signals. We find that the tiny branching ratio in the naive factorization can be dramatically enhanced by the radiative corrections and the ω-ϕ mixing effect, while the long-distance contributions are negligibly small. Assuming the Cabibbo-Kobayashi-Maskawa angle γ = (58.6 ± 10)° and the mixing angle θ = -(3.0 ± 1.0)°, we obtain the branching ratios of B→ϕπ as Br(B±→ϕπ±) = (3.2^{+1.8}_{-0.7}{}^{+0.8}_{-1.2}) × 10^{-8} and Br(B⁰→ϕπ⁰) = (6.8^{+1.0}_{-0.3}{}^{+0.3}_{-0.7}) × 10^{-9}. If the future experiment reports a branching ratio of (0.2-0.5) × 10^{-7} for B⁻→ϕπ⁻ decay, it may not be a clear signal for any new physics scenario. In order to discriminate the large new physics contributions from those due to the ω-ϕ mixing, we propose to measure the ratio of branching fractions of the charged and neutral B decay channels. We also study the direct CP asymmetries of these two channels: (-8.0^{+0.9}_{-1.0}{}^{+1.5}_{-0.1})% and (-6.3^{+0.7}_{-2.5}{}^{+2.5}_{-0.5})% for B±→ϕπ± and B⁰→ϕπ⁰, respectively. These asymmetries are dominated by the mixing effect.
Fourth standard model family neutrino at future linear colliders
Ciftci, A.K.; Ciftci, R.; Sultansoy, S.
2005-09-01
It is known that flavor democracy favors the existence of a fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for the fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair production of the fourth SM family Dirac (ν₄) and Majorana (N₁) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV is considered. The cross section for the process e⁺e⁻ → ν₄ν₄ (N₁N₁) and the branching ratios for the possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels (ν₄(N₁) → μ±W∓) provide the cleanest signature at e⁺e⁻ colliders. Moreover, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at √s = 500 GeV linear colliders, taking into account di-muon plus four-jet events as signatures.
Improved anatomy of ɛ'/ ɛ in the Standard Model
NASA Astrophysics Data System (ADS)
Buras, Andrzej J.; Gorbahn, Martin; Jäger, Sebastian; Jamin, Matthias
2015-11-01
We present a new analysis of the ratio ɛ'/ɛ within the Standard Model (SM) using a formalism that is manifestly independent of the values of the leading (V-A) ⊗ (V-A) QCD penguin and EW penguin hadronic matrix elements of the operators Q_4, Q_9, and Q_10, and applies to the SM as well as extensions with the same operator structure. It is valid under the assumption that the SM exactly describes the data on CP-conserving K → ππ amplitudes. As a result of this and the high precision now available for CKM and quark mass parameters, to high accuracy ɛ'/ɛ depends only on two non-perturbative parameters, B_6^{(1/2)} and B_8^{(3/2)}, and perturbatively calculable Wilson coefficients. Within the SM, we are separately able to determine the hadronic matrix element ⟨Q_4⟩_0 from CP-conserving data, significantly more precisely than presently possible with lattice QCD. Employing B_6^{(1/2)} = 0.57 ± 0.19 and B_8^{(3/2)} = 0.76 ± 0.05, extracted from recent results by the RBC-UKQCD collaboration, we obtain ɛ'/ɛ = (1.9 ± 4.5) × 10^{-4}, substantially more precise than the recent RBC-UKQCD prediction and 2.9σ below the experimental value (16.6 ± 2.3) × 10^{-4}, with the error being fully dominated by that on B_6^{(1/2)}. Even discarding lattice input completely, but employing the recently obtained bound B_6^{(1/2)} ≤ B_8^{(3/2)} ≤ 1 from the large-N approach, the SM value is found more than 2σ below the experimental value. At B_6^{(1/2)} = B_8^{(3/2)} = 1, varying all other parameters within one sigma, we find ɛ'/ɛ = (8.6 ± 3.2) × 10^{-4}. We present a detailed anatomy of the various SM uncertainties, including all sub-leading hadronic matrix elements, briefly commenting on the possibility of underestimated SM contributions as well as on the impact of our results on new physics models.
NASA Astrophysics Data System (ADS)
Varlet, Madeleine
The use of models and modeling is mentioned in the scientific literature as a way to promote the implementation of constructivist teaching-learning practices and thereby overcome learning difficulties in science. A prior study of teachers' relationship to models and modeling is therefore relevant for understanding their teaching practices and for identifying elements which, if taken into account in initial and disciplinary training, can contribute to the development of constructivist science teaching. Several studies have examined these conceptions without distinguishing between the subjects taught, such as physics, chemistry or biology, even though models are not necessarily used or understood in the same way in these different disciplines. Our research focused on the conceptions of secondary school biology teachers regarding scientific models, some forms of representation of these models, and the ways they are used in class. The results, which we obtained through a series of semi-structured interviews, indicate that their conceptions of models are, overall, compatible with the scientifically accepted one, but vary with respect to the forms of representation of models. Examination of these conceptions reveals a knowledge of models that is limited and that varies with the subject taught. Level of education, prior training, teaching experience and a possible compartmentalization of subjects could explain the different conceptions identified. In addition, temporal, conceptual and technical difficulties can hinder their attempts at modeling with students. However, our results support the hypothesis that the conceptions of the teachers themselves regarding models, their forms of representation and their approach
Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong
2014-01-01
Objectives: Extension of a standard model while retaining compliance with it is a challenging issue, because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Methods: Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. Results: In total, 188 metadata were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and the extended CCR+ model. Conclusions: A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains; the methods presented here represent an important reference for achieving interoperability between standard and extended models. PMID:24627817
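One layer of such a multilayered validation can be sketched as a check of an XML document against element constraints generated from a metadata registry. The element names and the tiny registry below are hypothetical illustrations, not the actual CCR+ registry or schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical mini-registry: each entry says whether the element is
# required. A real registry would also carry data types, cardinalities
# and semantic links between elements.
REGISTRY = {
    "Patient": {"required": True},
    "Medication": {"required": False},
}

def validate_against_registry(xml_text, registry):
    """Return a list of violations; an empty list means the document passes
    this validation layer."""
    root = ET.fromstring(xml_text)
    present = {child.tag for child in root}
    violations = []
    for tag, rule in registry.items():
        if rule["required"] and tag not in present:
            violations.append(f"missing required element: {tag}")
    for tag in present:
        if tag not in registry:
            violations.append(f"element not in registry: {tag}")
    return violations

good = "<CCR><Patient/><Medication/></CCR>"
bad = "<CCR><Medication/><Vitals/></CCR>"  # no Patient; Vitals unregistered
```

Because the constraints are generated from the registry rather than hard-coded, an extended model only needs new registry entries, which is the interoperability point the abstract makes.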
NASA Astrophysics Data System (ADS)
Peckham, Scott
2016-04-01
Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders
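The standardized interface described above (model control functions plus self-description functions) can be sketched as a minimal Python class. The method and variable names below follow the general pattern of such interfaces but are illustrative, not the exact CSDMS or ESMF API.

```python
class HeatModel:
    """Toy process model exposing a standardized interface:
    control functions make it fully controllable by a caller,
    description functions expose standardized metadata."""

    def initialize(self, config=None):
        self.time = 0.0
        self.dt = (config or {}).get("dt", 1.0)
        self.temperature = 20.0

    def update(self):
        # Advance the model's state variables by one time step (toy dynamics).
        self.temperature += 0.5 * self.dt
        self.time += self.dt

    def finalize(self):
        self.temperature = None

    # --- self-description functions: standardized metadata ---
    def get_output_var_names(self):
        return ["land_surface__temperature"]

    def get_var_units(self, name):
        return {"land_surface__temperature": "degC"}[name]

    def get_current_time(self):
        return self.time


def run(model, end_time):
    """A caller (stand-in for a coupling framework) drives the model
    entirely through its interface, never touching its internals."""
    model.initialize({"dt": 1.0})
    while model.get_current_time() < end_time:
        model.update()
    return model.temperature
```

Because the caller sees only `initialize`/`update`/`finalize` and the metadata queries, the framework can swap in any process model with the same interface, which is what makes plug-and-play coupling noninvasive.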
Prototyping an online wetland ecosystem services model using open model sharing standards
Feng, M.; Liu, S.; Euliss, N.H.; Young, Caitlin; Mushet, D.M.
2011-01-01
Great interest currently exists for developing ecosystem models to forecast how ecosystem services may change under alternative land use and climate futures. Ecosystem services are diverse and include supporting services or functions (e.g., primary production, nutrient cycling), provisioning services (e.g., wildlife, groundwater), regulating services (e.g., water purification, floodwater retention), and even cultural services (e.g., ecotourism, cultural heritage). Hence, the knowledge base necessary to quantify ecosystem services is broad and derived from many diverse scientific disciplines. Building the required interdisciplinary models is especially challenging, as modelers from different locations and times may develop the disciplinary models needed for ecosystem simulations, and these models must be identified and made accessible to the interdisciplinary simulation. Additional difficulties include inconsistent data structures, formats, and metadata required by geospatial models, as well as limitations on computing, storage, and connectivity. Traditional standalone and closed network systems cannot fully support sharing and integrating interdisciplinary geospatial models from varied sources. To address this need, we developed an approach to openly share and access geospatial computational models using distributed Geographic Information System (GIS) techniques and open geospatial standards. We included a means to share computational models compliant with the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard to ensure modelers have an efficient and simplified means to publish new models. To demonstrate our approach, we developed five disciplinary models that can be integrated and shared to simulate a few of the ecosystem services (e.g., water storage, waterfowl breeding) that are provided by wetlands in the Prairie Pothole Region (PPR) of North America. © 2010 Elsevier Ltd.
Missing experimental challenges to the Standard Model of particle physics
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
The success of particle detection in high energy physics colliders critically depends on the criteria for selecting a small number of interactions from the overwhelming number that occur in the detector. It also depends on the selection of the exact data to be analyzed and the techniques of analysis. The introduction of automation into the detection process has traded the direct involvement of the physicist at each stage of selection and analysis for the efficient handling of vast amounts of data. This tradeoff, in combination with the organizational changes in laboratories of increasing size and complexity, has resulted in automated and semi-automated systems of detection. Various aspects of the semi-automated regime were greatly diminished in more generic automated systems, but turned out to be essential to a number of surprising discoveries of anomalous processes that led to theoretical breakthroughs, notably the establishment of the Standard Model of particle physics. The automated systems are much more efficient at confirming specific hypotheses in narrow energy domains than at performing broad exploratory searches. Thus, in the main, detection processes relying excessively on automation are more likely to miss potential anomalies and impede potential theoretical advances. I suggest that putting substantially more effort into the study of electron-positron colliders and increasing their funding could minimize the likelihood of missing potential anomalies, because detection in such an environment can be handled by the semi-automated regime, unlike detection in hadron colliders. Despite the virtually unavoidable excessive reliance on automated detection in hadron colliders, their development has been deemed a priority because they can operate at the currently highest energy levels. I suggest, however, that a focus on collisions at the highest achievable energy levels diverts funds from searches for potential anomalies overlooked due to tradeoffs at the previous energy
Implementing the Standards: Incorporating Mathematical Modeling into the Curriculum.
ERIC Educational Resources Information Center
Swetz, Frank
1991-01-01
Following a brief historical review of the mechanism of mathematical modeling, examples are included that associate a mathematical model with given data (changes in sea level) and that model a real-life situation (process of parallel parking). Also provided is the rationale for the curricular implementation of mathematical modeling. (JJK)
ERIC Educational Resources Information Center
Wen, Zhonglin; Marsh, Herbert W.; Hau, Kit-Tai
2010-01-01
Standardized parameter estimates are routinely used to summarize the results of multiple regression models of manifest variables and structural equation models of latent variables, because they facilitate interpretation. Although the typical standardization of interaction terms is not appropriate for multiple regression models, straightforward…
40 CFR 86.410-2006 - Emission standards for 2006 and later model year motorcycles.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-1—Class I and II Motorcycle Emission Standards Model year Emission standards(g/km) HC CO 2006 and... not required to comply with permeation requirements in paragraph (g) of this section until model year.... (g) Model year 2008 and later motorcycles must comply with the evaporative emission...
S. T. Khericha; J. Mitman
2008-05-01
Nuclear plant operating experience and several studies show that the risk from shutdown operation during Modes 4, 5, and 6 at pressurized water reactors and Modes 4 and 5 at boiling water reactors can be significant. This paper describes using the U.S. Nuclear Regulatory Commission’s full-power Standardized Plant Analysis Risk (SPAR) model as the starting point for development of risk evaluation models for commercial nuclear power plants. The shutdown models are integrated with their respective internal event at-power SPAR model. This is accomplished by combining the modified system fault trees from the SPAR full-power model with shutdown event tree logic. Preliminary human reliability analysis results indicate that risk is dominated by the operator’s ability to correctly diagnose events and initiate systems.
Massive neutrinos in the standard model and beyond
NASA Astrophysics Data System (ADS)
Thalapillil, Arun Madhav
The generation of the fermion mass hierarchy in the standard model of particle physics is a long-standing puzzle. Recent discoveries in neutrino physics suggest that the mixing in the lepton sector is large compared to the quark mixings. Understanding this asymmetry between the quark and lepton mixings is an important aim for particle physics. In this regard, two promising approaches from the theoretical side are grand unified theories and family symmetries. In the first part of my thesis we try to understand certain general features of grand unified theories with Abelian family symmetries by taking the simplest SU(5) grand unified theory as a prototype. We construct an SU(5) toy model with U(1)_F ⊗ Z'_2 ⊗ Z''_2 ⊗ Z'''_2 family symmetry that, in a natural way, duplicates the observed mass hierarchy and mixing matrices to lowest approximation. The mass hierarchy is generated through a Froggatt-Nielsen-type mechanism. One idea that we use in the model is that the quark and charged lepton sectors are hierarchical with small mixing angles while the light neutrino sector is democratic with larger mixing angles. We also discuss some of the difficulties in incorporating finer details into the model without making further assumptions or adding a large scalar sector. In the second part of my thesis, the interaction of high energy neutrinos with weak gravitational fields is explored. The form of the graviton-neutrino vertex is motivated from Lorentz and gauge invariance, and the non-relativistic interpretations of the neutrino gravitational form factors are obtained. We comment on the renormalization conditions, the preservation of the weak equivalence principle and the definition of the neutrino mass radius. We associate the neutrino gravitational form factors with specific angular momentum states. Based on Feynman diagrams, spin-statistics, CP invariance and symmetries of the angular momentum states in the neutrino-graviton vertex, we deduce
Addressing Standardized Testing through a Novel Assesment Model
ERIC Educational Resources Information Center
Schifter, Catherine C.; Carey, Martha
2014-01-01
The No Child Left Behind (NCLB) legislation spawned a plethora of standardized testing services for all the high stakes testing required by the law. We argue that one-size-fits all assessments disadvantage students who are English Language Learners, in the USA, as well as students with limited economic resources, special needs, and not reading on…
Physical Education Model Curriculum Standards. Grades Nine through Twelve.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento.
These physical education standards were designed to ensure that each student achieve the following goals: (1) physical activity--students develop interest and proficiency in movement skills and understand the importance of lifelong participation in daily physical activity; (2) physical fitness and wellness--students increase understanding of basic…
ISO 9000 quality standards: a model for blood banking?
Nevalainen, D E; Lloyd, H L
1995-06-01
The recent American Association of Blood Banks publications Quality Program and Quality Systems in the Blood Bank and Laboratory Environment, the FDA's draft guidelines, and recent changes in the GMP regulations all discuss the benefits of implementing quality systems in blood center and/or manufacturing operations. While the medical device GMPs in the United States have been rewritten to accommodate a quality system approach similar to ISO 9000, the Center for Biologics Evaluation and Research of the FDA is also beginning to make moves toward adopting "quality systems audits" as an inspection process rather than using the historical approach of record reviews. The approach is one of prevention of errors rather than detection after the fact (Tourault MA, oral communication, November 1994). The ISO 9000 series of standards is a quality system that has worldwide scope and can be applied in any industry or service. The use of such international standards in blood banking should raise the level of quality within an organization, among organizations on a regional level, within a country, and among nations on a worldwide basis. Whether an organization wishes to become registered to a voluntary standard or not, the use of such standards to become ISO 9000-compliant would be a move in the right direction and would be a positive sign to the regulatory authorities and the public that blood banking is making a visible effort to implement world-class quality systems in its operations. Implementation of quality system standards such as the ISO 9000 series will provide an organized approach for blood banks and blood bank testing operations. With the continued trend toward consolidation and mergers, resulting in larger operational units with more complexity, quality systems will become even more important as the industry moves into the future.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7770906
ERIC Educational Resources Information Center
Helfferich, Friedrich G.
1985-01-01
Presents a class exercise designed to find out how well students understand the nature and consequences of the mass action law and Le Chatelier's principle as applied to chemical equilibria. The exercise relates to a practical situation and provides simple relations for maximizing equilibrium quantities not found in standard textbooks. (JN)
A Journey in Standard Development: The Core Manufacturing Simulation Data (CMSD) Information Model
Lee, Yung-Tsun Tina
2015-01-01
This report documents a journey “from research to an approved standard” of a NIST-led standard development activity. That standard, Core Manufacturing Simulation Data (CMSD) information model, provides neutral structures for the efficient exchange of manufacturing data in a simulation environment. The model was standardized under the auspices of the international Simulation Interoperability Standards Organization (SISO). NIST started the research in 2001 and initiated the standardization effort in 2004. The CMSD standard was published in two SISO Products. In the first Product, the information model was defined in the Unified Modeling Language (UML) and published in 2010 as SISO-STD-008-2010. In the second Product, the information model was defined in Extensible Markup Language (XML) and published in 2013 as SISO-STD-008-01-2012. Both SISO-STD-008-2010 and SISO-STD-008-01-2012 are intended to be used together. PMID:26958450
ERIC Educational Resources Information Center
Newton, Jill A.; Kasten, Sarah E.
2013-01-01
The release of the Common Core State Standards for Mathematics and their adoption across the United States calls for careful attention to the alignment between mathematics standards and assessments. This study investigates 2 models that measure alignment between standards and assessments, the Surveys of Enacted Curriculum (SEC) and the Webb…
ERIC Educational Resources Information Center
Shaw, W. M., Jr.; And Others
1997-01-01
Describes a study that computed the low performance standards for queries in 17 test collections. Predicted by the hypergeometric distribution, the standards represent the highest level of retrieval effectiveness attributable to chance. Operational levels of performance for vector-space and other retrieval models were compared to the standards.…
Job Grading Standard for Model Maker, WG-4714.
ERIC Educational Resources Information Center
Civil Service Commission, Washington, DC. Bureau of Policies and Standards.
The pamphlet explains the different job requirements for different grades of model maker (WG-14 and WG-15) and contrasts them to the position of premium journeyman. It includes comment on what a model maker is (a nonsupervisory job involved in planning and fabricating complex research and prototype models which are made from a variety of materials…
Plot Scale Factor Models for Standard Orthographic Views
ERIC Educational Resources Information Center
Osakue, Edward E.
2007-01-01
Geometric modeling provides graphic representations of real or abstract objects. Realistic representation requires three dimensional (3D) attributes since natural objects have three principal dimensions. CAD software gives the user the ability to construct realistic 3D models of objects, but often prints of these models must be generated on two…
Accessing Patient Information for Probabilistic Patient Models Using Existing Standards.
Gaebel, Jan; Cypko, Mario A; Lemke, Heinz U
2016-01-01
Clinical decision support systems (CDSS) are developed to facilitate physicians' decision making, particularly for complex, oncological diseases. Access to relevant patient-specific information from electronic health records (EHR) is limited by the structure and transmission formats of the respective hospital information system. We propose a system architecture for standardized access to patient-specific information for a CDSS for laryngeal cancer. Following the idea of a CDSS using Bayesian networks, we developed an architecture concept applying clinical standards. We recommend the application of Arden Syntax for the definition and processing of the needed medical knowledge and clinical information, as well as the use of HL7 FHIR to identify the relevant data elements in an EHR to increase the interoperability of the CDSS. PMID:27139392
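The flow proposed in this abstract (EHR data retrieved via HL7 FHIR feeding evidence into a Bayesian network) can be sketched as follows. The resource fields follow the general shape of a FHIR Observation, but the code, node names, and mapping are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch: map FHIR-style Observation resources to evidence
# states for Bayesian-network nodes in a CDSS.
def fhir_to_evidence(observations, mapping):
    """Translate FHIR Observation resources into BN evidence {node: state}."""
    evidence = {}
    for obs in observations:
        code = obs["code"]["coding"][0]["code"]
        if code in mapping:
            node, to_state = mapping[code]
            # Discretize the numeric value into the node's state space.
            evidence[node] = to_state(obs["valueQuantity"]["value"])
    return evidence

# Hypothetical mapping: observation code -> (BN node, discretizer).
MAPPING = {
    "tumor-size": ("TumorSize", lambda v: "large" if v > 4.0 else "small"),
}

obs = [{"code": {"coding": [{"code": "tumor-size"}]},
        "valueQuantity": {"value": 5.2, "unit": "cm"}}]
```

Here `fhir_to_evidence(obs, MAPPING)` yields `{"TumorSize": "large"}`, which could then be passed to a BN inference engine. The point of routing through a standard like FHIR is that this mapping is written once against standardized resources rather than against each hospital's proprietary format.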
A NUMERICAL MODEL OF STANDARD TO BLOWOUT JETS
Archontis, V.; Hood, A. W.
2013-06-01
We report on three-dimensional (3D) MHD simulations of the formation of jets produced during the emergence and eruption of solar magnetic fields. The interaction between an emerging and an ambient magnetic field in the solar atmosphere leads to (external) reconnection and the formation of "standard" jets with an inverse Y-shaped configuration. Eventually, low-atmosphere (internal) reconnection of sheared field lines in the emerging flux region produces an erupting magnetic flux rope and a reconnection jet underneath it. The erupting plasma blows out the ambient field and, moreover, it unwinds as it is ejected into the outer solar atmosphere. The fast emission of the cool material that erupts together with the hot outflows due to external/internal reconnection forms a wider "blowout" jet. We show the transition from "standard" to "blowout" jets and report on their 3D structure. The physical plasma properties of the jets are consistent with observational studies.
Battery Ownership Model - Medium Duty HEV Battery Leasing & Standardization
Kelly, Ken; Smith, Kandler; Cosgrove, Jon; Prohaska, Robert; Pesaran, Ahmad; Paul, James; Wiseman, Marc
2015-12-01
Prepared for the U.S. Department of Energy, this milestone report focuses on the economics of leasing versus owning batteries for medium-duty hybrid electric vehicles as well as various battery standardization scenarios. The work described in this report was performed by members of the Energy Storage Team and the Vehicle Simulation Team in NREL's Transportation and Hydrogen Systems Center along with members of the Vehicles Analysis Team at Ricardo.
A standard model for storage of geological map data
NASA Astrophysics Data System (ADS)
Bain, K. A.; Giles, J. R. A.
1997-07-01
The information presented on a geological map may be represented by a logical model in the form of an entity-relationship diagram. This must show the links between the three-dimensional geology and the two-dimensional expression of that geology which is the map. The principles behind the model created for the British Geological Survey's Digital Map Production System are outlined, and the model's main features explained.
Physical Education Teachers' Fidelity to and Perspectives of a Standardized Curricular Model
ERIC Educational Resources Information Center
Kloeppel, Tiffany; Stylianou, Michalis; Kulinna, Pamela Hodges
2014-01-01
Relatively little is known about the use of standardized physical education curricular models and teachers' perceptions of and fidelity to such curricula. The purpose of this study was to examine teachers' perceptions of and fidelity to a standardized physical education curricular model (i.e., Dynamic Physical Education [DPE]). Participants for this…
Lectures on perturbative QCD, jets and the standard model: collider phenomenology
Ellis, S.D.
1988-01-01
Applications of the Standard Model to the description of physics at hadron colliders are discussed. Particular attention is paid to the use of jets to characterize this physics. The issue of identifying physics beyond the Standard Model is also discussed. 59 refs., 6 figs., 4 tabs.
The Model Standards Project: Creating Inclusive Systems for LGBT Youth in Out-of-Home Care
ERIC Educational Resources Information Center
Wilber, Shannan; Reyes, Carolyn; Marksamer, Jody
2006-01-01
This article describes the Model Standards Project (MSP), a collaboration of Legal Services for Children and the National Center for Lesbian Rights. The MSP developed a set of model professional standards governing the care of lesbian, gay, bisexual and transgender (LGBT) youth in out-of-home care. This article provides an overview of the…
Higgs boson mass in the Standard Model at two-loop order and beyond
Martin, Stephen P.; Robertson, David G.
2014-10-23
We calculate the mass of the Higgs boson in the standard model in terms of the underlying Lagrangian parameters at complete 2-loop order with leading 3-loop corrections. A computer program implementing the results is provided. The program also computes and minimizes the standard model effective potential in Landau gauge at 2-loop order with leading 3-loop corrections.
Prevent-Teach-Reinforce: A Standardized Model of School-Based Behavioral Intervention
ERIC Educational Resources Information Center
Dunlap, Glen; Iovannone, Rose; Wilson, Kelly J.; Kincaid, Donald K.; Strain, Phillip
2010-01-01
Although there is a substantial empirical foundation for the basic intervention components of behavior analysis and positive behavior support (PBS), the field still lacks a standardized program model of individualized PBS suitable for widespread application by school personnel. This article provides a description of a standardized PBS model that…
ERIC Educational Resources Information Center
Vickner, Edward Henry, Jr.
An electronic simulation model was designed, constructed, and then field tested to determine student opinion of its effectiveness as an instructional aid. The model was designated as the Equilibrium System Simulator (ESS). The model was built on the principle of electrical symmetry applied to the Wheatstone bridge and was constructed from readily…
Symmetry Breaking, Unification, and Theories Beyond the Standard Model
Nomura, Yasunori
2009-07-31
A model was constructed in which the supersymmetric fine-tuning problem is solved without extending the Higgs sector at the weak scale. We demonstrated that the model can avoid all the phenomenological constraints while avoiding excessive fine-tuning, and we studied its implications for dark matter physics and collider physics. I proposed an extremely simple construction for models of gauge mediation. We found that the μ problem can be simply and elegantly solved in a class of models where the Higgs fields couple directly to the supersymmetry-breaking sector. We proposed a new way of addressing the flavor problem of supersymmetric theories. We proposed a new framework for constructing theories of grand unification. We constructed a simple and elegant model of dark matter which explains the excess flux of electrons/positrons. We constructed a model of dark energy in which evolving quintessence-type dark energy is naturally obtained. We studied whether we can find evidence of the multiverse.
2010-01-01
Background The use of structural equation models for the analysis of recursive and simultaneous relationships between phenotypes has become more popular recently. The aim of this paper is to illustrate how these models can be applied in animal breeding to achieve parameterizations of different levels of complexity and, more specifically, to model phenotypic recursion between three calving traits: gestation length (GL), calving difficulty (CD) and stillbirth (SB). All recursive models considered here postulate heterogeneous recursive relationships between GL and liabilities to CD and SB, and between liability to CD and liability to SB, depending on categories of GL phenotype. Methods Four models were compared in terms of goodness of fit and predictive ability: 1) standard mixed model (SMM), a model with unstructured (co)variance matrices; 2) recursive mixed model 1 (RMM1), assuming that residual correlations are due to the recursive relationships between phenotypes; 3) RMM2, assuming that correlations between residuals and contemporary groups are due to recursive relationships between phenotypes; and 4) RMM3, postulating that the correlations between genetic effects, contemporary groups and residuals are due to recursive relationships between phenotypes. Results For all the RMM considered, the estimates of the structural coefficients were similar. Results revealed a nonlinear relationship between GL and the liabilities both to CD and to SB, and a linear relationship between the liabilities to CD and SB. Differences in terms of goodness of fit and predictive ability of the models considered were negligible, suggesting that RMM3 is plausible. Conclusions The applications examined in this study suggest the plausibility of a nonlinear recursive effect from GL onto CD and SB. Also, the fact that the most restrictive model RMM3, which assumes that the only cause of correlation is phenotypic recursion, performs as well as the others indicates that the phenotypic recursion
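The heterogeneous recursion described in this abstract can be written schematically as a structural equation system; the symbols below are an illustrative sketch, not the paper's exact notation.

```latex
% Recursive relationships among gestation length (y_1 = GL) and the
% liabilities to calving difficulty (y_2) and stillbirth (y_3).
% The structural coefficients \lambda^{(k)} are heterogeneous: they
% depend on the GL phenotype category k, giving a piecewise
% (nonlinear) recursion of GL on CD and SB, while the recursion of
% the CD liability on the SB liability is linear.
\begin{aligned}
y_2 &= \lambda_{21}^{(k)}\, y_1 + \mathbf{x}_2'\boldsymbol{\beta}_2 + u_2 + e_2, \\
y_3 &= \lambda_{31}^{(k)}\, y_1 + \lambda_{32}\, y_2 + \mathbf{x}_3'\boldsymbol{\beta}_3 + u_3 + e_3,
\end{aligned}
```

where the x'β terms collect fixed effects, u the genetic and contemporary-group effects, and e the residuals. The models SMM, RMM1, RMM2, and RMM3 then differ only in which of the (co)variance structures among u and e are attributed to this recursion rather than estimated freely.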
Testing the Standard Model with the Primordial Inflation Explorer
NASA Technical Reports Server (NTRS)
Kogut, Alan J.
2011-01-01
The Primordial Inflation Explorer (PIXIE) is an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10^-3 at 5 standard deviations. The rich PIXIE data set will also constrain physical processes ranging from Big Bang cosmology to the nature of the first stars to physical conditions within the interstellar medium of the Galaxy. I describe the PIXIE instrument and mission architecture needed to detect the inflationary signature using only 4 semiconductor bolometers.
The SLq(2) extension of the standard model
NASA Astrophysics Data System (ADS)
Finkelstein, Robert J.
2015-06-01
The idea that the elementary particles might have the symmetry of knots has had a long history. In any modern formulation of this idea, however, the knot must be quantized. The present review is a summary of a small set of papers that began as an attempt to correlate the properties of quantized knots with empirical properties of the elementary particles. As the ideas behind these papers have developed over a number of years, the model has evolved, and this review is intended to present the model in its current form. The original picture of an elementary fermion as a solitonic knot of field, described by the trefoil representation of SUq(2), has expanded into its present form in which a knotted field is complementary to a composite structure composed of three preons that in turn are described by the fundamental representation of SLq(2). Higher representations of SLq(2) are interpreted as describing composite particles composed of three or more preons bound by a knotted field. This preon model unexpectedly agrees in important detail with the Harari-Shupe model. There is an associated Lagrangian dynamics capable in principle of describing the interactions and masses of the particles generated by the model.
Screening of heavy scalars beyond the standard model
Einhorn, M. B. (Randall Laboratory of Physics, University of Michigan, Ann Arbor, Michigan 48109-1120); Wudka, J.
1993-06-01
Spontaneously broken gauge models generically present large radiative corrections when the masses of the scalars are larger than the symmetry-breaking scale(s). This is not necessary, however, and we determine, based on the symmetry and renormalization properties of the theory, the most general conditions under which scalar radiative effects are screened. Barring fine-tuning, the properties of the Goldstone sector determine whether this type of screening is present or not, and this can be decided in most cases by inspection (given the pattern of symmetry breaking). We consider several examples. In particular, we show that in left-right symmetric models the requirement that all scalars be significantly heavier than the gauge bosons is inconsistent with screening; this implies either the presence of large radiative corrections produced by the heavy scalars, or the presence of scalars with masses similar to that of the (heaviest) gauge bosons in these models.
ERIC Educational Resources Information Center
Laija-Rodriguez, Wilda; Grites, Karen; Bouman, Doug; Pohlman, Craig; Goldman, Richard L.
2013-01-01
Current assessments in the schools are based on a deficit model (Epstein, 1998). "The National Association of School Psychologists (NASP) Model for Comprehensive and Integrated School Psychological Services" (2010), federal initiatives and mandates, and experts in the field of assessment have highlighted the need for the comprehensive…
Ex-Nihilo: Obstacles Surrounding Teaching the Standard Model
ERIC Educational Resources Information Center
Pimbblet, Kevin A.
2002-01-01
The model of the Big Bang is an integral part of the national curricula in England and Wales. Previous work (e.g. Baxter 1989) has shown that pupils often come into education with many and varied prior misconceptions emanating from both internal and external sources. Whilst virtually all of these misconceptions can be remedied, there will remain…
Beyond the standard gauging: gauge symmetries of Dirac sigma models
NASA Astrophysics Data System (ADS)
Chatzistavrakidis, Athanasios; Deser, Andreas; Jonke, Larisa; Strobl, Thomas
2016-08-01
In this paper we study the general conditions that have to be met for a gauged extension of a two-dimensional bosonic σ-model to exist. In an inversion of the usual approach of identifying a global symmetry and then promoting it to a local one, we focus directly on the gauge symmetries of the theory. This allows for action functionals which are gauge invariant for rather general background fields in the sense that their invariance conditions are milder than the usual case. In particular, the vector fields that control the gauging need not be Killing. The relaxation of isometry for the background fields is controlled by two connections on a Lie algebroid L in which the gauge fields take values, in a generalization of the common Lie-algebraic picture. Here we show that these connections can always be determined when L is a Dirac structure in the H-twisted Courant algebroid. This also leads us to a derivation of the general form for the gauge symmetries of a wide class of two-dimensional topological field theories called Dirac σ-models, which interpolate between the G/G Wess-Zumino-Witten model and the (Wess-Zumino-term twisted) Poisson sigma model.
NASA Astrophysics Data System (ADS)
Wang, Wen-Yu; Xiong, Zhao-Hua; Zhou, Si-Hong
2014-06-01
Using the Bs meson wave function extracted from non-leptonic Bs decays, we reevaluate the rare decays Bs → l+l-γ (l = e, μ) in the Standard Model, including two kinds of contributions from the magnetic-penguin operator with virtual and real photons. We find that contributions to the exclusive decays from the magnetic-penguin operator b → sγ with real photons, which were regarded as negligible in the previous literature, are large, and the branching ratios of Bs → l+l-γ are enhanced by a factor of almost 2. With predicted branching ratios of the order of 10-8, these radiative dileptonic decays are expected to be detected at LHCb and the B factories in the near future.
Research and development of the evolving architecture for beyond the Standard Model
NASA Astrophysics Data System (ADS)
Cho, Kihyeon; Kim, Jangho; Kim, Junghyun
2015-12-01
The Standard Model (SM) has been successfully validated with the discovery of the Higgs boson. However, the model is still not regarded as a complete description. There are efforts to develop phenomenological models that are collectively termed beyond the standard model (BSM). BSM studies require several orders of magnitude more simulations than those required for Higgs boson events. Moreover, particle physics research involves major investments in hardware coupled with large-scale theoretical and computational efforts along with experiments. These efforts include simulation toolkits based on an evolving computing architecture. Using these simulation toolkits, we study particle physics beyond the standard model. Here, we describe the state of this research and development effort on evolving computing architectures of high throughput computing (HTC) and graphics processing units (GPUs) for searches beyond the standard model.
ERIC Educational Resources Information Center
Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon
2016-01-01
This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…
Singlet extensions of the standard model at LHC Run 2: benchmarks and comparison with the NMSSM
NASA Astrophysics Data System (ADS)
Costa, Raul; Mühlleitner, Margarete; Sampaio, Marco O. P.; Santos, Rui
2016-06-01
The complex singlet extension of the Standard Model (CxSM) is the simplest extension that provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar, and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase, decays of a Higgs boson into two different Higgs bosons are possible.
Metabolomics, Standards, and Metabolic Modeling for Synthetic Biology in Plants.
Hill, Camilla Beate; Czauderna, Tobias; Klapperstück, Matthias; Roessner, Ute; Schreiber, Falk
2015-01-01
Life on earth depends on dynamic chemical transformations that enable cellular functions, including electron transfer reactions, as well as synthesis and degradation of biomolecules. Biochemical reactions are coordinated in metabolic pathways that interact in a complex way to allow adequate regulation. Biotechnology, food, biofuel, agricultural, and pharmaceutical industries are highly interested in metabolic engineering as an enabling technology of synthetic biology to exploit cells for the controlled production of metabolites of interest. These approaches have only recently been extended to plants due to their greater metabolic complexity (such as primary and secondary metabolism) and highly compartmentalized cellular structures and functions (including plant-specific organelles) compared with bacteria and other microorganisms. Technological advances in analytical instrumentation in combination with advances in data analysis and modeling have opened up new approaches to engineer plant metabolic pathways and allow the impact of modifications to be predicted more accurately. In this article, we review challenges in the integration and analysis of large-scale metabolic data, present an overview of current bioinformatics methods for the modeling and visualization of metabolic networks, and discuss approaches for interfacing bioinformatics approaches with metabolic models of cellular processes and flux distributions in order to predict phenotypes derived from specific genetic modifications or subjected to different environmental conditions.
Truong, Quynh A.; Thai, Wai-ee; Wai, Bryan; Cordaro, Kevin; Cheng, Teresa; Beaudoin, Jonathan; Xiong, Guanglei; Cheung, Jim W.; Altman, Robert; Min, James K.; Singh, Jagmeet P.; Barrett, Conor D.; Danik, Stephan
2015-01-01
Background Myocardial scar is a substrate for ventricular tachycardia and sudden cardiac death. Late enhancement computed tomography (CT) imaging can detect scar, but it remains unclear whether the newer late enhancement dual-energy (LE-DECT) acquisition has benefit over standard single-energy late enhancement (LE-CT). Objective We aim to compare late enhancement CT using the newer LE-DECT acquisition and single-energy LE-CT acquisitions with pathology and electroanatomical mapping (EAM) in an experimental chronic myocardial infarction (MI) porcine study. Methods In 8 chronic MI pigs (59±5 kg), we performed dual-source CT, EAM, and pathology. For CT imaging, we performed 3 acquisitions at 10 minutes post-contrast: LE-CT 80 kV, LE-CT 100 kV, and LE-DECT with two post-processing software settings. Results Of the sequences, LE-CT 100 kV provided the best contrast-to-noise ratio (all p≤0.03) and the best correlation with pathology for scar (ρ=0.88). While LE-DECT overestimated scar (both p=0.02), the LE-CT images did not (both p=0.08). On a segment basis (n=136), all CT sequences had high specificity (87–93%) and modest sensitivity (50–67%), with LE-CT 100 kV having the highest specificity (93%) for scar detection compared with pathology and the best agreement with EAM (κ = 0.69). Conclusions Standard single-energy LE-CT, particularly at 100 kV, matched pathology and EAM better than dual-energy LE-DECT for scar detection. Larger human trials, as well as technical studies optimizing different energies with newer hardware and software, are warranted. PMID:25977115
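The per-segment accuracy figures reported in this abstract reduce to standard confusion-table arithmetic. A minimal sketch in Python (the counts below are hypothetical, not the study's data):

```python
# Illustrative sketch only: per-segment sensitivity and specificity of the
# kind reported above, computed from a confusion table against the pathology
# reference. All counts are hypothetical.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true scar segments correctly detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of scar-free segments correctly called negative."""
    return tn / (tn + fp)

# Hypothetical counts for one CT sequence over n = 136 segments
tp, fn, tn, fp = 20, 10, 99, 7
print(f"sensitivity = {sensitivity(tp, fn):.0%}")
print(f"specificity = {specificity(tn, fp):.0%}")
```

High specificity with modest sensitivity, as in the abstract, corresponds to few false positives (fp) but a sizable number of missed scar segments (fn).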
e⁺e⁻ interactions at very high energy: searching beyond the standard model
Dorfan, J.
1983-04-01
These lectures discuss e⁺e⁻ interactions at very high energies, with particular emphasis on searching beyond the standard model, which we take to be SU(3)_color × SU(2) × U(1). The highest e⁺e⁻ collision energy exploited to date is at PETRA, where data have been taken at 38 GeV. We consider energies above this to be the very high energy frontier. The lectures begin with a review of the collision energies that will be available in the upgraded machines of today and the machines planned for tomorrow. Without going into great detail, we define the essential elements of the standard model. We remind ourselves that some of these essential elements have not yet been verified and that part of the task of searching beyond the standard model will involve experiments aimed at this verification; for if we find the standard model lacking, then clearly we are forced to find an alternative. We then investigate how higher energy e⁺e⁻ collisions can be used to search for the top quark and the neutral Higgs scalar, provide true verification of the non-Abelian nature of QCD, and so on. Having done this, we look at tests of models involving simple extensions of the standard model: models without a top quark, models with charged Higgs scalars, with multiple and/or composite vector bosons, with additional generations, and possible alternative explanations for the PETRA three-jet events that do not require gluon bremsstrahlung. From the simple extensions of the standard model we move to more radical alternatives, which have arisen from unhappiness with the gauge hierarchy problem of the standard model: technicolor, supersymmetry, and composite models. In the final section we summarize what the future holds for the search beyond the standard model.
New perspectives in physics beyond the standard model
Weiner, Neal Jonathan
2000-09-09
In 1934 Fermi postulated a theory of weak interactions containing a dimensionful coupling with a scale of roughly 250 GeV. Only now are we finally exploring this energy regime. What arises there is an open question: supersymmetry and large extra dimensions are two possible scenarios. Meanwhile, other experiments will begin providing definitive information on the nature of neutrino masses and CP violation. In this paper, we explore features of possible theoretical scenarios and study the phenomenological implications of various models addressing the open questions surrounding these issues.
Yosano, Akira; Yamamoto, Masae; Shouno, Takahiro; Shiiki, Sayaka; Hamase, Maki; Kasahara, Kiyohiro; Takaki, Takashi; Takano, Nobuo; Uchiyama, Takeshi; Shibahara, Takahiko
2005-08-01
It is difficult to translate analytical values into accurate model surgery by traditional methods, especially when moving the posterior maxilla, because information on movement of the posterior nasal spine (PNS) generated by cephalometric radiographic analysis cannot be recreated in model surgery. We therefore propose a method that accurately reflects such analysis and simulates movement using Quick Ceph 2000 (Orthodontic Processing Corporation, USA). This allows the enrichment of model surgery prior to actual surgery in cases involving upward movement of the posterior maxilla. All patients who participated in this study had skeletal mandibular prognathism characterized by a small occlusal plane angle with respect to the S-N plane. Cephalometric radiographs were taken and analyzed with the Quick Ceph 2000. Pre- and post-surgical evaluations were performed using Sassouni arc analysis and Ricketts analysis. Prior to transposition, we prepared an anterior occlusal bite record on a model mounted on an articulator. This bite was then used as a reference when the molar parts were transposed upwards. The use of an occlusal bite permitted an accurate translation of the preoperative computer simulation into model surgery, thus facilitating favorable surgical results.
Cosmic strings in hidden sectors: 1. Radiation of standard model particles
Long, Andrew J.; Hyde, Jeffrey M.; Vachaspati, Tanmay
2014-09-01
In hidden sector models with an extra U(1) gauge group, new fields can interact with the Standard Model only through gauge kinetic mixing and the Higgs portal. After the U(1) is spontaneously broken, these interactions couple the resultant cosmic strings to Standard Model particles. We calculate the spectrum of radiation emitted by these ''dark strings'' in the form of Higgs bosons, Z bosons, and Standard Model fermions assuming that string tension is above the TeV scale. We also calculate the scattering cross sections of Standard Model fermions on dark strings due to the Aharonov-Bohm interaction. These radiation and scattering calculations will be applied in a subsequent paper to study the cosmological evolution and observational signatures of dark strings.
Search for the Standard Model Higgs Boson Produced in Association with Top Quarks
Wilson, Jonathan Samuel
2011-01-01
We have performed a search for the Standard Model Higgs boson produced in association with top quarks in the lepton plus jets channel. We impose no constraints on the decay of the Higgs boson. We employ ensembles of neural networks to discriminate events containing a Higgs boson from the dominant tt̄ background, and set upper bounds on the Higgs production cross section. At a Higgs boson mass mH = 120 GeV/c², we expect to exclude a cross section 12.7 times the Standard Model prediction, and we observe an exclusion of 27.4 times the Standard Model prediction at 95% confidence.
NASA Astrophysics Data System (ADS)
Tedesco, M.; Datta, R.; Fettweis, X.; Agosta, C.
2015-12-01
Surface-layer snow density is important to processes contributing to surface mass balance, but is highly variable over Antarctica due to a wide range of near-surface climate conditions over the continent. Formulations for fresh snow density have typically either used fixed values or been modeled empirically using field data limited to specific seasons or regions. There is also currently limited work exploring how the sensitivity to fresh snow density in regional climate models varies with resolution. Here, we present a new formulation compiled from (a) over 1600 distinct density profiles from multiple sources across Antarctica and (b) near-surface variables from the regional climate model Modèle Atmosphérique Régional (MAR). Observed values represent coastal areas as well as the plateau, in both West and East Antarctica (although East Antarctica is dominant). However, no measurements are included from the Antarctic Peninsula, which is both highly topographically variable and extends to lower latitudes than the remainder of the continent. In order to assess the applicability of this fresh snow density formulation to the Antarctic Peninsula at high resolutions, a version of MAR is run for several years both at low resolution at the continental scale and at high resolution for the Antarctic Peninsula alone. This setup is run both with and without the new fresh density formulation to quantify the sensitivity of the energy balance and SMB components to fresh snow density. Outputs are compared with near-surface atmospheric variables available from AWS stations (provided by the University of Wisconsin-Madison) as well as net accumulation values from the SAMBA database (provided by the Laboratoire de Glaciologie et Géophysique de l'Environnement).
Non-standard charged Higgs decay at the LHC in Next-to-Minimal Supersymmetric Standard Model
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Priyotosh; Huitu, Katri; Niyogi, Saurabh
2016-07-01
We consider the next-to-minimal supersymmetric standard model (NMSSM), which has a gauge singlet superfield. The scale-invariant superpotential contains no mass terms, and the whole Lagrangian has an additional Z₃ symmetry. This model can have a light scalar and/or pseudoscalar allowed by the recent data from the LHC and the old data from LEP. We investigate the situation where a relatively light charged Higgs can decay to such a singlet-like pseudoscalar and a W± boson, giving rise to a final state containing τ and/or b-jets and lepton(s). Such decays evade the recent bounds on the charged Higgs from the LHC and, according to our PYTHIA-FastJet based simulation, can be probed with 10 fb⁻¹ at LHC center-of-mass energies of 13 and 14 TeV.
A New Proof of the Expected Frequency Spectrum under the Standard Neutral Model.
Hudson, Richard R
2015-01-01
The sample frequency spectrum is an informative and frequently employed approach for summarizing DNA variation data. Under the standard neutral model the expectation of the sample frequency spectrum has been derived by at least two distinct approaches. One relies on using results from diffusion approximations to the Wright-Fisher Model. The other is based on Pólya urn models that correspond to the standard coalescent model. A new proof of the expected frequency spectrum is presented here. It is a proof by induction and does not require diffusion results and does not require the somewhat complex sums and combinatorics of the derivations based on urn models.
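The classical result whose proof is the subject of this paper can be stated compactly: under the standard neutral model, the expected number of segregating sites at which the derived allele appears i times in a sample of n sequences is θ/i. A minimal sketch (function and variable names are ours):

```python
# Hedged sketch of the classical expectation the paper re-proves: under the
# standard neutral model, E[xi_i] = theta / i for i = 1, ..., n-1, where
# xi_i counts sites with derived-allele count i in a sample of n sequences.

def expected_frequency_spectrum(n: int, theta: float) -> list[float]:
    """Expected unfolded site frequency spectrum for a sample of size n."""
    return [theta / i for i in range(1, n)]

spectrum = expected_frequency_spectrum(5, theta=2.0)
# Summing the spectrum recovers Watterson's expected number of segregating
# sites, theta * (1 + 1/2 + ... + 1/(n-1)).
```

This is only the statement of the result; the paper's contribution is a proof by induction that avoids both diffusion approximations and urn-model combinatorics.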
Energizing a Large Urban System: Reform through a Standards Driven Model.
ERIC Educational Resources Information Center
Robbins, Stephen B.
This paper describes the District of Columbia Public School System (DCPS); articulates challenges it faced prior to standards based reform; presents strategies for reforming large urban systems' health and physical education (HPE) programs; and notes strategies for incorporating a standards-based performance-driven model. DCPS reading and math…
Model Core Teaching Standards: A Resource for State Dialogue. (Draft for Public Comment)
ERIC Educational Resources Information Center
Council of Chief State School Officers, 2010
2010-01-01
With this document, the Council of Chief State School Officers (CCSSO) offers for public dialogue and comment a set of model core teaching standards that outline what teachers should know and be able to do to help all students reach the goal of being college- and career-ready in today's world. These standards are an update of the 1992 Interstate…
Can Cognitive Writing Models Inform the Design of the Common Core State Standards?
ERIC Educational Resources Information Center
Hayes, John R.; Olinghouse, Natalie G.
2015-01-01
In this article, we compare the Common Core State Standards in Writing to the Hayes cognitive model of writing, adapted to describe the performance of young and developing writers. Based on the comparison, we propose the inclusion of standards for motivation, goal setting, writing strategies, and attention by writers to the text they have just…
ERIC Educational Resources Information Center
Crawford, Linda
These instructional materials are designed for students with some French reading skills and vocabulary in late beginning or early intermediate senior high school French. The objectives are to introduce students to a French newspaper, "Le Figaro," and develop reading skills for skimming, gathering specific information, and relying on cognates. The…
Integrating Science into Design Technology Projects: Using a Standard Model in the Design Process.
ERIC Educational Resources Information Center
Zubrowski, Bernard
2002-01-01
Fourth graders built a model windmill using a three-step process: (1) open exploration of designs; (2) application of a standard model incorporating features of suggested designs; and (3) refinement of preliminary models. The approach required math, science, and technology teacher collaboration and adequate time. (Contains 21 references.) (SK)
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
The fermion content of the Standard Model from a simple world-line theory
NASA Astrophysics Data System (ADS)
Mansfield, Paul
2015-04-01
We describe a simple model that automatically generates the sum over gauge group representations and chiralities of a single generation of fermions in the Standard Model, augmented by a sterile neutrino. The model is a modification of the world-line approach to chiral fermions.
40 CFR 86.410-90 - Emission standards for 1990 and later model year motorcycles.
Code of Federal Regulations, 2012 CFR
2012-07-01
... model year motorcycles. 86.410-90 Section 86.410-90 Protection of Environment ENVIRONMENTAL PROTECTION... ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.410-90 Emission standards for 1990 and later model year motorcycles. (a)(1) Exhaust emissions from 1990 and later model...
40 CFR 86.410-90 - Emission standards for 1990 and later model year motorcycles.
Code of Federal Regulations, 2014 CFR
2014-07-01
... model year motorcycles. 86.410-90 Section 86.410-90 Protection of Environment ENVIRONMENTAL PROTECTION... ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.410-90 Emission standards for 1990 and later model year motorcycles. (a)(1) Exhaust emissions from 1990 and later model...
Classical conformality in the Standard Model from Coleman’s theory
NASA Astrophysics Data System (ADS)
Kawana, Kiyoharu
2016-09-01
Classical conformality (CC) is one of the possible candidates for explaining the gauge hierarchy of the Standard Model (SM). We show that it is naturally obtained from Coleman's theory of baby universes.
Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.
Shao, Lijing
2014-03-21
The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe the dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. This constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits on the LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They all improve on existing limits by significant factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity. PMID:24702346
NASA Astrophysics Data System (ADS)
Finnegan, N. J.; Gran, K. B.
2012-12-01
Catastrophic draining of glacial Lake Agassiz at the end of the Pleistocene triggered a pulse of incision along the Minnesota River, MN, USA, that is currently propagating into tributary channels and elevating channel incision rates far above regional background levels. At the same time, installation of artificial drainage to remove excess soil water (tiling) in tributaries of the Minnesota has resulted in shorter and higher amplitude hydrographs during spring snowmelt and storm events. Thus both natural and anthropogenic explanations exist for high sediment loads from tributaries to the Minnesota River, among them the Le Sueur River, which is currently impaired for turbidity under EPA Clean Water Act standards. Here we investigate the transient incision history of the Le Sueur River to aid in the development of Total Maximum Daily Loads (TMDLs) for sediment in the Le Sueur. Establishing TMDLs for the Le Sueur requires separation of anthropogenic and geologic contributions to current sediment loads. Toward this end, we ran a series of numerical simulations of the excavation of the Le Sueur River valley over the Holocene in order to constrain pre-settlement rates of sediment export. Our approach relies on coupling (with varying strength) a 2D numerical model for river meandering to various 1D numerical models for river incision. Fortuitously, both the initial profile of the Le Sueur (prior to the flood from Lake Agassiz) and the timing of the flood itself can be reasonably constrained from LiDAR data and previous Quaternary studies, respectively. Additionally, LiDAR mapping of discontinuous, unpaired strath terraces combined with OSL and/or ¹⁴C dates on 18 strath terrace deposits pins pieces of the long profile of the Le Sueur River in time and space. By minimizing the model misfit for strath terrace ages, the current river elevation long profile, and the width between bluffs along the Le Sueur River valley, we identify a preferred valley excavation history.
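A detachment-limited ("stream power") incision rule is the standard building block of 1D river profile models of the kind this abstract describes coupling to a meandering model. The sketch below is a generic illustration under assumed parameter values, not the authors' code:

```python
# Generic sketch of a detachment-limited ("stream power") incision rule,
# dz/dt = -K * A**m * S**n, commonly used in 1D river-profile models.
# K, m, n, dt, and the example profile are illustrative assumptions.

def incise(z, area, dx, K=1e-5, m=0.5, n=1.0, dt=100.0):
    """One explicit time step of stream-power incision on a 1D profile.

    z[0] is the outlet (base level, held fixed); slopes are taken downstream.
    """
    z_new = list(z)
    for i in range(1, len(z)):
        slope = max((z[i] - z[i - 1]) / dx, 0.0)        # downstream slope
        z_new[i] = z[i] - K * area[i] ** m * slope ** n * dt
    return z_new

# A base-level drop at the outlet (e.g. a lake-drainage event) steepens the
# lowest reach; the resulting knickpoint migrates upstream in later steps.
profile = [0.0, 10.0, 20.0, 30.0]     # node elevations, outlet first (m)
drainage = [4e8, 3e8, 2e8, 1e8]       # upstream drainage area per node (m^2)
profile = incise(profile, drainage, dx=1000.0)
```

Running such a rule forward from an assumed pre-flood profile, and comparing against dated terraces and the modern profile, is the general strategy the abstract describes for constraining the valley excavation history.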
Search for New Physics Beyond the Standard Model at BaBar
Barrett, Matthew; /Brunel U.
2008-04-16
A review of selected recent BaBar results is presented that illustrates the ability of the experiment to search for physics beyond the standard model. The decays B → τν and B → sγ provide constraints on the mass of a charged Higgs. Searches for lepton flavour violation could provide a clear signal of physics beyond the standard model. BaBar does not observe any signal of new physics with the current dataset.
From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions
Venugopalan, R.
2010-07-22
We discuss a standard model of heavy ion collisions that has emerged both from experimental results of the RHIC program and associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC and a future Electro-Ion Collider.
Higgs production cross-section in a Standard Model with four generations at the LHC
Furlan E.; Anastasiou, C.; Buehler, S.; Herzog, F.; Lazopoulos, A.
2011-07-12
We present theoretical predictions for the Higgs boson production cross-section via gluon fusion at the LHC in a Standard Model with four generations. We include QCD corrections through NLO retaining the full dependence on the quark masses, and the NNLO corrections in the heavy quark effective theory approximation. We also include electroweak corrections through three loops. Electroweak and bottom-quark contributions are suppressed in comparison to the Standard Model with three generations.
The Plate Paradigm; the Standard Model Reductio ad Absurdum
NASA Astrophysics Data System (ADS)
Anderson, D. L.
2003-12-01
Midplate volcanism, volcanic chains, diffuse boundaries, and variable-chemistry basalts are usually considered to be outside the plate tectonic hypothesis and to need separate explanations. This is true only for the instantaneous, steady-state, kinematic, and hard-plate versions of the hypothesis. In the more general plate paradigm (with fewer restrictive adjectives), melting 'anomalies', seamount chains, and LIPs are by-products of plate tectonics. This assumes that the shallow mantle is close to the (variable) melting point and that athermal and episodic processes are important. Cooling of the surface generates forces that drive, break, and reorganize plates; global reorganizations (including new plate boundaries) are intrinsic; regions of intense, long-lived magmatism and shallow tensile stress are (usually but not always) plate boundaries. Plates are regions of lateral compression. Plate boundaries have shallow extensional or strike-slip earthquakes; where the mantle is near the melting point, the buoyancy of magma generates dikes and volcanoes. When compressional forces dominate, upwelling magmas pond beneath the plate until released by extensional stresses. Large melting anomalies are episodic and associated with changes in plate stress and new plate boundaries (often triple junctions). Incipient boundaries can be extensional and volcanic, as can abandoned ones. Ridges, island arcs, seamount fields and chains, and reactivated and incipient boundaries are part of a single process. The plate paradigm thereby reverses the assumptions of current geodynamic and geochemical reservoir models: locations of volcanoes are controlled by lithospheric stress and fabric (not mantle temperature); the volumes of magma are controlled by lithospheric extension and shallow mantle fertility (not by conditions at the core-mantle boundary); and the stress, fertility, and thermal states are controlled by plate tectonics and upper mantle recycling (not by infusions from the deep mantle).
Ahern, J C M; Smith, F H
2004-01-01
This study documents and examines selected implications of the adolescent supraorbital anatomy of the Le Moustier 1 Neandertal. Le Moustier 1's supraorbital morphology conforms to that expected of an adolescent Neandertal but indicates that significant development of the adult Neandertal torus occurs late in ontogeny. As the best preserved adolescent from the Late Pleistocene, Le Moustier 1's anatomy is used to help distinguish adolescent from adult anatomy in two cases of fragmentary supraorbital fossils: the Vindija late Neandertals and KRM 16425 from Klasies River Mouth (South Africa). It has been suggested that the modern-like aspects of the Vindija and Klasies supraorbital fossils are a function of developmental age rather than evolution. Although Le Moustier 1's anatomy does indicate that two of the Vindija fossils are adolescent, these two fossils have already been excluded from studies that demonstrate transitional aspects of the Vindija adult supraorbitals. Results of an analysis of KRM 16425 in light of Le Moustier 1 are more ambiguous. KRM 16425 is clearly not a Neandertal, but its morphology suggests that it may be an adolescent form of late archaic Africans such as Florisbad or Ngaloba. Both the Vindija and Klasies River Mouth cases highlight the need to be wary of confusing adolescent anatomy with modernity.
The model standards project: creating inclusive systems for LGBT youth in out-of-home care.
Wilber, Shannan; Reyes, Carolyn; Marksamer, Jody
2006-01-01
This article describes the Model Standards Project (MSP), a collaboration of Legal Services for Children and the National Center for Lesbian Rights. The MSP developed a set of model professional standards governing the care of lesbian, gay, bisexual and transgender (LGBT) youth in out-of-home care. This article provides an overview of the experiences of LGBT youth in state custody, drawing from existing research, as well as the actual experiences of youth who participated in the project or spoke with project staff. It will describe existing professional standards applicable to child welfare and juvenile justice systems, and the need for standards specifically focused on serving LGBT youth. The article concludes with recommendations for implementation of the standards in local jurisdictions.
Two hundred heterotic standard models on smooth Calabi-Yau threefolds
NASA Astrophysics Data System (ADS)
Anderson, Lara B.; Gray, James; Lukas, Andre; Palti, Eran
2011-11-01
We construct heterotic standard models by compactifying on smooth Calabi-Yau three-folds in the presence of purely Abelian internal gauge fields. A systematic search over complete intersection Calabi-Yau manifolds with less than six Kähler parameters leads to over 200 such models which we present. Each of these models has precisely the matter spectrum of the minimal supersymmetric standard model, at least one pair of Higgs doublets, the standard model gauge group, and no exotics. For about 100 of these models there are four additional U(1) symmetries which are Green-Schwarz anomalous and, hence, massive. In the remaining cases, three U(1) symmetries are anomalous, while the fourth, massless one can be spontaneously broken by singlet vacuum expectation values. The presence of additional global U(1) symmetries, together with the possibility of switching on singlet vacuum expectation values, leads to a rich phenomenology which is illustrated for a particular example. Our database of standard models, which can be further enlarged by simply extending the computer-based search, allows for a detailed and systematic phenomenological analysis of string standard models, covering issues such as the structure of Yukawa couplings, R-parity violation, proton stability, and neutrino masses.
Search for Standard Model $ZH \\to \\ell^+\\ell^-b\\bar{b}$ at DØ
Jiang, Peng
2014-07-01
We present a search for the Standard Model Higgs boson in the ZH → ℓ⁺ℓ⁻bb̄ channel, using data collected with the DØ detector at the Fermilab Tevatron Collider. This analysis is based on a sample of reprocessed data incorporating several improvements relative to a previously published result, and on a modified multivariate analysis strategy. For a Standard Model Higgs boson of mass 125 GeV, the expected limit on the cross section over the Standard Model prediction is improved by about 5% compared to the previously published results in this channel from the DØ Collaboration.
Hucka, Michael; Nickerson, David P.; Bader, Gary D.; Bergmann, Frank T.; Cooper, Jonathan; Demir, Emek; Garny, Alan; Golebiewski, Martin; Myers, Chris J.; Schreiber, Falk; Waltemath, Dagmar; Le Novère, Nicolas
2015-01-01
The Computational Modeling in Biology Network (COMBINE) is a consortium of groups involved in the development of open community standards and formats used in computational modeling in biology. COMBINE’s aim is to act as a coordinator, facilitator, and resource for different standardization efforts whose domains of use cover related areas of the computational biology space. In this perspective article, we summarize COMBINE, its general organization, and the community standards and other efforts involved in it. Our goals are to help guide readers toward standards that may be suitable for their research activities, as well as to direct interested readers to relevant communities where they can best expect to receive assistance in how to develop interoperable computational models. PMID:25759811
NASA Astrophysics Data System (ADS)
Bizouard, Christian
2012-03-01
Variations in the Earth's rotation. Because it conditions our daily life, our perception of the sky, and many geophysical phenomena such as the formation of cyclones, the Earth's rotation sits at the crossroads of several disciplines. If the phenomenon were uniform, the subject would quickly be exhausted; it is because the Earth's rotation varies, however imperceptibly to our senses, in its angular velocity as well as in the direction of its axis, that it attracts such interest. First, for practical reasons: not only do the vagaries of the Earth's rotation gradually modify astrometric pointings at a given time of day, they also influence the measurements made by space techniques; consequently, exploiting those measurements, for example to determine the orbits of the satellites involved or to perform ground positioning, requires precise knowledge of these variations. More fundamentally, the variations reflect the global properties of the Earth and the physical processes unfolding within it, so that analyzing the causes of the observed fluctuations provides a means of better understanding our globe. The progressive discovery of the fluctuations of the Earth's rotation has a long history. From the standpoint of observing techniques, three eras stand out: first, naked-eye astrometric pointing, using wooden or metal instruments (mural quadrants, for example). From the 17th century onward came telescopic astrometry, whose pointings were complemented by ever more precise timings thanks to the invention of pendulum-regulated clocks. This second era ended around 1960 with the advent of space techniques: astrometric pointings were abandoned in favor of ultra-precise measurement of the durations or frequencies of electromagnetic signals, thanks to the invention of clocks
NASA Technical Reports Server (NTRS)
1981-01-01
The use of an International Standards Organization (ISO) Open Systems Interconnection (OSI) Reference Model and its relevance to interconnecting an Applications Data Service (ADS) pilot program for data sharing is discussed. A top level mapping between the conjectured ADS requirements and identified layers within the OSI Reference Model was performed. It was concluded that the OSI model represents an orderly architecture for the ADS networking planning and that the protocols being developed by the National Bureau of Standards offer the best available implementation approach.
A Standard-Based Model for Adaptive E-Learning Platform for Mauritian Academic Institutions
ERIC Educational Resources Information Center
Kanaksabee, P.; Odit, M. P.; Ramdoyal, A.
2011-01-01
The key aim of this paper is to introduce a standard-based model for an adaptive e-learning platform for Mauritian academic institutions and to investigate the conditions and tools required to implement this model. The main strengths of the system are that it allows collaborative learning and communication among users, and reduces considerable paperwork.…
ERIC Educational Resources Information Center
Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.
2016-01-01
Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…
ERIC Educational Resources Information Center
Finch, W. Holmes; Cassady, Jerrell C.
2014-01-01
In the USA, trends in educational accountability have driven several models attempting to provide quality data for decision making at the national, state, and local levels, regarding the success of schools in meeting standards for competence. Statistical methods to generate data for such decisions have generally included (a) status models that…
NASA Astrophysics Data System (ADS)
García-Alegre, Ana; Sánchez, Francisco; Gómez-Ballesteros, María; Hinz, Hilmar; Serrano, Alberto; Parra, Santiago
2014-08-01
The management and protection of potentially vulnerable species and habitats require the availability of detailed spatial data. However, such data are often not readily available in particular areas that are challenging for sampling by traditional sampling techniques, for example seamounts. Within this study habitat modelling techniques were used to create predictive maps of six species of conservation concern for the Le Danois Bank (El Cachucho Marine Protected Area in the South of the Bay of Biscay). The study used data from ECOMARG multidisciplinary surveys that aimed to create a representative picture of the physical and biological composition of the area. Classical fishing gear (otter trawl and beam trawl) was used to sample benthic communities that inhabit sedimentary areas, and non-destructive visual sampling techniques (ROV and photogrammetric sled) were used to determine the presence of epibenthic macrofauna in complex and vulnerable habitats. Multibeam echosounder data, high-resolution seismic profiles (TOPAS system) and geological data from box-corer were used to characterize the benthic terrain. ArcGIS software was used to produce high-resolution maps (75 × 75 m grid) of such variables in the entire area. The Maximum Entropy (MAXENT) technique was used to process these data and create Habitat Suitability maps for six species of special conservation interest. The model used seven environmental variables (depth, rugosity, aspect, slope, Bathymetric Position Index (BPI) at fine and broad scales, and morphosedimentary characteristics) to identify the most suitable habitats for such species and indicates which environmental factors determine their distribution. The six species models performed highly significantly better than random (p<0.0001; Mann-Whitney test) when Area Under the Curve (AUC) values were tested. This indicates that the environmental variables chosen are relevant to distinguish the distribution of these species. The Jackknife test estimated depth
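The AUC-versus-random comparison reported above has a simple pairwise interpretation: AUC equals the Mann-Whitney U statistic divided by the number of presence-absence pairs. A minimal sketch in plain Python, with hypothetical suitability scores (not taken from the study):

```python
def auc_from_scores(presence, absence):
    """AUC as the probability that a randomly chosen presence site gets a
    higher habitat-suitability score than a randomly chosen absence site
    (the Mann-Whitney U statistic divided by n_presence * n_absence)."""
    wins = 0.0
    for p in presence:
        for a in absence:
            if p > a:
                wins += 1.0   # concordant pair
            elif p == a:
                wins += 0.5   # ties count half
    return wins / (len(presence) * len(absence))

# Hypothetical model scores at sampled presence/absence sites:
presence = [0.91, 0.84, 0.78, 0.66]
absence = [0.40, 0.32, 0.21, 0.15]
print(auc_from_scores(presence, absence))  # 1.0: perfect separation
```

An AUC of 0.5 corresponds to the "random" baseline against which the six species models were tested.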
Standardization Process for Space Radiation Models Used for Space System Design
NASA Technical Reports Server (NTRS)
Barth, Janet; Daly, Eamonn; Brautigam, Donald
2005-01-01
The space system design community has three concerns related to models of the radiation belts and plasma: 1) AP-8 and AE-8 models are not adequate for modern applications; 2) Data that have become available since the creation of AP-8 and AE-8 are not being fully exploited for modeling purposes; 3) When new models are produced, there is no authorizing organization identified to evaluate the models or their datasets for accuracy and robustness. This viewgraph presentation provided an overview of the roadmap adopted by the Working Group Meeting on New Standard Radiation Belt and Space Plasma Models.
NASA Astrophysics Data System (ADS)
Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis
2013-10-01
Approaches for the analysis and specification of business vocabularies and rules are highly relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in common Information Systems Development practice, business modeling activities are still mostly empirical in nature. In this paper, the basic aspects of an approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business-modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics of Business Vocabulary and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.
Assessment of the Draft AIAA S-119 Flight Dynamic Model Exchange Standard
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Murri, Daniel G.; Hill, Melissa A.; Jessick, Matthew V.; Penn, John M.; Hasan, David A.; Crues, Edwin Z.; Falck, Robert D.; McCarthy, Thomas G.; Vuong, Nghia; Zimmerman, Curtis
2011-01-01
An assessment of a draft AIAA standard for flight dynamics model exchange, ANSI/AIAA S-119-2011, was conducted on behalf of NASA by a team from the NASA Engineering and Safety Center. The assessment included adding the capability of importing standard models into real-time simulation facilities at several NASA Centers as well as into analysis simulation tools. All participants were successful at importing two example models into their respective simulation frameworks by using existing software libraries or by writing new import tools. Deficiencies in the libraries and format documentation were identified and fixed; suggestions for improvements to the standard were provided to the AIAA. An innovative tool to generate C code directly from such a model was developed. Performance of the software libraries compared favorably with compiled code. As a result of this assessment, several NASA Centers can now import standard models directly into their simulations. NASA is considering adopting the now-published S-119 standard as an internal recommended practice.
A Lifecycle Approach to Brokered Data Management for Hydrologic Modeling Data Using Open Standards.
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Booth, N.; Kunicki, T.; Walker, J.
2012-12-01
The U.S. Geological Survey Center for Integrated Data Analytics has formalized an information management-architecture to facilitate hydrologic modeling and subsequent decision support throughout a project's lifecycle. The architecture is based on open standards and open source software to decrease the adoption barrier and to build on existing, community supported software. The components of this system have been developed and evaluated to support data management activities of the interagency Great Lakes Restoration Initiative, Department of Interior's Climate Science Centers and WaterSmart National Water Census. Much of the research and development of this system has been in cooperation with international interoperability experiments conducted within the Open Geospatial Consortium. Community-developed standards and software, implemented to meet the unique requirements of specific disciplines, are used as a system of interoperable, discipline specific, data types and interfaces. This approach has allowed adoption of existing software that satisfies the majority of system requirements. Four major features of the system include: 1) assistance in model parameter and forcing creation from large enterprise data sources; 2) conversion of model results and calibrated parameters to standard formats, making them available via standard web services; 3) tracking a model's processes, inputs, and outputs as a cohesive metadata record, allowing provenance tracking via reference to web services; and 4) generalized decision support tools which rely on a suite of standard data types and interfaces, rather than particular manually curated model-derived datasets. Recent progress made in data and web service standards related to sensor and/or model derived station time series, dynamic web processing, and metadata management are central to this system's function and will be presented briefly along with a functional overview of the applications that make up the system. As the separate
Model Validation and Testing: The Methodological Foundation of ASHRAE Standard 140; Preprint
Judkoff, R.; Neymark, J.
2006-07-01
Ideally, whole-building energy simulation programs model all aspects of a building that influence energy use and thermal and visual comfort for the occupants. An essential component of the development of such computer simulation models is a rigorous program of validation and testing. This paper describes a methodology to evaluate the accuracy of whole-building energy simulation programs. The methodology is also used to identify and diagnose differences in simulation predictions that may be caused by algorithmic differences, modeling limitations, coding errors, or input errors. The methodology has been adopted by ANSI/ASHRAE Standard 140 (ANSI/ASHRAE 2001, 2004), Method of Test for the Evaluation of Building Energy Analysis Computer Programs. A summary of the method is included in the ASHRAE Handbook of Fundamentals (ASHRAE 2005). This paper describes the ANSI/ASHRAE Standard 140 method of test and its methodological basis. Also discussed are possible future enhancements to Standard 140 and related research recommendations.
The BNL g-2 Experiment: A Virtual Probe of the Standard Model
NASA Astrophysics Data System (ADS)
Cushman, Priscilla
2002-04-01
The Brookhaven g-2 experiment to measure the anomalous magnetic moment of the muon has accumulated over 10 billion decays from both polarities of the muon. A superferric storage ring shimmed to 1 ppm, superconducting inflector, electrostatic quadrupoles, and scintillating fiber calorimeters represent state-of-the-art engineering for a precision experiment whose sensitivity to new physics rivals high energy colliders. The result from the 1999 data set with 1.3 ppm precision in the anomaly showed a 2.5 sigma deviation from the Standard Model. Recent changes in the theoretical calculation reduce this to 1.5 sigma. The 2000 data set will double the precision. Do we still have a deviation from the Standard Model and what goes into the Standard Model calculation anyway? Results from the latest run and details of the experiment will be presented.
Exploring the Standard Model with the High Luminosity, Polarized Electron-Ion Collider
Milner, Richard G.
2009-08-04
The Standard Model is only a few decades old and has been successfully confirmed by experiment, particularly at the high energy frontier. This will continue with renewed vigor at the LHC. However, many important elements of the Standard Model remain poorly understood. In particular, the exploration of the strong interaction theory Quantum Chromodynamics is in its infancy. How does the spin-1/2 of the proton arise from the fundamental quark and gluon constituents? Can we understand the new QCD world of virtual quarks and gluons in the nucleon? Using precision measurements can we test the limits of the Standard Model and look for new physics? To address these and other important questions, physicists have developed a concept for a new type of accelerator, namely a high luminosity, polarized electron-ion collider. Here the scientific motivation is summarized and the accelerator concepts are outlined.
B → K*ℓ⁺ℓ⁻ decays at large recoil in the Standard Model: a theoretical reappraisal
NASA Astrophysics Data System (ADS)
Ciuchini, Marco; Fedele, Marco; Franco, Enrico; Mishima, Satoshi; Paul, Ayan; Silvestrini, Luca; Valli, Mauro
2016-06-01
We critically reassess the theoretical uncertainties in the Standard Model calculation of the B → K*ℓ⁺ℓ⁻ observables, focusing on the low-q² region. We point out that even optimized observables are affected by sizable uncertainties, since hadronic contributions generated by current-current operators with charm are difficult to estimate, especially for q² ∼ 4m_c² ≃ 6.8 GeV². We perform a detailed numerical analysis and present both predictions and results from the fit obtained using the most recent data. We find that non-factorizable power corrections of the expected order of magnitude are sufficient to give a good description of current experimental data within the Standard Model. We discuss in detail the q² dependence of the corrections and their possible interpretation as shifts of the Standard Model Wilson coefficients.
Performance of preproduction model cesium beam frequency standards for spacecraft applications
NASA Technical Reports Server (NTRS)
Levine, M. W.
1978-01-01
A cesium beam frequency standard for spaceflight application on Navigation Development Satellites was designed and fabricated, and preliminary testing was completed. The cesium standard evolved from an earlier prototype model launched aboard NTS-2 and from the engineering development model to be launched aboard NTS satellites during 1979. A number of design innovations, including a hybrid analog/digital integrator and the replacement of analog filters and phase detectors by clocked digital sampling techniques, are discussed. Thermal and thermal-vacuum testing was concluded, and test data are presented. Stability data for averaging intervals of 10 to 10,000 seconds, measured under laboratory conditions, are shown.
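Stability over averaging intervals such as the 10 to 10,000 seconds quoted above is conventionally summarized by the Allan deviation. As an illustrative sketch only (the function and synthetic data are hypothetical, not from the report), the simple non-overlapped estimator from fractional-frequency samples is:

```python
def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y.

    The result characterizes stability at averaging time tau = m * tau0,
    where tau0 is the interval at which y was sampled.
    """
    # Average adjacent, non-overlapping blocks of m samples.
    M = len(y) // m
    ybar = [sum(y[i * m:(i + 1) * m]) / m for i in range(M)]
    # Allan variance: half the mean squared first difference of the averages.
    avar = sum((ybar[i + 1] - ybar[i]) ** 2 for i in range(M - 1)) / (2 * (M - 1))
    return avar ** 0.5

# Synthetic example: a clock alternating between +1 and -1 ppb each sample.
print(allan_deviation([1e-9, -1e-9] * 8, 1))  # ≈ 1.414e-09 (sqrt(2) ppb)
```

Sweeping m over a range of values produces the familiar log-log stability plot against averaging time.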
Beyond Standard Model Physics: At the Frontiers of Cosmology and Particle Physics
NASA Astrophysics Data System (ADS)
Lopez-Suarez, Alejandro O.
I begin to write this thesis at a time of great excitement in the fields of cosmology and particle physics. The aim of this thesis is to study and search for beyond the standard model (BSM) physics in the cosmological and high-energy particle fields. There are two main questions that this thesis aims to address: (1) What can we learn about the inflationary epoch using the pioneering gravitational-wave detector Advanced LIGO? (2) What are the properties of the dark matter particle and its interactions with standard model particles? This thesis will focus on advances in answering both questions.
a Glance Beyond the Standard Model: Latest Results from the MEG Experiment
NASA Astrophysics Data System (ADS)
Dussoni, Simeone
2014-12-01
The MEG experiment started taking data in 2009, looking for the Standard-Model-suppressed decay μ → e + γ, which, if observed, would reveal beyond-Standard-Model physics. It makes use of state-of-the-art detectors optimized for operating at very high intensity, rejecting as much background as possible. Data taking ended in August 2013, and upgrade R&D has started to push the experimental sensitivity further. The present upper limit on the decay branching ratio (BR), obtained with the subset of data from the 2009-2011 runs, is presented, together with a description of the key features of the upgraded detector.
230Th-234U Model-Ages of Some Uranium Standard Reference Materials
Williams, R W; Gaffney, A M; Kristo, M J; Hutcheon, I D
2009-05-28
The 'age' of a sample of uranium is an important aspect of a nuclear forensic investigation and of the attribution of the material to its source. To the extent that the sample obeys the standard rules of radiochronometry, the production ages of even very recent material can be determined using the ²³⁰Th-²³⁴U chronometer. These standard rules may be summarized as (a) the daughter/parent ratio at time zero must be known, and (b) there has been no daughter/parent fractionation since production. For most samples of uranium, the 'ages' determined using this chronometer are semantically 'model-ages' because (a) some assumption of the initial ²³⁰Th content in the sample is required and (b) closed-system behavior is assumed. The uranium standard reference materials originally prepared and distributed by the former US National Bureau of Standards and now distributed by New Brunswick Laboratory as certified reference materials (NBS SRM = NBL CRM) are good candidates for samples where both rules are met. The U isotopic standards have known purification and production dates, and closed-system behavior in the solid form (U₃O₈) may be assumed with confidence. We present here ²³⁰Th-²³⁴U model-ages for several of these standards, determined by isotope dilution mass spectrometry using a multicollector ICP-MS, and compare these ages with their known production history.
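Under the two rules stated above (zero initial ²³⁰Th after purification, closed-system behavior afterward), the model age follows from standard decay ingrowth; the following is a sketch of the underlying algebra (notation assumed, not quoted from the paper):

```latex
% 230Th ingrowth from 234U decay, with N_{230}(0) = 0 (complete Th removal
% at purification) and closed-system behavior thereafter:
\frac{dN_{230}}{dt} = \lambda_{234} N_{234} - \lambda_{230} N_{230},
\qquad N_{234}(t) = N_{234}(0)\, e^{-\lambda_{234} t}
% which integrates to the measured activity ratio
\left[\frac{^{230}\mathrm{Th}}{^{234}\mathrm{U}}\right]_{\mathrm{act}}
  = \frac{\lambda_{230}}{\lambda_{230} - \lambda_{234}}
    \left(1 - e^{-(\lambda_{230} - \lambda_{234}) t}\right)
% and inverts to give the model age
t = -\frac{1}{\lambda_{230} - \lambda_{234}}
    \ln\!\left(1 - \frac{\lambda_{230} - \lambda_{234}}{\lambda_{230}}
    \left[\frac{^{230}\mathrm{Th}}{^{234}\mathrm{U}}\right]_{\mathrm{act}}\right)
```

The measured ²³⁰Th/²³⁴U activity ratio thus maps directly to a production date, which is what is compared against the known NBS/NBL purification history.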
NASA Technical Reports Server (NTRS)
Avila, Arturo
2011-01-01
The Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst case fashion to yield the most hot- or cold-biased temperature. Thus, these simulations would represent the upper and lower bounds. This, effectively, represents JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes along with any temperature requirement violations are documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of the Modeling and Simulation (M&S) credibility, and the reporting of the M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study determining whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
Standards in Modeling and Simulation: The Next Ten Years MODSIM World Paper 2010
NASA Technical Reports Server (NTRS)
Collins, Andrew J.; Diallo, Saikou; Sherfey, Solomon R.; Tolk, Andreas; Turnitsa, Charles D.; Petty, Mikel; Wiesel, Eric
2011-01-01
The world has moved on since the introduction of the Distributed Interactive Simulation (DIS) standard in the early 1980s. The cold war may be over, but there is still a requirement to train for and analyze the next generation of threats facing the free world. The emergence of new and more powerful computer technology and techniques means that modeling and simulation (M&S) has become an important and growing part of satisfying this requirement. As an industry grows, the benefits of standardization within that industry grow with it. For example, it is difficult to imagine what the USA would be like without the 110-volt standard for domestic electricity supply. This paper contains an overview of the outcomes of a recent workshop investigating the possible future of M&S standards within the federal government.
Geometrically engineering the standard model: Locally unfolding three families out of E₈
Bourjaily, Jacob L.
2007-08-15
This paper extends and builds upon the results of [J. L. Bourjaily, arXiv:0704.0444], in which we described how to use the tools of geometrical engineering to deform geometrically engineered grand unified models into ones with lower symmetry. This top-down unfolding has the advantage that the relative positions of singularities giving rise to the many 'low-energy' matter fields are related by only a few parameters which deform the geometry of the unified model. And because the relative positions of singularities are necessary to compute the superpotential, for example, this is a framework in which the arbitrariness of geometrically engineered models can be greatly reduced. In [J. L. Bourjaily, arXiv:0704.0444], this picture was made concrete for the case of deforming the representations of an SU(5) model into their standard model content. In this paper we continue that discussion to show how a geometrically engineered 16 of SO(10) can be unfolded into the standard model, and how the three families of the standard model uniquely emerge from the unfolding of a single, isolated E₈ singularity.
NASA Astrophysics Data System (ADS)
Naggary, Schabnam; Brinkmann, Ralf Peter
2015-09-01
The characteristics of radio frequency (RF) modulated plasma boundary sheaths are studied on the basis of the so-called "standard sheath model." This model assumes that the applied radio frequency ωRF is larger than the plasma frequency of the ions but smaller than that of the electrons. It comprises a phase-averaged ion model (an equation of continuity, with ionization neglected, and an equation of motion, with collisional ion-neutral interaction taken into account), a phase-resolved electron model (an equation of continuity and the assumption of Boltzmann equilibrium), and Poisson's equation for the electric field. Previous investigations have studied the standard sheath model under additional approximations, most notably the assumption of a step-like electron front. This contribution presents an investigation and parameter study of the standard sheath model that avoids any further assumptions. The resulting density profiles and overall charge-voltage characteristics are compared with those of the step-model-based theories. The authors gratefully acknowledge Efe Kemaneci for helpful comments and fruitful discussions.
NASA Astrophysics Data System (ADS)
Lu, Wenlong; Liu, Xiaojun; Jiang, Xiangqian; Qi, Qunfen; Scott, Paul
2010-08-01
Geometrical Product Specifications (GPS) is an international standard system for the standardization of dimensional and geometrical tolerancing, surface texture, and related metrological principles and practices, in the charge of ISO/TC 213. An integrated information system is necessary to encapsulate the knowledge in GPS and extend its application in digital manufacturing, and establishing a suitable data structure for GPS data is one of the main tasks in building such a system. This paper focuses on cylindricity, and its main points are as follows: it proposes the complete verification operator and the complete drawing indication for cylindricity consistent with the GPS standard system; it models the inter- and intra-relationships between the elements of the operations involved in cylindricity and integrates them by category theory; and it solves the storage format and closure of query for the categorical data model by the pull-back structure and functor transform of category theory, respectively.
Jenkins, Melinda; Myers, Esther; Charney, Pam; Escott-Stump, Sylvia
2006-01-01
Standardized terminology and digital sources of evidence are essential for evidence-based practice. Dieticians desire concise and consistent documentation of nutrition diagnoses, interventions and outcomes that will be fit for electronic health records. Building on more than 5 years of work to generate the Nutrition Care Process and Model as a road map to quality nutrition care and outcomes, and recognizing existing standardized languages serving other health professions, a task force of the American Dietetic Association (ADA) has begun to develop and disseminate standardized nutrition language. This paper will describe the group's working logic model, the Nutrition Care Process, and the current status of the nutrition language with comparisons to nursing process and terminology.
Sairam, K; Dorababu, M; Goel, R K; Bhattacharya, S K
2002-04-01
Bacopa monniera Wettst. (syn. Herpestis monniera L.; Scrophulariaceae) is a commonly used Ayurvedic drug for mental disorders. The standardized extract was reported earlier to have a significant anti-oxidant effect and anxiolytic activity, and to improve memory retention in Alzheimer's disease. Here, the standardized methanolic extract of Bacopa monniera (bacoside A: 38.0 ± 0.9) was investigated for potential antidepressant activity in rodent models of depression. The effect was compared with the standard antidepressant drug imipramine (15 mg/kg, ip). The extract, given at doses of 20 and 40 mg/kg orally once daily for 5 days, was found to have significant antidepressant activity in the forced swim and learned helplessness models of depression, comparable to that of imipramine.
Johnsen, David C; Williams, John N; Baughman, Pauletta Gay; Roesch, Darren M; Feldman, Cecile A
2015-10-01
This opinion article applauds the recent introduction of a new dental accreditation standard addressing critical thinking and problem-solving, but expresses a need for additional means for dental schools to demonstrate they are meeting the new standard because articulated outcomes, learning models, and assessments of competence are still being developed. Validated, research-based learning models are needed to define reference points against which schools can design and assess the education they provide to their students. This article presents one possible learning model for this purpose and calls for national experts from within and outside dental education to develop models that will help schools define outcomes and assess performance in educating their students to become practitioners who are effective critical thinkers and problem-solvers.
Applicability of Iceland spar as a stone model standard for lithotripsy devices.
Blitz, B F; Lyon, E S; Gerber, G S
1995-12-01
The identification of a universal stone model standard would enable reproducible fragmentation data useful for the design, evaluation, and comparison of various lithotripsy devices. The clinical benefits of such a stone model include the elucidation of setting parameters that would optimize fragmentation strategies. Iceland spar is a pure form of calcite (CaCO3) that was subjected to experimental disintegration by electrohydraulic lithotripsy and extracorporeal shockwave lithotripsy. Iceland spar was fragmented with both lithotripsy methods in a reproducible fashion. The degree of fragmentation was directly related to alterations in either power or shock frequency. Iceland spar is radiopaque, inexpensive, easily obtained, homogeneous in composition, and sizable. Iceland spar meets a variety of stone model criteria, warranting its continued investigation as a potential stone model standard.
Rethinking Connes' approach to the standard model of particle physics via non-commutative geometry
NASA Astrophysics Data System (ADS)
Boyle, Latham; Farnsworth, Shane
2015-04-01
Connes' non-commutative geometry (NCG) is a generalization of Riemannian geometry that is particularly apt for expressing the standard model of particle physics coupled to Einstein gravity. Recently, we suggested a reformulation of this framework that is: (i) simpler and more unified in its axioms, and (ii) allows the Lagrangian for the standard model of particle physics (coupled to Einstein gravity) to be specified in a way that is tighter and more explanatory than the traditional algorithm based on effective field theory. Here we explain how this same reformulation yields a new perspective on the symmetries of a given NCG. Applying this perspective to the NCG traditionally used to describe the standard model we find, instead, an extension of the standard model by an extra U(1)_{B-L} gauge symmetry, and a single extra complex scalar field σ, which is a singlet under SU(3)_C × SU(2)_L × U(1)_Y but has B - L = 2. This field has cosmological implications, and offers a new solution to the discrepancy between the observed Higgs mass and the NCG prediction. We acknowledge support from an NSERC Discovery Grant.
Rethinking Connes’ Approach to the Standard Model of Particle Physics Via Non-Commutative Geometry
NASA Astrophysics Data System (ADS)
Farnsworth, Shane; Boyle, Latham
2015-02-01
Connes’ non-commutative geometry (NCG) is a generalization of Riemannian geometry that is particularly apt for expressing the standard model of particle physics coupled to Einstein gravity. In a previous paper, we suggested a reformulation of this framework that is: (i) simpler and more unified in its axioms, and (ii) allows the Lagrangian for the standard model of particle physics (coupled to Einstein gravity) to be specified in a way that is tighter and more explanatory than the traditional algorithm based on effective field theory. Here we explain how this same reformulation yields a new perspective on the symmetries of a given NCG. Applying this perspective to the NCG traditionally used to describe the standard model we find, instead, an extension of the standard model by an extra U(1)_{B-L} gauge symmetry, and a single extra complex scalar field σ, which is a singlet under SU(3)_C × SU(2)_L × U(1)_Y, but has B - L = 2. This field has cosmological implications, and offers a new solution to the discrepancy between the observed Higgs mass and the NCG prediction.
Existence of standard models of conic fibrations over non-algebraically-closed fields
Avilov, A A
2014-12-31
We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.
B → K*l+l-: Zeros of angular observables as test of standard model
NASA Astrophysics Data System (ADS)
Kumar, Girish; Mahajan, Namit
2016-03-01
We calculate the zeros of the angular observables P4' and P5' of the angular distribution of the 4-body decay B → K*(→ Kπ)l+l-, where LHCb, in its analysis of form-factor-independent angular observables, has found deviations from the standard model predictions. In the large-recoil region, we obtain relations between the zeros of P4' and P5' and the zero (s_0) of the forward-backward asymmetry of the lepton pair, A_FB. These relations are independent of hadronic uncertainties and depend only on the Wilson coefficients. We also construct a new observable, O_T^{L,R}, whose zero in the standard model coincides with s_0 but which will show different behavior in the presence of new physics contributions. Moreover, the profile of the new observable, even within the standard model, is very different from that of A_FB. We point out that precise measurements of these zeros in the near future would provide a crucial test of the standard model and would be useful in distinguishing between different possible new physics contributions to the Wilson coefficients.
On the Use of IRT Models with Judgmental Standard Setting Procedures.
ERIC Educational Resources Information Center
Kane, Michael T.
1987-01-01
The use of item response theory models for analyzing the results of judgmental standard setting studies (the Angoff technique) for establishing minimum pass levels is discussed. A comparison of three methods indicates the traditional approach may not be best. A procedure based on generalizability theory is suggested. (GDC)
Beyond standard model searches in the MiniBooNE experiment
Katori, Teppei; Conrad, Janet M.
2014-08-05
The MiniBooNE experiment has contributed substantially to beyond standard model searches in the neutrino sector. The experiment was originally designed to test the Δm² ~ 1 eV² region of the sterile neutrino hypothesis by observing νe (ν̄e) charged current quasielastic signals from a νμ (ν̄μ) beam. MiniBooNE observed excesses of νe and ν̄e candidate events in neutrino and antineutrino mode, respectively. To date, these excesses have not been explained within the neutrino standard model (νSM), the standard model extended for three massive neutrinos; confirmation is required by future experiments such as MicroBooNE. MiniBooNE also provided an opportunity for precision studies of Lorentz violation. The results set strict limits for the first time on several parameters of the standard-model extension, the generic formalism for considering Lorentz violation. Most recently, an extension to MiniBooNE running, with a beam tuned in beam-dump mode, is being performed to search for dark sector particles. This review describes these studies, demonstrating that short-baseline neutrino experiments are rich environments for new physics searches.
ERIC Educational Resources Information Center
Kulgemeyer, Christoph; Schecker, Horst
2014-01-01
This paper gives an overview of research on modelling science competence in German science education. Since the first national German educational standards for physics, chemistry and biology education were released in 2004, research projects dealing with competences have become prominent strands. Most of this research is about the structure of…
Neutron Spin Structure Studies and Low-Energy Tests of the Standard Model at JLab
Jager, Kees de
2008-10-13
The most recent results on the spin structure of the neutron from Hall A are presented and discussed. Then, an overview is given of various experiments planned with the 12 GeV upgrade at Jefferson Lab to provide sensitive tests of the Standard Model at relatively low energies.
STANDARD MODEL CP VIOLATION IN B → X_d ℓ+ℓ- DECAYS
NASA Astrophysics Data System (ADS)
Eygi, Zeynep Deniz; Turan, Gürsevil
We investigate the CP-violating asymmetry, the forward-backward asymmetry, and the CP-violating asymmetry in the forward-backward asymmetry for the inclusive B → X_d ℓ+ℓ- decays in the ℓ = e, μ, τ channels in the standard model. It is observed that these asymmetries are quite sizeable, and B → X_d ℓ+ℓ- decays seem promising for investigating CP violation.
Standard pre-main sequence models of low-mass stars
Prada Moroni, P. G.; Degl'Innocenti, S.; Tognelli, E.
2014-05-09
The main characteristics of standard pre-main sequence (PMS) models are described, along with a discussion of the uncertainties affecting the current generation of PMS evolutionary tracks and isochrones. In particular, the impact of the uncertainties in the adopted equation of state, radiative opacity, nuclear cross sections, and initial chemical abundances is analysed.
40 CFR 86.410-90 - Emission standards for 1990 and later model year motorcycles.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 Protection of Environment, Vol. 18, 2010-07-01. Emission standards for 1990 and later model year motorcycles, Section 86.410-90: ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES, Emission Regulations for 1978...
W / Z + heavy flavor production and the standard model Higgs searches at the Tevatron
Choi, S.Y. (UC, Riverside)
2004-08-01
Searches for the Standard Model Higgs in the WH and H → WW channels by the CDF and D0 collaborations are presented. The preliminary results are based on < 180 pb^{-1} of data analyzed by each experiment. Important backgrounds to Higgs searches, such as heavy flavor production in association with massive vector bosons (W and Z), are also studied.
Analogous behavior in the quantum hall effect, anyon superconductivity, and the standard model
Laughlin, R.B. (Dept. of Physics); Libby, S.B.
1991-07-01
Similarities between physical behavior known to occur, or suspected of occurring, in simple condensed matter systems and behavior postulated by the standard model are identified and discussed. Particular emphasis is given to quantum number fractionalization, spontaneous occurrence of gauge forces, spontaneous violation of P and T, and anomaly cancellation. 46 refs.
Search for a Standard Model Higgs Boson with a Dilepton and Missing Energy Signature
Gerbaudo, Davide
2011-09-01
The subject of this thesis is the search for a standard model Higgs boson decaying to a pair of W bosons that in turn decay leptonically, H → W^+W^- → ℓ̄ν ℓν̄. This search considers events produced in pp̄ collisions at √s = 1.96 TeV in which two oppositely charged lepton candidates (e^+e^-, e^±μ^∓, or μ^+μ^-) and missing transverse energy have been reconstructed. The data were collected with the D0 detector at the Fermilab Tevatron collider and are tested against the standard model predictions computed for a Higgs boson with mass in the range 115-200 GeV. No excess of events over background is observed, and limits on standard model Higgs boson production are determined. An interpretation of these limits within the hypothesis of a fourth-generation extension to the standard model is also given. The overall analysis scheme is the same for the three dilepton pairs being considered; this thesis, however, describes in detail the study of the dimuon final state.
Gravity in the Century of Light: The Gravitation Theory of Georges-Louis Le Sage
NASA Astrophysics Data System (ADS)
Evans, James
2006-05-01
Each generation of physicists, or natural philosophers, has sought to place universal gravitation in the context of its own worldview. Often this has entailed an effort to reduce gravitation to something more fundamental. But what is deemed fundamental has, of course, changed with time. Each generation attacked the problem of universal gravitation with the tools of its day and brought to bear the concepts of its own standard model. The most successful eighteenth-century attempt to provide a mechanical explanation of gravity was that of Georges-Louis Le Sage (1724-1803) of Geneva. Le Sage postulated a sea of ultramundane corpuscles, streaming in all directions and characterized by minute mass, great velocity, and complete inelasticity. Mostly these corpuscles just pass through gross bodies such as apples or planets, but a few are absorbed, leading to all the phenomena of attraction. In a voluminous correspondence with nearly all the savants of the day, Le Sage constantly reshaped his arguments for his system in order to appeal to metaphysicians, mechanicians and Newtonians of several varieties. Le Sage's theory is an especially interesting one, for several reasons. First, it serves as the prototype of a dynamical explanation of Newtonian gravity. Second, the theory came quite close to accomplishing its aim. Third, the theory had a long life and attracted comment by the leading physical thinkers of several successive generations, including Laplace, Kelvin, Maxwell and Feynman. Le Sage's theory therefore provides an excellent opportunity for the study of the evolution of attitudes toward physical explanation. The effects of national style in science and generational change take on a new clarity.
The Standard Model in the history of the Natural Sciences, Econometrics, and the social sciences
NASA Astrophysics Data System (ADS)
Fisher, W. P., Jr.
2010-07-01
In the late 18th and early 19th centuries, scientists appropriated Newton's laws of motion as a model for the conduct of any other field of investigation that would purport to be a science. This early form of a Standard Model eventually informed the basis of analogies for the mathematical expression of phenomena previously studied qualitatively, such as cohesion, affinity, heat, light, electricity, and magnetism. James Clerk Maxwell is known for his repeated use of a formalized version of this method of analogy in lectures, teaching, and the design of experiments. Economists transferring skills learned in physics made use of the Standard Model, especially after Maxwell demonstrated the value of conceiving it in abstract mathematics instead of as a concrete and literal mechanical analogy. Haavelmo's probability approach in econometrics and R. Fisher's Statistical Methods for Research Workers brought a statistical approach to bear on the Standard Model, quietly reversing the perspective of economics and the social sciences relative to that of physics. Where physicists, and Maxwell in particular, intuited scientific method as imposing stringent demands on the quality and interrelations of data, instruments, and theory in the name of inferential and comparative stability, statistical models and methods disconnected theory from data by removing the instrument as an essential component. New possibilities for reconnecting economics and the social sciences to Maxwell's sense of the method of analogy are found in Rasch's probabilistic models for measurement.
Search for the standard model Higgs boson in association with a W boson at D0
NASA Astrophysics Data System (ADS)
Shaw, Savanna Marie
I present a search for the standard model Higgs boson, H, produced in association with a W boson in data events containing a charged lepton (electron or muon), missing energy, and two or three jets. The data analysed correspond to 9.7 fb^{-1} of integrated luminosity collected at a center-of-momentum energy of √s = 1.96 TeV with the D0 detector at the Fermilab Tevatron pp̄ collider. This search uses algorithms to identify the signature of bottom quark production and multivariate techniques to improve the purity of H → bb̄ production. We validate our methodology by measuring WZ and ZZ production with Z → bb̄ and find production rates consistent with the standard model prediction. For a Higgs boson mass of 125 GeV, we determine a 95% C.L. upper limit on the production of a standard model Higgs boson of 4.8 times the standard model Higgs boson production cross section, while the expected limit is 4.7 times the standard model production cross section. I also present a novel method for improving the energy resolution for charged particles within hadronic signatures. This is achieved by replacing the calorimeter energy measurement for charged particles within a hadronic signature with the tracking momentum measurement. This technique leads to a 20% improvement in the jet energy resolution, which yields a 7% improvement in the reconstructed dijet mass width for H → bb̄ events. The improved energy calculation leads to a 5% improvement in our expected 95% C.L. upper limit on the Higgs boson production cross section.
Search for the standard model Higgs boson in association with a W boson at D0.
Shaw, Savanna Marie
2013-01-01
I present a search for the standard model Higgs boson, H, produced in association with a W boson in data events containing a charged lepton (electron or muon), missing energy, and two or three jets. The data analysed correspond to 9.7 fb^{-1} of integrated luminosity collected at a center-of-momentum energy of √s = 1.96 TeV with the D0 detector at the Fermilab Tevatron pp̄ collider. This search uses algorithms to identify the signature of bottom quark production and multivariate techniques to improve the purity of H → bb̄ production. We validate our methodology by measuring WZ and ZZ production with Z → bb̄ and find production rates consistent with the standard model prediction. For a Higgs boson mass of 125 GeV, we determine a 95% C.L. upper limit on the production of a standard model Higgs boson of 4.8 times the standard model Higgs boson production cross section, while the expected limit is 4.7 times the standard model production cross section. I also present a novel method for improving the energy resolution for charged particles within hadronic signatures. This is achieved by replacing the calorimeter energy measurement for charged particles within a hadronic signature with the tracking momentum measurement. This technique leads to a ~20% improvement in the jet energy resolution, which yields a ~7% improvement in the reconstructed dijet mass width for H → bb̄ events. The improved energy calculation leads to a ~5% improvement in our expected 95% C.L. upper limit on the Higgs boson production cross section.
Ecotoxicological models generally have large data requirements and are frequently based on existing information from diverse sources. Standardizing data for toxicological models may be necessary to reduce extraneous variation and to ensure models reflect intrinsic relationships. ...
NASA Astrophysics Data System (ADS)
Mirvis, E.; Iredell, M.
2015-12-01
The operational (OPS) NOAA National Centers for Environmental Prediction (NCEP) suite traditionally consists of a large set of multi-scale HPC models, workflows, scripts, tools, and utilities, which depend heavily on a variety of additional components. Namely, this suite utilizes a unique collection of more than 20 in-house developed shared libraries (NCEPLIBS), specific versions of third-party libraries (such as netcdf, HDF, ESMF, jasper, and xml), and an HPC workflow tool within a dedicated (sometimes even vendor-customized) homogeneous HPC system environment. This domain- and site-specific setup, combined with NCEP's product-driven, large-scale, real-time data operations, complicates NCEP collaborative development tremendously by reducing the chances of replicating this OPS environment anywhere else. The mission of NOAA/NCEP's Environmental Modeling Center (EMC) is to develop and improve numerical weather, climate, hydrological, and ocean prediction through partnership with the research community. Recognizing these difficulties, EMC has recently taken an innovative approach to improving the flexibility of the HPC environment by building the elements of, and a foundation for, an NCEP OPS functionally equivalent environment (FEE), which can also be used to ease external interface constructs. Aiming to reduce the turnaround time of community code enhancements via the Research-to-Operations (R2O) cycle, EMC developed and deployed several project sub-set standards that have already paved the road to NCEP OPS implementation standards. In this topic we will discuss the EMC FEE for O2R requirements and approaches to collaborative standardization, including NCEPLIBS FEE and model code version control paired with the models' derived customized HPC modules and FEE footprints. We will share NCEP/EMC experience and potential in the refactoring of EMC development processes and legacy codes, and in securing model source code quality standards by using a combination of the Eclipse IDE, integrated with the
Scafetta, Nicola
2011-12-01
Probability distributions of human displacements have been fit with exponentially truncated Lévy flights or fat-tailed Pareto inverse power law probability distributions. Thus, people usually stay within a given location (for example, the city of residence), but with a non-vanishing frequency they also visit nearby or far locations. Herein, we show that an important empirical distribution of human displacements (range: 1 to 1000 km) can be well fit by three consecutive Pareto distributions with simple integer exponents equal to 1, 2, and >3. These three exponents correspond to three displacement range zones of about 1 km ≲ Δr ≲ 10 km, 10 km ≲ Δr ≲ 300 km, and 300 km ≲ Δr ≲ 1000 km, respectively. These three zones can be geographically and physically well determined as displacements within a city, visits to nearby cities that may occur within one-day trips, and visits to far locations that may require multi-day trips. The incremental integer values of the three exponents can be easily explained with a three-scale mobility cost/benefit model for human displacements based on simple geometrical constraints. Essentially, people would divide space into three major regions (close, medium, and far distances) and would assume that the travel benefits are randomly/uniformly distributed mostly within specific urban-like areas. The three displacement distribution zones appear to be characterized by an integer (1, 2, or >3) inverse power exponent because of the specific number (1, 2, or >3) of cost mechanisms (each of which is proportional to the displacement length). The distributions in the first two zones would be associated with Pareto distributions with exponents β = 1 and β = 2 because of simple geometrical statistical considerations, owing to the a priori assumption that most benefits are sought in the urban area of the city of residence or in the urban area of specific nearby cities. We also show, by using independent records of
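The three-zone structure described in the abstract can be sketched numerically. The snippet below is an illustrative reconstruction, not the paper's code: the breakpoints (10 km, 300 km) and exponents follow the abstract, the third exponent is fixed at 3 although the abstract only requires it to exceed 3, and the normalization is arbitrary.

```python
# Piecewise inverse-power density for a displacement r (in km), with
# exponent 1 below 10 km, 2 up to 300 km, and 3 beyond (illustrative).
ZONES = [(1.0, 10.0, 1.0), (10.0, 300.0, 2.0), (300.0, 1000.0, 3.0)]

def unnormalized_pdf(r):
    """Pareto-like density, continuous at the zone boundaries."""
    c = 1.0  # prefactor chained so adjacent zones match at the breakpoints
    for lo, hi, beta in ZONES:
        if lo <= r <= hi:
            return c * (r / lo) ** (-beta)
        c *= (hi / lo) ** (-beta)  # carry the boundary value to the next zone
    raise ValueError("r outside the 1-1000 km range")
```

Chaining the prefactor keeps the density continuous: it falls by a factor of 10 across the first zone and by a further factor of 900 across the second.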
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Luckring, James M.; Morrison, Joseph H.; Blattnig, Steve R.; Green, Lawrence L.; Tripathi, Ram K.
2007-01-01
The National Aeronautics and Space Administration (NASA) recently issued an interim version of the Standard for Models and Simulations (M&S Standard) [1]. The action to develop the M&S Standard was identified in an internal assessment [2] of agency-wide changes needed in the wake of the Columbia Accident [3]. The primary goal of this standard is to ensure that the credibility of M&S results is properly conveyed to those making decisions affecting human safety or mission success criteria. The secondary goal is to assure that the credibility of the results from models and simulations meets the project requirements (for credibility). This presentation explains the motivation and key aspects of the M&S Standard, with a special focus on the requirements for verification, validation and uncertainty quantification. Some pilot applications of this standard to computational fluid dynamics will be provided as illustrations. The authors of this paper are the members of the team that developed the initial three drafts of the standard, the last of which benefited from extensive comments from most of the NASA Centers. The current version (number 4) incorporates modifications made by a team representing 9 of the 10 NASA Centers. A permanent version of the M&S Standard is expected by December 2007. The scope of the M&S Standard is confined to those uses of M&S that support program and project decisions that may affect human safety or mission success criteria. Such decisions occur, in decreasing order of importance, in the operations, the test & evaluation, and the design & analysis phases. Requirements are placed on (1) program and project management, (2) models, (3) simulations and analyses, (4) verification, validation and uncertainty quantification (VV&UQ), (5) recommended practices, (6) training, (7) credibility assessment, and (8) reporting results to decision makers. A key component of (7) and (8) is the use of a Credibility Assessment Scale, some of the details
Automated Verification of Design Patterns with LePUS3
NASA Technical Reports Server (NTRS)
Nicholson, Jonathan; Gasparis, Epameinondas; Eden, Ammon H.; Kazman, Rick
2009-01-01
Specification and [visual] modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove (verify) whether any Java 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.
ERIC Educational Resources Information Center
Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary
2012-01-01
The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…
NASA Standard for Models and Simulations (M and S): Development Process and Rationale
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Blattnig, Steve R.; Green, Lawrence L.; Hemsch, Michael J.; Luckring, James M.; Morison, Joseph H.; Tripathi, Ram K.
2009-01-01
After the Columbia Accident Investigation Board (CAIB) report, the NASA Administrator at that time chartered an executive team (known as the Diaz Team) to identify the CAIB report elements with Agency-wide applicability and to develop corrective measures to address each element. This report documents the chronological development and release of an Agency-wide Standard for Models and Simulations (M&S) (NASA Standard 7009) in response to Action #4 from the report, "A Renewed Commitment to Excellence: An Assessment of the NASA Agency-wide Applicability of the Columbia Accident Investigation Board Report, January 30, 2004".
Standard model explanations for the NuTeV electroweak measurements
R. H. Bernstein
2003-12-23
The NuTeV Collaboration has measured the electroweak parameters sin²θ_W and ρ in neutrino-nucleon deep-inelastic scattering using a sign-selected beam. The nearly pure ν or ν̄ beams that result provide many of the cancellations of systematics associated with the Paschos-Wolfenstein relation. The extracted result for sin²θ_W (on-shell) = 1 - M_W²/M_Z² is three standard deviations from prediction. We discuss Standard Model explanations for the puzzle.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-01
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because spectra may be measured on different instruments and the differences between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the same samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary for the method, it may be more useful in practical applications. PMID:27380302
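The flavor of this kind of transfer can be illustrated with a minimal sketch. This is not the published LMC algorithm: the ridge-style penalty, the simulated spectra, and all variable names below are assumptions, chosen only to show the idea of pulling the slave coefficients toward the master coefficient profile while fitting a handful of slave-instrument spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 50                                       # number of wavelength channels
b_master = np.sin(np.linspace(0.0, 3.0, n_wl))  # master model coefficients

X_slave = rng.normal(size=(5, n_wl))   # 5 simulated spectra on the slave
y = X_slave @ (1.1 * b_master)         # simulated slave response

lam = 1.0  # weight of the penalty tying b to the master profile
# Ridge-style closed form of  min ||X b - y||^2 + lam ||b - b_master||^2 :
A = X_slave.T @ X_slave + lam * np.eye(n_wl)
b_slave = np.linalg.solve(A, X_slave.T @ y + lam * b_master)
```

By construction the penalized solution fits the slave spectra at least as well as the untransferred master coefficients while staying close to the master profile.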
NASA Astrophysics Data System (ADS)
Smolyakov, Mikhail N.; Volobuev, Igor P.
2016-01-01
In this paper we examine, from a purely theoretical point of view and in a model-independent way, the case in which matter, gauge, and Higgs fields are allowed to propagate in the bulk of five-dimensional brane world models with a compact extra dimension, and the Standard Model fields and their interactions are supposed to be reproduced by the corresponding zero Kaluza-Klein modes. An unexpected result is that, in order to avoid possible pathological behavior in the fermion sector, it is necessary to impose constraints on the fermion field Lagrangian. When the fermion zero modes are supposed to be localized at one of the branes, these constraints imply an additional relation between the vacuum profile of the Higgs field and the form of the background metric. Moreover, this relation between the vacuum profile of the Higgs field and the form of the background metric results in the exact reproduction of the gauge boson and fermion sectors of the Standard Model by the corresponding zero-mode four-dimensional effective theory in all the physically relevant cases allowed by the absence of pathologies. Meanwhile, deviations from these conditions can lead either back to pathological behavior in the fermion sector or to a variance between the resulting zero-mode four-dimensional effective theory and the Standard Model, which, depending on the model at hand, may in principle result in constraints putting the theory out of the reach of present-day experiments.
Laomettachit, Teeraphan; Chen, Katherine C.; Baumann, William T.
2016-01-01
To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a “standard component” modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with “standard components” can capture in quantitative detail many essential properties of cell cycle control in budding yeast. PMID:27187804
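The deterministic/stochastic pairing described above can be illustrated with the smallest possible "component": a protein synthesized at constant rate and degraded in proportion to its copy number. The sketch below is a generic toy (rate constants invented, not from the authors' budding yeast model), showing the same reaction scheme solved as an ODE and simulated with Gillespie's direct method.

```python
import numpy as np

# One toy component: protein synthesis at rate ks, degradation at rate kd*X
ks, kd = 10.0, 0.1  # molecules/min, 1/min; steady state = ks/kd = 100

# Deterministic version: the ODE dX/dt = ks - kd*X, solved by forward Euler
def ode_trajectory(x0=0.0, dt=0.01, t_end=100.0):
    x, xs = x0, []
    for _ in range(int(t_end / dt)):
        x += dt * (ks - kd * x)
        xs.append(x)
    return np.array(xs)

# Stochastic version: Gillespie's direct method for the same two reactions
def gillespie_trajectory(x0=0, t_end=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x, xs = 0.0, x0, []
    while t < t_end:
        total = ks + kd * x                      # total reaction propensity
        t += rng.exponential(1.0 / total)        # time to next event
        x += 1 if rng.uniform() * total < ks else -1
        xs.append(x)
    return np.array(xs)

# Both approaches settle near the steady state of 100 molecules; the
# stochastic run fluctuates around it, capturing cell-to-cell variability
print(ode_trajectory()[-1], gillespie_trajectory()[-100:].mean())
```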
VandeVord, Pamela J; Leonardi, Alessandra Dal Cengio; Ritzel, David
2016-01-01
Recent military combat has heightened awareness of the complexity of blast-related traumatic brain injuries (bTBI). Experiments using animal, cadaver, or biofidelic physical models remain the primary means to investigate injury biomechanics as well as to validate computational simulations, medical diagnostics and therapies, or protection technologies. However, blast injury research has seen a range of irregular and inconsistent experimental methods for simulating blast insults, generating results that may be misleading, cannot be cross-correlated between laboratories, or are not referenced to any standard for exposure. Both the US Army Medical Research and Materiel Command and the National Institutes of Health have noted that there is a lack of standardized preclinical models of TBI. It is recommended that the blast injury research community converge on a consistent set of experimental procedures and on standard reporting of blast test conditions. This chapter describes the blast conditions that can be recreated within a laboratory setting and the methodology for testing in vivo models within the appropriate environment.
Ground state phase transition in the Nilsson mean-field plus standard pairing model
NASA Astrophysics Data System (ADS)
Guan, Xin; Xu, Haocheng; Zhang, Yu; Pan, Feng; Draayer, Jerry P.
2016-08-01
The ground state phase transition in Nd, Sm, and Gd isotopes is investigated using the Nilsson mean-field plus standard pairing model, based on the exact solutions obtained from the extended Heine-Stieltjes correspondence. The model calculations successfully reproduce the critical phenomena observed experimentally in the odd-even mass differences, the odd-even differences of the two-neutron separation energy, and the α-decay and double β⁻-decay energies of these isotopes. Since odd-even effects are among the most important signatures of pairing interactions in nuclei, the model calculations yield microscopic insight into the nature of the ground state phase transition driven by the standard pairing interaction.
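The observables named above have simple definitions in terms of binding energies. The sketch below uses the standard three-point odd-even mass difference and the two-neutron separation energy, applied to invented toy binding energies (a linear trend plus a pairing gain for even neutron number) purely to show the bookkeeping; the neutron numbers and energies are not data for Nd, Sm, or Gd.

```python
def delta3(B, N):
    """Three-point odd-even mass difference (MeV) from binding energies B[N]."""
    return 0.5 * (-1) ** N * (2 * B[N] - B[N - 1] - B[N + 1])

def s2n(B, N):
    """Two-neutron separation energy (MeV): S2n(N) = B(N) - B(N-2)."""
    return B[N] - B[N - 2]

# Toy binding energies: linear trend plus a 1.2 MeV pairing gain at even N
gap = 1.2
B = {N: 8.0 * N + (gap if N % 2 == 0 else 0.0) for N in range(80, 96)}

# delta3 recovers the pairing gap; s2n shows the smooth two-neutron trend
print(delta3(B, 88), s2n(B, 90))
```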
Single Top Production at Next-to-Leading Order in the Standard Model Effective Field Theory.
Zhang, Cen
2016-04-22
Single top production processes at hadron colliders provide information on the relation between the top quark and the electroweak sector of the standard model. We compute the next-to-leading order QCD corrections to the three main production channels: t-channel, s-channel, and tW associated production, in the standard model including operators up to dimension six. The calculation can be matched to parton shower programs and can therefore be used directly in experimental analyses. The QCD corrections are found to significantly impact the extraction of the current limits on the operators, because of both the improved accuracy and the better precision of the theoretical predictions. In addition, the distributions of some of the key discriminating observables are modified in a nontrivial way, which could change the interpretation of measurements in terms of UV-complete models. PMID:27152795
NASA Astrophysics Data System (ADS)
Eliassen, Lene; Andersen, Søren
2016-09-01
The wind turbine design standards recommend two different methods to generate turbulent wind for design load analysis: the Kaimal spectra combined with an exponential coherence function, and the Mann turbulence model. The two turbulence models can give very different estimates of fatigue life, especially for offshore floating wind turbines. In this study, the spatial distributions of the two turbulence models are investigated using Proper Orthogonal Decomposition (POD), which is used to characterize large coherent structures. The main focus is on the structures that contain the most energy, which are the lowest POD modes. For the longitudinal component, the Mann turbulence model generates coherent structures that stretch in the horizontal direction, while the structures found in the Kaimal model are more random in shape. These differences in the coherent structures at lower frequencies may explain the differences in fatigue life estimates for wind turbines.
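POD of a set of velocity snapshots reduces to a singular value decomposition of the mean-subtracted snapshot matrix: the left singular vectors are the spatial modes and the squared singular values rank them by energy content. The sketch below demonstrates this on invented one-dimensional toy data (two planted coherent structures plus noise), not on Kaimal- or Mann-generated wind fields.

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: each column is a velocity field on a line of grid points
# at one time instant (toy data: two coherent structures plus weak noise)
npts, nt = 200, 500
x = np.linspace(0.0, 2.0 * np.pi, npts)
t = rng.uniform(0.0, 100.0, nt)
snapshots = (np.outer(np.sin(x), np.sin(0.5 * t))
             + 0.3 * np.outer(np.sin(2.0 * x), np.cos(1.3 * t))
             + 0.01 * rng.normal(size=(npts, nt)))

# POD = SVD of the fluctuating part of the snapshot matrix
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

# Columns of U are spatial POD modes; energy fraction per mode from s**2
energy = s**2 / np.sum(s**2)
print(energy[:3])   # the two planted structures dominate the spectrum
```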
Parametrization of Non-Standard Effects in Electroweak Phenomenology
NASA Astrophysics Data System (ADS)
Maksymyk, Ivan
This thesis by articles concerns the parametrization of non-standard effects in electroweak physics. In each analysis, we added several non-standard operators to the Lagrangian of the electroweak Standard Model. The non-standard operators describe new effects arising from an unspecified underlying model. A priori, the number of non-standard operators that can be included in such an analysis is unlimited, but for a specific class of underlying models the non-standard effects can be described by a reasonable number of operators. In each analysis we developed expressions for electroweak observables as functions of the coefficients of the new operators. By performing a statistical fit to a set of precise experimental data, we obtained phenomenological constraints on these coefficients. In "Model-Independent Global Constraints on New Physics", we adopted very weak assumptions about the underlying models. We truncated the effective Lagrangian at dimension five (inclusive). Aiming for the greatest possible generality, we allowed interactions that violate the discrete symmetries (C, P, and CP) as well as interactions that do not conserve flavor. The effective Lagrangian contains some forty new operators. We found that, for most of the coefficients of the new operators, the constraints are fairly tight (2 or 3%), but there are interesting exceptions. In "Bounding Anomalous Three-Gauge-Boson Couplings", we determined phenomenological constraints on deviations of the three-gauge-boson couplings from the interactions prescribed by the Standard Model. To do so, we computed the indirect contributions of the non-standard triple gauge boson couplings to low-energy observables. Since the effective Lagrangian is non-renormalizable, certain technical difficulties
Boullata, Joseph I; Holcombe, Beverly; Sacks, Gordon; Gervasio, Jane; Adams, Stephen C; Christensen, Michael; Durfee, Sharon; Ayers, Phil; Marshall, Neil; Guenter, Peggi
2016-08-01
Parenteral nutrition (PN) is a high-alert medication with a complex drug use process. Key steps in the process include the review of each PN prescription followed by the preparation of the formulation. The preparation step includes compounding the PN or activating a standardized commercially available PN product. The verification and review, as well as preparation of this complex therapy, require competency that may be determined by using a standardized process for pharmacists and for pharmacy technicians involved with PN. An American Society for Parenteral and Enteral Nutrition (ASPEN) standardized model for PN order review and PN preparation competencies is proposed based on a competency framework, the ASPEN-published interdisciplinary core competencies, safe practice recommendations, and clinical guidelines, and is intended for institutions and agencies to use with their staff. PMID:27317615
A standard protocol for describing individual-based and agent-based models
Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.
2006-01-01
Simulation models that describe autonomous individual organisms (individual-based models, IBMs) or agents (agent-based models, ABMs) have become a widely used tool, not only in ecology but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. The protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD a first step toward establishing a more detailed common format for describing IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it comes to be used by a sufficiently large proportion of modellers.
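The three blocks and seven elements above amount to a fixed checklist, which can be captured as a simple data structure. The sketch below is one illustrative way to template an ODD description in code; the class name and all example text are invented, not part of the protocol itself.

```python
from dataclasses import dataclass

# The seven ODD elements, grouped by the protocol's three blocks
@dataclass
class ODDDescription:
    purpose: str                          # Overview
    state_variables_and_scales: str       # Overview
    process_overview_and_scheduling: str  # Overview
    design_concepts: str                  # Design concepts
    initialization: str                   # Details
    input: str                            # Details
    submodels: str                        # Details

# A toy filled-in description (invented content, just to show what goes where)
doc = ODDDescription(
    purpose="Explore how local dispersal affects population persistence.",
    state_variables_and_scales="Individuals with position and age on a 100x100 grid; one time step = one day.",
    process_overview_and_scheduling="Each step: move, reproduce, die; agents updated in random order.",
    design_concepts="Emergence of spatial clustering; stochastic movement and mortality.",
    initialization="500 individuals placed uniformly at random.",
    input="None (no external driving data).",
    submodels="Movement: random walk; mortality: constant daily probability.",
)
print(doc.purpose)
```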
3D Building Modeling in LoD2 Using the CityGML Standard
NASA Astrophysics Data System (ADS)
Preka, D.; Doulamis, A.
2016-10-01
Over the last decade, scientific research has increasingly focused on the third dimension in all fields, especially in the sciences related to geographic information, the visualization of natural phenomena, and the visualization of the complex urban reality. The field of 3D visualization has developed rapidly, especially in urban applications, while the technical restrictions on the use of 3D information tend to subside thanks to advances in technology. A variety of 3D modeling techniques and standards has already been developed and is gaining traction in a wide range of applications. One such modern standard is CityGML, which is open and allows for sharing and exchanging 3D city models. Within the scope of this study, key issues for the 3D modeling of spatial objects and cities are considered, specifically the key elements and capabilities of the CityGML standard, which is used to produce a 3D model, in Level of Detail 2 (LoD2), of the 14 buildings that constitute a block in the municipality of Kaisariani, Athens, together with the corresponding relational database. The proposed tool is based on the 3DCityDB package in tandem with a geospatial database (PostgreSQL with the PostGIS 2.0 extension). The latter allows for the execution of complex queries regarding the spatial distribution of data. The system is implemented to facilitate a real-life scenario in a suburb of Athens.
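The "complex queries regarding the spatial distribution of data" mentioned above are what PostGIS adds on top of PostgreSQL. The sketch below shows a query of that general kind against a simplified, hypothetical table of building footprints (the table and column names are invented and are not the actual 3DCityDB schema); it is held in a Python string rather than executed, since no database is assumed here.

```python
# Hypothetical PostGIS query: buildings within 500 m of a point, largest
# footprint first. SRID 32634 (UTM zone 34N, covering Athens) is assumed,
# with invented projected coordinates.
query = """
SELECT id,
       ST_Area(footprint)  AS footprint_m2,
       measured_height     AS height_m
FROM   buildings
WHERE  ST_DWithin(footprint,
                  ST_SetSRID(ST_MakePoint(478500, 4202800), 32634),
                  500)
ORDER  BY footprint_m2 DESC;
"""
print(query.strip().startswith("SELECT"))
```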
Standardized 3D Bioprinting of Soft Tissue Models with Human Primary Cells.
Rimann, Markus; Bono, Epifania; Annaheim, Helene; Bleisch, Matthias; Graf-Hausner, Ursula
2016-08-01
Cells grown in 3D are more physiologically relevant than cells cultured in 2D. To use 3D models in substance testing and regenerative medicine, reproducibility and standardization are important. Bioprinting offers not only automated, standardizable processes but also the production of complex tissue-like structures in an additive manner. We developed an all-in-one bioprinting solution to produce soft tissue models. The holistic approach included (1) a bioprinter in a sterile environment, (2) a light-induced bioink polymerization unit, (3) user-friendly software, (4) the capability to print in standard labware for high-throughput screening, (5) cell-compatible inkjet-based printheads, (6) a cell-compatible ready-to-use BioInk, and (7) standard operating procedures. In a proof-of-concept study, skin was printed as a reference soft tissue model. To produce dermal equivalents, primary human dermal fibroblasts were printed in alternating layers with BioInk and cultured for up to 7 weeks. During long-term culture, the models were remodeled and fully populated with viable, well-spread fibroblasts. Primary human dermal keratinocytes were seeded on top of the dermal equivalents, and epidermis-like structures formed, as verified with hematoxylin and eosin staining and immunostaining. However, a fully stratified epidermis was not achieved. Nevertheless, this is one of the first reports of an integrative bioprinting strategy for industrial routine application.
NASA Astrophysics Data System (ADS)
Bermudez, L. E.; Percivall, G.; Idol, T. A.
2015-12-01
Experts in climate modeling, remote sensing of the Earth, and cyberinfrastructure must work together to make climate predictions available to decision makers. Such experts and decision makers worked together in the Open Geospatial Consortium's (OGC) Testbed 11 to address a scenario of population displacement by coastal inundation due to predicted sea level rise. In the Policy Fact Sheet "Harnessing Climate Data to Boost Ecosystem & Water Resilience", issued by the White House Office of Science and Technology Policy (OSTP) in December 2014, OGC committed to increase access to climate change information using open standards. In July 2015, the OGC Testbed 11 Urban Climate Resilience activity delivered on that commitment with open-standards-based support for climate-change preparedness. Using open standards such as the OGC Web Coverage Service and Web Processing Service and the NetCDF and GMLJP2 encoding standards, Testbed 11 deployed an interoperable high-resolution flood model to bring climate model outputs together with global change assessment models and other remote sensing data for decision support. Methods to confirm model predictions and to allow "what-if" scenarios included in-situ sensor webs and crowdsourcing. The scenario was set in two locations: the San Francisco Bay Area and Mozambique. The scenarios demonstrated the interoperation and capabilities of open geospatial specifications in supporting data services and processing services. The resulting High Resolution Flood Information System addressed access and control of simulation models and high-resolution data in an open, worldwide, collaborative Web environment. The scenarios examined the feasibility and capability of existing OGC geospatial Web service specifications in supporting the on-demand, dynamic serving of flood information from models with forecasting capacity. Results of this testbed included the identification of standards and best practices that help researchers and cities deal with climate-related issues.
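A Web Coverage Service request of the kind used in the testbed can be sketched as a key-value-pair URL. The parameter names below follow the OGC WCS 2.0 KVP binding; the endpoint, coverage identifier, and bounding box are invented for illustration, not taken from the testbed deployment.

```python
from urllib.parse import urlencode

# Hypothetical WCS endpoint and coverage name; parameter names per WCS 2.0 KVP
endpoint = "https://example.org/wcs"
params = [
    ("service", "WCS"),
    ("version", "2.0.1"),
    ("request", "GetCoverage"),
    ("coverageId", "flood_depth_2050"),     # assumed coverage identifier
    ("subset", "Lat(37.2,38.2)"),           # rough San Francisco Bay extent
    ("subset", "Long(-123.1,-121.7)"),
    ("format", "application/netcdf"),       # NetCDF output, as in the testbed
]
url = endpoint + "?" + urlencode(params)
print(url)
```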
Wu, Zhi-Wei; He, Hong-Shi; Liang, Yu; Luo, Xu; Cai, Long-Yan; Long, Jing
2012-06-01
From the viewpoint of forest fire behavior, and based on key fuel parameters, three standard forest fuel models were established by hierarchical cluster analysis for forests in Fenglin Natural Reserve that differ significantly in fuel characteristics and local environmental conditions. The three models, FL-I, FL-II, and FL-III, correspond to the broadleaved-Korean pine forest, spruce-fir forest, and poplar-birch forest, the representative forest types in the Reserve, respectively. According to forest structure and composition, land cover type, and horizontal and vertical continuity, models FL-I, FL-II, and FL-III are similar to models C-5, C-2, and D-1 in the Canadian CFBPS fuel classification system, respectively. The ground features and the horizontal and vertical characteristics of the three models established in this paper could help investigators identify fuel types during fuel inventories.
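Grouping stands into a small number of fuel models by hierarchical cluster analysis can be sketched in a few lines. The snippet below uses invented fuel-bed attributes for six toy stands (two per forest type); the attribute values, Ward linkage choice, and three-cluster cut are illustrative assumptions, not the paper's actual data or settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy fuel-bed attributes per stand: [fine-fuel load, fuel-bed depth, live SAV]
stands = np.array([
    [1.2, 0.30, 4500.0],   # broadleaved-Korean pine -like
    [1.3, 0.28, 4400.0],
    [2.1, 0.15, 6000.0],   # spruce-fir -like
    [2.0, 0.16, 5900.0],
    [0.7, 0.45, 3500.0],   # poplar-birch -like
    [0.8, 0.43, 3600.0],
])

# Standardize so Euclidean distance weights the attributes comparably
z = (stands - stands.mean(axis=0)) / stands.std(axis=0)

# Hierarchical clustering (Ward linkage), cut into three fuel models
labels = fcluster(linkage(z, method="ward"), t=3, criterion="maxclust")
print(labels)   # paired stands fall into the same cluster
```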
Baryogenesis in the two doublet and inert singlet extension of the Standard Model
NASA Astrophysics Data System (ADS)
Alanne, Tommi; Kainulainen, Kimmo; Tuominen, Kimmo; Vaskonen, Ville
2016-08-01
We investigate an extension of the Standard Model containing two Higgs doublets and a singlet scalar field (2HDSM). We show that the model can have a strongly first-order phase transition and give rise to the observed baryon asymmetry of the Universe, consistent with all experimental constraints. In particular, the constraints from the electron and neutron electric dipole moments are less restrictive here than in the pure two-Higgs-doublet model (2HDM). The two-step, first-order transition in the 2HDSM, induced by the singlet field, may lead to strong supercooling and to nucleation temperatures low in comparison with the critical temperature, Tn ≪ Tc, which can significantly alter the usual phase-transition pattern of 2HD models with Tn ≈ Tc. Furthermore, the singlet field can be the dark matter particle. However, in models with a strong first-order transition, its abundance is typically only about a thousandth of the observed dark matter abundance.
Cai, Longyan; He, Hong S; Wu, Zhiwei; Lewis, Benard L; Liang, Yu
2014-01-01
Understanding the fire prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed in the Great Xing'an Mountains in Northeast China. However, the performance of these fuel models has not been tested against historical occurrences of wildfires, so their applicability requires further investigation. This paper therefore aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel model parameter sensitivity was analyzed by the Morris screening method. Results showed that the parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (surface-area-to-volume ratio), live shrub SAV, and fuel bed depth have high sensitivity. The two most sensitive fuel parameters, 1-hour time-lag loading and fuel bed depth, were chosen as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire prediction capabilities of the combined (uncalibrated) fuel models. FARSITE was shown to yield an unrealistic prediction of the historical fire. The calibrated fuel models, however, significantly improved the prediction of the actual fire, with an accuracy of 89%. Validation results also showed that the model can estimate actual fires with an accuracy exceeding 56% when using the calibrated fuel models. These fuel models can therefore be used efficiently to calculate fire behaviors, which can be helpful in forest fire management. PMID:24714164
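Morris screening ranks input factors by the mean absolute "elementary effect": perturb one factor at a time and average the resulting output changes over many random base points. The sketch below is a crude one-at-a-time variant of the method applied to an invented fire-behavior surrogate function (the rate constants and factor bounds are illustrative, not the paper's FARSITE inputs).

```python
import numpy as np

def morris_mu_star(f, bounds, r=20, delta=0.25, seed=0):
    """Crude Morris screening: mean |elementary effect| (mu*) per factor,
    with perturbations of size delta in the unit hypercube."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    k = len(bounds)
    effects = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, k)     # random base point
        base = f(lo + (hi - lo) * x)
        for j in range(k):                        # perturb one factor at a time
            xp = x.copy()
            xp[j] += delta
            effects[i, j] = (f(lo + (hi - lo) * xp) - base) / delta
    return np.abs(effects).mean(axis=0)

# Toy surrogate: strongly driven by fuel load and depth, weakly by moisture
surrogate = lambda v: 5.0 * v[0] + 3.0 * v[1] ** 2 + 0.1 * v[2]
mu_star = morris_mu_star(surrogate, [(0, 2), (0, 1), (0, 0.4)])
print(mu_star)   # factors ranked: load > depth > moisture
```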
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Vol. 6, 2010-07-01: Table 2 to Subpart IIII of Part 60—Emission Standards for 2008 Model Year and Later Emergency Stationary CI ICE. Environmental Protection Agency, Air Programs, Standards of Performance for New Stationary Sources.
Search for the standard model Higgs boson in tau final states.
Abazov, V M; Abbott, B; Abolins, M; Acharya, B S; Adams, M; Adams, T; Aguilo, E; Ahsan, M; Alexeev, G D; Alkhazov, G; Alton, A; Alverson, G; Alves, G A; Ancu, L S; Andeen, T; Anzelc, M S; Aoki, M; Arnoud, Y; Arov, M; Arthaud, M; Askew, A; Asman, B; Atramentov, O; Avila, C; Backusmayes, J; Badaud, F; Bagby, L; Baldin, B; Bandurin, D V; Banerjee, S; Barberis, E; Barfuss, A-F; Bargassa, P; Baringer, P; Barreto, J; Bartlett, J F; Bassler, U; Bauer, D; Beale, S; Bean, A; Begalli, M; Begel, M; Belanger-Champagne, C; Bellantoni, L; Bellavance, A; Benitez, J A; Beri, S B; Bernardi, G; Bernhard, R; Bertram, I; Besançon, M; Beuselinck, R; Bezzubov, V A; Bhat, P C; Bhatnagar, V; Blazey, G; Blessing, S; Bloom, K; Boehnlein, A; Boline, D; Bolton, T A; Boos, E E; Borissov, G; Bose, T; Brandt, A; Brock, R; Brooijmans, G; Bross, A; Brown, D; Bu, X B; Buchholz, D; Buehler, M; Buescher, V; Bunichev, V; Burdin, S; Burnett, T H; Buszello, C P; Calfayan, P; Calpas, B; Calvet, S; Cammin, J; Carrasco-Lizarraga, M A; Carrera, E; Carvalho, W; Casey, B C K; Castilla-Valdez, H; Chakrabarti, S; Chakraborty, D; Chan, K M; Chandra, A; Cheu, E; Cho, D K; Choi, S; Choudhary, B; Christoudias, T; Cihangir, S; Claes, D; Clutter, J; Cooke, M; Cooper, W E; Corcoran, M; Couderc, F; Cousinou, M-C; Crépé-Renaudin, S; Cuplov, V; Cutts, D; Cwiok, M; Das, A; Davies, G; De, K; de Jong, S J; De La Cruz-Burelo, E; DeVaughan, K; Déliot, F; Demarteau, M; Demina, R; Denisov, D; Denisov, S P; Desai, S; Diehl, H T; Diesburg, M; Dominguez, A; Dorland, T; Dubey, A; Dudko, L V; Duflot, L; Duggan, D; Duperrin, A; Dutt, S; Dyshkant, A; Eads, M; Edmunds, D; Ellison, J; Elvira, V D; Enari, Y; Eno, S; Ermolov, P; Escalier, M; Evans, H; Evdokimov, A; Evdokimov, V N; Facini, G; Ferapontov, A V; Ferbel, T; Fiedler, F; Filthaut, F; Fisher, W; Fisk, H E; Fortner, M; Fox, H; Fu, S; Fuess, S; Gadfort, T; Galea, C F; Garcia-Bellido, A; Gavrilov, V; Gay, P; Geist, W; Geng, W; Gerber, C E; Gershtein, Y; Gillberg, D; Ginther, G; 
Gómez, B; Goussiou, A; Grannis, P D; Greder, S; Greenlee, H; Greenwood, Z D; Gregores, E M; Grenier, G; Gris, Ph; Grivaz, J-F; Grohsjean, A; Grünendahl, S; Grünewald, M W; Guo, F; Guo, J; Gutierrez, G; Gutierrez, P; Haas, A; Hadley, N J; Haefner, P; Hagopian, S; Haley, J; Hall, I; Hall, R E; Han, L; Harder, K; Harel, A; Hauptman, J M; Hays, J; Hebbeker, T; Hedin, D; Hegeman, J G; Heinson, A P; Heintz, U; Hensel, C; Heredia-De La Cruz, I; Herner, K; Hesketh, G; Hildreth, M D; Hirosky, R; Hoang, T; Hobbs, J D; Hoeneisen, B; Hohlfeld, M; Hossain, S; Houben, P; Hu, Y; Hubacek, Z; Huske, N; Hynek, V; Iashvili, I; Illingworth, R; Ito, A S; Jabeen, S; Jaffré, M; Jain, S; Jakobs, K; Jamin, D; Jarvis, C; Jesik, R; Johns, K; Johnson, C; Johnson, M; Johnston, D; Jonckheere, A; Jonsson, P; Juste, A; Kajfasz, E; Karmanov, D; Kasper, P A; Katsanos, I; Kaushik, V; Kehoe, R; Kermiche, S; Khalatyan, N; Khanov, A; Kharchilava, A; Kharzheev, Y N; Khatidze, D; Kim, T J; Kirby, M H; Kirsch, M; Klima, B; Kohli, J M; Konrath, J-P; Kozelov, A V; Kraus, J; Kuhl, T; Kumar, A; Kupco, A; Kurca, T; Kuzmin, V A; Kvita, J; Lacroix, F; Lam, D; Lammers, S; Landsberg, G; Lebrun, P; Lee, W M; Leflat, A; Lellouch, J; Li, J; Li, L; Li, Q Z; Lietti, S M; Lim, J K; Lincoln, D; Linnemann, J; Lipaev, V V; Lipton, R; Liu, Y; Liu, Z; Lobodenko, A; Lokajicek, M; Love, P; Lubatti, H J; Luna-Garcia, R; Lyon, A L; Maciel, A K A; Mackin, D; Mättig, P; Magerkurth, A; Mal, P K; Malbouisson, H B; Malik, S; Malyshev, V L; Maravin, Y; Martin, B; McCarthy, R; McGivern, C L; Meijer, M M; Melnitchouk, A; Mendoza, L; Menezes, D; Mercadante, P G; Merkin, M; Merritt, K W; Meyer, A; Meyer, J; Mitrevski, J; Mommsen, R K; Mondal, N K; Moore, R W; Moulik, T; Muanza, G S; Mulhearn, M; Mundal, O; Mundim, L; Nagy, E; Naimuddin, M; Narain, M; Neal, H A; Negret, J P; Neustroev, P; Nilsen, H; Nogima, H; Novaes, S F; Nunnemann, T; Obrant, G; Ochando, C; Onoprienko, D; Orduna, J; Oshima, N; Osman, N; Osta, J; Otec, R; Otero Y Garzón, 
G J; Owen, M; Padilla, M; Padley, P; Pangilinan, M; Parashar, N; Park, S-J; Park, S K; Parsons, J; Partridge, R; Parua, N; Patwa, A; Pawloski, G; Penning, B; Perfilov, M; Peters, K; Peters, Y; Pétroff, P; Piegaia, R; Piper, J; Pleier, M-A; Podesta-Lerma, P L M; Podstavkov, V M; Pogorelov, Y; Pol, M-E; Polozov, P; Popov, A V; Potter, C; Prado da Silva, W L; Protopopescu, S; Qian, J; Quadt, A; Quinn, B; Rakitine, A; Rangel, M S; Ranjan, K; Ratoff, P N; Renkel, P; Rich, P; Rijssenbeek, M; Ripp-Baudot, I; Rizatdinova, F; Robinson, S; Rodrigues, R F; Rominsky, M; Royon, C; Rubinov, P; Ruchti, R; Safronov, G; Sajot, G; Sánchez-Hernández, A; Sanders, M P; Sanghi, B; Savage, G; Sawyer, L; Scanlon, T; Schaile, D; Schamberger, R D; Scheglov, Y; Schellman, H; Schliephake, T; Schlobohm, S; Schwanenberger, C; Schwienhorst, R; Sekaric, J; Severini, H; Shabalina, E; Shamim, M; Shary, V; Shchukin, A A; Shivpuri, R K; Siccardi, V; Simak, V; Sirotenko, V; Skubic, P; Slattery, P; Smirnov, D; Snow, G R; Snow, J; Snyder, S; Söldner-Rembold, S; Sonnenschein, L; Sopczak, A; Sosebee, M; Soustruznik, K; Spurlock, B; Stark, J; Stolin, V; Stoyanova, D A; Strandberg, J; Strandberg, S; Strang, M A; Strauss, E; Strauss, M; Ströhmer, R; Strom, D; Stutte, L; Sumowidagdo, S; Svoisky, P; Takahashi, M; Tanasijczuk, A; Taylor, W; Tiller, B; Tissandier, F; Titov, M; Tokmenin, V V; Torchiani, I; Tsybychev, D; Tuchming, B; Tully, C; Tuts, P M; Unalan, R; Uvarov, L; Uvarov, S; Uzunyan, S; Vachon, B; van den Berg, P J; Van Kooten, R; van Leeuwen, W M; Varelas, N; Varnes, E W; Vasilyev, I A; Verdier, P; Vertogradov, L S; Verzocchi, M; Vilanova, D; Vint, P; Vokac, P; Voutilainen, M; Wagner, R; Wahl, H D; Wang, M H L S; Warchol, J; Watts, G; Wayne, M; Weber, G; Weber, M; Welty-Rieger, L; Wenger, A; Wetstein, M; White, A; Wicke, D; Williams, M R J; Wilson, G W; Wimpenny, S J; Wobisch, M; Wood, D R; Wyatt, T R; Xie, Y; Xu, C; Yacoob, S; Yamada, R; Yang, W-C; Yasuda, T; Yatsunenko, Y A; Ye, Z; Yin, H; Yip, K; 
Yoo, H D; Youn, S W; Yu, J; Zeitnitz, C; Zelitch, S; Zhao, T; Zhou, B; Zhu, J; Zielinski, M; Zieminska, D; Zivkovic, L; Zutshi, V; Zverev, E G
2009-06-26
We present a search for the standard model Higgs boson using hadronically decaying tau leptons, in 1 fb⁻¹ of data collected with the D0 detector at the Fermilab Tevatron pp̄ collider. We select two final states: τ± plus missing transverse energy and b jets, and τ⁺τ⁻ plus jets. These final states are sensitive to a combination of associated W/Z boson plus Higgs boson production, vector boson fusion, and gluon-gluon fusion production processes. The observed ratio of the combined limit on the Higgs production cross section at the 95% C.L. to the standard model expectation is 29 for a Higgs boson mass of 115 GeV. PMID:19659068
Francium spectroscopy: Towards a low energy test of the standard model
Orozco, L. A.; Simsarian, J. E.; Sprouse, G. D.; Zhao, W. Z.
1997-03-15
An atomic parity non-conservation measurement can test the predictions of the standard model for the electron-quark coupling constants. The measurements, performed at very low energies compared with the Z⁰ pole, can be sensitive to physics beyond the standard model. Francium, the heaviest alkali, is a viable candidate for atomic parity violation measurements. The extraction of weak interaction parameters requires detailed knowledge of the electronic wavefunctions of the atom. Measurements of the atomic properties of francium provide data for careful comparisons with ab initio calculations of its atomic structure. The spectroscopy, including energy level locations and atomic lifetimes, is carried out using the recently developed techniques of laser cooling and trapping of atoms.
The Higgs Boson as a Window to Beyond the Standard Model
Vega-Morales, Roberto
2013-08-01
The recent discovery of a Higgs boson at the LHC with properties resembling those predicted by the Standard Model (SM) gives strong indication that the final missing piece of the SM is now in place. In particular, the mechanism responsible for Electroweak Symmetry Breaking (EWSB) and generating masses for the Z and W vector bosons appears to have been established. Even with this amazing discovery there are still many outstanding theoretical and phenomenological questions which suggest that there must be physics Beyond the Standard Model (BSM). As we investigate in this thesis, the Higgs boson offers the exciting possibility of acting as a window to this new physics through various avenues which are experimentally testable in the coming years. We investigate a subset of these possibilities and begin by discussing them briefly below before a detailed examination in the following chapters.
Conceptual explanation for the algebra in the noncommutative approach to the standard model.
Chamseddine, Ali H; Connes, Alain
2007-11-01
The purpose of this Letter is to remove the arbitrariness of the ad hoc choice of the algebra and its representation in the noncommutative approach to the standard model, which was begging for a conceptual explanation. We assume as before that space-time is the product of a four-dimensional manifold by a finite noncommutative space F. The spectral action is the pure gravitational action for the product space. To remove the above arbitrariness, we classify the irreducible geometries F consistent with imposing reality and chirality conditions on spinors, to avoid the fermion doubling problem, which amounts to having total dimension 10 (in the K-theoretic sense). This gives, almost uniquely, the standard model with all its details, predicting the number of fermions per generation to be 16, as well as their representations and the Higgs breaking mechanism, with very little input.
Search for the standard model Higgs boson in tau lepton final states
NASA Astrophysics Data System (ADS)
D0 Collaboration; Abazov, V. M.; Abbott, B.; Acharya, B. S.; Adams, M.; Adams, T.; Alexeev, G. D.; Alkhazov, G.; Alton, A.; Alverson, G.; Aoki, M.; Askew, A.; Atkins, S.; Augsten, K.; Avila, C.; Badaud, F.; Bagby, L.; Baldin, B.; Bandurin, D. V.; Banerjee, S.; Barberis, E.; Baringer, P.; Barreto, J.; Bartlett, J. F.; Bassler, U.; Bazterra, V.; Bean, A.; Begalli, M.; Bellantoni, L.; Beri, S. B.; Bernardi, G.; Bernhard, R.; Bertram, I.; Besançon, M.; Beuselinck, R.; Bezzubov, V. A.; Bhat, P. C.; Bhatia, S.; Bhatnagar, V.; Blazey, G.; Blessing, S.; Bloom, K.; Boehnlein, A.; Boline, D.; Boos, E. E.; Borissov, G.; Bose, T.; Brandt, A.; Brandt, O.; Brock, R.; Brooijmans, G.; Bross, A.; Brown, D.; Brown, J.; Bu, X. B.; Buehler, M.; Buescher, V.; Bunichev, V.; Burdin, S.; Buszello, C. P.; Camacho-Pérez, E.; Casey, B. C. K.; Castilla-Valdez, H.; Caughron, S.; Chakrabarti, S.; Chakraborty, D.; Chan, K. M.; Chandra, A.; Chapon, E.; Chen, G.; Chevalier-Théry, S.; Cho, D. K.; Cho, S. W.; Choi, S.; Choudhary, B.; Cihangir, S.; Claes, D.; Clutter, J.; Cooke, M.; Cooper, W. E.; Corcoran, M.; Couderc, F.; Cousinou, M.-C.; Croc, A.; Cutts, D.; Das, A.; Davies, G.; de Jong, S. J.; De La Cruz-Burelo, E.; Déliot, F.; Demina, R.; Denisov, D.; Denisov, S. P.; Desai, S.; Deterre, C.; DeVaughan, K.; Diehl, H. T.; Diesburg, M.; Ding, P. F.; Dominguez, A.; Dubey, A.; Dudko, L. V.; Duggan, D.; Duperrin, A.; Dutt, S.; Dyshkant, A.; Eads, M.; Edmunds, D.; Ellison, J.; Elvira, V. D.; Enari, Y.; Evans, H.; Evdokimov, A.; Evdokimov, V. N.; Facini, G.; Feng, L.; Ferbel, T.; Fiedler, F.; Filthaut, F.; Fisher, W.; Fisk, H. E.; Fortner, M.; Fox, H.; Fuess, S.; Garcia-Bellido, A.; García-González, J. A.; García-Guerra, G. A.; Gavrilov, V.; Gay, P.; Geng, W.; Gerbaudo, D.; Gerber, C. E.; Gershtein, Y.; Ginther, G.; Golovanov, G.; Goussiou, A.; Grannis, P. D.; Greder, S.; Greenlee, H.; Grenier, G.; Gris, Ph.; Grivaz, J.-F.; Grohsjean, A.; Grünendahl, S.; Grünewald, M. 
W.; Guillemin, T.; Gutierrez, G.; Gutierrez, P.; Haas, A.; Hagopian, S.; Haley, J.; Han, L.; Harder, K.; Harel, A.; Hauptman, J. M.; Hays, J.; Head, T.; Hebbeker, T.; Hedin, D.; Hegab, H.; Heinson, A. P.; Heintz, U.; Hensel, C.; Heredia-De La Cruz, I.; Herner, K.; Hesketh, G.; Hildreth, M. D.; Hirosky, R.; Hoang, T.; Hobbs, J. D.; Hoeneisen, B.; Hohlfeld, M.; Howley, I.; Hubacek, Z.; Hynek, V.; Iashvili, I.; Ilchenko, Y.; Illingworth, R.; Ito, A. S.; Jabeen, S.; Jaffré, M.; Jayasinghe, A.; Jesik, R.; Johns, K.; Johnson, E.; Johnson, M.; Jonckheere, A.; Jonsson, P.; Joshi, J.; Jung, A. W.; Juste, A.; Kaadze, K.; Kajfasz, E.; Karmanov, D.; Kasper, P. A.; Katsanos, I.; Kehoe, R.; Kermiche, S.; Khalatyan, N.; Khanov, A.; Kharchilava, A.; Kharzheev, Y. N.; Kiselevich, I.; Kohli, J. M.; Kozelov, A. V.; Kraus, J.; Kulikov, S.; Kumar, A.; Kupco, A.; Kurča, T.; Kuzmin, V. A.; Lammers, S.; Landsberg, G.; Lebrun, P.; Lee, H. S.; Lee, S. W.; Lee, W. M.; Lellouch, J.; Li, H.; Li, L.; Li, Q. Z.; Lim, J. K.; Lincoln, D.; Linnemann, J.; Lipaev, V. V.; Lipton, R.; Liu, H.; Liu, Y.; Lobodenko, A.; Lokajicek, M.; Lopes de Sa, R.; Lubatti, H. J.; Luna-Garcia, R.; Lyon, A. L.; Maciel, A. K. A.; Madar, R.; Magaña-Villalba, R.; Malik, S.; Malyshev, V. L.; Maravin, Y.; Martínez-Ortega, J.; McCarthy, R.; McGivern, C. L.; Meijer, M. M.; Melnitchouk, A.; Menezes, D.; Mercadante, P. G.; Merkin, M.; Meyer, A.; Meyer, J.; Miconi, F.; Mondal, N. K.; Mulhearn, M.; Nagy, E.; Naimuddin, M.; Narain, M.; Nayyar, R.; Neal, H. A.; Negret, J. P.; Neustroev, P.; Nunnemann, T.; Obrant, G.; Orduna, J.; Osman, N.; Osta, J.; Padilla, M.; Pal, A.; Parashar, N.; Parihar, V.; Park, S. K.; Partridge, R.; Parua, N.; Patwa, A.; Penning, B.; Perfilov, M.; Peters, Y.; Petridis, K.; Petrillo, G.; Pétroff, P.; Pleier, M.-A.; Podesta-Lerma, P. L. M.; Podstavkov, V. M.; Popov, A. V.; Prewitt, M.; Price, D.; Prokopenko, N.; Qian, J.; Quadt, A.; Quinn, B.; Rangel, M. S.; Ranjan, K.; Ratoff, P. 
N.; Razumov, I.; Renkel, P.; Ripp-Baudot, I.; Rizatdinova, F.; Rominsky, M.; Ross, A.; Royon, C.; Rubinov, P.; Ruchti, R.; Sajot, G.; Salcido, P.; Sánchez-Hernández, A.; Sanders, M. P.; Sanghi, B.; Santos, A. S.; Savage, G.; Sawyer, L.; Scanlon, T.; Schamberger, R. D.; Scheglov, Y.; Schellman, H.; Schlobohm, S.; Schwanenberger, C.; Schwienhorst, R.; Sekaric, J.; Severini, H.; Shabalina, E.; Shary, V.; Shaw, S.; Shchukin, A. A.; Shivpuri, R. K.; Simak, V.; Skubic, P.; Slattery, P.; Smirnov, D.; Smith, K. J.; Snow, G. R.; Snow, J.; Snyder, S.; Söldner-Rembold, S.; Sonnenschein, L.; Soustruznik, K.; Stark, J.; Stoyanova, D. A.; Strauss, M.; Stutte, L.; Suter, L.; Svoisky, P.; Takahashi, M.; Titov, M.; Tokmenin, V. V.; Tsai, Y.-T.; Tschann-Grimm, K.; Tsybychev, D.; Tuchming, B.; Tully, C.; Uvarov, L.; Uvarov, S.; Uzunyan, S.; Van Kooten, R.; van Leeuwen, W. M.; Varelas, N.; Varnes, E. W.; Vasilyev, I. A.; Verdier, P.; Verkheev, A. Y.; Vertogradov, L. S.; Verzocchi, M.; Vesterinen, M.; Vilanova, D.; Vokac, P.; Wahl, H. D.; Wang, M. H. L. S.; Warchol, J.; Watts, G.; Wayne, M.; Weichert, J.; Welty-Rieger, L.; White, A.; Wicke, D.; Williams, M. R. J.; Wilson, G. W.; Wobisch, M.; Wood, D. R.; Wyatt, T. R.; Xie, Y.; Yamada, R.; Yang, W.-C.; Yasuda, T.; Yatsunenko, Y. A.; Ye, W.; Ye, Z.; Yin, H.; Yip, K.; Youn, S. W.; Zennamo, J.; Zhao, T.; Zhao, T. G.; Zhou, B.; Zhu, J.; Zielinski, M.; Zieminska, D.; Zivkovic, L.
2012-08-01
We present a search for the standard model Higgs boson in final states with an electron or muon and a hadronically decaying tau lepton, in association with zero, one, or two or more jets, using data corresponding to an integrated luminosity of up to 7.3 fb⁻¹ collected with the D0 detector at the Fermilab Tevatron collider. The analysis is sensitive to Higgs boson production via gluon-gluon fusion, associated vector boson production, and vector boson fusion, and to Higgs boson decays to ττ, WW, ZZ, and bb̄ pairs. Observed (expected) 95% C.L. upper limits on the cross section times branching ratio, relative to the Standard Model prediction, are set at 22 (14) for a Higgs boson mass of 115 GeV and 6.8 (7.7) at 165 GeV.
Neutral Higgs boson pair production at the linear collider in the noncommutative standard model
Das, Prasanta Kumar; Prakash, Abhishodh; Mitra, Anupam
2011-03-01
We study Higgs boson pair production at the linear collider in the noncommutative extension of the standard model, using the Seiberg-Witten map of this model to first order in the noncommutative parameter Θ_μν. Unlike in the standard model (where the process is forbidden), here the Higgs boson pair interacts directly with the photon. We find that the pair production cross section can be quite significant for the noncommutative scale Λ lying in the range 0.5 TeV to 1.0 TeV. Using the experimental (LEP 2, Tevatron, and global electroweak fit) bounds on the Higgs mass, we obtain 626 GeV ≤ Λ ≤ 974 GeV.
Search for the standard model Higgs boson in tau lepton final states
Abazov, Victor Mukhamedovich; et al.
2012-08-01
We present a search for the standard model Higgs boson in final states with an electron or muon and a hadronically decaying tau lepton, in association with zero, one, or two or more jets, using data corresponding to an integrated luminosity of up to 7.3 fb⁻¹ collected with the D0 detector at the Fermilab Tevatron collider. The analysis is sensitive to Higgs boson production via gluon-gluon fusion, associated vector boson production, and vector boson fusion, and to Higgs boson decays to tau lepton pairs or W boson pairs. Observed (expected) 95% C.L. upper limits on the cross section times branching ratio, relative to the Standard Model prediction, are set at 14 (22) for a Higgs boson mass of 115 GeV and 7.7 (6.8) at 165 GeV.
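A limit quoted as a ratio to the Standard Model prediction translates into an excluded cross section times branching ratio by simple multiplication. A minimal sketch using the ratios from this abstract; the SM σ×BR value below is a hypothetical placeholder, not a number from the paper:

```python
# Convert a 95% C.L. limit quoted as a ratio to the Standard Model
# prediction into an excluded sigma x BR. The ratios (14 at 115 GeV,
# 7.7 at 165 GeV) come from the abstract; the SM sigma x BR values
# are illustrative placeholders only.

def excluded_xsec(limit_ratio, sm_xsec_br_pb):
    """Excluded sigma x BR (pb) = quoted limit ratio x SM prediction."""
    return limit_ratio * sm_xsec_br_pb

# Hypothetical SM sigma x BR of 0.1 pb at m_H = 115 GeV:
print(excluded_xsec(14, 0.1))   # excluded sigma x BR in pb
```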
Lepton number violation in theories with a large number of standard model copies
Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich
2011-03-01
We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem for this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to a specific compensation mechanism between the contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.
Wang Lei; Han Xiaofang
2010-11-01
In the framework of the simplest little Higgs model, we perform a comprehensive study of the pair production of the pseudoscalar boson η and the standard-model-like Higgs boson h at the LHC, namely gg(bb̄) → ηη, gg(qq̄) → ηh, and gg(bb̄) → hh. These production processes provide a way to probe the couplings between Higgs bosons. We find that the cross section of gg → ηη always dominates over that of bb̄ → ηη. When the Higgs boson h which mediates these two processes is on-shell, their cross sections can reach several thousand fb and several hundred fb, respectively. When the intermediate state h is off-shell, the two cross sections are each reduced by two orders of magnitude. The cross sections of gg → ηh and qq̄ → ηh are of the same order of magnitude and can reach O(10² fb) for a light η boson. Besides, compared with the standard model prediction, the cross section for production of a pair of standard-model-like Higgs bosons at the LHC can be enhanced sizably. Finally, we briefly discuss the observable signatures of ηη, ηh, and hh at the LHC.
Aerodynamic characteristics of the standard dynamics model in coning motion at Mach 0.6
NASA Technical Reports Server (NTRS)
Jermey, C.; Schiff, L. B.
1985-01-01
A wind tunnel test was conducted on the Standard Dynamics Model (a simplified generic fighter aircraft shape) undergoing coning motion at Mach 0.6. Six-component force and moment data are presented for a range of angles of attack, sideslip angles, and coning rates. At the relatively low non-dimensional coning rates employed (ωb/2V ≤ 0.04), the lateral aerodynamic characteristics generally show a linear variation with coning rate.
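The non-dimensional coning rate quoted above is ωb/2V, with ω the coning rate, b the span, and V the freestream speed. A minimal sketch with illustrative values; the model's actual span and the tunnel speed are not given in the abstract:

```python
# Non-dimensional coning rate omega*b/(2V) as used in the abstract.
# The input values (coning rate omega, span b, airspeed V) are
# illustrative only, not measured quantities from the test.

def nondim_coning_rate(omega_rad_s, span_m, airspeed_m_s):
    """Return omega*b/(2V), the non-dimensional coning rate."""
    return omega_rad_s * span_m / (2.0 * airspeed_m_s)

# e.g. omega = 4 rad/s, b = 1.0 m model span, V = 200 m/s:
rate = nondim_coning_rate(4.0, 1.0, 200.0)
assert rate <= 0.04  # within the range employed in the test
print(rate)  # 0.01
```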
Two-loop results for MW in the standard model and the MSSM
Ayres Freitas; Sven Heinemeyer; Georg Weiglein
2002-12-09
Recent higher-order results for the prediction of the W-boson mass, M_W, within the Standard Model are reviewed, and an estimate of the remaining theoretical uncertainties of the electroweak precision observables is given. An updated version of a simple numerical parameterization of the result for M_W is presented. Furthermore, leading electroweak two-loop contributions to the precision observables within the MSSM are discussed.
Precision Higgs Boson Physics and Implications for Beyond the Standard Model Physics Theories
Wells, James
2015-06-10
The discovery of the Higgs boson is one of science's most impressive recent achievements. We have taken a leap forward in understanding what is at the heart of elementary particle mass generation. We now have a significant opportunity to develop even deeper understanding of how the fundamental laws of nature are constructed. As such, we need intense focus from the scientific community to put this discovery in its proper context, to realign and narrow our understanding of viable theory based on this positive discovery, and to detail the implications the discovery has for theories that attempt to answer questions beyond what the Standard Model can explain. This project's first main objective is to develop a state-of-the-art analysis of precision Higgs boson physics. This is to be done in the tradition of the electroweak precision measurements of the LEP/SLC era. Indeed, the electroweak precision studies of the past are necessary inputs to the full precision Higgs program. Calculations will be presented to the community of Higgs boson observables that detail just how well various couplings of the Higgs boson can be measured, and more. These will be carried out using state-of-the-art theory computations coupled with the new experimental results coming in from the LHC. The project's second main objective is to utilize the results obtained from LHC Higgs boson experiments and the precision analysis, along with the direct search studies at LHC, and discern viable theories of physics beyond the Standard Model that unify physics to a deeper level. Studies will be performed on supersymmetric theories, theories of extra spatial dimensions (and related theories, such as compositeness), and theories that contain hidden sector states uniquely accessible to the Higgs boson. In addition, if data becomes incompatible with the Standard Model's low-energy effective Lagrangian, new physics theories will be developed that explain the anomaly and put it into a more unified framework beyond
Mass and mixing angle patterns in the Standard Model and its Minimal Supersymmetric Extension
Ramond, P.
1992-01-01
Using renormalization group techniques, we examine several interesting relations among the masses and mixing angles of quarks and leptons in the Standard Model of Elementary Particle Interactions as a function of scale. We extend the analysis to the minimal Supersymmetric Extension to determine its effect on these mass relations. For a heavy top quark and minimal supersymmetry, most of these relations can be made to agree at one unification scale.
NASA Technical Reports Server (NTRS)
Mack, Robert J.
2007-01-01
Low-boom model pressure signatures are often measured at two or more wind-tunnel facilities. Preliminary measurements are made at small separation distances in a wind tunnel close at hand, and a second set of pressure signatures is measured at larger separation distances in a wind-tunnel facility with a larger test section. In this report, a method for correcting and standardizing the wind-tunnel-measured pressure signatures obtained in different wind tunnel facilities is presented and discussed.
Search for the Standard Model Higgs Boson in the $WH \to \ell\nu b\bar{b}$ Channel
Nagai, Yoshikazu
2010-02-01
We have searched for the Standard Model Higgs boson in the WH → ℓνbb̄ channel in 1.96 TeV pp̄ collisions at CDF. This search is based on data collected through March 2009, corresponding to an integrated luminosity of 4.3 fb⁻¹. The WH channel is one of the most promising channels for the Higgs boson search at the Tevatron in the low Higgs boson mass region.
Status of searches for Higgs and physics beyond the standard model at CDF
Tsybychev, D.; /Florida U.
2004-12-01
This article presents selected experimental results on searches for the Higgs boson and physics beyond the standard model (BSM) at the Collider Detector at Fermilab (CDF). The results are based on about 350 pb⁻¹ of proton-antiproton collision data at √s = 1.96 TeV, collected during Run II of the Tevatron. No evidence of a signal was found, and limits on the production cross sections of various BSM physics processes are derived.
Observation of an excess in the search for the Standard Model Higgs boson at ALEPH
NASA Astrophysics Data System (ADS)
ALEPH Collaboration; Barate, R.; De Bonis, I.; Decamp, D.; Ghez, P.; Goy, C.; Jezequel, S.; Lees, J.-P.; Martin, F.; Merle, E.; Minard, M.-N.; Pietrzyk, B.; Bravo, S.; Casado, M. P.; Chmeissani, M.; Crespo, J. M.; Fernandez, E.; Fernandez-Bosman, M.; Garrido, Ll.; Graugés, E.; Lopez, J.; Martinez, M.; Merino, G.; Miquel, R.; Mir, Ll. M.; Pacheco, A.; Paneque, D.; Ruiz, H.; Colaleo, A.; Creanza, D.; De Filippis, N.; de Palma, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Nuzzo, S.; Ranieri, A.; Raso, G.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Tricomi, A.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Abbaneo, D.; Azzurri, P.; Barklow, T.; Boix, G.; Buchmüller, O.; Cattaneo, M.; Cerutti, F.; Clerbaux, B.; Dissertori, G.; Drevermann, H.; Forty, R. W.; Frank, M.; Gianotti, F.; Greening, T. C.; Hansen, J. B.; Harvey, J.; Hutchcroft, D. E.; Janot, P.; Jost, B.; Kado, M.; Lemaitre, V.; Maley, P.; Mato, P.; Minten, A.; Moutoussi, A.; Ranjard, F.; Rolandi, L.; Schlatter, D.; Schmitt, M.; Schneider, O.; Spagnolo, P.; Tejessy, W.; Teubert, F.; Tournefier, E.; Valassi, A.; Ward, J. J.; Wright, A. E.; Ajaltouni, Z.; Badaud, F.; Dessagne, S.; Falvard, A.; Fayolle, D.; Gay, P.; Henrard, P.; Jousset, J.; Michel, B.; Monteil, S.; Montret, J.-C.; Pallin, D.; Pascolo, J. M.; Perret, P.; Podlyski, F.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Nilsson, B. S.; Wäänänen, A.; Daskalakis, G.; Kyriakis, A.; Markou, C.; Simopoulou, E.; Vayaki, A.; Blondel, A.; Brient, J.-C.; Machefert, F.; Rougé, A.; Swynghedauw, M.; Tanaka, R.; Videau, H.; Focardi, E.; Parrini, G.; Zachariadou, K.; Antonelli, A.; Antonelli, M.; Bencivenni, G.; Bologna, G.; Bossi, F.; Campana, P.; Capon, G.; Chiarella, V.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Passalacqua, L.; Pepe-Altarelli, M.; Chalmers, M.; Halley, A. W.; Kennedy, J.; Lynch, J. G.; Negus, P.; O'Shea, V.; Raeven, B.; Smith, D.; Teixeira-Dias, P.; Thompson, A. 
S.; Cavanaugh, R.; Dhamotharan, S.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Leibenguth, G.; Putzer, A.; Tittel, K.; Werner, S.; Wunsch, M.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Davies, G.; Dornan, P. J.; Girone, M.; Marinelli, N.; Nowell, J.; Przysiezniak, H.; Sedgbeer, J. K.; Thompson, J. C.; Thomson, E.; White, R.; Ghete, V. M.; Girtler, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bouhova-Thacker, E.; Bowdery, C. K.; Clarke, D. P.; Ellis, G.; Finch, A. J.; Foster, F.; Hughes, G.; Jones, R. W. L.; Pearson, M. R.; Robertson, N. A.; Smizanska, M.; Giehl, I.; Hölldorfer, F.; Jakobs, K.; Kleinknecht, K.; Kröcker, M.; Müller, A.-S.; Nürnberger, H.-A.; Quast, G.; Renk, B.; Rohne, E.; Sander, H.-G.; Schmeling, S.; Wachsmuth, H.; Zeitnitz, C.; Ziegler, T.; Bonissent, A.; Carr, J.; Coyle, P.; Curtil, C.; Ealet, A.; Fouchez, D.; Leroy, O.; Kachelhoffer, T.; Payre, P.; Rousseau, D.; Tilquin, A.; Aleppo, M.; Gilardoni, S.; Ragusa, F.; David, A.; Dietl, H.; Ganis, G.; Heister, A.; Hüttmann, K.; Lütjens, G.; Mannert, C.; Männer, W.; Moser, H.-G.; Schael, S.; Settles, R.; Stenzel, H.; Wolf, G.; Boucrot, J.; Callot, O.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, Ph.; Jacholkowska, A.; Serin, L.; Veillet, J.-J.; Videau, I.; de Vivie de Régie, J.-B.; Yuan, C.; Zerwas, D.; Bagliesi, G.; Boccali, T.; Calderini, G.; Ciulli, V.; Foà, L.; Giammanco, A.; Giassi, A.; Ligabue, F.; Messineo, A.; Palla, F.; Rizzo, G.; Sanguinetti, G.; Sciabà, A.; Sguazzoni, G.; Steinberger, J.; Tenchini, R.; Venturi, A.; Verdini, P. G.; Blair, G. A.; Coles, J.; Cowan, G.; Green, M. G.; Jones, L. T.; Medcalf, T.; Strong, J. A.; Clifft, R. W.; Edgecock, T. R.; Norton, P. R.; Tomalin, I. R.; Bloch-Devaux, B.; Boumediene, D.; Colas, P.; Fabbro, B.; Lançon, E.; Lemaire, M.-C.; Locci, E.; Perez, P.; Rander, J.; Renardy, J.-F.; Rosowsky, A.; Seager, P.; Trabelsi, A.; Tuchming, B.; Vallage, B.; Konstantinidis, N.; Loomis, C.; Litke, A. M.; Taylor, G.; Booth, C. 
N.; Cartwright, S.; Combley, F.; Hodgson, P. N.; Lehto, M.; Thompson, L. F.; Affholderbach, K.; Böhrer, A.; Brandt, S.; Grupen, C.; Hess, J.; Misiejuk, A.; Prange, G.; Sieler, U.; Borean, C.; Giannini, G.; Gobbo, B.; He, H.; Putz, J.; Rothberg, J.; Wasserbaech, S.; Armstrong, S. R.; Cranmer, K.; Elmer, P.; Ferguson, D. P. S.; Gao, Y.; González, S.; Hayes, O. J.; Hu, H.; Jin, S.; Kile, J.; McNamara, P. A.; Nielsen, J.; Orejudos, W.; Pan, Y. B.; Saadi, Y.; Scott, I. J.; Shao, N.; von Wimmersperg-Toeller, J. H.; Walsh, J.; Wiedenmann, W.; Wu, J.; Wu, S. L.; Wu, X.; Zobernig, G.
2000-12-01
A search has been performed for the Standard Model Higgs boson in the data sample collected with the ALEPH detector at LEP, at centre-of-mass energies up to 209 GeV. An excess of 3σ beyond the background expectation is found, consistent with the production of a Higgs boson with a mass near 114 GeV/c². Much of this excess is seen in the four-jet analyses, where three high-purity events are selected.
NASA Astrophysics Data System (ADS)
Bouwman, R. W.; van Engen, R. E.; Young, K. C.; Veldkamp, W. J. H.; Dance, D. R.
2015-01-01
Slabs of polymethyl methacrylate (PMMA), or a combination of PMMA and polyethylene (PE) slabs, are used to simulate standard model breasts for the evaluation of the average glandular dose (AGD) in digital mammography (DM) and digital breast tomosynthesis (DBT). These phantoms are optimized for the energy spectra used in DM and DBT, which normally have a lower average energy than those used in contrast-enhanced digital mammography (CEDM). In this study we have investigated whether these phantoms can be used for the evaluation of AGD with the high-energy x-ray spectra used in CEDM. For this purpose the calculated values of the incident air kerma for dosimetry phantoms and standard model breasts were compared in a zero-degree projection with the use of an anti-scatter grid. It was found that the difference in incident air kerma relative to standard model breasts ranges from -10% to +4% for PMMA slabs and from 6% to 15% for PMMA-PE slabs. The estimated systematic error in the measured AGD for both sets of phantoms was considered sufficiently small for the evaluation of AGD in quality control procedures for CEDM. However, the systematic error can be substantial if AGD values from different phantoms are compared.
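The phantom-versus-breast comparison above is a signed percent difference in incident air kerma. A minimal sketch with illustrative kerma values; the actual measured values are not given in the abstract:

```python
# Signed percent difference of the phantom's incident air kerma relative
# to the standard model breast, as compared in the abstract.
# Input kerma values are illustrative only.

def percent_diff(phantom_kerma, breast_kerma):
    """Return 100 * (phantom - breast) / breast."""
    return 100.0 * (phantom_kerma - breast_kerma) / breast_kerma

print(percent_diff(90.0, 100.0))  # -10.0 -> lower edge of the PMMA range
```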
Neutrino-Antineutrino Mass Splitting in the Standard Model: Neutrino Oscillation and Baryogenesis
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo; Tureanu, Anca
By adding a neutrino mass term to the Standard Model which is Lorentz and SU(2) × U(1) invariant but nonlocal, to evade the CPT theorem, it is shown that nonlocality within a distance scale of the Planck length, which may not be fatal to unitarity in a generic effective theory, can generate a neutrino-antineutrino mass splitting of the order of the observed neutrino mass differences, which is tested in oscillation experiments, and a non-negligible baryon asymmetry depending on the estimate of sphaleron dynamics. The induced electron-positron mass splitting at one-loop order in the Standard Model is shown to be finite and is estimated at ~10⁻²⁰ eV, well below the experimental bound of < 10⁻² eV. The induced CPT violation in the K-meson system in the Standard Model is expected to be even smaller, well below the experimental bound |m_K - m_K̄| < 0.44 × 10⁻¹⁸ GeV.
Neutrino-antineutrino mass splitting in the Standard Model: Neutrino oscillation and baryogenesis
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo; Tureanu, Anca
2015-07-01
By adding a neutrino mass term to the Standard Model which is Lorentz and SU(2) × U(1) invariant but nonlocal, to evade the CPT theorem, it is shown that nonlocality within a distance scale of the Planck length, which may not be fatal to unitarity in a generic effective theory, can generate a neutrino-antineutrino mass splitting of the order of the observed neutrino mass differences, which is tested in oscillation experiments, and a non-negligible baryon asymmetry depending on the estimate of sphaleron dynamics. The induced electron-positron mass splitting at one-loop order in the Standard Model is shown to be finite and is estimated at ~10⁻²⁰ eV, well below the experimental bound of < 10⁻² eV. The induced CPT violation in the K-meson system in the Standard Model is expected to be even smaller, well below the experimental bound |m_K - m_K̄| < 0.44 × 10⁻¹⁸ GeV.
Embedding inflation into the Standard Model — More evidence for classical scale invariance
NASA Astrophysics Data System (ADS)
Kannike, Kristjan; Racioppi, Antonio; Raidal, Martti
2014-06-01
If cosmological inflation is due to a slowly rolling single inflaton field taking trans-Planckian values, as suggested by the BICEP2 measurement of primordial tensor modes in the CMB, embedding inflation into the Standard Model challenges the standard paradigm of effective field theories. Together with an apparent absence of Planck-scale contributions to the Higgs mass and to the cosmological constant, BICEP2 provides further experimental evidence for the absence of large M_P-induced operators. We show that classical scale invariance — the paradigm that all fundamental scales in Nature are induced by quantum effects — solves the problem and allows for a remarkably simple scale-free Standard Model extension with an inflaton, without extending the gauge group. Due to trans-Planckian inflaton values and vevs, the dynamically induced Coleman-Weinberg-type inflaton potential of the model can predict a tensor-to-scalar ratio r in a large range, converging around the prediction of chaotic m²φ² inflation for a large trans-Planckian value of the inflaton vev. Precise determination of r in future experiments will single out a unique scale-free inflation potential, allowing a test of the proposed field-theoretic framework.
Standard model of particles and forces in the framework of two-time physics
NASA Astrophysics Data System (ADS)
Bars, Itzhak
2006-10-01
In this paper it is shown that the standard model in 3+1 dimensions is a gauge-fixed version of a 2T-physics field theory in 4+2 dimensions, thus establishing that 2T physics provides a correct description of nature from the point of view of 4+2 dimensions. The 2T formulation leads to phenomenological consequences of considerable significance. In particular, the higher structure in 4+2 dimensions prevents the problematic FF̃ term in QCD. This resolves the strong CP problem without the need for the Peccei-Quinn symmetry or the corresponding elusive axion. Mass generation via the Higgs mechanism is less straightforward in the new formulation of the standard model, but its resolution leads to an appealing deeper physical basis for mass, coupled with phenomena that could be measurable. In addition, there are some brand-new mechanisms of mass generation related to the higher dimensions that deserve further study. The technical progress is based on the construction of a new field-theoretic version of 2T physics, including interactions, in an action formalism in d+2 dimensions. The action is invariant under a new type of gauge symmetry, which we call 2T-gauge symmetry in field theory. This opens the way for investigations of the standard model directly in 4+2 dimensions, or from the point of view of various embeddings of 3+1 dimensions, by using the duality, holography, symmetry, and unifying features of 2T physics.
Fermionic extensions of the Standard Model in light of the Higgs couplings
NASA Astrophysics Data System (ADS)
Bizot, Nicolas; Frigerio, Michele
2016-01-01
As the Higgs boson properties settle, the constraints on Standard Model extensions tighten. We consider all possible new fermions that can couple to the Higgs, inspecting sets of up to four chiral multiplets. We confront them with direct collider searches, electroweak precision tests, and current knowledge of the Higgs couplings. The focus is on scenarios that may depart from the decoupling limit of very large masses and vanishing mixing, as they offer the best prospects for detection. We identify exotic chiral families that may receive a mass from the Higgs only, still in agreement with the hγγ signal strength. A mixing θ between the Standard Model and non-chiral fermions induces order-θ² deviations in the Higgs couplings. The mixing can be as large as θ ~ 0.5 in the case of custodial protection of the Z couplings or accidental cancellation in the oblique parameters. We also notice some intriguing effects for much smaller values of θ, especially in the lepton sector. Our survey includes a number of unconventional pairs of vector-like and Majorana fermions coupled through the Higgs that may induce order-one corrections to the Higgs radiative couplings. We single out the regions of parameters where hγγ and hgg are unaffected while the hγZ signal strength is significantly modified, becoming a few times larger than in the Standard Model in two cases. The second run of the LHC will effectively test most of these scenarios.
Real Z₂-bigradings, Majorana modules, and the standard model action
Tolksdorf, Juergen
2010-05-15
The action functional of the standard model of particle physics is intimately related to a specific class of first-order differential operators called Dirac operators of Pauli type ('Pauli-Dirac operators'). The aim of this article is to carefully analyze the geometrical structure of this class of Dirac operators on the basis of real Dirac operators of simple type. On this basis, it is shown how the standard model action (SM action) may be viewed as generalizing the Einstein-Hilbert action in a similar way that the Einstein-Hilbert action is generalized by a cosmological constant. Furthermore, we demonstrate how the geometrical scheme presented also allows Majorana mass terms to be naturally incorporated within the standard model. For reasons of consistency, these Majorana mass terms are shown to contribute dynamically to the Einstein-Hilbert action through a 'true' cosmological constant. Due to its specific form, this cosmological constant can be very small. Nonetheless, it may provide a significant contribution to dark matter/energy. In the geometrical description presented, this possibility arises from a subtle interplay between Dirac and Majorana masses.
Phenomenology of the minimal B-L extension of the standard model: The Higgs sector
Basso, Lorenzo; Moretti, Stefano; Pruna, Giovanni Marco
2011-03-01
We investigate the phenomenology of the Higgs sector of the minimal B-L extension of the standard model. We present results for both of the foreseen energy stages of the Large Hadron Collider (√s = 7 and 14 TeV). We show that in such a scenario several novel production and decay channels involving the two physical Higgs states could be accessed at such a machine. Amongst these, several Higgs signatures have very distinctive features with respect to those of other models with an enlarged Higgs sector, as they involve interactions of Higgs bosons between themselves, with Z′ bosons, as well as with heavy neutrinos.
Testing the Standard Model by precision measurement of the weak charges of quarks
Ross Young; Roger Carlini; Anthony Thomas; Julie Roche
2007-05-01
In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, limits the magnitude of possible contributions from physics beyond the Standard Model, setting a model-independent lower bound on the scale of new physics at ~1 TeV.
Imagining the future, or how the Standard Model may survive the attacks
NASA Astrophysics Data System (ADS)
’T Hooft, Gerard
2016-06-01
After the last missing piece, the Higgs particle, has probably been identified, the Standard Model of the subatomic particles appears to be a quite robust structure, that can survive on its own for a long time to come. Most researchers expect considerable modifications and improvements to come in the near future, but it could also be that the Model will stay essentially as it is. This, however, would also require a change in our thinking, and the question remains whether and how it can be reconciled with our desire for our theories to be “natural”.
Unification and Dark Matter in a Minimal Scalar Extension of the Standard Model
Lisanti, Mariangela; Wacker, Jay G.
2007-04-25
The six Higgs doublet model is a minimal extension of the Standard Model (SM) that addresses dark matter and gauge coupling unification. Another Higgs doublet in the 5 representation of a discrete symmetry group, such as S₆, is added to the SM. The lightest components of the 5-Higgs are neutral and stable and serve as dark matter so long as the discrete symmetry is not broken. Direct and indirect detection signals, as well as collider signatures, are discussed. The five-fold multiplicity of the dark matter decreases its mass and typically helps make the dark matter more visible in upcoming experiments.
Searches for Higgs bosons beyond the Standard Model at the Tevatron
Biscarat, Catherine; /Lancaster U.
2004-08-01
Preliminary results from the CDF and D0 Collaborations on the searches for Higgs bosons beyond the Standard Model at the Run II Tevatron are reviewed. These results are based on datasets corresponding to an integrated luminosity of 100-200 pb⁻¹ collected from proton-antiproton collisions at a center-of-mass energy of 1.96 TeV. No evidence of a signal is observed, and limits on Higgs boson production cross sections times branching ratios, couplings, and masses are set for various models.
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Yang, C.; Huang, Q.
2010-12-01
The availability of high-speed research networks such as the US National Lambda Rail and the GÉANT network, scalable on-demand commodity computing resources provided by public and private "cloud" computing systems, and increasing demand for rapid access to the products of environmental models for both research and public policy development contribute to a growing need for the evaluation and development of environmental modeling systems that distribute processing, storage, and data delivery capabilities between network connected systems. In an effort to address the feasibility of developing a standards-based distributed modeling system in which model execution systems are physically separate from data storage and delivery systems, the research project presented in this paper developed a distributed dust forecasting system in which two nested atmospheric dust models are executed at George Mason University (GMU, in Fairfax, VA) while data and model output processing services are hosted at the University of New Mexico (UNM, in Albuquerque, NM). Exchange of model initialization and boundary condition parameters between the servers at UNM and the model execution systems at GMU is accomplished through Open Geospatial Consortium (OGC) Web Coverage Services (WCS) and Web Feature Services (WFS) while model outputs are pushed from GMU systems back to UNM using a REST web service interface. In addition to OGC and non-OGC web services for exchange between UNM and GMU, the servers at UNM also provide access to the input meteorological model products, intermediate and final dust model outputs, and other products derived from model outputs through OGC WCS, WFS, and OGC Web Map Services (WMS). The performance of the nested versus non-nested models is assessed in this research, with the results of the performance analysis providing the core content of the produced feasibility study.
Coupling lattice Boltzmann model for simulation of thermal flows on standard lattices.
Li, Q; Luo, K H; He, Y L; Gao, Y J; Tao, W Q
2012-01-01
In this paper, a coupling lattice Boltzmann (LB) model for simulating thermal flows on the standard two-dimensional nine-velocity (D2Q9) lattice is developed in the framework of the double-distribution-function (DDF) approach in which the viscous heat dissipation and compression work are considered. In the model, a density distribution function is used to simulate the flow field, while a total energy distribution function is employed to simulate the temperature field. The discrete equilibrium density and total energy distribution functions are obtained from the Hermite expansions of the corresponding continuous equilibrium distribution functions. The pressure given by the equation of state of perfect gases is recovered in the macroscopic momentum and energy equations. The coupling between the momentum and energy transports makes the model applicable for general thermal flows such as non-Boussinesq flows, while the existing DDF LB models on standard lattices are usually limited to Boussinesq flows in which the temperature variation is small. Meanwhile, the simple structure and general features of the DDF LB approach are retained. The model is tested by numerical simulations of thermal Couette flow, attenuation-driven acoustic streaming, and natural convection in a square cavity with small and large temperature differences. The numerical results are found to be in good agreement with the analytical solutions and/or other numerical results reported in the literature.
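The D2Q9 construction referred to above is compact enough to sketch directly. The following minimal Python sketch is not the authors' code (the total-energy distribution and the Hermite-expanded coupling terms are omitted); it only builds the standard D2Q9 velocities and weights and verifies that the second-order equilibrium density distribution recovers the macroscopic density and momentum:

```python
import numpy as np

# Standard D2Q9 lattice: 1 rest, 4 axis, 4 diagonal velocities; c_s^2 = 1/3
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def f_eq(rho, u):
    """Second-order equilibrium density distribution on the D2Q9 lattice."""
    eu = e @ u                                    # e_i . u for each direction
    return w * rho * (1 + 3 * eu + 4.5 * eu**2 - 1.5 * (u @ u))

rho, u = 1.2, np.array([0.05, -0.02])             # illustrative macroscopic state
f = f_eq(rho, u)
# zeroth and first moments recover density and momentum exactly
assert np.isclose(f.sum(), rho)
assert np.allclose(e.T @ f, rho * u)
```

The thermal DDF variant described in the abstract adds an analogous total-energy distribution on the same lattice; this sketch shows only the hydrodynamic half.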
Application of TDCR-Geant4 modeling to standardization of 63Ni.
Thiam, C; Bobin, C; Chauvenet, B; Bouchard, J
2012-09-01
As an alternative to the classical TDCR model applied to liquid scintillation (LS) counting, a stochastic approach based on the Geant4 toolkit is presented for the simulation of light emission inside the dedicated three-photomultiplier detection system. To this end, the Geant4 modeling includes a comprehensive description of optical properties associated with each material constituting the optical chamber. The objective is to simulate the propagation of optical photons from their creation in the LS cocktail to the production of photoelectrons in the photomultipliers. First validated for the case of radionuclide standardization based on Cerenkov emission, the scintillation process has been added to a TDCR-Geant4 modeling using the Birks expression in order to account for the light-emission nonlinearity owing to ionization quenching. The scintillation yield of the commercial Ultima Gold LS cocktail has been determined from double-coincidence detection efficiencies obtained for (60)Co and (54)Mn with the 4π(LS)β-γ coincidence method. In this paper, the stochastic TDCR modeling is applied for the case of the standardization of (63)Ni (pure β(-)-emitter; E(max)=66.98 keV) and the activity concentration is compared with the result given by the classical model.
English, Sinéad; Bateman, Andrew W; Clutton-Brock, Tim H
2012-05-01
Lifetime records of changes in individual size or mass in wild animals are scarce and, as such, few studies have attempted to model variation in these traits across the lifespan or to assess the factors that affect them. However, quantifying lifetime growth is essential for understanding trade-offs between growth and other life history parameters, such as reproductive performance or survival. Here, we used model selection based on information theory to measure changes in body mass over the lifespan of wild meerkats, and compared the relative fits of several standard growth models (monomolecular, von Bertalanffy, Gompertz, logistic, and Richards). We found that meerkats exhibit monomolecular growth, with the best model incorporating separate growth rates before and after nutritional independence, as well as effects of season and total rainfall in the previous nine months. Our study demonstrates how simple growth curves may be improved by considering life history and environmental factors, which may be particularly relevant when quantifying growth patterns in wild populations.
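The kind of growth-curve comparison described above can be made concrete in a few lines. This Python sketch uses synthetic data and invented parameter values (asymptote 750 g, natal mass 100 g, not the meerkat estimates); it fits only the growth-rate parameter by a crude grid search and compares a monomolecular curve against a Gompertz alternative using a Gaussian AIC, as in information-theoretic model selection:

```python
import numpy as np

def monomolecular(t, A, m0, k):
    """Monomolecular growth toward asymptote A from initial mass m0."""
    return A - (A - m0) * np.exp(-k * t)

def gompertz(t, A, m0, k):
    """Gompertz growth curve with the same endpoints."""
    return A * (m0 / A) ** np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 36.0, 50)                    # age in months (illustrative)
mass = monomolecular(t, 750.0, 100.0, 0.15) + rng.normal(0.0, 5.0, t.size)

def fit_rate(model, t, y, A=750.0, m0=100.0):
    """Crude 1-D least-squares grid search over the growth rate k."""
    ks = np.linspace(0.01, 0.5, 500)
    rss = np.array([np.sum((y - model(t, A, m0, k)) ** 2) for k in ks])
    i = int(np.argmin(rss))
    return ks[i], rss[i]

def gaussian_aic(rss, n, n_params):
    """AIC under Gaussian errors, as used in information-theoretic selection."""
    return n * np.log(rss / n) + 2 * n_params

k_m, rss_m = fit_rate(monomolecular, t, mass)
k_g, rss_g = fit_rate(gompertz, t, mass)
aic_m = gaussian_aic(rss_m, t.size, 1)
aic_g = gaussian_aic(rss_g, t.size, 1)
```

Since the data were generated from the monomolecular curve, its AIC comes out lower, which is the pattern of evidence the study reports for meerkat mass data.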
Heavy to light Higgs boson decays at NLO in the singlet extension of the Standard Model
NASA Astrophysics Data System (ADS)
Bojarski, F.; Chalons, G.; López-Val, D.; Robens, T.
2016-02-01
We study the decay of a heavy Higgs boson into a light Higgs pair at one loop in the singlet extension of the Standard Model. To this purpose, we construct several renormalization schemes for the extended Higgs sector of the model. We apply these schemes to calculate the heavy-to-light Higgs decay width Γ(H → hh) at next-to-leading-order electroweak accuracy, and demonstrate that certain prescriptions lead to gauge-dependent results. We comprehensively examine how the NLO predictions depend on the relevant singlet model parameters, with emphasis on the trademark behavior of the quantum effects, and how these change under different renormalization schemes and a variable renormalization scale. Once all present constraints on the model are included, we find mild NLO corrections, typically of a few percent, with small theoretical uncertainties.
The pulsation-convection coupling.
NASA Astrophysics Data System (ADS)
Poyet, J.-P.
Contents: Some well-defined Boussinesq problems. Theories of the coupling between radial pulsation and convection. Some steps into the domain of the coupling between nonradial pulsations and convection. Conclusion.
Perez, Hector R.; Stoeckle, James H.
2016-01-01
Objective: To provide an update on the epidemiology, heritability, pathophysiology, diagnosis, and treatment of developmental stuttering. Quality of evidence: The MEDLINE and Cochrane databases were searched for past and recent studies on the epidemiology, heritability, pathophysiology, diagnosis, and treatment of developmental stuttering. Most recommendations are based on small studies, evidence of limited quality, or consensus. Main message: Stuttering is a speech disorder, common among persons of all ages, that impairs normal speech fluency and the flow of discourse. Stuttering has been linked to differences in brain anatomy, function, and dopaminergic regulation that are thought to be of genetic origin. Careful diagnosis and appropriate referral of children are important, as there is growing consensus that early intervention with speech therapy is crucial for children who stutter. In adults, stuttering is associated with substantial psychosocial morbidity, including social anxiety and poor quality of life. Pharmacologic treatments have attracted interest in recent years, but clinical evidence is limited. Treatment of both children and adults rests on speech therapy. Conclusion: A growing body of research has attempted to uncover the pathophysiology of stuttering. Referral for speech therapy remains the best option for children and adults who stutter.
40 CFR 86.099-9 - Emission standards for 1999 and later model year light-duty trucks.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Emission standards for 1999 and later... VEHICLES AND ENGINES General Provisions for Emission Regulations for 1977 and Later Model Year New Light....099-9 Emission standards for 1999 and later model year light-duty trucks. (a)(1)(i)-(iii) (iv)...
40 CFR 86.099-8 - Emission standards for 1999 and later model year light-duty vehicles.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Emission standards for 1999 and later... VEHICLES AND ENGINES General Provisions for Emission Regulations for 1977 and Later Model Year New Light....099-8 Emission standards for 1999 and later model year light-duty vehicles. (a)(1)(i)-(ii) (iii)...
Pan Feng; Wang Yin; Guan Xin; Jia Lu; Chen Xiangrong; Draayer, J. P.
2011-06-28
Exact solutions of Nilsson mean-field with various pairing interactions are reviewed. Some even-odd mass differences and moments of inertia of low-lying states for rare earth and actinide nuclei are calculated for the nearest-orbit pairing approximation as well as for the extended pairing model and compared to available experimental data. An exact boson mapping of the standard pairing Hamiltonian is also reported. Under the mapping, fermion pair operators are mapped exactly onto corresponding bosons. The image of the mapping is a Bose-Hubbard model with orbit-dependent hopping.
A simple modelling approach for prediction of standard state real gas entropy of pure materials.
Bagheri, M; Borhani, T N G; Gandomi, A H; Manan, Z A
2014-01-01
The performance of an energy conversion system depends on exergy analysis and entropy generation minimisation. A new simple four-parameter equation is presented in this paper to predict the standard state absolute entropy of real gases (SSTD). The model development and validation were accomplished using the Linear Genetic Programming (LGP) method and a comprehensive dataset of 1727 widely used materials. The proposed model was compared with the results obtained using a three-layer feed-forward neural network model (FFNN model). The root-mean-square error (RMSE) and the coefficient of determination (r²) over all data for the LGP model were 52.24 J/(mol K) and 0.885, respectively. Several statistical assessments were used to evaluate the predictive power of the model. In addition, this study provides an appropriate understanding of the most important molecular variables for exergy analysis. Compared with the LGP-based model, the application of the FFNN improved r² to 0.914. The developed model is useful in the design of materials to achieve a desired entropy value.
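The two fit statistics quoted above (RMSE and r²) are standard and easy to restate in code. A minimal Python sketch, where the entropy values are hypothetical placeholders rather than the paper's dataset:

```python
import numpy as np

def rmse(y, yhat):
    """Root-mean-square error, here in J/(mol K)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r_squared(y, yhat):
    """Coefficient of determination r^2."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    ss_res = float(np.sum((y - yhat) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. predicted standard-state entropies, J/(mol K)
s_true = [186.3, 229.6, 269.9, 310.2]
s_pred = [190.0, 225.0, 272.0, 305.0]
err = rmse(s_true, s_pred)
fit = r_squared(s_true, s_pred)
```

With a real dataset of 1727 materials these same two numbers summarize how closely the fitted equation tracks the tabulated entropies.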
The framed Standard Model (I) — A physics case for framing the Yang-Mills theory?
NASA Astrophysics Data System (ADS)
Chan, Hong-Mo; Tsou, Sheung Tsun
2015-10-01
Introducing, in the underlying gauge theory of the Standard Model, the frame vectors in internal space as field variables (framons), in addition to the usual gauge boson and matter fermion fields, one obtains: the standard Higgs scalar as the framon in the electroweak sector; a global widetilde{su}(3) symmetry dual to colour to play the role of fermion generations. Renormalization via framon loops changes the orientation in generation space of the vacuum, hence also of the mass matrices of leptons and quarks, thus making them rotate with changing scale μ. From previous work, it is known already that a rotating mass matrix will lead automatically to: CKM mixing and neutrino oscillations, hierarchical masses for quarks and leptons, and a solution to the strong-CP problem transforming the theta-angle into a Kobayashi-Maskawa phase. Here in the framed standard model (FSM), the renormalization group equation has some special properties which explain the main qualitative features seen in experiment both for the mixing matrices of quarks and leptons and for their mass spectrum. Quantitative results will be given in Paper II. The present paper ends with some tentative predictions on Higgs decay, and with some speculations on the origin of dark matter.
T Dwarfs Model Fits for Spectral Standards at Low Spectral Resolution
NASA Astrophysics Data System (ADS)
Giorla, Paige; Rice, Emily L.; Douglas, Stephanie T.; Mace, Gregory N.; McLean, Ian S.; Martin, Emily C.; Logsdon, Sarah E.
2015-01-01
We present model fits to the T dwarf spectral standards which cover spectral types from T0 to T8. For a complete spectral range analysis, we have included a T9 object which is not considered a spectral standard. We have low-resolution (R~120) SpeX Prism spectra and a variety of higher resolution (R~1,000-25,000) spectra for all nine of these objects. The synthetic spectra are from the BT-SETTL 2013 models. We compare the best fit parameters from low resolution spectra to results from the higher resolution fits of prominent spectral type dependent features, where possible. Using the T dwarf standards to calibrate the effective temperature and gravity parameters for each spectral type, we will expand our analysis to a larger, more varied sample, which includes over one hundred field T dwarfs, for which we have a variety of low, medium, and high resolution spectra from the SpeX Prism Library and the NIRSPEC Brown Dwarf Spectroscopic Survey. This sample includes a handful of peculiar and red T dwarfs, for which we explore the causes of their non-normalcy.
1997-10-01
It is widely recognized that cascade models are potentially effective and powerful tools for interpreting and predicting multi-particle observables in heavy ion physics. However, the lack of common standards, documentation, version control, and accessibility has made it difficult to apply objective scientific criteria for evaluating the many physical and algorithmic assumptions, or even to reproduce some published results. The first RIKEN Research Center workshop was proposed by Yang Pang to address this problem by establishing open standards for original codes for applications to nuclear collisions at RHIC energies. The aims of this first workshop are: (1) to prepare a WWW depository site for original source codes and detailed documentation with examples; (2) to develop and perform standardized tests for the models, such as Lorentz invariance, kinetic theory comparisons, and thermodynamic simulations; (3) to publish a compilation of results of the above work in a journal, e.g., "Heavy Ion Physics"; and (4) to establish a policy statement on a set of minimal requirements for inclusion in the OSCAR-WWW depository.
A Search for the Standard Model Higgs Boson Produced in Association with a $W$ Boson
Frank, Martin Johannes
2011-05-01
We present a search for a standard model Higgs boson produced in association with a W boson using data collected with the CDF II detector from pp̄ collisions at √s = 1.96 TeV. The search is performed in the WH → ℓνbb̄ channel. The two quarks usually fragment into two jets, but sometimes a third jet can be produced via gluon radiation, so we have increased the standard two-jet sample by including events that contain three jets. We reconstruct the Higgs boson using two or three jets depending on the kinematics of the event. We find an improvement in our search sensitivity using the larger sample together with this multijet reconstruction technique. Our data show no evidence of a Higgs boson, so we set 95% confidence level upper limits on the WH production rate. We set limits between 3.36 and 28.7 times the standard model prediction for Higgs boson masses ranging from 100 to 150 GeV/c².
Crop loss assessment for California: modeling losses with different ozone standard scenarios.
Olszyk, D M; Thompson, C R; Poe, M P
1988-01-01
Crop yield losses were estimated for ambient O3 concentrations and for a series of potential O3 air quality standards for California, including the current statewide 1-h oxidant (O3) standard of 0.10 ppm (196 µg m⁻³), 12-h growing season averages, and other models. A model for statewide losses was developed using hourly O3 data for all sites in the state, county crop productivity data, and available O3 concentration-yield loss equations to determine potential yield losses for each crop in each county in California for 1984. Losses were based on comparison to an estimated background filtered-air concentration of 0.025 or 0.027 ppm, for 12 or 7 h, respectively. Potential losses due to ambient air in 1984 were estimated at 19% to 25% for dry beans, cotton, grapes, lemons, onions, and oranges. Losses of 5% to 9% were estimated for alfalfa and sweet corn. Losses of 4% or less were estimated for barley, field corn, lettuce, grain sorghum, rice, corn silage, spinach, strawberries, sugar beets, fresh tomatoes, processing tomatoes, and wheat. Implementation of either a modified rollback to meet the current 1-h California O3 standard (0.10 ppm) or a three-month, 12-h growing season average of 0.045 ppm was necessary to produce large reductions in potential crop losses. PMID:15092558
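Loss estimates of this kind come from concentration-response relations evaluated against a filtered-air background. As an illustration only, here is a Python sketch using a Weibull dose-response form often applied in O3 crop-loss work; the parameter values (sigma, lam) are invented for the example and are not the study's fitted crop equations:

```python
import math

def relative_yield(o3_ppm, sigma=0.12, lam=2.5):
    """Weibull dose-response form often used in O3 crop-loss studies.
    sigma and lam are invented illustrative values, not fitted crop parameters."""
    return math.exp(-((o3_ppm / sigma) ** lam))

def percent_loss(ambient_ppm, background_ppm=0.025):
    """Yield loss (%) relative to the filtered-air background concentration."""
    return 100.0 * (1.0 - relative_yield(ambient_ppm) / relative_yield(background_ppm))
```

Evaluating the loss at a lowered seasonal average versus the ambient level is exactly the comparison behind the rollback scenarios above: the closer the standard pushes concentrations toward the background, the smaller the residual loss.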
V3885 Sagittarius: A Comparison With a Range of Standard Model Accretion Disks
NASA Technical Reports Server (NTRS)
Linnell, Albert P.; Godon, Patrick; Hubeny, Ivan; Sion, Edward M; Szkody, Paula; Barrett, Paul E.
2009-01-01
A chi-squared analysis of standard model accretion disk synthetic spectrum fits to combined Far Ultraviolet Spectroscopic Explorer and Space Telescope Imaging Spectrograph spectra of V3885 Sagittarius, on an absolute flux basis, selects a model that accurately represents the observed spectral energy distribution. Calculation of the synthetic spectrum requires the following system parameters. The cataclysmic variable secondary star period-mass relation calibrated by Knigge in 2006 and 2007 sets the secondary component mass. A mean white dwarf (WD) mass from the same study, which is consistent with an observationally determined mass ratio, sets the adopted WD mass of 0.7 M☉, and the WD radius follows from standard theoretical models. The adopted inclination, i = 65 deg, is a literature consensus, and is subsequently supported by chi-squared analysis. The mass transfer rate is the remaining parameter needed to set the accretion disk Teff profile, and the Hipparcos parallax constrains that parameter to (5.0 ± 2.0) × 10⁻⁹ M☉/yr by comparison with observed spectra. The fit to the observed spectra adopts the contribution of a 57,000 ± 5000 K WD. The model thus provides realistic constraints on the mass transfer rate and Teff for a large mass transfer system above the period gap.
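Model selection by chi-squared against an observed spectral energy distribution, as used here, reduces to a small amount of code. A generic Python sketch with synthetic fluxes, a toy power-law stand-in for the disk spectrum, and a hypothetical grid of mass-transfer rates (this is not the actual synthetic-spectrum pipeline):

```python
import numpy as np

def chi2(flux_obs, flux_model, sigma):
    """Chi-squared of a synthetic spectrum against observations, absolute flux basis."""
    return float(np.sum(((flux_obs - flux_model) / sigma) ** 2))

rng = np.random.default_rng(1)
wave = np.linspace(900.0, 1700.0, 200)        # FUV wavelength grid (Angstroms)

def disk_model(mdot):
    """Toy stand-in for a real accretion-disk synthetic spectrum:
    flux simply scales with the mass-transfer rate mdot."""
    return mdot * 1e8 * (wave / 1000.0) ** -2.3

true_mdot = 5.0e-9                            # M_sun/yr, the value the paper selects
obs = disk_model(true_mdot) + rng.normal(0.0, 0.01, wave.size)

grid = np.array([1e-9, 3e-9, 5e-9, 7e-9, 9e-9])   # hypothetical model grid
chis = [chi2(obs, disk_model(m), 0.01) for m in grid]
best = grid[int(np.argmin(chis))]
```

The real analysis replaces the toy `disk_model` with full synthetic spectra over Teff profiles, but the selection step is this same minimization.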
NASA Astrophysics Data System (ADS)
Signell, R. P.; Camossi, E.
2015-11-01
Work over the last decade has resulted in standardized web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by: (1) making it simple for providers to enable web service access to existing output files; (2) using technology that is free, and that is easy to deploy and configure; and (3) providing tools to communicate with web services that work in existing research environments. We present a simple, local brokering approach that lets modelers continue producing custom data, but virtually aggregates and standardizes the data using NetCDF Markup Language. The THREDDS Data Server is used for data delivery, pycsw for data search, NCTOOLBOX (Matlab) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS. (Mention of trade names or commercial products does not constitute endorsement or recommendation for use by the US Government.)
A Simple Mathematical Model for Standard Model of Elementary Particles and Extension Thereof
NASA Astrophysics Data System (ADS)
Sinha, Ashok
2016-03-01
An algebraically (and geometrically) simple model representing the masses of the elementary particles in terms of the interaction (strong, weak, electromagnetic) constants is developed, including the Higgs bosons. The predicted Higgs boson mass is identical to that discovered by the LHC experimental programs, while the possibility of additional Higgs bosons (and their masses) is indicated. The model can be analyzed to explain and resolve many puzzles of particle physics and cosmology, including the neutrino masses and mixing; the origin of the proton mass and the mass difference between the proton and the neutron; the big bang and cosmological inflation; the Hubble expansion; etc. A novel interpretation of the model in terms of quaternions and rotation in the six-dimensional space of the elementary particle interactions, or, equivalently, in six-dimensional spacetime, is presented. Interrelations among particle masses are derived theoretically. A new approach for defining the interaction parameters, leading to an elegant and symmetrical diagram, is delineated. Generalization of the model to include supersymmetry is illustrated without recourse to complex mathematical formulation and free from any ambiguity. This abstract represents some results of the author's independent theoretical research in particle physics, with possible connection to superstring theory. However, only very elementary mathematics and physics are used in the presentation.
Integrated Standardized Database/Model Management System: Study management concepts and requirements
Baker, R.; Swerdlow, S.; Schultz, R.; Tolchin, R.
1994-02-01
Data-sharing among planners and planning software for utility companies is the motivation for creating the Integrated Standardized Database (ISD) and Model Management System (MMS). The purpose of this document is to define the requirements for the ISD/MMS study management component in a manner that will enhance the use of the ISD. After an analysis period which involved EPRI member utilities across the United States, the study concept was formulated. It is defined in terms of its entities, relationships and its support processes, specifically for implementation as the key component of the MMS. From the study concept definition, requirements are derived. There are unique requirements, such as the necessity to interface with DSManager, EGEAS, IRPManager, MIDAS and UPM and there are standard information systems requirements, such as create, modify, delete and browse data. An initial ordering of the requirements is established, with a section devoted to future enhancements.
Falk, Carl F; Savalei, Victoria
2011-01-01
Popular computer programs print two versions of Cronbach's alpha: unstandardized alpha, α(Σ), based on the covariance matrix, and standardized alpha, α(R), based on the correlation matrix. Sources that accurately describe the theoretical distinction between the two coefficients are lacking, which can lead to the misconception that the differences between α(R) and α(Σ) are unimportant and to the temptation to report the larger coefficient. We explore the relationship between α(R) and α(Σ) and the reliability of the standardized and unstandardized composite under three popular measurement models; we clarify the theoretical meaning of each coefficient and conclude that researchers should choose an appropriate reliability coefficient based on theoretical considerations. We also illustrate that α(R) and α(Σ) estimate the reliability of different composite scores, and in most cases cannot be substituted for one another. PMID:21859284
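The distinction between the two coefficients is easy to make concrete: the same formula applied to the covariance matrix yields α(Σ) and applied to the correlation matrix yields α(R). A small Python sketch with invented item scores for illustration:

```python
import numpy as np

def cronbach_alpha(m):
    """Cronbach's alpha from a k x k item covariance or correlation matrix."""
    k = m.shape[0]
    return (k / (k - 1)) * (1 - np.trace(m) / m.sum())

# Invented item scores: 3 items (rows) scored by 5 respondents (columns)
scores = np.array([[2, 4, 3, 5, 1],
                   [3, 5, 2, 4, 2],
                   [2, 5, 3, 5, 1]], dtype=float)

alpha_unstd = cronbach_alpha(np.cov(scores))       # alpha(Sigma), covariance-based
alpha_std = cronbach_alpha(np.corrcoef(scores))    # alpha(R), correlation-based
```

The two values generally differ unless all items have equal variances, which is exactly why the article cautions against treating them as interchangeable or reporting whichever is larger.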
b → sℓ⁺ℓ⁻ Decays in and Beyond the Standard Model
Hiller, Gudrun
2000-08-11
The authors briefly review the status of rare radiative and semileptonic b → s(γ, ℓ⁺ℓ⁻) (ℓ = e, μ) decays. They discuss possible signatures of new physics in these modes and emphasize the role of the exclusive channels. In particular, measurements of the forward-backward asymmetry in B → K*ℓ⁺ℓ⁻ decays and its zero provide a clean test of the Standard Model, complementary to studies in b → sγ decays. Further, the forward-backward CP asymmetry in B → K*ℓ⁺ℓ⁻ decays is sensitive to possible non-standard sources of CP violation mediated by flavor-changing neutral current Z-penguins.
NASA Technical Reports Server (NTRS)
Hildreth, Bruce L.; Jackson, E. Bruce
2009-01-01
The American Institute of Aeronautics and Astronautics (AIAA) Modeling and Simulation Technical Committee is in final preparation of a new standard for the exchange of flight dynamics models. The standard will become an ANSI standard and is under consideration for submission to ISO for acceptance by the international community. The standard has some aspects that should provide benefits to the simulation training community. Use of the new standard by the training simulation community will reduce development, maintenance, and technical refresh investment on each device. Furthermore, it will significantly lower the cost of performing model updates to improve fidelity or expand the envelope of the training device. Higher flight fidelity should result in better transfer of training, a direct benefit to the pilots under instruction. Costs of adopting the standard are minimal and should be paid back within the cost of the first use for that training device. The standard achieves these advantages by making it easier to update the aerodynamic model. It provides a standard format for the model in a custom eXtensible Markup Language (XML) grammar, the Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML). It employs an existing XML grammar, MathML, to describe the aerodynamic model in an input data file, eliminating the requirement for actual software compilation. The major components of the aero model become simply an input data file, and updates are simply new XML input files. It includes naming and axis system conventions to further simplify the exchange of information.
NASA Astrophysics Data System (ADS)
Meisel, David D.; Szasz, Csilla; Kero, Johan
2008-06-01
The Arecibo UHF radar is able to detect the head echoes of micron-sized meteoroids up to velocities of 75 km/s over a height range of 80-140 km. Because of their small size there are many uncertainties involved in calculating their above-atmosphere properties as needed for orbit determination. An ab initio model of meteor ablation has been devised that should work over the mass range 10⁻¹⁶ kg to 10⁻⁷ kg, but the faint end of this range cannot be observed by any other method and so direct verification is not possible. On the other hand, the EISCAT UHF radar system detects micrometeors in the high-mass part of this range, and its observations can be fit to a "standard" ablation model and calibrated to optical observations (Szasz et al. 2007). In this paper, we present a preliminary comparison of the two models, one observationally confirmable. Among the features of the ab initio model that differ from the "standard" model are: (1) it uses the experimentally based low-pressure vaporization theory of O'Hanlon (A User's Guide to Vacuum Technology, 2003) for ablation; (2) it uses velocity-dependent functions fit from experimental data on heat transfer, luminosity, and ionization efficiencies measured by Friichtenicht and Becker (NASA Special Publication 319: 53, 1973) for micron-sized particles; (3) it assumes a density and temperature dependence of the specific heats of the micrometeoroids and their ablation products; (4) it assumes a density- and size-dependent value for the thermal emissivity; and (5) it uses a unified synthesis of experimental data for the most important meteoroid elements and their oxides through least-squares fits (as functions of temperature, density, and/or melting point) of the tables of thermodynamic parameters given in Weast (CRC Handbook of Physics and Chemistry, 1984), Gray (American Institute of Physics Handbook, 1972), and Cox (Allen's Astrophysical Quantities, 2000). This utilization of mostly experimentally determined data is the main reason for
Mid-infrared interferometry of Seyfert galaxies: Challenging the Standard Model
NASA Astrophysics Data System (ADS)
López-Gonzaga, N.; Jaffe, W.
2016-06-01
Aims: We aim to find torus models that explain the observed high-resolution mid-infrared (MIR) measurements of active galactic nuclei (AGN). Our goal is to determine the general properties of the circumnuclear dusty environments. Methods: We used the MIR interferometric data of a sample of AGNs provided by the instrument MIDI/VLTI and followed a statistical approach to compare the observed distribution of the interferometric measurements with the distributions computed from clumpy torus models. We mainly tested whether the diversity of Seyfert galaxies can be described using the Standard Model idea, where differences are solely due to a line-of-sight (LOS) effect. In addition to the LOS effects, we performed different realizations of the same model to include possible variations that are caused by the stochastic nature of the dusty models. Results: We find that our entire sample of AGNs, which contains both Seyfert types, cannot be explained merely by an inclination effect and by including random variations of the clouds. Instead, we find that each subset of Seyfert type can be explained by different models, where the filling factor at the inner radius seems to be the largest difference. For the type 1 objects we find that about two thirds of our objects could also be described using a dusty torus similar to the type 2 objects. For the remaining third, it was not possible to find a good description using models with high filling factors, while we found good fits with models with low filling factors. Conclusions: Within our model assumptions, we did not find one single set of model parameters that could simultaneously explain the MIR data of all 21 AGN with LOS effects and random variations alone. We conclude that at least two distinct cloud configurations are required to model the differences in Seyfert galaxies, with volume-filling factors differing by a factor of about 5-10. A continuous transition between the two types cannot be excluded.
NASA Astrophysics Data System (ADS)
Gronewold, A. D.; Ritzenthaler, A.; Fry, L. M.; Anderson, E. J.
2012-12-01
There is a clear need in the water resource and public health management communities to develop and test modeling systems which provide robust predictions of water quality and water quality standard violations, particularly in coastal communities. These predictions have the potential to supplement, or even replace, conventional human health protection strategies which (in the case of controlling public access to beaches, for example) are often based on day-old fecal indicator bacteria monitoring results. Here, we present a coupled modeling system which builds upon recent advancements in watershed-scale hydrological modeling and coastal hydrodynamic modeling, including the evolution of the Huron-Erie Connecting Waterways Forecasting System (HECWFS), developed through a partnership between NOAA's Great Lakes Environmental Research Laboratory (GLERL) and the University of Michigan Cooperative Institute for Limnology and Ecosystems Research (CILER). Our study is based on applying the modeling system to a popular beach in the metro-Detroit (Michigan, USA) area and implementing a routine shoreline monitoring program to help assess model forecasting skill. This research presents an important stepping stone towards the application of similar modeling systems in frequently-closed beaches throughout the Great Lakes region.
Dynamic output feedback stabilization for nonlinear systems based on standard neural network models.
Liu, Meiqin
2006-08-01
A neural-model-based control design for some nonlinear systems is addressed. The design approach is to approximate the nonlinear systems with neural networks whose activation functions satisfy the sector conditions. A novel neural network model, termed the standard neural network model (SNNM), is advanced for describing this class of approximating neural networks. Full-order dynamic output feedback control laws are then designed for the SNNMs with inputs and outputs to stabilize the closed-loop systems. The control design equations are shown to be a set of linear matrix inequalities (LMIs) which can be easily solved by various convex optimization algorithms to determine the control signals. It is shown that most neural-network-based nonlinear systems can be transformed into input-output SNNMs, so that stabilization can be synthesized in a unified way. Finally, some application examples are presented to illustrate the control design procedures.
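As an illustration of the Lyapunov-based reasoning behind such LMI stability conditions (a minimal sketch, not the paper's SNNM synthesis; the matrices below are arbitrary examples), a certificate A^T P + P A ≺ 0 can be found by solving the corresponding Lyapunov equation with plain NumPy:

```python
import numpy as np

def solve_continuous_lyapunov(A, Q):
    """Solve A^T P + P A = -Q by vectorizing with Kronecker products
    (row-major vec: vec(A^T P) = kron(A^T, I) vec(P), vec(P A) = kron(I, A^T) vec(P))."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(A.T, I) + np.kron(I, A.T)
    return np.linalg.solve(K, -Q.flatten()).reshape(n, n)

# A stable example system (eigenvalues -1 and -3, in the open left half-plane)
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A, Q)

# A symmetric positive-definite P is a feasible point of the stability LMI
# A^T P + P A < 0, certifying asymptotic stability of dx/dt = A x.
eigs = np.linalg.eigvalsh((P + P.T) / 2)
print(eigs.min() > 0)  # True
```

In the paper's setting the LMIs additionally involve the sector bounds on the activation functions and the controller parameters, and are solved with general-purpose convex optimization tools rather than this closed-form special case.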
Ignition-and-Growth Modeling of NASA Standard Detonator and a Linear Shaped Charge
NASA Technical Reports Server (NTRS)
Oguz, Sirri
2010-01-01
The main objective of this study is to quantitatively investigate the ignition and shock sensitivity of NASA Standard Detonator (NSD) and the shock wave propagation of a linear shaped charge (LSC) after being shocked by NSD flyer plate. This combined explosive train was modeled as a coupled Arbitrary Lagrangian-Eulerian (ALE) model with LS-DYNA hydro code. An ignition-and-growth (I&G) reactive model based on unreacted and reacted Jones-Wilkins-Lee (JWL) equations of state was used to simulate the shock initiation. Various NSD-to-LSC stand-off distances were analyzed to calculate the shock initiation (or failure to initiate) and detonation wave propagation along the shaped charge. Simulation results were verified by experimental data which included VISAR tests for NSD flyer plate velocity measurement and an aluminum target severance test for LSC performance verification. Parameters used for the analysis were obtained from various published data or by using CHEETAH thermo-chemical code.
Scale Invariant Extension of the Standard Model with a Strongly Interacting Hidden Sector
Hur, Taeil; Ko, P.
2011-04-08
We present a scale invariant extension of the standard model with a new QCD-like strong interaction in the hidden sector. A scale Λ_H is dynamically generated in the hidden sector by dimensional transmutation, and chiral symmetry breaking occurs in the hidden sector. This scale is transmitted to the SM sector by a real singlet scalar messenger S and can trigger electroweak symmetry breaking. Thus all the mass scales in this model arise from the hidden sector scale Λ_H, which has a quantum mechanical origin. Furthermore, the lightest hadrons in the hidden sector are stable by the flavor conservation of the hidden sector strong interaction, and could be the cold dark matter (CDM). We study collider phenomenology, relic density, and direct detection rates of the CDM of this model.
Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases
NASA Technical Reports Server (NTRS)
Woodruff, Stephen
2016-01-01
NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The streamwise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.
Higgs Boson Production via Gluon Fusion in the Standard Model with four Generations
Li Qiang; Spira, Michael; Gao, Jun; Li Chongsheng
2011-05-01
Higgs bosons can be produced copiously at the LHC via gluon fusion induced by top and bottom quark loops, and can be enhanced strongly if extra heavy quarks exist. We present results for Higgs+zero-, one- and two-jet production at the LHC operating at 7 and 14 TeV collision energy, in both the standard model and the 4th generation model, by evaluating the corresponding heavy quark triangle, box, and pentagon Feynman diagrams. We compare the results by using the effective Higgs-gluon interactions in the limit of heavy quarks with the cross sections including the full mass dependences. NLO effects on Higgs+zero-jet production rate with full mass dependence are presented for the first time consistently in the 4th generation model. Our results improve the theoretical basis for fourth generation effects on the Higgs boson search at the LHC.
Criticality and the onset of ordering in the standard Vicsek model.
Baglietto, Gabriel; Albano, Ezequiel V; Candia, Julián
2012-12-01
Experimental observations of animal collective behaviour have shown stunning evidence for the emergence of large-scale cooperative phenomena resembling phase transitions in physical systems. Indeed, quantitative studies have found scale-free correlations and critical behaviour consistent with the occurrence of continuous, second-order phase transitions. The standard Vicsek model (SVM), a minimal model of self-propelled particles in which their tendency to align with each other competes with perturbations controlled by a noise term, appears to capture the essential ingredients of critical flocking phenomena. In this paper, we review recent finite-size scaling and dynamical studies of the SVM, which present a full characterization of the continuous phase transition through dynamical and critical exponents. We also present a complex network analysis of SVM flocks and discuss the onset of ordering in connection with XY-like spin models.
Resonant leptogenesis in the minimal B-L extended standard model at TeV
Iso, Satoshi; Orikasa, Yuta; Okada, Nobuchika
2011-05-01
We investigate the resonant leptogenesis scenario in the minimal B-L extended standard model with the B-L symmetry breaking at the TeV scale. Through detailed analysis of the Boltzmann equations, we show how much the resultant baryon asymmetry via leptogenesis is enhanced or suppressed, depending on the model parameters, in particular, the neutrino Dirac-Yukawa couplings and the TeV scale Majorana masses of heavy degenerate neutrinos. In order to consider a realistic case, we impose a simple ansatz for the model parameters and analyze the neutrino oscillation parameters and the baryon asymmetry via leptogenesis as a function of only a single CP phase. We find that for a fixed CP phase all neutrino oscillation data and the observed baryon asymmetry of the present Universe can be simultaneously reproduced.
A perturbative approach to the redshift space power spectrum: beyond the Standard Model
NASA Astrophysics Data System (ADS)
Bose, Benjamin; Koyama, Kazuya
2016-08-01
We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to third order. We focus on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at scales relevant to the SPT framework, as well as to capture relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application in the light of upcoming high-precision RSD data.
Le pompage optique naturel dans le milieu astrophysique
NASA Astrophysics Data System (ADS)
Pecker, J.-C.
The title of this lecture covers only a part of it: the study of non-LTE situations has become considerably important in astrophysics, both in stellar atmospheres and, even more, in the study of fortuitous coincidences as a mechanism for the formation of nebular emission-line spectra or of interstellar molecular « masers ». Another part of this talk underlines the role of Kastler in his time, and describes his warm personality through his public reactions to nuclear armament, the Vietnam and Algerian wars, and the problems of political refugees... Kastler was a great scientist; he was also a courageous humanist. 1976 : Les accords nucléaires du Brésil : allocution d'ouverture (19 mars). Colloque sur le sujet ci-dessus. 1976 : La promotion de la culture dans le nouvel ordre économique international, allocution à l'occasion d'une table ronde sur ce thème par l'UNESCO (23-27 juin 1976) ; « Sciences et Techniques », octobre 1976. 1979 : La bête immonde (avec J.-C. Pecker), « Le Matin », 20 mars. 1979 : Appel à nos ministres (avec J.-C. Pecker), « Le Monde », 13 décembre. 1979 : Le flou, le ténébreux, l'irrationnel (avec J.-C. Pecker), « Le Monde », 14 septembre. 1980 : Education à la paix, Préface, in : Publ. UNESCO. 1981 : Le vrai danger, « Le Monde », 6 août 1981. 1982 : Nucléaire civil et militaire, « Le Monde », 1er juin 1982. 1982 : Les scientifiques face à la perspective d'holocauste nucléaire (texte inédit).
Charge quantization and the Standard Model from the CP2 and CP3 nonlinear σ-models
NASA Astrophysics Data System (ADS)
Hellerman, Simeon; Kehayias, John; Yanagida, Tsutomu T.
2014-04-01
We investigate charge quantization in the Standard Model (SM) through a CP2 nonlinear sigma model (NLSM), SU(3)/(SU(2)×U(1)), and a CP3 model, SU(4)/(SU(3)×U(1)). We also generalize to any CP^k model. Charge quantization follows from the consistency and dynamics of the NLSM, without a monopole or Grand Unified Theory, as shown in our earlier work on the CP1 model (arXiv:1309.0692). We find that representations of the matter fields under the unbroken non-abelian subgroup dictate their charge quantization under the U(1) factor. In the CP2 model the unbroken group is identified with the weak and hypercharge groups of the SM, and the Nambu-Goldstone boson (NGB) has the quantum numbers of a SM Higgs. There is the intriguing possibility of a connection with the vanishing of the Higgs self-coupling at the Planck scale. Interestingly, with some minor assumptions (no vector-like matter and minimal representations) and starting with a single quark doublet, anomaly cancellation requires the matter structure of a generation in the SM. A similar analysis holds in the CP3 model, with the unbroken group identified with QCD and hypercharge, and the NGB having the up quark as a partner in a supersymmetric model. This can motivate solving the strong CP problem with a vanishing up quark mass.
Chatrchyan, Serguei; et al.,
2014-01-21
A search for the standard model Higgs boson (H) decaying to b b-bar when produced in association with a weak vector boson (V) is reported for the following channels: W(mu nu)H, W(e nu)H, W(tau nu)H, Z(mu mu)H, Z(e e)H, and Z(nu nu)H. The search is performed in data samples corresponding to integrated luminosities of up to 5.1 inverse femtobarns at sqrt(s) = 7 TeV and up to 18.9 inverse femtobarns at sqrt(s) = 8 TeV, recorded by the CMS experiment at the LHC. An excess of events is observed above the expected background with a local significance of 2.1 standard deviations for a Higgs boson mass of 125 GeV, consistent with the expectation from the production of the standard model Higgs boson. The signal strength corresponding to this excess, relative to that of the standard model Higgs boson, is 1.0 +/- 0.5.
LHC signals of a B-L supersymmetric standard model CP-even Higgs boson
NASA Astrophysics Data System (ADS)
Hammad, A.; Khalil, S.; Moretti, S.
2016-06-01
We study the scope of the Large Hadron Collider in accessing a neutral Higgs boson of the B-L supersymmetric standard model. After assessing the surviving parameter space configurations following the Run 1 data taking, we investigate the possibilities of detecting this object during Run 2. For the model configurations in which the mixing between such a state and the discovered standard-model-like Higgs boson is non-negligible, there exist several channels enabling its discovery over a mass range spanning from ≈140 to ≈500 GeV. For a heavier Higgs state, with mass above 250 GeV (i.e., twice the mass of the Higgs state discovered in 2012), the hallmark signature is its decay into two such 125 GeV scalars, h′ → hh, where hh → bb̄γγ. For a lighter Higgs state, with mass of order 140 GeV, three channels are accessible: γγ, Zγ, and ZZ, wherein the Z boson decays leptonically. In all such cases, significances above discovery can occur for already planned luminosities at the CERN machine.
Ferrer, Francesc; Krauss, Lawrence M.; Profumo, Stefano
2006-12-01
We explore the prospects for indirect detection of neutralino dark matter in supersymmetric models with an extended Higgs sector (next-to-minimal supersymmetric standard model, or NMSSM). We compute, for the first time, one-loop amplitudes for NMSSM neutralino pair annihilation into two photons and two gluons, and point out that extra diagrams (with respect to the minimal supersymmetric standard model, or MSSM), featuring a potentially light CP-odd Higgs boson exchange, can strongly enhance these radiative modes. Expected signals in neutrino telescopes due to the annihilation of relic neutralinos in the Sun and in the Earth are evaluated, as well as the prospects of detection of a neutralino annihilation signal in space-based gamma-ray, antiproton and positron search experiments, and at low-energy antideuteron searches. We find that in the low mass regime the signals from capture in the Earth are enhanced compared to the MSSM, and that NMSSM neutralinos have a remote possibility of affecting solar dynamics. Also, antimatter experiments are an excellent probe of galactic NMSSM dark matter. We also find enhanced two-photon decay modes that make the possibility of the detection of a monochromatic gamma-ray line within the NMSSM more promising than in the MSSM, although likely below the sensitivity of next generation gamma-ray telescopes.
VoICE: A semi-automated pipeline for standardizing vocal analysis across models
Burkett, Zachary D.; Day, Nancy F.; Peñagarikano, Olga; Geschwind, Daniel H.; White, Stephanie A.
2015-01-01
The study of vocal communication in animal models provides key insight to the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization “types” by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories. PMID:26018425
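The clustering step described above can be sketched in miniature. The following is a hypothetical stand-in for the VoICE dendrogram construction, not the published code: a greedy single-linkage merge over a toy similarity matrix, stopped at an assumed similarity threshold:

```python
import numpy as np

def single_linkage_clusters(sim, threshold):
    """Greedy agglomerative (single-linkage) clustering on a symmetric
    similarity matrix: repeatedly merge the pair of clusters with the
    highest inter-cluster similarity, while it exceeds `threshold`.
    A minimal stand-in for dendrogram pruning; not the VoICE implementation."""
    clusters = [{i} for i in range(len(sim))]
    while len(clusters) > 1:
        best, pair = threshold, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = max(sim[i][j] for i in clusters[a] for j in clusters[b])
                if s > best:
                    best, pair = s, (a, b)
        if pair is None:
            break  # no pair is similar enough to merge
        a, b = pair
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

# Two well-separated groups of vocalizations (toy similarity matrix)
sim = np.array([[1.0, 0.9, 0.1, 0.2],
                [0.9, 1.0, 0.2, 0.1],
                [0.1, 0.2, 1.0, 0.8],
                [0.2, 0.1, 0.8, 1.0]])
types = sorted(sorted(c) for c in single_linkage_clusters(sim, 0.5))
print(types)  # → [[0, 1], [2, 3]]
```

In VoICE the similarity matrix comes from pairwise spectral scoring of vocalizations, and the cut level is chosen by an automated pruning algorithm rather than a fixed threshold.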
VandeVord, Pamela J; Leonardi, Alessandra Dal Cengio; Ritzel, David
2016-01-01
Recent military combat has heightened awareness to the complexity of blast-related traumatic brain injuries (bTBI). Experiments using animal, cadaver, or biofidelic physical models remain the primary measures to investigate injury biomechanics as well as validate computational simulations, medical diagnostics and therapies, or protection technologies. However, blast injury research has seen a range of irregular and inconsistent experimental methods for simulating blast insults generating results which may be misleading, cannot be cross-correlated between laboratories, or referenced to any standard for exposure. Both the US Army Medical Research and Materiel Command and the National Institutes of Health have noted that there is a lack of standardized preclinical models of TBI. It is recommended that the blast injury research community converge on a consistent set of experimental procedures and reporting of blast test conditions. This chapter describes the blast conditions which can be recreated within a laboratory setting and methodology for testing in vivo models within the appropriate environment. PMID:27604715
NASA Technical Reports Server (NTRS)
Sakuraba, K.; Tsuruda, Y.; Hanada, T.; Liou, J.-C.; Akahoshi, Y.
2007-01-01
This paper summarizes two new satellite impact tests conducted in order to investigate the outcome of low- and hyper-velocity impacts on two identical target satellites. The first experiment was performed at a low velocity of 1.5 km/s using a 40-gram aluminum alloy sphere, whereas the second was performed at a hyper-velocity of 4.4 km/s using a 4-gram aluminum alloy sphere, launched by a two-stage light gas gun at the Kyushu Institute of Technology. To date, approximately 1,500 fragments from each impact test have been collected for detailed analysis. Each piece was analyzed based on the method used in the NASA Standard Breakup Model 2000 revision. The detailed analysis leads to two conclusions: 1) the similarity in the mass distribution of fragments between low- and hyper-velocity impacts encourages the development of a general-purpose distribution model applicable over a wide impact velocity range, and 2) the difference in the area-to-mass ratio distribution between the impact experiments and the NASA standard breakup model suggests describing the area-to-mass ratio by a bi-normal distribution.
NASA Astrophysics Data System (ADS)
Appelt, Veit; Shvetsov, Vladimir
2006-04-01
For projects concerning the modification of urban structures or landscapes, it is essential to have a visualisation before, during, and after the planning. It conveys an impression of existing city structures or of newly planned buildings, roads, and railways in 3D, and it helps to gain public acceptance. The design of such constructions makes high demands on geometry and planning technology. The construction project, as a 3D object, must therefore be assessed as a whole; only this leads to a comprehensive evaluation of alignment, design, and the resulting safety. On the basis of surveying and planning data, a 3D model is fitted together from several information levels.
The GeoDataPortal: A Standards-based Environmental Modeling Data Access and Manipulation Toolkit
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Kunicki, T.; Booth, N.; Suftin, I.; Zoerb, R.; Walker, J.
2010-12-01
Environmental modelers from fields of study such as climatology, hydrology, geology, and ecology rely on many data sources and processing methods that are common across these disciplines. Interest in inter-disciplinary, loosely coupled modeling and data sharing is increasing among scientists from the USGS, other agencies, and academia. For example, hydrologic modelers need downscaled climate change scenarios and land cover data summarized for the watersheds they are modeling. Subsequently, ecological modelers are interested in soil moisture information for a particular habitat type as predicted by the hydrologic modeler. The USGS Center for Integrated Data Analytics Geo Data Portal (GDP) project seeks to facilitate this loosely coupled modeling and data sharing through broadly applicable open-source web processing services. These services simplify and streamline the time-consuming and resource-intensive tasks that are barriers to inter-disciplinary collaboration. The GDP framework includes a catalog describing projects, models, data, processes, and how they relate. Using newly introduced data, or sources already known to the catalog, the GDP facilitates access to subsets and common derivatives of data in numerous formats on disparate web servers. The GDP performs many of the critical functions needed to summarize data sources into modeling units regardless of scale or volume. A user can specify their analysis zones or modeling units as an Open Geospatial Consortium (OGC) standard Web Feature Service (WFS). Utilities to cache Shapefiles and other common GIS input formats have been developed to aid in making the geometry available for processing via WFS. Dataset access in the GDP relies primarily on the Unidata NetCDF-Java library’s common data model. Data transfer relies on methods provided by Unidata’s Thematic Real-time Environmental Data Distribution System Data Server (TDS). TDS services of interest include the Open-source Project for a Network Data Access Protocol
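The OGC WFS access pattern mentioned above can be illustrated by constructing a standard GetFeature request; the endpoint and feature type name below are hypothetical examples, not actual GDP services:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, version="1.1.0"):
    """Build an OGC WFS GetFeature key-value-pair request URL.
    The base_url and type_name here are hypothetical; any WFS-compliant
    server accepts requests of this shape."""
    params = {
        "service": "WFS",
        "version": version,
        "request": "GetFeature",
        "typeName": type_name,
    }
    return base_url + "?" + urlencode(params)

url = wfs_getfeature_url("https://example.org/geoserver/wfs", "watersheds:huc8")
print("request=GetFeature" in url)  # True
```

A client such as the GDP would fetch this URL to retrieve the user's analysis-zone geometries (e.g., watershed polygons) as GML for use in subsequent summarization steps.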
40 CFR 85.2203 - Short test standards for 1981 and later model year light-duty vehicles.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Performance Warranty eligibility (that is, 1981 and later model year light-duty vehicles at low altitude and 1982 and later model year vehicles at high altitude to which high altitude certification standards of 1... later model year light-duty vehicles at low altitude and 1982 and later model year vehicles at...
mr: A C++ library for the matching and running of the Standard Model parameters
NASA Astrophysics Data System (ADS)
Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.
2016-09-01
We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
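For orientation, the simplest instance of the kind of running that mr performs at three loops is the one-loop QCD evolution of the strong coupling. The toy sketch below (not the mr library or its API) uses the closed-form one-loop solution:

```python
import math

def alpha_s_one_loop(mu, alpha_s_mz, mz=91.1876, nf=5):
    """One-loop QCD running of the strong coupling (toy illustration,
    not the mr library): 1/a(mu) = 1/a(MZ) + 2*b0*ln(mu/MZ),
    with beta-function coefficient b0 = (33 - 2*nf) / (12*pi)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_s_mz / (1 + 2 * b0 * alpha_s_mz * math.log(mu / mz))

# Asymptotic freedom: the coupling decreases toward higher scales.
print(alpha_s_one_loop(1000.0, 0.118) < 0.118)  # True
```

The real library solves the full coupled system of Standard Model RGEs (gauge, Yukawa, and Higgs couplings) at three loops, with two-loop matching to low-energy observables, but the structure (initial condition at a reference scale, then evolution) is the same.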
On the standard model predictions for R_K and R_{K^*}
NASA Astrophysics Data System (ADS)
Bordone, Marzia; Isidori, Gino; Pattori, Andrea
2016-08-01
We evaluate the impact of radiative corrections in the ratios Γ[B → M μ⁺μ⁻]/Γ[B → M e⁺e⁻] when the meson M is a K or a K*. Employing the cuts on m²_ℓℓ and on the reconstructed B-meson mass presently applied by the LHCb Collaboration, such corrections do not exceed a few percent. Moreover, their effect is well described (and corrected for) by existing Monte Carlo codes. Our analysis reinforces the interest of these observables as clean probes of physics beyond the Standard Model.
NASA Astrophysics Data System (ADS)
Dubinin, M. N.; Petrova, E. Yu.
2016-07-01
Constraints on the parameter space of the Minimal Supersymmetric Standard Model (MSSM) that are imposed by the experimentally observed mass of the Higgs boson (m_H = 125 GeV), upon taking into account radiative corrections within an effective theory for the Higgs sector in the decoupling limit, are examined. It is also shown that simplified approximations for radiative corrections in the MSSM Higgs sector can reduce, to a rather high degree of precision, the dimensionality of the multidimensional MSSM parameter space to two.
Beyond standard model searches in the MiniBooNE experiment
Katori, Teppei; Conrad, Janet M.
2014-08-05
The MiniBooNE experiment has contributed substantially to beyond standard model searches in the neutrino sector. The experiment was originally designed to test the Δm² ~ 1 eV² region of the sterile neutrino hypothesis by observing ν_e (ν̄_e) charged current quasielastic signals from a ν_μ (ν̄_μ) beam. MiniBooNE observed excesses of ν_e and ν̄_e candidate events in neutrino and antineutrino mode, respectively. To date, these excesses have not been explained within the neutrino standard model (νSM), the standard model extended for three massive neutrinos. Confirmation is required by future experiments such as MicroBooNE. MiniBooNE also provided an opportunity for precision studies of Lorentz violation. The results set strict limits for the first time on several parameters of the standard-model extension, the generic formalism for considering Lorentz violation. Most recently, an extension to MiniBooNE running, with a beam tuned in beam-dump mode, is being performed to search for dark sector particles. This review describes these studies, demonstrating that short baseline neutrino experiments
Testing the standard model by precision measurement of the weak charges of quarks.
Young, R D; Carlini, R D; Thomas, A W; Roche, J
2007-09-21
In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, places tight constraints on the size of possible contributions from physics beyond the standard model. Consequently, this result improves the lower bound on the scale of relevant new physics to approximately 1 TeV.
Order g² susceptibilities in the symmetric phase of the Standard Model
Bödeker, D.; Sangel, M.
2015-04-23
Susceptibilities of conserved charges such as baryon minus lepton number enter baryogenesis computations, since they provide the relationship between conserved charges and chemical potentials. Their next-to-leading order corrections are of order g, where g is a generic Standard Model coupling. They are due to soft Higgs boson exchange, and have been calculated recently, together with some order g² corrections. Here we compute the complete g² contributions. Close to the electroweak crossover the soft Higgs contribution is of order g², and is determined by the non-perturbative physics at the magnetic screening scale.
Gravitational waves from domain walls in the next-to-minimal supersymmetric standard model
Kadota, Kenji; Kawasaki, Masahiro; Saikawa, Ken’ichi
2015-10-16
The next-to-minimal supersymmetric standard model predicts the formation of domain walls due to the spontaneous breaking of the discrete Z_3 symmetry at the electroweak phase transition, and they collapse before the epoch of big bang nucleosynthesis if there exists a small bias term in the potential which explicitly breaks the discrete symmetry. Signatures of gravitational waves produced from these unstable domain walls are estimated and their parameter dependence is investigated. It is shown that the amplitude of gravitational waves becomes generically large in the decoupling limit, and that their frequency is low enough to be probed in future pulsar timing observations.
Experimental constraints from flavour changing processes and physics beyond the Standard Model
NASA Astrophysics Data System (ADS)
Gersabeck, M.; Gligorov, V. V.; Serra, N.
2012-08-01
Flavour physics has a long tradition of paving the way for direct discoveries of new particles and interactions. Results over the last decade have placed stringent bounds on the parameter space of physics beyond the Standard Model. Early results from the LHC, and from its dedicated flavour factory LHCb, have further tightened these constraints and reiterate the ongoing relevance of flavour studies. The experimental status of flavour observables in the charm and beauty sectors is reviewed, covering measurements of CP violation, neutral-meson mixing, and rare decays.