Insights: Simple Models for Teaching Equilibrium and Le Chatelier's Principle.
ERIC Educational Resources Information Center
Russell, Joan M.
1988-01-01
Presents three models that have been effective for teaching chemical equilibrium and Le Chatelier's principle: (1) the liquid transfer model, (2) the fish model, and (3) the teeter-totter model. Explains each model and its relation to Le Chatelier's principle. (MVL)
The standard cosmological model
NASA Astrophysics Data System (ADS)
Scott, D.
2006-06-01
The Standard Model of Particle Physics (SMPP) is an enormously successful description of high-energy physics, driving ever more precise measurements to find "physics beyond the standard model", as well as providing motivation for developing more fundamental ideas that might explain the values of its parameters. Simultaneously, a description of the entire three-dimensional structure of the present-day Universe is being built up painstakingly. Most of the structure is stochastic in nature, being merely the result of the particular realization of the "initial conditions" within our observable Universe patch. However, governing this structure is the Standard Model of Cosmology (SMC), which appears to require only about a dozen parameters. Cosmologists are now determining the values of these quantities with increasing precision to search for "physics beyond the standard model", as well as trying to develop an understanding of the more fundamental ideas that might explain the values of its parameters. Although it is natural to see analogies between the two Standard Models, some intrinsic differences also exist, which are discussed here. Nevertheless, a truly fundamental theory will have to explain both the SMPP and SMC, and this must include an appreciation of which elements are deterministic and which are accidental. Considering different levels of stochasticity within cosmology may make it easier to accept that physical parameters in general might have a nondeterministic aspect.
Peskin, M.E.
1997-05-01
These lectures constitute a short course in "Beyond the Standard Model" for students of experimental particle physics. The author discusses the general ideas which guide the construction of models of physics beyond the Standard Model. The central principle, the one which most directly motivates the search for new physics, is the search for the mechanism of the spontaneous symmetry breaking observed in the theory of weak interactions. To illustrate models of weak-interaction symmetry breaking, the author gives a detailed discussion of the idea of supersymmetry and that of new strong interactions at the TeV energy scale. He discusses experiments that will probe the details of these models at future pp and e^{+}e^{-} colliders.
Calpas, Betty Constante
2010-06-11
This thesis is organized around three main parts: the first presents the theoretical and experimental framework, as well as the physics objects used in the analysis; the second covers the service work I performed on the calorimeter; and the third is the search for the Higgs boson in the channel ZH → e^{+}e^{-}b\bar{b}. The thesis has the following structure: Chapter 1 is an introduction to the Standard Model of particle physics and the Higgs mechanism; Chapter 2 is an overview of the Tevatron accelerator complex at Fermilab and the DØ detector; Chapter 3 introduces the physics objects used in this thesis; Chapter 4 presents the study made on correcting the energy measured in the calorimeter; Chapter 5 describes the study of certification of electrons in the calorimeter; Chapter 6 describes the study of certification of electrons in the intercryostat region of the calorimeter; Chapter 7 details the analysis of the search for Higgs production in the channel ZH → e^{+}e^{-}b\bar{b}; and Chapter 8 presents the final results: upper limits on the production cross section of the Higgs boson over a range of low masses.
Marciano, W.J.
1994-03-01
In these lectures, my aim is to provide a survey of the standard model with emphasis on its renormalizability and electroweak radiative corrections. Since this is a school, I will try to be somewhat pedagogical by providing examples of loop calculations. In that way, I hope to illustrate some of the commonly employed tools of particle physics. With those goals in mind, I have organized my presentations as follows: In Section 2, renormalization is discussed from an applied perspective. The technique of dimensional regularization is described and used to define running couplings and masses. The utility of the renormalization group for computing leading logs is illustrated for the muon anomalous magnetic moment. In Section 3 electroweak radiative corrections are discussed. Standard model predictions are surveyed and used to constrain the top quark mass. The S, T, and U parameters are introduced and employed to probe for "new physics". The effect of Z′ bosons on low energy phenomenology is described. In Section 4, a detailed illustration of electroweak radiative corrections is given for atomic parity violation. Finally, in Section 5, I conclude with an outlook for the future.
Standardissimo. Les limitations théoriques du Modèle Standard. Quelles réponses y apporter?
NASA Astrophysics Data System (ADS)
Renard, F. M.
We present the status of the Standard Model of the strong, weak, and electromagnetic interactions. After a brief description of its three sectors, the gauge sector (radiation), the fermion sector (matter), and the scalar sector (mass generation), we emphasize the large number of free parameters and the arbitrary choices that had to be made in constructing the model. We highlight the unsolved technical problems and list the fundamental questions that remain unanswered. We then review the ideas and methods proposed to answer these questions. They essentially follow three different routes. The first consists in requiring more symmetry (extensions of the model, Left-Right symmetry, Grand Unification, Supersymmetry, ...). The second contains the various alternatives to the Standard Model involving modifications of certain sectors (for example the scalar sector in the Technicolor model) or, more radically, the hypothesis of a substructure of the leptons, the quarks, and the W and Z bosons themselves. A final route seeks to justify the peculiarities of the Standard Model and to relate its free parameters on the basis of principles of internal consistency of the model. The observable consequences of these various approaches are mentioned in each case.
Phenomenology beyond the standard model
Lykken, Joseph D.; /Fermilab
2005-03-01
An elementary review of models and phenomenology for physics beyond the Standard Model (excluding supersymmetry). The emphasis is on LHC physics. Based upon a talk given at the "Physics at LHC" conference, Vienna, 13-17 July 2004.
MODeLeR: A Virtual Constructivist Learning Environment and Methodology for Object-Oriented Design
ERIC Educational Resources Information Center
Coffey, John W.; Koonce, Robert
2008-01-01
This article contains a description of the organization and method of use of an active learning environment named MODeLeR (Multimedia Object Design Learning Resource), a tool designed to facilitate the learning of concepts pertaining to object modeling with the Unified Modeling Language (UML). MODeLeR was created to provide an authentic,…
Reference and Standard Atmosphere Models
NASA Technical Reports Server (NTRS)
Johnson, Dale L.; Roberts, Barry C.; Vaughan, William W.; Parker, Nelson C. (Technical Monitor)
2002-01-01
This paper describes the development of standard and reference atmosphere models along with the history of their origin and use since the mid 19th century. The first "Standard Atmospheres" were established by international agreement in the 1920's. Later some countries, notably the United States, also developed and published "Standard Atmospheres". The term "Reference Atmospheres" is used to identify atmosphere models for specific geographical locations. Range Reference Atmosphere Models developed first during the 1960's are examples of these descriptions of the atmosphere. This paper discusses the various models, scopes, applications and limitations relative to use in aerospace industry activities.
Colorado Model Content Standards: Science
ERIC Educational Resources Information Center
Colorado Department of Education, 2007
2007-01-01
The Colorado Model Content Standards for Science specify what all students should know and be able to do in science as a result of their school studies. Specific expectations are given for students completing grades K-2, 3-5, 6-8, and 9-12. Five standards outline the essential level of science knowledge and skills needed by Colorado citizens to…
CoLeMo: A Collaborative Learning Environment for UML Modelling
ERIC Educational Resources Information Center
Chen, Weiqin; Pedersen, Roger Heggernes; Pettersen, Oystein
2006-01-01
This paper presents the design, implementation, and evaluation of a distributed collaborative UML modelling environment, CoLeMo. CoLeMo is designed for students studying UML modelling. It can also be used as a platform for collaborative design of software. We conducted formative evaluations and a summative evaluation to improve the environment and…
The New Minimal Standard Model
Davoudiasl, Hooman; Kitano, Ryuichiro; Li, Tianjun; Murayama, Hitoshi
2005-01-13
We construct the New Minimal Standard Model that incorporates the new discoveries of physics beyond the Minimal Standard Model (MSM): Dark Energy, non-baryonic Dark Matter, neutrino masses, as well as baryon asymmetry and cosmic inflation, adopting the principle of minimal particle content and the most general renormalizable Lagrangian. We base the model purely on empirical facts rather than aesthetics. We need only six new degrees of freedom beyond the MSM. It is free from excessive flavor-changing effects, CP violation, too-rapid proton decay, problems with electroweak precision data, and unwanted cosmological relics. Any model of physics beyond the MSM should be measured against the phenomenological success of this model.
Standard model of knowledge representation
NASA Astrophysics Data System (ADS)
Yin, Wensheng
2016-09-01
Knowledge representation is at the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic link between the various knowledge representation methods, a unified knowledge representation model is necessary. Based on ontology, system theory, and control theory, a standard model of knowledge representation that reflects change in the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms, can express process knowledge, and at the same time has a strong problem-solving ability. In addition, the standard model of knowledge representation provides a way to handle imprecise and inconsistent knowledge.
CIM - A Manufacturing Paradigm (Le CIM - Un Nouveau Modele Industriel)
1986-07-01
...as most enterprises have done, refined the model of the "Industrial Revolution". We live in the age of the specialist. However, the model of sp...program will serve as an umbrella under which specific projects are planned, financed, managed, and implemented. Well defined corporate goals must be...assets. Through an integration of financing strategies an enterprise can focus on capital investment in shared, value-added assets such as databases
Model Standards Advance the Profession
ERIC Educational Resources Information Center
Journal of Staff Development, 2011
2011-01-01
Leadership by teachers is essential to serving the needs of students, schools, and the teaching profession. To that end, the Teacher Leadership Exploratory Consortium has developed Teacher Leader Model Standards to codify, promote, and support teacher leadership as a vehicle to transform schools for the needs of the 21st century. The Teacher…
Consistency Across Standards or Standards in a New Business Model
NASA Technical Reports Server (NTRS)
Russo, Dane M.
2010-01-01
Presentation topics include: standards in a changing business model, the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, the danger of over-prescriptive standards, the need for a balance between prescriptive and general standards, enabling versus inhibiting, characteristics of success-oriented standards, and conclusions. Additional slides include: NASA Procedural Requirements 8705.2B, which identifies human rating standards and requirements; draft health and medical standards for human rating; what has been done; government oversight models; examples of consistency from anthropometry; examples of inconsistency from air quality; and appendices of governmental and non-governmental human factors standards.
Neutrinos beyond the Standard Model
Valle, J.W.F.
1989-08-01
I review some basic aspects of neutrino physics beyond the Standard Model such as neutrino mixing and neutrino non-orthogonality, universality and CP violation in the lepton sector, total lepton number and lepton flavor violation, etc. These may lead to neutrino decays and oscillations, exotic weak decay processes, neutrinoless double-β decay, etc. Particle physics models are discussed where some of these processes can be sizable even in the absence of measurable neutrino masses. These may also substantially affect the propagation properties of solar and astrophysical neutrinos. 39 refs., 4 figs.
Gaillard, M.K.
1983-04-01
Focusing on the standard electroweak model, we examine physics issues which may be addressed with the help of intense beams of strange particles. I have collected a miscellany of issues, starting with some philosophical remarks on how things stand and where we should go from here. I will then focus on a case study: the decay K^{+} → π^{+} + nothing observable, which provides a nice illustration of the type of physics that can be probed through rare decays. Other topics I will mention are CP violation in K decays, hyperon and anti-hyperon physics, and a few random comments on other relevant phenomena.
Standard for Models and Simulations
NASA Technical Reports Server (NTRS)
Steele, Martin J.
2016-01-01
This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure essential requirements are applied to the design, development, and use of models and simulations (M&S), while ensuring acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which M&S may be developed, accepted, and used in support of NASA activities. As the M&S disciplines employed and application areas involved are broad, the common aspects of M&S across all NASA activities are addressed. The discipline-specific details of a given M&S should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with M&S-influenced decisions by ensuring the complete communication of the credibility of M&S results.
From Interactive Open Learner Modelling to Intelligent Mentoring: STyLE-OLM and Beyond
ERIC Educational Resources Information Center
Dimitrova, Vania; Brna, Paul
2016-01-01
STyLE-OLM (Dimitrova 2003 "International Journal of Artificial Intelligence in Education," 13, 35-78) presented a framework for interactive open learner modelling which entails the development of the means by which learners can "inspect," "discuss" and "alter" the learner model that has been jointly…
Conductivite dans le modele de Hubbard bi-dimensionnel a faible couplage
NASA Astrophysics Data System (ADS)
Bergeron, Dominic
The two-dimensional (2D) Hubbard model is often regarded as the minimal model for the high-critical-temperature copper-oxide superconductors (cuprates). On a square lattice, this model exhibits the phases that are common to all cuprates: the antiferromagnetic phase, the superconducting phase, and the so-called pseudogap phase. It has no exact solution; however, several approximate methods allow its properties to be studied numerically. Optical and transport properties are well characterized in the cuprates and are therefore good candidates for validating a theoretical model and for better understanding the physics of these materials. This thesis concerns the calculation of these properties for the 2D Hubbard model at weak to intermediate coupling. The method used is the Two-Particle Self-Consistent (TPSC) approach, which is non-perturbative and includes the effect of spin and charge fluctuations at all wavelengths. The complete derivation of the expression for the conductivity in the TPSC approach is presented. This expression contains the so-called vertex corrections, which account for correlations between quasiparticles. To make the numerical computation of these corrections feasible, algorithms using, among other things, fast Fourier transforms and cubic splines are developed. The calculations are done for the square lattice with nearest-neighbor hopping, around the antiferromagnetic critical point. At dopings below the critical point, the optical conductivity shows a mid-infrared bump at low temperature, as observed in several cuprates. In the resistivity as a function of temperature, insulating behavior is found in the pseudogap regime when vertex corrections are neglected, and metallic behavior when they are taken into account. Near the critical point, the resistivity is linear in T at low temperature and becomes
Le modele de Hubbard bidimensionnel a faible couplage: Thermodynamique et phenomenes critiques
NASA Astrophysics Data System (ADS)
Roy, Sebastien
A systematic study of the two-dimensional Hubbard model at weak coupling using the Two-Particle Self-Consistent (TPSC) theory across the temperature-doping-interaction-hopping diagram brings out the influence of magnetic fluctuations on the thermodynamic properties of the electronic system on a lattice. The renormalized classical regime at finite temperature near zero doping is marked by a spin correlation length that is large compared with the thermal de Broglie length, and is characterized by a drastic growth of the spin correlation length. This exponential growth at zero doping signals the presence of a peak in the specific heat as a function of temperature at low temperature. A crossover temperature is then associated with the temperature at which the spin correlation length equals the thermal de Broglie length. It is at this characteristic temperature, where the opening of the pseudogap in the spectral weight is observed, that the maximum of the specific-heat peak is located. The presence of this peak has consequences for the evolution of the chemical potential with doping when thermodynamic consistency is respected. The constraints imposed by the laws of thermodynamics make the evolution of the chemical potential with doping nontrivial. Among other results, it is shown that the chemical potential is proportional to the double occupancy, which is related to the local moment. Furthermore, a derivation of the scaling function of the zero-frequency spin susceptibility in the vicinity of a critical point unambiguously establishes the presence of a quantum critical point in doping for a given value of the interaction. This critical point, associated with a magnetic phase transition as a function of doping at zero temperature, induces nontrivial behavior in the physical properties of the system at finite temperature. The quantitative TPSC approach makes it possible to
Modeling in the Common Core State Standards
ERIC Educational Resources Information Center
Tam, Kai Chung
2011-01-01
The inclusion of modeling and applications into the mathematics curriculum has proven to be a challenging task over the last fifty years. The Common Core State Standards (CCSS) has made mathematical modeling both one of its Standards for Mathematical Practice and one of its Conceptual Categories. This article discusses the need for mathematical…
A standard satellite control reference model
NASA Technical Reports Server (NTRS)
Golden, Constance
1994-01-01
This paper describes a Satellite Control Reference Model that provides the basis for an approach to identify where standards would be beneficial in supporting space operations functions. The background and context for the development of the model and the approach are described. A process for using this reference model to trace top level interoperability directives to specific sets of engineering interface standards that must be implemented to meet these directives is discussed. Issues in developing a 'universal' reference model are also identified.
Beyond the supersymmetric standard model
Hall, L.J.
1988-02-01
The possibility of baryon number violation at the weak scale and an alternative primordial nucleosynthesis scheme arising from the decay of gravitinos are discussed. The minimal low energy supergravity model is defined and a few of its features are described. Renormalization group scaling and flavor physics are mentioned.
Less minimal supersymmetric standard model
de Gouvea, Andre; Friedland, Alexander; Murayama, Hitoshi
1998-03-28
Most of the phenomenological studies of supersymmetry have been carried out using the so-called minimal supergravity scenario, where one assumes a universal scalar mass, gaugino mass, and trilinear coupling at M_GUT. Even though this is a useful simplifying assumption for phenomenological analyses, it is rather too restrictive to accommodate a large variety of phenomenological possibilities. It predicts, among other things, that the lightest supersymmetric particle (LSP) is an almost pure B-ino, and that the μ-parameter is larger than the masses of the SU(2)_L and U(1)_Y gauginos. We extend the minimal supergravity framework by introducing one extra parameter: the Fayet-Iliopoulos D-term for the hypercharge U(1), D_Y. Allowing for this extra parameter, we find a much more diverse phenomenology, where the LSP is ν̃_τ, τ̃, or a neutralino with a large higgsino content. We discuss the relevance of the different possibilities to collider signatures. The same type of extension can be done to models with gauge mediation of supersymmetry breaking. We argue that it is not wise to impose cosmological constraints on the parameter space.
An alternative to the standard model
Baek, Seungwon; Ko, Pyungwon; Park, Wan-Il
2014-06-24
We present an extension of the standard model to a dark sector with an unbroken local dark U(1)_X symmetry. Including various singlet portal interactions provided by the standard model Higgs, right-handed neutrinos and kinetic mixing, we show that the model can address most phenomenological issues (inflation, neutrino mass and mixing, baryon number asymmetry, dark matter, direct/indirect dark matter searches, some small-scale puzzles of standard collisionless cold dark matter, vacuum stability of the standard model Higgs potential, dark radiation) and be regarded as an alternative to the standard model. The Higgs signal strength is equal to one as in the standard model for the unbroken U(1)_X case with a scalar dark matter, but it could be less than one, independent of decay channels, if the dark matter is a dark sector fermion or if U(1)_X is spontaneously broken, because of a mixing with a new neutral scalar boson in the models.
Higgs couplings in noncommutative Standard Model
NASA Astrophysics Data System (ADS)
Batebi, S.; Haghighat, M.; Tizchang, S.; Akafzade, H.
2015-06-01
We consider the Higgs and Yukawa parts of the Noncommutative Standard Model (NCSM). We explore the NC-action to give all Feynman rules for couplings of the Higgs boson to electroweak gauge fields and fermions.
The Higgs boson in the Standard Model
NASA Astrophysics Data System (ADS)
Djouadi, Abdelhak; Grazzini, Massimiliano
2016-10-01
The major goal of the Large Hadron Collider is to probe the electroweak symmetry breaking mechanism and the generation of the elementary particle masses. In the Standard Model this mechanism leads to the existence of a scalar Higgs boson with unique properties. We review the physics of the Standard Model Higgs boson, discuss its main search channels at hadron colliders and the corresponding theoretical predictions. We also summarize the strategies to study its basic properties.
Exploring the Standard Model of Particles
ERIC Educational Resources Information Center
Johansson, K. E.; Watkins, P. M.
2013-01-01
With the recent discovery of a new particle at the CERN Large Hadron Collider (LHC), the Higgs boson may at last have been observed. This paper provides a brief summary of the standard model of particle physics and the importance of the Higgs boson and field in that model for non-specialists. The role of Feynman diagrams in making predictions for…
Exploring the Standard Model at the LHC
NASA Astrophysics Data System (ADS)
Vachon, Brigitte
The ATLAS and CMS collaborations have performed studies of a wide range of Standard Model processes using data collected at the Large Hadron Collider at center-of-mass energies of 7, 8 and 13 TeV. These measurements are used to explore the Standard Model in a new kinematic regime, perform precision tests of the model, determine some of its fundamental parameters, constrain the proton parton distribution functions, and study new rare processes observed for the first time. Examples of recent Standard Model measurements performed by the ATLAS and CMS collaborations are summarized in this report. The measurements presented span a wide range of event final states including jets, photons, W/Z bosons, top quarks, and Higgs bosons.
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
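The constrained-regression view described in this abstract can be sketched numerically. The toy feature matrix, data, and bootstrap scheme below are invented for illustration and are not from the study; SciPy's `nnls` stands in for a generic least-squares solver with positivity restrictions on the parameters.

```python
# Sketch: empirical standard errors for a positivity-constrained regression
# via a nonparametric bootstrap, as an illustration of the approach above.
# The design matrix X (0/1 feature incidences) and proximities y are toy data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_obs, n_feat = 40, 3
X = rng.integers(0, 2, size=(n_obs, n_feat)).astype(float)  # feature incidences
true_w = np.array([1.0, 2.0, 0.5])                          # nonnegative "edge lengths"
y = X @ true_w + rng.normal(0.0, 0.1, size=n_obs)           # noisy proximities

w_hat, _ = nnls(X, y)  # least squares with positivity restrictions

# Empirical standard errors from B bootstrap resamples of the observations.
B = 500
boot = np.empty((B, n_feat))
for b in range(B):
    idx = rng.integers(0, n_obs, size=n_obs)
    boot[b], _ = nnls(X[idx], y[idx])
se = boot.std(axis=0, ddof=1)
print(w_hat, se)
```

The bootstrap plays the role of the Monte Carlo evaluation mentioned in the abstract; the theoretical standard errors discussed there would come from the constrained-regression covariance instead.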
Development of NASA's Models and Simulations Standard
NASA Technical Reports Server (NTRS)
Bertch, William J.; Zang, Thomas A.; Steele, Martin J.
2008-01-01
From the Space Shuttle Columbia Accident Investigation, there were several NASA-wide actions that were initiated. One of these actions was to develop a standard for development, documentation, and operation of Models and Simulations. Over the course of two-and-a-half years, a team of NASA engineers, representing nine of the ten NASA Centers developed a Models and Simulation Standard to address this action. The standard consists of two parts. The first is the traditional requirements section addressing programmatics, development, documentation, verification, validation, and the reporting of results from both the M&S analysis and the examination of compliance with this standard. The second part is a scale for evaluating the credibility of model and simulation results using levels of merit associated with 8 key factors. This paper provides an historical account of the challenges faced by and the processes used in this committee-based development effort. This account provides insights into how other agencies might approach similar developments. Furthermore, we discuss some specific applications of models and simulations used to assess the impact of this standard on future model and simulation activities.
NASA Astrophysics Data System (ADS)
Boutana, Mohammed Nabil
The in-service properties of titanium alloys depend strongly on certain aspects of the microstructures developed during their processing. These microstructures can be strongly heterogeneous in their crystallographic orientation and spatial distribution. Their influence on the behavior of the material and on its early damage are questions currently being raised. In this doctoral project we seek to answer this question and also to present tangible solutions for the safe use of these alloys. A new model, called a cellular automaton, was developed to simulate the mechanical behavior of titanium alloys under cold dwell fatigue-creep. These models have made it possible to better understand the correlation between the microstructure and the mechanical behavior of the material, and above all to carry out a detailed analysis of the local behavior of the material. Keywords: cellular automaton, fatigue/creep, titanium alloy, Eshelby inclusion, modeling
Wang, M; Sun, X Z; Tang, S X; Tan, Z L; Pacheco, D
2013-06-01
Water-soluble components of feedstuffs are mainly utilized during the early phase of microbial fermentation, which could be deemed an important determinant of gas production behavior in vitro. Many studies proposed that the fractional rate of degradation (FRD) estimated by fitting gas production curves to mathematical models might be used to characterize the early incubation for in vitro systems. In this study, the mathematical concept of FRD was developed on the basis of the Logistic-Exponential (LE) model with initial gas volume being zero (LE0). The FRD of the LE0 model exhibits a continuous increase from its initial value (FRD_0) toward its final asymptotic value (FRD_F) with longer incubation time. The relationships between the FRD and gas production at incubation times 2, 4, 6, 8, 12 and 24 h were compared for four models: in addition to LE0, the generalized Mitscherlich (GM), cth-order Michaelis-Menten (MM) and Exponential with a discrete LAG (EXPLAG) models. A total of 94 in vitro gas curves from four subsets with a wide range of feedstuffs from different laboratories and incubation periods were used for model testing. Results indicated that, compared with the GM, MM and EXPLAG models, the FRD of the LE0 model consistently had stronger correlations with gas production across the four subsets, especially at incubation times 2, 4, 6, 8 and 12 h. Thus, the LE0 model was deemed to provide a better representation of the early fermentation rates. Furthermore, FRD_0 also exhibited strong correlations (P < 0.05) with gas production at early incubation times 2, 4, 6 and 8 h across all four subsets. In summary, the FRD of the LE0 model provides an alternative way to quantify the rate of early-stage incubation, and its initial value could be an important starting parameter for characterizing that rate.
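The idea of a fractional rate of degradation can be illustrated with a small curve-fitting sketch. The LE0 parameterization below, V(t) = V_F (1 − e^{−kt}) / (1 + e^{b−kt}), is one published form of the Logistic-Exponential model with zero initial gas; the exact form and data used in the study may differ, and all numbers here are synthetic.

```python
# Sketch: fitting a gas-production curve and computing the fractional rate of
# degradation FRD(t) = V'(t) / (V_F - V(t)), which for this parameterization
# rises from an initial value FRD_0 toward the asymptote FRD_F = k.
import numpy as np
from scipy.optimize import curve_fit

def le0(t, v_f, k, b):
    # LE0 model: zero gas at t = 0, asymptotic volume v_f.
    return v_f * (1.0 - np.exp(-k * t)) / (1.0 + np.exp(b - k * t))

# Synthetic incubation data (hours, mL) generated from the model plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 48.0, 25)
v_obs = le0(t, 60.0, 0.12, 1.5) + rng.normal(0.0, 0.5, t.size)

(v_f, k, b), _ = curve_fit(le0, t, v_obs, p0=[50.0, 0.1, 1.0])

def frd(ti, h=1e-4):
    # Numerical derivative of V, divided by the gas still to be produced.
    dv = (le0(ti + h, v_f, k, b) - le0(ti - h, v_f, k, b)) / (2 * h)
    return dv / (v_f - le0(ti, v_f, k, b))

print(frd(0.1), frd(12.0), frd(48.0))  # increases toward k
```

For this functional form the FRD works out analytically to k / (1 + e^{b−kt}), which makes the monotone rise from FRD_0 = k / (1 + e^{b}) to FRD_F = k explicit.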
ERIC Educational Resources Information Center
Rostad, John
1997-01-01
Describes the production of news broadcasts on video by a high school class in Le Center, Minnesota. Topics include software for Apple computers, equipment used, student responsibilities, class curriculum, group work, communication among the production crew, administrative and staff support, and future improvements. (LRW)
Models of the Primordial Standard Clock
NASA Astrophysics Data System (ADS)
Chen, Xingang; Namjoo, Mohammad Hossein; Wang, Yi
2015-02-01
Oscillating massive fields in the primordial universe can be used as Standard Clocks. The ticks of these oscillations induce features in the density perturbations that directly record the time evolution of the scale factor of the primordial universe and thus, if detected, would provide direct evidence for the inflation scenario or its alternatives. In this paper, we construct a full inflationary model of a primordial Standard Clock and study its predictions for the density perturbations. This model provides a full realization of several key features proposed previously. We compare the theoretical predictions from inflation and alternative scenarios with the Planck 2013 temperature data on the Cosmic Microwave Background (CMB) and identify a statistically marginal but interesting candidate. We discuss how future CMB temperature and polarization data, non-Gaussianity analyses and Large Scale Structure data may be used to further test or constrain the Standard Clock signals.
Observational challenges for the standard FLRW model
NASA Astrophysics Data System (ADS)
Buchert, Thomas; Coley, Alan A.; Kleinert, Hagen; Roukema, Boudewijn F.; Wiltshire, David L.
2016-02-01
In this paper, we summarize some of the main observational challenges for the standard Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological model and describe how results recently presented in the parallel session "Large-scale Structure and Statistics" (DE3) at the "Fourteenth Marcel Grossmann Meeting on General Relativity" relate to these challenges.
Inflation in the standard cosmological model
NASA Astrophysics Data System (ADS)
Uzan, Jean-Philippe
2015-12-01
The inflationary paradigm is now part of the standard cosmological model as a description of its primordial phase. While its original motivation was to solve the standard problems of the hot big bang model, it was soon understood that it offers a natural theory for the origin of the large-scale structure of the universe. Most models rely on a slow-rolling scalar field and enjoy very generic predictions. Moreover, all the matter of the universe is produced by the decay of the inflaton field at the end of inflation, during a phase of reheating. These predictions can be (and are) tested from their imprint on the large-scale structure and in particular the cosmic microwave background. Inflation stands as a window in physics where both general relativity and quantum field theory are at work and which can be observationally studied. It connects cosmology with high-energy physics. Today most models are constructed within extensions of the standard model, such as supersymmetry or string theory. Inflation also disrupts our vision of the universe, in particular with the ideas of chaotic inflation and eternal inflation that tend to promote the image of a very inhomogeneous universe with fractal structure on a large scale. This idea is also at the heart of further speculations, such as the multiverse. This introduction summarizes the connections between inflation and the hot big bang model and details the basics of its dynamics and predictions.
Some Standard model problems and possible solutions
NASA Astrophysics Data System (ADS)
Barranco, J.
2016-10-01
Three problems of the standard model of elementary particles are studied from a phenomenological approach. (i) It is shown that the Dirac or Majorana nature of the neutrino can be studied by looking for differences in ν-electron scattering when the polarization of the neutrino is considered. (ii) The absolute scale of the neutrino mass can be set if a four-zero mass matrix texture is considered for the leptons. It is found that m_ν3 ≈ 0.05 eV. (iii) It is shown that it is possible, within a certain class of two-Higgs extensions of the standard model, to have a cancellation of the quadratic divergences to the mass of the physical Higgs boson.
Toward a midisuperspace quantization of Lemaître-Tolman-Bondi collapse models
Vaz, Cenalo; Witten, Louis; Singh, T. P.
2001-05-15
Lemaître-Tolman-Bondi models of spherical dust collapse have been used, and continue to be used, extensively to study various stellar collapse scenarios. It is by now well known that these models lead to the formation of black holes and naked singularities from regular initial data. The final outcome of the collapse, particularly in the event of naked singularity formation, depends very heavily on quantum effects during the final stages. These quantum effects cannot generally be treated semiclassically, as quantum fluctuations of the gravitational field are expected to dominate before the final state is reached. We present a canonical reduction of Lemaître-Tolman-Bondi space-times describing the marginally bound collapse of inhomogeneous dust, in which the physical radius R, the proper time of the collapsing dust τ, and the mass function F are the canonical coordinates R(r), τ(r) and F(r) on the phase space. Dirac's constraint quantization leads to a simple functional (Wheeler-DeWitt) equation. The equation is solved, and the solution can be employed to study some of the effects of quantum gravity during gravitational collapse with different initial conditions.
Electroweak standard model with very special relativity
NASA Astrophysics Data System (ADS)
Alfaro, Jorge; González, Pablo; Ávila, Ricardo
2015-05-01
The very special relativity electroweak Standard Model (VSR EW SM) is a theory with SU(2)_L × U(1)_R symmetry, with the same number of leptons and gauge fields as in the usual Weinberg-Salam model. No new particles are introduced. The model is renormalizable and unitarity is preserved. However, photons obtain mass and the massive bosons obtain different masses for different polarizations. Besides, neutrino masses are generated. A VSR-invariant term will produce neutrino oscillations, and new processes are allowed. In particular, we compute the rate of the decay μ → e + γ. All these processes, which are forbidden in the electroweak Standard Model, put stringent bounds on the parameters of our model and measure the violation of Lorentz invariance. We investigate the canonical quantization of this nonlocal model. Second quantization is carried out, and we obtain a well-defined particle content. Additionally, we count the degrees of freedom associated with the gauge bosons involved in this work after spontaneous symmetry breaking has been realized. Violations of Lorentz invariance have been predicted by several theories of quantum gravity [J. Alfaro, H. Morales-Tecotl, and L. F. Urrutia, Phys. Rev. Lett. 84, 2318 (2000); Phys. Rev. D 65, 103509 (2002)]. It is a remarkable possibility that the low-energy effects of Lorentz violation induced by quantum gravity could be contained in the nonlocal terms of the VSR EW SM.
SU(5) heterotic Standard Model bundles
NASA Astrophysics Data System (ADS)
Andreas, Björn; Hoffmann, Norbert
2012-04-01
We construct a class of stable SU(5) bundles on an elliptically fibered Calabi-Yau threefold with two sections, a variant of the ordinary Weierstrass fibration, which admits a free involution. The bundles are invariant under the involution, solve the topological constraint imposed by the heterotic anomaly equation and give three generations of Standard Model fermions after symmetry breaking by Wilson lines of the intermediate SU(5) GUT-group to the Standard Model gauge group. Among the solutions we find some which can be perturbed to solutions of the Strominger system. Thus these solutions provide a step toward the construction of phenomenologically realistic heterotic flux compactifications via non-Kähler deformations of Calabi-Yau geometries with bundles. This particular class of solutions involves a rank two hidden sector bundle and does not require background fivebranes for anomaly cancellation.
The standard model coupled to quantum gravitodynamics
NASA Astrophysics Data System (ADS)
Aldabe, Fermin
2017-01-01
We show that the renormalizable SO(4) × U(1) × SU(2) × SU(3) Yang-Mills theory coupled to matter and the Higgs field fits all the experimentally observed differential cross sections known in nature. This extended Standard Model reproduces the experimental gravitational differential cross sections without resorting to a graviton field, instead exchanging SO(4) gauge fields. By construction, each SO(4) generator in quantum gravitodynamics does not commute with the Dirac gamma matrices. This produces additional interactions that are absent for the non-Abelian gauge fields of the Standard Model. The contributions from these new terms yield differential cross sections consistent with the Newtonian and post-Newtonian interactions derived from General Relativity. Dimensional analysis of the Lagrangian shows that all its terms have total dimensionality four or less, and therefore that all physical quantities in the theory renormalize by finite amounts. These properties make QGD the only renormalizable four-dimensional theory describing gravitational interactions.
Temperature dependence of standard model CP violation.
Brauner, Tomáš; Taanila, Olli; Tranberg, Anders; Vuorinen, Aleksi
2012-01-27
We analyze the temperature dependence of CP violation effects in the standard model by determining the effective action of its bosonic fields, obtained after integrating out the fermions from the theory and performing a covariant gradient expansion. We find nonvanishing CP violating terms starting at the sixth order of the expansion, albeit only in the C-odd-P-even sector, with coefficients that depend on quark masses, Cabibbo-Kobayashi-Maskawa matrix elements, temperature and the magnitude of the Higgs field. The CP violating effects are observed to decrease rapidly with temperature, which has important implications for the generation of a matter-antimatter asymmetry in the early Universe. Our results suggest that the cold electroweak baryogenesis scenario may be viable within the standard model, provided the electroweak transition temperature is at most of order 1 GeV.
The Standard Model of Nuclear Physics
NASA Astrophysics Data System (ADS)
Detmold, William
2015-04-01
At its core, nuclear physics, which describes the properties and interactions of hadrons, such as protons and neutrons, and atomic nuclei, arises from the Standard Model of particle physics. However, the complexities of nuclei result in severe computational difficulties that have historically prevented the calculation of central quantities in nuclear physics directly from this underlying theory. The availability of petascale (and prospect of exascale) high performance computing is changing this situation by enabling us to extend the numerical techniques of lattice Quantum Chromodynamics (LQCD), applied successfully in particle physics, to the more intricate dynamics of nuclear physics. In this talk, I will discuss this revolution and the emerging understanding of hadrons and nuclei within the Standard Model.
Search for the fourth standard model family
Sahin, M.; Sultansoy, S.; Turkoz, S.
2011-03-01
Existence of the fourth family follows from the basics of the standard model (SM) and the actual mass spectrum of the third-family fermions. We discuss possible manifestations of the fourth SM family at existing and future colliders. The potentials of the LHC and the Tevatron to discover the fourth SM family have been compared. The scenario with dominance of the anomalous decay modes of the fourth-family quarks has been considered in detail.
Renormalization Group in the Standard Model
Kielanowski, P.; Juarez W, S. R.
2007-11-27
We discuss two applications of the renormalization group method in the Standard Model. In the first one we present some theorems about the running of the Cabibbo-Kobayashi-Maskawa matrix and show that the evolution depends on one function of energy only. In the second one we discuss the properties of the running of the Higgs potential and derive the limits for the Higgs mass.
Beyond the standard model in many directions
Chris Quigg
2004-04-28
These four lectures constitute a gentle introduction to what may lie beyond the standard model of quarks and leptons interacting through SU(3)_c ⊗ SU(2)_L ⊗ U(1)_Y gauge bosons, prepared for an audience of graduate students in experimental particle physics. In the first lecture, I introduce a novel graphical representation of the particles and interactions, the double simplex, to elicit questions that motivate our interest in physics beyond the standard model, without recourse to equations and formalism. Lecture 2 is devoted to a short review of the current status of the standard model, especially the electroweak theory, which serves as the point of departure for our explorations. The third lecture is concerned with unified theories of the strong, weak, and electromagnetic interactions. In the fourth lecture, I survey some attempts to extend and complete the electroweak theory, emphasizing some of the promise and challenges of supersymmetry. A short concluding section looks forward.
Indoorgml - a Standard for Indoor Spatial Modeling
NASA Astrophysics Data System (ADS)
Li, Ki-Joune
2016-06-01
With recent progress in mobile devices and indoor positioning technologies, it has become possible to provide location-based services in indoor space as well as outdoor space, either seamlessly across indoor and outdoor spaces or independently for indoor space alone. However, we cannot simply apply spatial models developed for outdoor space to indoor space, due to their differences. For example, coordinate reference systems are employed to indicate a specific position in outdoor space, while a location in indoor space is instead specified by a cell identifier, such as a room number. Unlike outdoor space, the distance between two points in indoor space is determined not by the length of the straight line between them but by the constraints imposed by indoor components such as walls, stairs, and doors. For this reason, we need to establish a new framework for indoor space, from a fundamental theoretical basis and indoor spatial data models to information systems that store, manage, and analyse indoor spatial data. To provide this framework, an international standard called IndoorGML has been developed and published by the OGC (Open Geospatial Consortium). The standard is based on a cellular notion of space, which considers an indoor space as a set of non-overlapping cells. It consists of two types of modules: a core module and extension modules. While the core module comprises four basic conceptual and implementation modeling components (a geometric model for cells, topology between cells, a semantic model of cells, and a multi-layered space model), extension modules may be defined on top of the core module to support a particular application area. The first version of the standard provides an extension for indoor navigation.
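The cellular notion of space described above can be sketched in a few lines: cells as graph nodes and door/stair connections as edges, so that "distance" is constrained by topology rather than straight-line geometry. The cell names below are invented for illustration, and this Python structure is of course not the actual IndoorGML XML encoding.

```python
from collections import deque

# Toy cellular model in the spirit of IndoorGML's core module: indoor space
# as non-overlapping cells (nodes) with topological adjacency through doors
# and stairs (edges).
doors = {
    "room101": ["corridor"],
    "room102": ["corridor"],
    "corridor": ["room101", "room102", "stairs"],
    "stairs": ["corridor", "lobby"],
    "lobby": ["stairs"],
}

def hops(start, goal):
    """Indoor 'distance' as the number of cell transitions (BFS),
    constrained by doors/stairs rather than straight-line geometry."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        cell, d = queue.popleft()
        if cell == goal:
            return d
        for nxt in doors[cell]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

print(hops("room101", "lobby"))  # room101 -> corridor -> stairs -> lobby: prints 3
```

Two adjacent rooms can still be "far apart" in this metric if the only path between them runs through corridors and stairs, which is exactly the point the abstract makes about indoor distance.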
Beyond standard model calculations with Sherpa
Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; ...
2015-03-24
We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.
Experimentally testing the standard cosmological model
Schramm, D. N. (Fermi National Accelerator Lab., Batavia, IL)
1990-11-01
The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high-precision measurements relate to the microwave background and to big bang nucleosynthesis. This paper focuses on the latter, since it relates more directly to high-energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light-element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron-induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard-model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z^0 coupling to a neutrino. It is also noted that recent hints from the solar neutrino experiments, coupled with the see-saw model for ν masses, may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.
2011-09-01
Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2011. © Sa Majesté la Reine (en droit du Canada)... the system's self-correction. The use of violence can, to a certain extent, allow a regime to restore the critical level... [table-of-contents fragments: Easton's Model; A Critique of Easton's Model]
The computation of standard solar models
NASA Technical Reports Server (NTRS)
Ulrich, Roger K.; Cox, Arthur N.
1991-01-01
Procedures for calculating standard solar models with the usual simplifying approximations of spherical symmetry, no mixing except in the surface convection zone, no mass loss or gain during the solar lifetime, and no separation of elements by diffusion are described. The standard network of nuclear reactions among the light elements is discussed including rates, energy production and abundance changes. Several of the equation of state and opacity formulations required for the basic equations of mass, momentum and energy conservation are presented. The usual mixing-length convection theory is used for these results. Numerical procedures for calculating the solar evolution, and current evolution and oscillation frequency results for the present sun by some recent authors are given.
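The basic equations of mass and momentum conservation mentioned above can be illustrated with the Lane-Emden equation for a polytrope, a drastic simplification of a real standard solar model (real SSM codes add energy transport, nuclear networks, opacities, and convection). The polytropic index n = 1 is chosen here only because it has the known analytic solution sin(ξ)/ξ, which lets the numerical result be checked.

```python
import math

# Lane-Emden equation for a polytrope of index n:
#   theta'' + (2/xi) theta' + theta**n = 0,  theta(0) = 1, theta'(0) = 0.
# This combines hydrostatic equilibrium and mass conservation into one
# dimensionless ODE; it is a toy stand-in for a full stellar-structure code.
def lane_emden_first_zero(n=1.0, h=1e-4):
    xi, theta, dtheta = 1e-6, 1.0, 0.0
    while theta > 0.0:
        # explicit Euler step; adequate at this step size for a demonstration
        d2 = -theta**n - (2.0 / xi) * dtheta
        theta += h * dtheta
        dtheta += h * d2
        xi += h
    return xi  # dimensionless surface radius xi_1

xi1 = lane_emden_first_zero(n=1.0)
print(xi1)  # analytic n=1 solution is sin(xi)/xi, so xi_1 should be ~pi
```

The first zero of theta marks the stellar surface; recovering ξ₁ ≈ π confirms the integrator reproduces the analytic n = 1 solution.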
Statistical model with a standard Gamma distribution.
Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo
2004-01-01
We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter lambda. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity lambda. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(lambda), where particles exchange energy in a space with an effective dimension D(lambda).
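The exchange dynamics described above is straightforward to simulate: a random pair of agents pools the non-saved fraction (1 - λ) of their money and splits it at a uniformly random ratio. The agent count, number of trades, and λ = 0.5 below are illustrative choices, not the paper's settings.

```python
import random

def simulate(n_agents=1000, steps=200000, lam=0.5, seed=1):
    """Kinetic exchange with uniform saving propensity lam: each trade
    conserves the pair's total money exactly, so the global mean is fixed."""
    rng = random.Random(seed)
    money = [1.0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = (1.0 - lam) * (money[i] + money[j])
        eps = rng.random()
        money[i] = lam * money[i] + eps * pool
        money[j] = lam * money[j] + (1.0 - eps) * pool
    return money

m = simulate()
print(sum(m) / len(m))  # mean stays at 1.0: the dynamics conserve money
```

For λ = 0 this reduces to the exponential (Gibbs) distribution; a positive saving propensity narrows the distribution toward the Gamma form the authors verify, with the variance well below that of the exponential case.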
The Hypergeometrical Universe: Cosmology and Standard Model
Pereira, Marco A.
2010-12-22
This paper presents a simple and purely geometrical Grand Unification Theory. Quantum Gravity, Electrostatic and Magnetic interactions are shown in a unified framework. Newton's, Gauss' and Biot-Savart's Laws are derived from first principles. Unification symmetry is defined for all the existing forces. This alternative model does not require the Strong and Electroweak forces. A 4D shock-wave hyperspherical topology is proposed for the Universe which, together with a Quantum Lagrangian Principle and a Dilator-based model for matter, results in a quantized stepwise expansion for the whole Universe along a radial direction within a 4D spatial manifold. The Hypergeometrical Standard Model for matter, the Universe's topology and a new Law of Gravitation are presented.
Twisted spectral geometry for the standard model
NASA Astrophysics Data System (ADS)
Martinetti, Pierre
2015-07-01
In noncommutative geometry, the spectral triple of a manifold does not generate bosonic fields, for fluctuations of the Dirac operator vanish. A Connes-Moscovici twist forces the commutative algebra to be multiplied by matrices. Keeping the space of spinors untouched, twisted-fluctuations then yield perturbations of the spin connection. Applied to the spectral triple of the Standard Model, a similar twist yields the scalar field needed to stabilize the vacuum and to make the computation of the Higgs mass compatible with its experimental value.
Standard model vacuum decay with gravity
NASA Astrophysics Data System (ADS)
Rajantie, Arttu; Stopyra, Stephen
2017-01-01
We present a calculation of the decay rate of the electroweak vacuum, fully including all gravitational effects and a possible nonminimal Higgs-curvature coupling ξ, and using the three-loop Standard Model effective potential. Without a nonminimal coupling, we find that the effect of the gravitational backreaction is small and less significant than previous calculations suggested. The gravitational effects are smallest, and almost completely suppressed, near the conformal value ξ = 1/6 of the nonminimal coupling. Moving ξ away from this value in either direction universally suppresses the decay rate.
A New Generation of Standard Solar Models
NASA Astrophysics Data System (ADS)
Vinyoles, Núria; Serenelli, Aldo M.; Villante, Francesco L.; Basu, Sarbani; Bergström, Johannes; Gonzalez-Garcia, M. C.; Maltoni, Michele; Peña-Garay, Carlos; Song, Ningqiang
2017-02-01
We compute a new generation of standard solar models (SSMs) that includes recent updates on some important nuclear reaction rates and a more consistent treatment of the equation of state. Models also include a novel and flexible treatment of opacity uncertainties based on opacity kernels, required in light of recent theoretical and experimental work on radiative opacity. Two large sets of SSMs, each based on a different canonical set of solar abundances with high and low metallicity (Z), are computed to determine model uncertainties and correlations among different observables. We present detailed comparisons of high- and low-Z models against different ensembles of solar observables, including solar neutrinos, surface helium abundance, depth of the convective envelope, and the sound speed profile. A global comparison, including all observables, yields a p-value of 2.7σ for the high-Z model and 4.7σ for the low-Z one. When the sound speed differences in the narrow region 0.65 < r/R_⊙ < 0.70 are excluded from the analysis, the results are 0.9σ and 3.0σ for high- and low-Z models, respectively. These results show that high-Z models agree well with solar data but have a systematic problem right below the bottom of the convective envelope, linked to the steepness of the molecular weight and temperature gradients, and that low-Z models lead to a much more general disagreement with solar data. We also show that, while simple parametrizations of opacity uncertainties can strongly alleviate the solar abundance problem, they are insufficient to substantially improve the agreement of SSMs with helioseismic data beyond that obtained for high-Z models, due to the intrinsic correlations of theoretical predictions.
Sphaleron Rate in the Minimal Standard Model
NASA Astrophysics Data System (ADS)
D'Onofrio, Michela; Rummukainen, Kari; Tranberg, Anders
2014-10-01
We use large-scale lattice simulations to compute the rate of baryon number violating processes (the sphaleron rate), the Higgs field expectation value, and the critical temperature in the standard model across the electroweak phase transition temperature. While there is no true phase transition between the high-temperature symmetric phase and the low-temperature broken phase, the crossover is sharp and located at temperature T_c = (159.5 ± 1.5) GeV. The sphaleron rate in the symmetric phase (T > T_c) is Γ/T^4 = (18 ± 3) α_W^5, and in the broken phase in the physically interesting temperature range 130 GeV
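The symmetric-phase rate quoted above, Γ/T^4 = (18 ± 3) α_W^5, can be turned into a number in a couple of lines. The SU(2) gauge coupling g ≈ 0.65 (with α_W = g²/4π) is an assumed round value for illustration, not taken from the paper.

```python
import math

# Numerical value of the symmetric-phase sphaleron rate,
# Gamma/T^4 = (18 +/- 3) * alpha_W^5, using an assumed g ~ 0.65.
g = 0.65
alpha_w = g**2 / (4 * math.pi)        # weak fine-structure constant, ~0.034
rate = 18 * alpha_w**5                # central value of Gamma/T^4, ~8e-7
print(f"alpha_W ~ {alpha_w:.4f},  Gamma/T^4 ~ {rate:.2e}")
```

The strong α_W^5 suppression is why the rate is tiny in units of T^4 even in the symmetric phase, and why large-scale lattice simulations are needed to pin down the prefactor.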
ERIC Educational Resources Information Center
Roy, Sylvie
2000-01-01
Examines how linguistic standard globalization in a call centre affects the value of bilingualism and the linguistic varieties of a francophone minority population. Bilingualism grants access to a job in the information and service sector, but since the emergence of linguistic standardization in this sector only a certain selection of individuals…
Wisconsin's Model Academic Standards for Agricultural Education. Bulletin No. 9003.
ERIC Educational Resources Information Center
Fortier, John D.; Albrecht, Bryan D.; Grady, Susan M.; Gagnon, Dean P.; Wendt, Sharon W.
These model academic standards for agricultural education in Wisconsin represent the work of a task force of educators, parents, and business people with input from the public. The introductory section of this bulletin defines the academic standards and discusses developing the standards, using the standards, relating the standards to all…
Beyond the standard model of particle physics.
Virdee, T S
2016-08-28
The Large Hadron Collider (LHC) at CERN and its experiments were conceived to tackle open questions in particle physics. The mechanism of the generation of mass of fundamental particles has been elucidated with the discovery of the Higgs boson. It is clear that the standard model is not the final theory. The open questions still awaiting clues or answers, from the LHC and other experiments, include: What is the composition of dark matter and of dark energy? Why is there more matter than anti-matter? Are there more space dimensions than the familiar three? What is the path to the unification of all the fundamental forces? This talk will discuss the status of, and prospects for, the search for new particles, symmetries and forces in order to address the open questions. This article is part of the themed issue 'Unifying physics and technology in light of Maxwell's equations'.
The spherically symmetric Standard Model with gravity
NASA Astrophysics Data System (ADS)
Balasin, H.; Böhmer, C. G.; Grumiller, D.
2005-08-01
Spherical reduction of generic four-dimensional theories is revisited. Three different notions of "spherical symmetry" are defined. The following sectors are investigated: Einstein-Cartan theory, spinors, (non-)abelian gauge fields and scalar fields. In each sector a different formalism seems to be most convenient: the Cartan formulation of gravity works best in the purely gravitational sector, the Einstein formulation is convenient for the Yang-Mills sector and for reducing scalar fields, and the Newman-Penrose formalism seems to be the most transparent one in the fermionic sector. Combining them the spherically reduced Standard Model of particle physics together with the usually omitted gravity part can be presented as a two-dimensional (dilaton gravity) theory.
Standard model fermions and N =8 supergravity
NASA Astrophysics Data System (ADS)
Meissner, Krzysztof A.; Nicolai, Hermann
2015-03-01
In a scheme originally proposed by Gell-Mann, and subsequently shown to be realized at the SU(3)×U(1) stationary point of maximal gauged SO(8) supergravity by Warner and one of the present authors, the 48 spin-1/2 fermions of the theory remaining after the removal of eight Goldstinos can be identified with the 48 quarks and leptons (including right-chiral neutrinos) of the Standard Model, provided one identifies the residual SU(3) with the diagonal subgroup of the color group SU(3)_c and a family symmetry SU(3)_f. However, there remained a systematic mismatch in the electric charges by a spurion charge of ±1/6. We here identify the "missing" U(1) that rectifies this mismatch, and that takes a surprisingly simple, though unexpected, form.
Standard Model thermodynamics across the electroweak crossover
NASA Astrophysics Data System (ADS)
Laine, M.; Meyer, M.
2015-07-01
Even though the Standard Model with a Higgs mass m_H = 125 GeV possesses no bulk phase transition, its thermodynamics still experiences a "soft point" at temperatures around T = 160 GeV, with a deviation from ideal gas thermodynamics. Such a deviation may have an effect on precision computations of weakly interacting dark matter relic abundances if their mass is in the few-TeV range, or on leptogenesis scenarios operating in this temperature range. By making use of results from lattice simulations based on a dimensionally reduced effective field theory, we estimate the relevant thermodynamic functions across the crossover. The results are tabulated in a numerical form permitting their insertion as a background equation of state into cosmological particle production/decoupling codes. We find that Higgs dynamics induces a non-trivial "structure" visible e.g. in the heat capacity, but that in general the largest radiative corrections originate from QCD effects, reducing the energy density by a couple of percent from the free value even at T > 160 GeV.
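Inserting such a tabulated equation of state into a cosmology code typically amounts to interpolating the tabulated thermodynamic functions in temperature. The sketch below uses a hypothetical placeholder table (an ideal-gas g* = 106.75 reduced by roughly two percent, in the spirit of the QCD correction the abstract describes); real numbers would have to come from the paper's tables.

```python
import math

# Placeholder effective degrees of freedom g_eff(T); the values below are
# invented for illustration (free-gas g* = 106.75 minus ~2%), NOT the
# paper's tabulated results.
table_T    = [100.0, 130.0, 160.0, 200.0, 300.0]   # temperature, GeV
table_geff = [104.6, 104.7, 104.8, 104.9, 105.0]   # hypothetical

def g_eff(T):
    """Piecewise-linear interpolation in temperature, clamped at the ends."""
    if T <= table_T[0]:
        return table_geff[0]
    if T >= table_T[-1]:
        return table_geff[-1]
    for t0, t1, g0, g1 in zip(table_T, table_T[1:], table_geff, table_geff[1:]):
        if t0 <= T <= t1:
            return g0 + (g1 - g0) * (T - t0) / (t1 - t0)

def energy_density(T):
    """Radiation-like energy density e(T) = g_eff * pi^2/30 * T^4 (GeV^4)."""
    return g_eff(T) * math.pi**2 / 30.0 * T**4

print(energy_density(160.0))
```

A decoupling or leptogenesis code would call such an interpolated e(T) (and the analogous pressure and heat-capacity functions) instead of assuming the free-gas value.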
Gravity, Lorentz violation, and the standard model
NASA Astrophysics Data System (ADS)
Kostelecký, V. Alan
2004-05-01
The role of the gravitational sector in the Lorentz- and CPT-violating standard-model extension (SME) is studied. A framework is developed for addressing this topic in the context of Riemann-Cartan spacetimes, which include as limiting cases the usual Riemann and Minkowski geometries. The methodology is first illustrated in the context of the QED extension in a Riemann-Cartan background. The full SME in this background is then considered, and the leading-order terms in the SME action involving operators of mass dimension three and four are constructed. The incorporation of arbitrary Lorentz and CPT violation into general relativity and other theories of gravity based on Riemann-Cartan geometries is discussed. The dominant terms in the effective low-energy action for the gravitational sector are provided, thereby completing the formulation of the leading-order terms in the SME with gravity. Explicit Lorentz symmetry breaking is found to be incompatible with generic Riemann-Cartan geometries, but spontaneous Lorentz breaking evades this difficulty.
Experimental tests of the standard model.
Nodulman, L.
1998-11-11
The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is, flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of {alpha}{sub EM} in various processes, including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the "Universal Fermi Interaction" description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G{sub F}, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabibbo mixing, now generalized so that all charged current decays of any flavor are covered.
Standard Model thermodynamics across the electroweak crossover
Laine, M.; Meyer, M.
2015-07-22
Even though the Standard Model with a Higgs mass m{sub H} = 125 GeV possesses no bulk phase transition, its thermodynamics still experiences a “soft point” at temperatures around T = 160 GeV, with a deviation from ideal gas thermodynamics. Such a deviation may have an effect on precision computations of weakly interacting dark matter relic abundances if their mass is in the few TeV range, or on leptogenesis scenarios operating in this temperature range. By making use of results from lattice simulations based on a dimensionally reduced effective field theory, we estimate the relevant thermodynamic functions across the crossover. The results are tabulated in a numerical form permitting their insertion as a background equation of state into cosmological particle production/decoupling codes. We find that Higgs dynamics induces a non-trivial “structure” visible e.g. in the heat capacity, but that in general the largest radiative corrections originate from QCD effects, reducing the energy density by a couple of percent from the free value even at T > 160 GeV.
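The abstract describes thermodynamic functions tabulated for insertion into cosmological decoupling codes, which in practice means interpolating the table at arbitrary temperatures. A minimal sketch of that interpolation step, with placeholder table values (the actual numbers are in the paper, not reproduced here; function names are my own):

```python
import math
from bisect import bisect_left

# Illustrative placeholder table: (T in GeV, effective degrees of freedom
# g_eff for the energy density). NOT the actual tabulated values.
TABLE = [(0.10, 86.0), (0.14, 92.0), (0.16, 96.0), (0.20, 102.0), (0.30, 104.0)]

def g_eff(T):
    """Piecewise-linear interpolation of the tabulated g_eff(T)."""
    Ts = [t for t, _ in TABLE]
    if T <= Ts[0]:
        return TABLE[0][1]
    if T >= Ts[-1]:
        return TABLE[-1][1]
    i = bisect_left(Ts, T)
    (t0, g0), (t1, g1) = TABLE[i - 1], TABLE[i]
    return g0 + (g1 - g0) * (T - t0) / (t1 - t0)

def energy_density(T):
    """e(T) = g_eff(T) * (pi^2 / 30) * T^4 in natural units (GeV^4)."""
    return g_eff(T) * math.pi**2 / 30.0 * T**4
```

A decoupling code would call `energy_density(T)` (or the analogous pressure and heat-capacity tables) at each integration step of the Boltzmann equations.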
Connected formulas for amplitudes in standard model
NASA Astrophysics Data System (ADS)
He, Song; Zhang, Yong
2017-03-01
Witten's twistor string theory has led to new representations of the S-matrix in massless QFT as a single object, including Cachazo-He-Yuan formulas in general and connected formulas in four dimensions. As a first step towards more realistic processes of the standard model, we extend the construction to QCD tree amplitudes with massless quarks and those with a Higgs boson. For both cases, we find connected formulas in four dimensions for all multiplicities which are very similar to the one for Yang-Mills amplitudes. The formula for quark-gluon color-ordered amplitudes differs from the pure-gluon case only by a Jacobian factor that depends on flavors and orderings of the quarks. In the formula for Higgs plus multi-parton amplitudes, the massive Higgs boson is effectively described by two additional massless legs which do not appear in the Parke-Taylor factor. The latter also represents the first twistor-string/connected formula for form factors.
Efficient modelling necessitates standards for model documentation and exchange.
Gernaey, K V; Rosen, C; Batstone, D J; Alex, J
2006-01-01
In this paper, problems related to simulation model documentation and model exchange between users are discussed. Complex simulation models have gained popularity in the environmental field, but require extensive documentation to allow independent implementation. The existence of different simulation platforms puts high demands on the quality of the original documentation. Recent experiences from cross-platform implementations with the ASM2d and ADM1 models reveal that error-free model documentation is difficult to obtain, and as a consequence, considerable time is spent on searching for documentation and implementation errors of various sources. As such, the list of errors and coding pitfalls provided for ASM2d and ADM1 in this paper is vital information for any future implementation of both models. The time needed to obtain an error-free model implementation can be significantly reduced if a standard language for model documentation and exchange is adopted. The extensible markup language (XML) and languages based on this format may provide a remedy to the problem of platform independent model documentation and exchange. In this paper the possibility to apply this to environmental models is discussed, whereas the practical model implementation examples corroborate the necessity for a standardised approach.
Wisconsin's Model Academic Standards for Dance.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
Wisconsin's Department of Public Instruction, in collaboration with Wisconsin citizens, developed academic standards in 12 curricular areas. The dance education standards go beyond emphasizing mastery of individual student areas--they weave five essential characteristics of literate individuals throughout: application of the basics, ability to…
Models of Teaching: Connecting Student Learning with Standards
ERIC Educational Resources Information Center
Dell'Olio, Jeanine M.; Donk, Tony
2007-01-01
"Models of Teaching: Connecting Student Learning with Standards" features classic and contemporary models of teaching appropriate to elementary and secondary settings. Authors Jeanine M. Dell'Olio and Tony Donk use detailed case studies to discuss 10 models of teaching and demonstrate how the models can incorporate state content standards and…
ERIC Educational Resources Information Center
Lee, Jaekyung; Liu, Xiaoyan; Amo, Laura Casey; Wang, Weichun Leilani
2014-01-01
Drawing on national and state assessment datasets in reading and math, this study tested "external" versus "internal" standards-based education models. The goal was to understand whether and how student performance standards work in multilayered school systems under No Child Left Behind Act of 2001 (NCLB). Under the…
Neutrinos: in and out of the standard model
Parke, Stephen; /Fermilab
2006-07-01
The particle physics Standard Model has been tremendously successful in predicting the outcome of a large number of experiments. In this model Neutrinos are massless. Yet recent evidence points to the fact that neutrinos are massive particles with tiny masses compared to the other particles in the Standard Model. These tiny masses allow the neutrinos to change flavor and oscillate. In this series of Lectures, I will review the properties of Neutrinos In the Standard Model and then discuss the physics of Neutrinos Beyond the Standard Model. Topics to be covered include Neutrino Flavor Transformations and Oscillations, Majorana versus Dirac Neutrino Masses, the Seesaw Mechanism and Leptogenesis.
Tool for physics beyond the standard model
NASA Astrophysics Data System (ADS)
Newby, Christopher A.
The standard model (SM) of particle physics is a well-studied theory, but there are hints that the SM is not the final story. What the full picture is, no one knows, but this thesis looks into three methods useful for exploring a few of the possibilities. To begin I present a paper by Spencer Chang, Nirmal Raj, Chaowaroj Wanotayaroj, and me, that studies the Higgs boson. The scalar particle first seen in 2012 may be the vanilla SM version, but there is some evidence that its couplings are different than predicted. By means of increasing the Higgs' coupling to vector bosons and fermions, we can be more consistent with the data. Next, in a paper by Spencer Chang, Gabriel Barello, and me, we elaborate on a tool created to study dark matter (DM) direct detection. The original work by Anand et al. focused on elastic dark matter, whereas we extended this work to include the inelastic case, where different DM mass states enter and leave the collision. We also examine several direct detection experiments with our new framework to see if DAMA's modulation can be explained while avoiding the strong constraints imposed by the other experiments. We find that there are several operators that can do this. Finally, in a paper by Spencer Chang, Gabriel Barello, and me, we study an interesting phenomenon known as kinetic mixing, where two gauge bosons can share interactions with particles even though these particles aren't charged under both gauge groups. This, in and of itself, is not new, but we discuss a different method of obtaining this mixing where, instead of mixing between two Abelian groups, one of the groups is non-Abelian. Using this we then see that there is an inherent mass scale in the mixing strength; something that is absent in the Abelian-Abelian case. Furthermore, if the non-Abelian symmetry is the SU(2)L of the SM, then the mass scale of the physics responsible for the mixing is about 1 TeV, right around the sweet spot for detection at the LHC. This dissertation
The Standard Solar Model versus Experimental Observations
NASA Astrophysics Data System (ADS)
Manuel, O.
2000-12-01
The standard solar model (ssm) assumes that the Sun formed as a homogeneous body, that its interior consists mostly of hydrogen, and that its radiant energy comes from H-fusion in its core. Two sets of measurements indicate the ssm is wrong: 1. Analyses of material in the planetary system show that - (a) Fe, O, Ni, Si, Mg, S and Ca have high nuclear stability and comprise 98+% of ordinary meteorites that formed at the birth of the solar system; (b) the cores of inner planets formed in a central region consisting mostly of heavy elements like Fe, Ni and S; (c) the outer planets formed mostly from elements like H, He and C; and (d) isotopic heterogeneities accompanied these chemical gradients in debris of the supernova that exploded here 5 billion years ago to produce the solar system (See Origin of the Elements at http://www.umr.edu/õm/). 2. Analyses of material coming from the Sun show that - (a) there are not enough neutrinos for H-fusion to be its main source of energy; (b) light-weight isotopes (mass = L) of He, Ne, Ar, Kr and Xe in the solar wind are enriched relative to heavy isotopes (mass = H) by a factor, f, where log f = 4.56 log (H/L) [Eq. (1)]; (c) solar flares by-pass 3.4 of these 9 stages of diffusion and deplete the light-weight isotopes of He, Ne, Mg and Ar by a factor, f*, where log f* = -1.7 log (H/L) [Eq. (2)]; (d) proton-capture on N-14 increased N-15 in the solar wind over geologic time; and (e) solar flares dredge up nitrogen with less N-15 from this H-fusion reaction. Each observation above is unexplained by the ssm. After correcting photospheric abundances for diffusion [Observation 2(b)], the most abundant elements in the bulk Sun are Fe, Ni, O, Si, S, Mg and Ca, the same elements that comprise ordinary meteorites [Observation 1(a)]. The probability that Eq. (1) would randomly select these elements from the photosphere, i.e., the likelihood of a meaningless agreement between observations 2(b) and 1(a), is < 2.0E-33. Thus, ssm does not describe the
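The two fractionation laws quoted in this abstract, Eq. (1) for solar-wind enrichment and Eq. (2) for solar-flare depletion, are simple power laws in the heavy-to-light mass ratio H/L. A minimal sketch using only the coefficients stated in the abstract (the function names and the Xe example are my own):

```python
import math

def enrichment_factor(m_heavy, m_light):
    """Eq. (1): log f = 4.56 log(H/L).

    Solar-wind enrichment factor of the light isotope relative to the
    heavy one, with the coefficient as quoted in the abstract."""
    return 10 ** (4.56 * math.log10(m_heavy / m_light))

def flare_depletion_factor(m_heavy, m_light):
    """Eq. (2): log f* = -1.7 log(H/L).

    Depletion factor observed in solar-flare material, per the abstract."""
    return 10 ** (-1.7 * math.log10(m_heavy / m_light))
```

For example, for the Xe-136/Xe-124 pair (H/L = 136/124), Eq. (1) gives f of roughly 1.52, i.e. a ~52% enrichment across that mass span.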
Primordial lithium and the standard model(s)
NASA Technical Reports Server (NTRS)
Deliyannis, Constantine P.; Demarque, Pierre; Kawaler, Steven D.; Romanelli, Paul; Krauss, Lawrence M.
1989-01-01
The results of new theoretical work on surface Li-7 and Li-6 evolution in the oldest halo stars are presented, along with a new and refined analysis of the predicted primordial Li abundance resulting from big-bang nucleosynthesis. This makes it possible to determine the constraints which can be imposed on cosmology using primordial Li and both standard big-bang and stellar-evolution models. This leads to limits on the baryon density today of 0.0044-0.025 (where the Hubble constant is 100h km/s/Mpc) and imposes limitations on alternative nucleosynthesis scenarios.
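The quoted baryon-density window of 0.0044-0.025 is expressed as a fraction of the critical density, which in turn depends on the Hubble parameter h. A hedged sketch of the standard relation rho_crit = 3 H0^2 / (8 pi G) (the constants and function name here are mine, not from the paper):

```python
import math

# Physical constants (SI)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22    # one megaparsec in metres

def critical_density(h):
    """rho_crit = 3 H0^2 / (8 pi G), with H0 = 100 h km/s/Mpc.

    Returns the critical density in kg/m^3."""
    H0 = 100.0e3 * h / MPC                     # convert to s^-1
    return 3.0 * H0**2 / (8.0 * math.pi * G)
```

For h = 1 this gives about 1.88e-26 kg/m^3; the paper's window then corresponds to a baryon mass density between 0.0044 and 0.025 times this value (scaled by h^2).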
42 CFR 403.210 - NAIC model standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false NAIC model standards. 403.210 Section 403.210 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS SPECIAL PROGRAMS AND PROJECTS Medicare Supplemental Policies General Provisions § 403.210 NAIC model standards. (a) NAIC model...
Modeling and Simulation Network Data Standards
2011-09-30
This report, covering tools including the Joint Network Analysis Tool (JNAT) and OPNET, was produced with the Architecture Integration Management Division (AIMD) and the Army Materiel Systems Analysis Activity; it establishes a baseline for developing enhancements to data transfers in future projects. Subject terms: AWARS, COMBATXXI, JNAT, OPNET, network data standards, M&S.
Template and Model Driven Development of Standardized Electronic Health Records.
Kropf, Stefan; Chalopin, Claire; Denecke, Kerstin
2015-01-01
Digital patient modeling targets the integration of distributed patient data into one overarching model. For this integration process, both a theoretical standard-based model and information structures combined with concrete instructions in form of a lightweight development process of single standardized Electronic Health Records (EHRs) are needed. In this paper, we introduce such a process along side a standard-based architecture. It allows the modeling and implementation of EHRs in a lightweight Electronic Health Record System (EHRS) core. The approach is demonstrated and tested by a prototype implementation. The results show that the suggested approach is useful and facilitates the development of standardized EHRSs.
Standard solar model. II - g-modes
NASA Technical Reports Server (NTRS)
Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.
1992-01-01
The paper presents g-mode oscillation periods for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is estimated, and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).
New results on standard solar models
NASA Astrophysics Data System (ADS)
Serenelli, Aldo M.
2010-07-01
We describe the current status of solar modeling and focus on the problems originating with the introduction of solar abundance determinations with low CNO abundance values. We use models computed with solar abundance compilations obtained during the last decade, including the newest published abundances by Asplund and collaborators. The results presented here focus on both helioseismic properties and models as well as neutrino flux predictions. We also discuss changes in radiative opacities to restore agreement between helioseismology, solar models, and solar abundances and show the effect of such modifications on solar neutrino fluxes.
Particle Physics Primer: Explaining the Standard Model of Matter.
ERIC Educational Resources Information Center
Vondracek, Mark
2002-01-01
Describes the Standard Model, a basic model of the universe that describes electromagnetic force, weak nuclear force radioactivity, and the strong nuclear force responsible for holding particles within the nucleus together. (YDS)
Creating Better School-Age Care Jobs: Model Work Standards.
ERIC Educational Resources Information Center
Haack, Peggy
Built on the premise that good school-age care jobs are the cornerstone of high-quality services for school-age youth and their families, this guide presents model work standards for school-age care providers. The guide begins with a description of the strengths and challenges of the school-age care profession. The model work standards are…
The Standard Model from LHC to future colliders.
Forte, S; Nisati, A; Passarino, G; Tenchini, R; Carloni Calame, C M; Chiesa, M; Cobal, M; Corcella, G; Degrassi, G; Ferrera, G; Magnea, L; Maltoni, F; Montagna, G; Nason, P; Nicrosini, O; Oleari, C; Piccinini, F; Riva, F; Vicini, A
This review summarizes the results of the activities which have taken place in 2014 within the Standard Model Working Group of the "What Next" Workshop organized by INFN, Italy. We present a framework, general questions, and some indications of possible answers on the main issue for Standard Model physics in the LHC era and in view of possible future accelerators.
Energy standards and model codes development, adoption, implementation, and enforcement
Conover, D.R.
1994-08-01
This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
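The recursive decomposition of goals into simpler commands can be sketched as a tree expansion terminating at actuator-level primitives. All goal names below are hypothetical; this is an illustration of the scheme, not NASREM code:

```python
# Hypothetical decomposition table: each goal maps to simpler subgoals
# for the next lower level; goals absent from the table are treated as
# primitive actuator-level commands.
DECOMPOSITION = {
    "service satellite": ["approach", "grasp", "replace module"],
    "approach": ["plan path", "execute path"],
    "grasp": ["open gripper", "move to target", "close gripper"],
}

def decompose(goal):
    """Recursively expand a goal into its ordered primitive commands."""
    subgoals = DECOMPOSITION.get(goal)
    if subgoals is None:          # leaf: an actuator-level command
        return [goal]
    commands = []
    for sub in subgoals:
        commands.extend(decompose(sub))
    return commands
```

In the full architecture each level would also consult the world model before expanding a goal; that dependency is omitted here for brevity.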
Big bang nucleosynthesis - The standard model and alternatives
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
The standard homogeneous-isotropic calculation of the big bang cosmological model is reviewed, and alternate models are discussed. The standard model is shown to agree with the light element abundances for He-4, H-2, He-3, and Li-7 that are available. Improved observational data from recent LEP collider and SLC results are discussed. The data agree with the standard model in terms of the number of neutrinos, and provide improved information regarding neutron lifetimes. Alternate models are reviewed which describe different scenarios for decaying matter or quark-hadron induced inhomogeneities. The baryonic density relative to the critical density in the alternate models is similar to that of the standard model when they are made to fit the abundances. This reinforces the conclusion that the baryonic density relative to critical density is about 0.06, and also reinforces the need for both nonbaryonic dark matter and dark baryonic matter.
Standardization of A Physiologic Hypoparathyroidism Animal Model
Jung, Soo Yeon; Kim, Ha Yeong; Park, Hae Sang; Yin, Xiang Yun; Chung, Sung Min; Kim, Han Su
2016-01-01
Ideal hypoparathyroidism animal models are a prerequisite to developing new treatment modalities for this disorder. The purpose of this study was to evaluate the feasibility of a model whereby rats were parathyroidectomized (PTX) using a fluorescent-identification method and the ideal calcium content of the diet was determined. Thirty male rats were divided into surgical sham (SHAM, n = 5) and PTX plus 0, 0.5, and 2% calcium diet groups (PTX-FC (n = 5), PTX-NC (n = 10), and PTX-HC (n = 10), respectively). Serum parathyroid hormone levels decreased to non-detectable levels in all PTX groups. All animals in the PTX-FC group died within 4 days after the operation. All animals survived when supplied calcium in the diet. However, serum calcium levels were higher in the PTX-HC than the SHAM group. The PTX-NC group demonstrated the most representative modeling of primary hypoparathyroidism. Serum calcium levels decreased and phosphorus levels increased, and bone volume was increased. All animals survived without further treatment and did not show nephrotoxicity including calcium deposits. These findings demonstrate that PTX animal models produced by using the fluorescent-identification method, and fed a 0.5% calcium diet, are appropriate for hypoparathyroidism treatment studies. PMID:27695051
Physics Beyond the Standard Model: Supersymmetry
Nojiri, M.M.; Plehn, T.; Polesello, G.; Alexander, John M.; Allanach, B.C.; Barr, Alan J.; Benakli, K.; Boudjema, F.; Freitas, A.; Gwenlan, C.; Jager, S.; /CERN /LPSC, Grenoble
2008-02-01
This collection of studies on new physics at the LHC constitutes the report of the supersymmetry working group at the Workshop 'Physics at TeV Colliders', Les Houches, France, 2007. They cover the wide spectrum of phenomenology in the LHC era, from alternative models and signatures to the extraction of relevant observables, the study of the MSSM parameter space and finally to the interplay of LHC observations with additional data expected on a similar time scale. The special feature of this collection is that while not each of the studies is explicitly performed together by theoretical and experimental LHC physicists, all of them were inspired by and discussed in this particular environment.
Sporulation in Bacteria: Beyond the Standard Model.
Hutchison, Elizabeth A; Miller, David A; Angert, Esther R
2014-10-01
Endospore formation follows a complex, highly regulated developmental pathway that occurs in a broad range of Firmicutes. Although Bacillus subtilis has served as a powerful model system to study the morphological, biochemical, and genetic determinants of sporulation, fundamental aspects of the program remain mysterious for other genera. For example, it is entirely unknown how most lineages within the Firmicutes regulate entry into sporulation. Additionally, little is known about how the sporulation pathway has evolved novel spore forms and reproductive schemes. Here, we describe endospore and internal offspring development in diverse Firmicutes and outline progress in characterizing these programs. Moreover, comparative genomics studies are identifying highly conserved sporulation genes, and predictions of sporulation potential in new isolates and uncultured bacteria can be made from these data. One surprising outcome of these comparative studies is that core regulatory and some structural aspects of the program appear to be universally conserved. This suggests that a robust and sophisticated developmental framework was already in place in the last common ancestor of all extant Firmicutes that produce internal offspring or endospores. The study of sporulation in model systems beyond B. subtilis will continue to provide key information on the flexibility of the program and provide insights into how changes in this developmental course may confer advantages to cells in diverse environments.
Standardized verification of fuel cycle modeling
Feng, B.; Dixon, B.; Sunny, E.; Cuadra, A.; Jacobson, J.; Brown, N. R.; Powers, J.; Worrall, A.; Passerini, S.; Gregg, R.
2016-04-05
A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis codes matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.
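The year-by-year spreadsheet cross-check described in this abstract can be sketched as a tolerance comparison of mass flows between two codes. This is an illustrative reconstruction, not the actual verification harness; the data layout, function name, and tolerance are assumptions:

```python
def compare_mass_flows(results_a, results_b, rel_tol=1e-3):
    """Compare year-by-year mass flows from two fuel-cycle codes.

    results_a / results_b: dict mapping year -> {stream name: mass}.
    Returns a list of (year, stream, value_a, value_b) tuples for every
    flow whose relative difference exceeds rel_tol."""
    mismatches = []
    for year in sorted(set(results_a) | set(results_b)):
        flows_a = results_a.get(year, {})
        flows_b = results_b.get(year, {})
        for stream in sorted(set(flows_a) | set(flows_b)):
            a = flows_a.get(stream, 0.0)
            b = flows_b.get(stream, 0.0)
            scale = max(abs(a), abs(b), 1e-12)  # guard against zero flows
            if abs(a - b) / scale > rel_tol:
                mismatches.append((year, stream, a, b))
    return mismatches
```

As the abstract notes, discrepancies surfaced by such a comparison may reflect differing interpretations of the input specification rather than code bugs, so each mismatch has to be traced back to its cause.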
Ji, Danfeng; Xi, Beidou; Su, Jing; Huo, Shouliang; He, Li; Liu, Hongliang; Yang, Queping
2013-09-01
Lake eutrophication (LE) has become an increasingly severe environmental problem recently. However, there has been no nutrient standard established for LE control in many developing countries such as China. This study proposes a structural equation model to assist in the establishment of a lake nutrient standard for drinking water sources in Yunnan-Guizhou Plateau Ecoregion (Yungui Ecoregion), China. The modeling results indicate that the most predictive indicator for designated use-attainment is total phosphorus (TP) (total effect = -0.43), and chlorophyll a (Chl-a) is recommended as the second important indicator (total effect = -0.41). The model is further used for estimating the probability of use-attainment associated with lake water as a drinking water source and various levels of candidate criteria (based on the reference conditions and the current environmental quality standards for surface water). It is found that these candidate criteria cannot satisfy the designated 100% use-attainment. To achieve the short-term target (85% attainment of the designated use), TP and Chl-a values ought to be less than 0.02 mg/L and 1.4 μg/L, respectively. When used as a long-term target (90% or greater attainment of the designated use), the TP and Chl-a values are suggested to be less than 0.018 mg/L and 1 μg/L, respectively.
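The candidate criteria in this abstract reduce to simple threshold tests on TP and Chl-a. A minimal sketch, with the thresholds taken from the abstract but the function and its interface my own:

```python
def meets_criteria(tp_mg_per_l, chla_ug_per_l, target="short"):
    """Check nutrient values against the thresholds quoted in the abstract.

    "short": the 85% use-attainment target (TP < 0.02 mg/L, Chl-a < 1.4 ug/L).
    "long":  the >=90% target (TP < 0.018 mg/L, Chl-a < 1.0 ug/L)."""
    limits = {"short": (0.02, 1.4), "long": (0.018, 1.0)}
    tp_max, chla_max = limits[target]
    return tp_mg_per_l < tp_max and chla_ug_per_l < chla_max
```

Note that the underlying structural equation model estimates a continuous probability of use-attainment; the thresholds above are the specific operating points the authors recommend.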
Colorado Model Content Standards for Theatre: Suggested Grade Level Expectations.
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver.
This booklet lists six model content standards in theater arts for elementary and secondary school students in the state of Colorado. The six standards cited in the booklet are: (1) Students develop interpersonal skills and problem-solving capabilities through group interaction and artistic collaboration; (2) Students understand and apply the…
A Repository for Beyond-the-Standard-Model Tools
Skands, P.; Richardson, P.; Allanach, B.C.; Baer, H.; Belanger, G.; El Kacimi, M.; Ellwanger, U.; Freitas, A.; Ghodbane, N.; Goujdami, D.; Hahn, T.; Heinemeyer, S.; Kneur, J.-L.; Landsberg, G.; Lee, J.S.; Muhlleitner, M.; Ohl, T.; Perez, E.; Peskin, M.; Pilaftsis, A.; Plehn, T.
2005-05-01
To aid phenomenological studies of Beyond-the-Standard-Model (BSM) physics scenarios, a web repository for BSM calculational tools has been created. We here present brief overviews of the relevant codes, ordered by topic as well as by alphabet.
Enhancements to ASHRAE Standard 90.1 Prototype Building Models
Goel, Supriya; Athalye, Rahul A.; Wang, Weimin; Zhang, Jian; Rosenberg, Michael I.; Xie, YuLong; Hart, Philip R.; Mendon, Vrushali V.
2014-04-16
This report focuses on enhancements to prototype building models used to determine the energy impact of various versions of ANSI/ASHRAE/IES Standard 90.1. Since the last publication of the prototype building models, PNNL has made numerous enhancements to the original prototype models compliant with the 2004, 2007, and 2010 editions of Standard 90.1. Those enhancements are described here and were made for several reasons: (1) to change or improve prototype design assumptions; (2) to improve the simulation accuracy; (3) to improve the simulation infrastructure; and (4) to add additional detail to the models needed to capture certain energy impacts from Standard 90.1 improvements. These enhancements impact simulated prototype energy use, and consequently impact the savings estimated from edition to edition of Standard 90.1.
The standard data model approach to patient record transfer.
Canfield, K; Silva, M; Petrucci, K
1994-01-01
This paper develops an approach to electronic data exchange of patient records from Ambulatory Encounter Systems (AESs). The approach assumes that the AES is based upon a standard data model; the data modeling standard used here is IDEF1X for Entity/Relationship (E/R) modeling. Each site that uses a relational database implementation of this standard data model (or a subset of it) can exchange very detailed patient data with other such sites using industry-standard tools and without excessive programming effort. This design is detailed below for a demonstration project between the research-oriented geriatric clinic at the Baltimore Veterans Affairs Medical Center (BVAMC) and the Laboratory for Healthcare Informatics (LHI) at the University of Maryland.
Standard Model of Particle Physics--a health physics perspective.
Bevelacqua, J J
2010-11-01
The Standard Model of Particle Physics is reviewed with an emphasis on its relationship to the physics supporting the health physics profession. Concepts important to health physics are emphasized and specific applications are presented. The capability of the Standard Model to provide health physics relevant information is illustrated with application of conservation laws to neutron and muon decay and in the calculation of the neutron mean lifetime.
NASA Standard for Models and Simulations: Philosophy and Requirements Overview
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.
2009-01-01
Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.
NASA Standard for Models and Simulations: Philosophy and Requirements Overview
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.
2013-01-01
Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.
Sustainable model building: the role of standards and biological semantics.
Krause, Falko; Schulz, Marvin; Swainston, Neil; Liebermeister, Wolfram
2011-01-01
Systems biology models can be reused within new simulation scenarios, as parts of more complex models or as sources of biochemical knowledge. Reusability does not come by itself but has to be ensured while creating a model. Most important, models should be designed to remain valid in different contexts-for example, for different experimental conditions-and be published in a standardized and well-documented form. Creating reusable models is worthwhile, but it requires some efforts when a model is developed, implemented, documented, and published. Minimum requirements for published systems biology models have been formulated by the MIRIAM initiative. Main criteria are completeness of information and documentation, availability of machine-readable models in standard formats, and semantic annotations connecting the model elements with entries in biological Web resources. In this chapter, we discuss the assumptions behind bottom-up modeling; present important standards like MIRIAM, the Systems Biology Markup Language (SBML), and the Systems Biology Graphical Notation (SBGN); and describe software tools and services for handling semantic annotations. Finally, we show how standards can facilitate the construction of large metabolic network models.
Bounce inflation cosmology with Standard Model Higgs boson
Wan, Youping; Huang, Fa Peng; Zhang, Xinmin; Qiu, Taotao; Cai, Yi-Fu; Li, Hong E-mail: qiutt@mail.ccnu.edu.cn E-mail: yifucai@ustc.edu.cn E-mail: xmzhang@ihep.ac.cn
2015-12-01
It is of great interest to connect cosmology in the early universe to the Standard Model of particle physics. In this paper, we construct a bounce inflation model with the Standard Model Higgs boson, where the one-loop correction is taken into account in the effective potential of the Higgs field. In this model, a Galileon term is introduced to eliminate the ghost mode when the bounce happens. Moreover, because the fermion loop correction can make part of the Higgs potential negative, one naturally obtains a large equation-of-state (EoS) parameter in the contracting phase, which can eliminate the anisotropy problem. After the bounce, the model drives the universe into the standard Higgs inflation phase, which generates a nearly scale-invariant power spectrum.
Animal Models of Tourette Syndrome—From Proliferation to Standardization
Yael, Dorin; Israelashvili, Michal; Bar-Gad, Izhar
2016-01-01
Tourette syndrome (TS) is a childhood onset disorder characterized by motor and vocal tics and associated with multiple comorbid symptoms. Over the last decade, the accumulation of findings from TS patients and the emergence of new technologies have led to the development of novel animal models with high construct validity. In addition, animal models which were previously associated with other disorders were recently attributed to TS. The proliferation of TS animal models has accelerated TS research and provided a better understanding of the mechanism underlying the disorder. This newfound success generates novel challenges, since the conclusions that can be drawn from TS animal model studies are constrained by the considerable variation across models. Typically, each animal model examines a specific subset of deficits and centers on one field of research (physiology/genetics/pharmacology/etc.). Moreover, different studies do not use a standard lexicon to characterize different properties of the model. These factors hinder the evaluation of individual model validity as well as the comparison across models, leading to the formation of a fuzzy, segregated landscape of TS pathophysiology. Here, we call for a standardization process in the study of TS animal models as the next logical step. We believe that the generation of standard examination criteria will improve the utility of these models and enable their consolidation into a general framework. This should lead to a better understanding of these models and their relationship to TS, thereby improving the research of the mechanism underlying this disorder and aiding the development of new treatments. PMID:27065791
Improving automation standards via semantic modelling: Application to ISA88.
Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès
2017-03-01
Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software, and they rely on the efficient modelling of the addressed systems. The work presented here is part of the ongoing development of a methodology for semi-automatic ontology construction from technical documents. Its main aim is to systematically check the consistency of technical documents and support its improvement. The formalization of conceptual models and the subsequent writing of technical standards are analyzed together, and guidelines are proposed for application to future technical standards. Three paradigms for the development of domain ontologies from technical documents are discussed, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the paradigm suggested for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, and presents the systematic consistency-checking method.
A Standard Kinematic Model for Flight Simulation at NASA Ames
NASA Technical Reports Server (NTRS)
Mcfarland, R. E.
1975-01-01
A standard kinematic model for aircraft simulation exists at NASA-Ames on a variety of computer systems, one of which is used to control the flight simulator for advanced aircraft (FSAA). The derivation of the kinematic model is given and various mathematical relationships are presented as a guide. These include descriptions of standardized simulation subsystems such as the atmospheric turbulence model and the generalized six-degrees-of-freedom trim routine, as well as an introduction to the emulative batch-processing system which enables this facility to optimize its real-time environment.
Tevatron searches for Higgs bosons beyond the standard model
Nielsen, Jason; /UC, Santa Cruz
2007-06-01
Theoretical frameworks beyond the standard model predict a rich Higgs sector with multiple charged and neutral Higgs bosons. Both the CDF II and D0 experiments at the Tevatron have analyzed 1 fb⁻¹ of p p̄ collisions at √s = 1.96 TeV in search of Higgs boson production. A complete suite of results on searches for neutral, charged, and fermiophobic Higgs bosons limits the allowed production rates and constrains extended models, including the minimal supersymmetric standard model.
Informatics in radiology: an information model of the DICOM standard.
Kahn, Charles E; Langlotz, Curtis P; Channin, David S; Rubin, Daniel L
2011-01-01
The Digital Imaging and Communications in Medicine (DICOM) Standard is a key foundational technology for radiology. However, its complexity creates challenges for information system developers because the current DICOM specification requires human interpretation and is subject to nonstandard implementation. To address this problem, a formally sound and computationally accessible information model of the DICOM Standard was created. The DICOM Standard was modeled as an ontology, a machine-accessible and human-interpretable representation that may be viewed and manipulated by information-modeling tools. The DICOM Ontology includes a real-world model and a DICOM entity model. The real-world model describes patients, studies, images, and other features of medical imaging. The DICOM entity model describes connections between real-world entities and the classes that model the corresponding DICOM information entities. The DICOM Ontology was created to support the Cancer Biomedical Informatics Grid (caBIG) initiative, and it may be extended to encompass the entire DICOM Standard and serve as a foundation of medical imaging systems for research and patient care.
Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard
ERIC Educational Resources Information Center
Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.
2017-01-01
This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…
Kaon physics: Probing the standard model and beyond
Tschirhart, R.; /Fermilab
2009-01-01
The status and prospects of current and future kaon physics experiments are discussed. Both precision measurements and the search for and measurement of ultra-rare decays are powerful probes of many models of new physics beyond the Standard Model. The physics reach of these experiments is briefly discussed.
Non-standard models and the sociology of cosmology
NASA Astrophysics Data System (ADS)
López-Corredoira, Martín
2014-05-01
I review some theoretical ideas in cosmology different from the standard "Big Bang": the quasi-steady state model, the plasma cosmology model, non-cosmological redshifts, alternatives to non-baryonic dark matter and/or dark energy, and others. Cosmologists do not usually work within the framework of alternative cosmologies because they feel that these are not at present as competitive as the standard model. Certainly, they are not so developed, and they are not so developed because cosmologists do not work on them. It is a vicious circle. The fact that most cosmologists do not pay them any attention and only dedicate their research time to the standard model is to a great extent due to a sociological phenomenon (the "snowball effect" or "groupthink"). We might well wonder whether cosmology, our knowledge of the Universe as a whole, is a science like other fields of physics or a predominant ideology.
Conformal Loop quantization of gravity coupled to the standard model
NASA Astrophysics Data System (ADS)
Pullin, Jorge; Gambini, Rodolfo
2016-03-01
We consider a locally conformal-invariant coupling of the Standard Model to gravity that is free of any dimensional parameter. The theory is formulated so as to have a quantized version that admits a spin network description at the kinematical level, like that of loop quantum gravity. The Gauss constraint, the diffeomorphism constraint, and the conformal constraint are automatically satisfied, and the standard inner product of the spin-network basis still holds. The resulting theory resembles the Bars-Steinhardt-Turok local conformal theory, except that it admits a canonical quantization in terms of loops. By considering a gauge-fixed version of the theory we show that the Standard Model coupled to gravity is recovered and the Higgs boson acquires mass. This in turn induces, via the standard mechanism, masses for the massive bosons, baryons, and leptons.
Explore Physics Beyond the Standard Model with GLAST
Lionetto, A. M.
2007-07-12
We give an overview of the potential of GLAST to explore theories beyond the Standard Model of particle physics. Among the wide taxonomy of such theories, we focus in particular on low-scale supersymmetry and theories with extra space-time dimensions. These theories provide a suitable dark matter candidate whose interactions and composition can be studied using a gamma-ray probe. We show the possibility for GLAST to disentangle such exotic signals from a standard production background.
NASA Standard for Models and Simulations: Credibility Assessment Scale
NASA Technical Reports Server (NTRS)
Babula, Maria; Bertch, William J.; Green, Lawrence L.; Hale, Joseph P.; Mosier, Gary E.; Steele, Martin J.; Woods, Jody
2009-01-01
As one of its many responses to the 2003 Space Shuttle Columbia accident, NASA decided to develop a formal standard for models and simulations (M&S). Work commenced in May 2005. An interim version was issued in late 2006. This interim version underwent considerable revision following an extensive Agency-wide review in 2007, along with some additional revisions as a result of the review by the NASA Engineering Management Board (EMB) in the first half of 2008. Issuance of the revised, permanent version, hereafter referred to as the M&S Standard or just the Standard, occurred in July 2008. Bertch, Zang, and Steele provided a summary review of the development process of this standard up through the start of the review by the EMB. A thorough recounting of the entire development process, major issues, key decisions, and all review processes is available in Ref. v. This is the second of a pair of papers providing a summary of the final version of the Standard. Its focus is the Credibility Assessment Scale, a key feature of the Standard, including an example of its application to a real-world M&S problem for the James Webb Space Telescope. The companion paper summarizes the overall philosophy of the Standard and an overview of the requirements. Verbatim quotes from the Standard are integrated into the text of this paper, and are indicated by quotation marks.
Prediction of Standard Enthalpy of Formation by a QSPR Model
Vatani, Ali; Mehrpooya, Mehdi; Gharagheizi, Farhad
2007-01-01
The standard enthalpies of formation of 1115 compounds from all chemical groups were predicted using genetic algorithm-based multivariate linear regression (GA-MLR). The five-descriptor multivariate linear model obtained by GA-MLR has a squared correlation coefficient of R² = 0.9830. All molecular descriptors entering this model are calculated from the chemical structure of the molecule. As a result, application of this model to any compound is easy and accurate.
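As a rough illustration of the regression step described in this abstract (the paper's five GA-selected descriptors and data are not given here, so the descriptor values and coefficients below are entirely hypothetical), the MLR stage of GA-MLR reduces to an ordinary least-squares fit of a five-descriptor linear model, followed by computation of R²:

```python
# Sketch of the MLR stage of GA-MLR: fit enthalpy ~ five descriptors by
# ordinary least squares, then report the squared correlation coefficient.
# Descriptor values and coefficients are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_compounds, n_descriptors = 200, 5
X = rng.normal(size=(n_compounds, n_descriptors))     # hypothetical descriptor matrix
true_coefs = np.array([12.0, -3.5, 7.1, 0.8, -9.4])   # hypothetical "true" effects
y = X @ true_coefs + 2.0 + rng.normal(scale=0.5, size=n_compounds)  # synthetic enthalpies

# Least-squares fit of y = b0 + b1*x1 + ... + b5*x5
A = np.column_stack([np.ones(n_compounds), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Squared correlation coefficient, the R^2 figure of merit quoted in the abstract
y_hat = A @ coefs
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

The genetic algorithm's role, not shown here, is only to choose which five descriptors enter `X` out of the full descriptor pool.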
Peer Review of NRC Standardized Plant Analysis Risk Models
Anthony Koonce; James Knudsen; Robert Buell
2011-03-01
The Nuclear Regulatory Commission (NRC) Standardized Plant Analysis Risk (SPAR) models underwent a peer review using the ASME PRA standard (Addendum C) as endorsed by NRC in Regulatory Guide (RG) 1.200. The review was performed by a mix of industry probabilistic risk analysis (PRA) experts and NRC PRA experts. Representative SPAR models, one PWR and one BWR, were reviewed against Capability Category I of the ASME PRA standard. Capability Category I was selected as the basis for review due to the specific uses/applications of the SPAR models. The BWR SPAR model was reviewed against 331 ASME PRA Standard supporting requirements; however, based on the Capability Category I level of review and the absence of internal flooding and containment performance (LERF) logic, only 216 requirements were determined to be applicable. Based on the review, the BWR SPAR model met 139 of the 216 supporting requirements. The review also generated 200 findings or suggestions: 142 findings and 58 suggestions. The PWR SPAR model was evaluated against the same 331 ASME PRA Standard supporting requirements. Of these, only 215 were deemed appropriate for the review (for the same reason as noted for the BWR). The PWR review determined that 125 of the 215 supporting requirements met Capability Category I or greater, and it identified 101 findings or suggestions (76 findings and 25 suggestions). These findings and suggestions identify areas where the SPAR models could be enhanced. A process to prioritize the findings and suggestions and incorporate them into the SPAR models is being developed; the prioritization focuses on those findings that will enhance the accuracy, completeness, and usability of the SPAR models.
Lee-Wick standard model at finite temperature
NASA Astrophysics Data System (ADS)
Lebed, Richard F.; Long, Andrew J.; TerBeek, Russell H.
2013-10-01
The Lee-Wick Standard Model at temperatures near the electroweak scale is considered, with the aim of studying the electroweak phase transition. While Lee-Wick theories possess states of negative norm, they are not pathological but instead are treated by imposing particular boundary conditions and using particular integration contours in the calculation of S-matrix elements. It is not immediately clear how to extend this prescription to formulate the theory at finite temperature; we explore two different pictures of finite-temperature Lee-Wick theories, and calculate the thermodynamic variables and the (one-loop) thermal effective potential. We apply these results to study the Lee-Wick Standard Model and find that the electroweak phase transition is a continuous crossover, much like in the Standard Model. However, the high-temperature behavior is modified due to cancellations between thermal corrections arising from the negative- and positive-norm states.
Test of a Power Transfer Model for Standardized Electrofishing
Miranda, L.E.; Dolan, C.R.
2003-01-01
Standardization of electrofishing in waters with differing conductivities is critical when monitoring temporal and spatial differences in fish assemblages. We tested a model that can help improve the consistency of electrofishing by allowing control over the amount of power that is transferred to the fish. The primary objective was to verify, under controlled laboratory conditions, whether the model adequately described fish immobilization responses elicited with various electrical settings over a range of water conductivities. We found that the model accurately described empirical observations over conductivities ranging from 12 to 1,030 μS/cm for DC and various pulsed-DC settings. Because the model requires knowledge of a fish's effective conductivity, an attribute that is likely to vary according to species, size, temperature, and other variables, a second objective was to gather available estimates of the effective conductivity of fish to examine the magnitude of variation and to assess whether in practical applications a standard effective conductivity value for fish may be assumed. We found that applying a standard fish effective conductivity of 115 μS/cm introduced relatively little error into the estimation of the peak power density required to immobilize fish with electrofishing. However, this standard was derived from few estimates of fish effective conductivity and a limited number of species; more estimates are needed to validate our working standard.
Non-standard Hubbard models in optical lattices: a review.
Dutta, Omjyoti; Gajda, Mariusz; Hauke, Philipp; Lewenstein, Maciej; Lühmann, Dirk-Sören; Malomed, Boris A; Sowiński, Tomasz; Zakrzewski, Jakub
2015-06-01
Originally, the Hubbard model was derived to describe the behavior of strongly correlated electrons in solids. However, for over a decade now, variations of it have also routinely been implemented with ultracold atoms in optical lattices, allowing their study in a clean, essentially defect-free environment. Here, we review some of the vast literature on this subject, with a focus on more recent non-standard forms of the Hubbard model. After giving an introduction to standard (fermionic and bosonic) Hubbard models, we briefly discuss common models for mixtures, as well as the so-called extended Bose-Hubbard models, which include interactions between neighboring sites, next-neighbor sites, and so on. The main part of the review discusses the importance of additional terms appearing when refining the tight-binding approximation for the original physical Hamiltonian. Even when restricting the models to the lowest Bloch band is justified, the standard approach neglects the density-induced tunneling (which has the same origin as the usual on-site interaction). The importance of these contributions is discussed for both contact and dipolar interactions. For sufficiently strong interactions, the effects related to higher Bloch bands also become important, even for deep optical lattices. Different approaches that aim at incorporating these effects, mainly via dressing the basis Wannier functions with interactions, leading to effective, density-dependent Hubbard-type models, are reviewed. We also discuss examples of Hubbard-like models that explicitly involve higher p orbitals, as well as models that dynamically couple spin and orbital degrees of freedom. Finally, we review mean-field nonlinear Schrödinger models of the Salerno type that share with the non-standard Hubbard models a nonlinear coupling between adjacent sites. In that part, discrete solitons are the main subject of consideration. We conclude by listing some open problems to be addressed in the future.
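For orientation, a minimal form of the extended Bose-Hubbard Hamiltonian that this abstract refers to (standard hopping t and on-site interaction U, plus the nearest-neighbour interaction V and the density-induced tunneling T it highlights; sign and normalization conventions vary between papers) can be written as:

```latex
H = -t \sum_{\langle i,j \rangle} \left( \hat b_i^\dagger \hat b_j + \mathrm{h.c.} \right)
    + \frac{U}{2} \sum_i \hat n_i \left( \hat n_i - 1 \right)
    + V \sum_{\langle i,j \rangle} \hat n_i \hat n_j
    - T \sum_{\langle i,j \rangle} \left[ \hat b_i^\dagger \left( \hat n_i + \hat n_j \right) \hat b_j + \mathrm{h.c.} \right]
```

Here $\hat b_i^\dagger$, $\hat b_i$, and $\hat n_i = \hat b_i^\dagger \hat b_i$ are the bosonic creation, annihilation, and number operators on site $i$, and $\langle i,j \rangle$ runs over nearest-neighbour pairs; setting $V = T = 0$ recovers the standard Bose-Hubbard model.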
Bianco, Simone; Corsi, Fulvio; Renò, Roberto
2009-01-01
We study the relation at the intraday level between serial correlation and volatility of the Standard and Poor's (S&P) 500 stock index futures returns. At daily and weekly levels, serial correlation and volatility forecasts have been found to be negatively correlated (the LeBaron effect). After finding a significant attenuation of the original effect over time, we show that a similar but more pronounced effect holds when using intraday measures such as realized volatility and the variance ratio. We also test the impact of unexpected volatility, defined as the part of volatility that cannot be forecasted, on the presence of intraday serial correlation in the time series by employing a model for realized volatility based on the heterogeneous market hypothesis. We find that intraday serial correlation is negatively correlated with volatility forecasts, whereas it is positively correlated with unexpected volatility.
GIS-based RUSLE modelling of Leça River Basin, Northern Portugal, in two different grid scales
NASA Astrophysics Data System (ADS)
Petan, S.; Barbosa, J. L. P.; Mikoš, M.; Pinto, F. T.
2009-04-01
Soil erosion is the mechanical degradation of soil caused by natural forces and is also influenced by human activities. The biggest threats are the related loss of fertile soil for food production and disturbances of aquatic ecosystems, which could unbalance the environment on a wider scale. Thus, precise predictions of soil erosion processes are of major importance for preventing any kind of environmental degradation. Spatial GIS modelling and erosion maps greatly support policymaking for land planning and environmental management. The Leça River Basin, with an area of 187 km², is located in the northern part of Portugal and was chosen for testing the RUSLE methodology for soil loss prediction and for identifying areas with high potential erosion. The model involves daily rainfall data for rainfall erosivity estimation, topographic data for slope length and steepness factor calculation, soil type data, and CORINE land cover and land use data. The raster layer model was structured at two different scales: with grid cell sizes of 10 and 30 meters. The similarities and differences between the model results at both scales were evaluated.
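The per-cell computation behind a GIS RUSLE map like the one described above is the standard RUSLE product of factors, A = R · K · LS · C · P; the following minimal sketch applies it to one grid cell (the factor values are hypothetical, not taken from the Leça basin study):

```python
# RUSLE per-cell soil loss: A = R * K * LS * C * P.
# In a GIS implementation this is evaluated for every raster cell;
# the numbers below are illustrative placeholders only.
def rusle_soil_loss(R, K, LS, C, P):
    """Mean annual soil loss A (t ha^-1 yr^-1) for a single grid cell."""
    return R * K * LS * C * P

# Hypothetical cell values for a 10 m grid
A = rusle_soil_loss(
    R=700.0,  # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1), from daily rainfall
    K=0.03,   # soil erodibility (t ha h ha^-1 MJ^-1 mm^-1), from soil type data
    LS=1.8,   # slope length and steepness factor (-), from the DEM
    C=0.2,    # cover-management factor (-), from CORINE land cover
    P=1.0,    # support practice factor (-), often 1 when no data exist
)
```

Because every factor enters multiplicatively, a coarser grid mainly changes A through the topographic LS factor, which is one reason the study compares 10 m and 30 m cell sizes.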
Loop Corrections to Standard Model fields in inflation
NASA Astrophysics Data System (ADS)
Chen, Xingang; Wang, Yi; Xianyu, Zhong-Zhi
2016-08-01
We calculate 1-loop corrections to the Schwinger-Keldysh propagators of Standard-Model-like fields of spin-0, 1/2, and 1, with all renormalizable interactions during inflation. We pay special attention to the late-time divergences of loop corrections, and show that the divergences can be resummed into finite results in the late-time limit using dynamical renormalization group method. This is our first step toward studying both the Standard Model and new physics in the primordial universe.
Reheating the Standard Model from a hidden sector
NASA Astrophysics Data System (ADS)
Tenkanen, Tommi; Vaskonen, Ville
2016-10-01
We consider a scenario where the inflaton decays to a hidden sector thermally decoupled from the visible Standard Model sector. A tiny portal coupling between the hidden and the visible sectors later heats the visible sector so that the Standard Model degrees of freedom come to dominate the energy density of the Universe before big bang nucleosynthesis. We find that this scenario is viable, although obtaining the correct dark matter abundance and retaining successful big bang nucleosynthesis is not obvious. We also show that the isocurvature perturbations constituted by a primordial Higgs condensate are not problematic for the viability of the scenario.
Constraining new physics with collider measurements of Standard Model signatures
NASA Astrophysics Data System (ADS)
Butterworth, Jonathan M.; Grellscheid, David; Krämer, Michael; Sarrazin, Björn; Yallup, David
2017-03-01
A new method providing general consistency constraints for Beyond-the-Standard-Model (BSM) theories, using measurements at particle colliders, is presented. The method, `Constraints On New Theories Using Rivet', Contur, exploits the fact that particle-level differential measurements made in fiducial regions of phase-space have a high degree of model-independence. These measurements can therefore be compared to BSM physics implemented in Monte Carlo generators in a very generic way, allowing a wider array of final states to be considered than is typically the case. The Contur approach should be seen as complementary to the discovery potential of direct searches, being designed to eliminate inconsistent BSM proposals in a context where many (but perhaps not all) measurements are consistent with the Standard Model. We demonstrate, using a competitive simplified dark matter model, the power of this approach. The Contur method is highly scalable to other models and future measurements.
Status of the AIAA Modeling and Simulation Format Standard
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Hildreth, Bruce L.
2008-01-01
The current draft AIAA Standard for flight simulation models represents an on-going effort to improve the productivity of practitioners of the art of digital flight simulation (one of the original digital computer applications). This initial release provides the capability for the efficient representation and exchange of an aerodynamic model in full fidelity; the DAVE-ML format can be easily imported (with development of site-specific import tools) in an unambiguous way with automatic verification. An attractive feature of the standard is the ability to coexist with existing legacy software or tools. The draft Standard is currently limited in scope to static elements of dynamic flight simulations; however, these static elements represent the bulk of typical flight simulation mathematical models. It is already seeing application within U.S. and Australian government agencies in an effort to improve productivity and reduce model rehosting overhead. An existing tool allows import of DAVE-ML models into a popular simulation modeling and analysis tool, and other community-contributed tools and libraries can simplify the use of DAVE-ML compliant models at compile- or run-time of high-fidelity flight simulation.
Search for the standard model Higgs boson in $l\
Li, Dikai
2013-01-01
Humans have always attempted to understand the mysteries of Nature, and physicists have established theories to describe the observed phenomena. The most recent such theory is a gauge quantum field theory framework called the Standard Model (SM), which describes the elementary matter particles and the interaction particles that carry the fundamental forces in the most unified way. The Standard Model contains the internal symmetries of the unitary product group SU(3)_C × SU(2)_L × U(1)_Y and describes the electromagnetic, weak, and strong interactions; the model also describes how quarks interact with each other through all three of these interactions, how leptons interact with each other through the electromagnetic and weak forces, and how force carriers mediate the fundamental interactions.
View of a five inch standard Mark III model 1 ...
View of a five-inch standard Mark III Model 1 #39, manufactured in 1916 at the Naval Gun Factory, Watervliet, NY; this is the only gun remaining on Olympia dating from the period when it was in commission; note the ammunition lift at the left side of the photograph. (p36) - USS Olympia, Penn's Landing, 211 South Columbus Boulevard, Philadelphia, Philadelphia County, PA
Beyond the Standard Model at the LHC and Beyond
Ellis, John
2007-11-20
Many of the open questions beyond the Standard Model will be addressed by the LHC, including the origin of mass, supersymmetry, dark matter and the possibility of large extra dimensions. A linear e{sup +}e{sup -} collider (LC) with sufficient centre-of-mass energy would add considerable value to the capabilities of the LHC.
Mathematical Modeling, Sense Making, and the Common Core State Standards
ERIC Educational Resources Information Center
Schoenfeld, Alan H.
2013-01-01
On October 14, 2013 the Mathematics Education Department at Teachers College hosted a full-day conference focused on the Common Core Standards Mathematical Modeling requirements to be implemented in September 2014 and in honor of Professor Henry Pollak's 25 years of service to the school. This article is adapted from my talk at this conference…
Teacher Leader Model Standards: Implications for Preparation, Policy, and Practice
ERIC Educational Resources Information Center
Berg, Jill Harrison; Carver, Cynthia L.; Mangin, Melinda M.
2014-01-01
Teacher leadership is increasingly recognized as a resource for instructional improvement. Consequently, teacher leader initiatives have expanded rapidly despite limited knowledge about how to prepare and support teacher leaders. In this context, the "Teacher Leader Model Standards" represent an important development in the field. In…
Precision tests of quantum chromodynamics and the standard model
Brodsky, S.J.; Lu, H.J.
1995-06-01
The authors discuss three topics relevant to testing the Standard Model to high precision: commensurate scale relations, which relate observables to each other in perturbation theory without renormalization scale or scheme ambiguity, the relationship of compositeness to anomalous moments, and new methods for measuring the anomalous magnetic and quadrupole moments of the W and Z.
Specification for a Standard Radar Sea Clutter Model
1990-09-01
Technical Document 1917, September 1990. Specification for a Standard Radar Sea Clutter Model. Richard A. Paulus. Contents include: 2.2 Radar Parameters; 2.3 Output; 3.1 Grazing Angle at Sea Surface; 3.2 Radar Clutter Cross Section.
Home Economics Education Career Path Guide and Model Curriculum Standards.
ERIC Educational Resources Information Center
California State Univ., Northridge.
This curriculum guide, developed in California and organized in 10 chapters, provides a home economics education career path guide and model curriculum standards for high school home economics programs. The first chapter contains information on the following: home economics education in California, home economics careers for the future, home…
Searches for Standard Model Higgs at the Tevatron
Cortavitarte, Rocio Vilar; /Cantabria Inst. of Phys.
2007-11-01
A summary of the latest results of Standard Model Higgs boson searches from CDF and D0 presented at the DIS 2007 conference is reported in this paper. All analyses presented use 1 fb{sup -1} of Tevatron data. The strategy of the different analyses is determined by the Higgs production mechanism and decay channel.
Searches for standard model Higgs at the Tevatron
Vilar Cortabitarte, Rocio; /Cantabria U., Santander
2007-04-01
A summary of the latest results of Standard Model Higgs boson searches from CDF and D0 presented at the DIS 2007 conference is reported in this paper. All analyses presented use 1 fb{sup -1} of Tevatron data. The strategy of the different analyses is determined by the Higgs production mechanism and decay channel.
Radiative breaking of conformal symmetry in the Standard Model
NASA Astrophysics Data System (ADS)
Arbuzov, A. B.; Nazmitdinov, R. G.; Pavlov, A. E.; Pervushin, V. N.; Zakharov, A. F.
2016-02-01
A radiative mechanism of conformal symmetry breaking in a conformal-invariant version of the Standard Model is considered. The Coleman-Weinberg mechanism of dimensional transmutation in this system gives rise to finite vacuum expectation values and, consequently, to masses of the scalar and spinor fields. A natural bootstrap between the energy scales of the top quark and the Higgs boson is suggested.
Ontology based standardization of petri net modeling for signaling pathways.
Takai-Igarashi, Takako
2011-01-01
Given the great utility of Petri nets in modeling and analyzing large, complicated signaling networks, the semantics of Petri nets needs systematization for the sake of consistency and reusability of the models. This paper reports on the standardization of units of Petri nets on the basis of an ontology that gives an intrinsic definition to the process of signaling in signaling pathways.
Kwan, Joyce L Y; Chan, Wai
2011-09-01
We propose a two-stage method for comparing standardized coefficients in structural equation modeling (SEM). At stage 1, we transform the original model of interest into the standardized model by model reparameterization, so that the model parameters appearing in the standardized model are equivalent to the standardized parameters of the original model. At stage 2, we impose appropriate linear equality constraints on the standardized model and use a likelihood ratio test to make statistical inferences about the equality of standardized coefficients. Unlike other existing methods for comparing standardized coefficients, the proposed method does not require specific modeling features (e.g., specification of nonlinear constraints), which are available only in certain SEM software programs. Moreover, this method allows researchers to compare two or more standardized coefficients simultaneously in a standard and convenient way. Three real examples are given to illustrate the proposed method, using EQS, a popular SEM software program. Results show that the proposed method performs satisfactorily for testing the equality of standardized coefficients.
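The stage-2 likelihood ratio test described above is generic: twice the difference of maximized log-likelihoods between the unconstrained model and the model with equality constraints imposed is asymptotically chi-square distributed, with degrees of freedom equal to the number of constraints. A minimal sketch for the single-constraint case (the log-likelihood values below are hypothetical; in practice they would come from an SEM program such as EQS):

```python
import math

def lr_test_one_constraint(loglik_unconstrained, loglik_constrained):
    """Likelihood ratio test for one equality constraint (df = 1).

    The statistic 2*(llU - llC) is asymptotically chi-square with 1 degree
    of freedom, whose survival function is erfc(sqrt(x/2)). For k > 1
    constraints, a chi-square with df = k would be used instead.
    """
    lr_stat = 2.0 * (loglik_unconstrained - loglik_constrained)
    p_value = math.erfc(math.sqrt(lr_stat / 2.0))
    return lr_stat, p_value

# Hypothetical maximized log-likelihoods: unconstrained standardized model
# vs. the model with two standardized coefficients constrained to be equal
stat, p = lr_test_one_constraint(-1234.5, -1237.2)
```

A small p-value would reject the hypothesis that the two standardized coefficients are equal.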
Markworth, A.J.; Gupta, A.; Rollins, R.W.
1998-08-04
The Portevin-Le Chatelier (PLC) effect, otherwise known as serrated yielding, repeated yielding, or jerky flow, has been a subject of investigation for many years. Modeling studies and experiments have shown that the oscillatory PLC stress-strain dynamics can sometimes be a form of deterministic chaos. In such cases, the spontaneously occurring oscillations are aperiodic and are characterized by at least one positive Lyapunov exponent. Whether chaotic or not, however, these oscillations are detrimental to the mechanical integrity of the material, so some means by which they can be suppressed would be a desirable feature. In the work described below, a strategy for suppressing these oscillations is developed and applied to a model for the PLC effect. The strategy is physically realistic in the sense that it is based on the feedback of a control signal, obtained from a measurable quantity, to an accessible (i.e., controllable) parameter. Application is made here to a case for which the oscillations are chaotic, although the approach is applicable to periodic stress oscillations as well.
Phenomenological studies of minimal extensions of standard model
NASA Astrophysics Data System (ADS)
Kumar, Nilanjana
The ATLAS and CMS experiments at the Large Hadron Collider (LHC) have confirmed the existence of the Higgs boson, the last missing piece of the Standard Model, making this era a great time to look for physics beyond the Standard Model that can explain its deficiencies. The research described here is motivated by supersymmetry, which appears as an extension of the Standard Model; its phenomenological consequences are of great importance and are reflected in this research. In the first project, the prospects for LHC discovery of a narrow resonance that decays to two Higgs bosons are studied using the bb̄γγ final state. This study is inspired by the compressed Minimal Supersymmetric Standard Model, which allows the production of stoponium (a bound state of the supersymmetric partners of the top quark and its antiquark) and its decay to Higgs boson pairs, but it is applicable to any other di-Higgs resonance produced by gluon fusion. The cross section needed for a 5-sigma discovery at the 14 TeV LHC for such a narrow di-Higgs resonance is estimated as a function of the integrated luminosity, using the invariant mass distributions of the bb̄ and photon pairs. I have also found the integrated luminosity required for discovery of stoponium as a function of its mass. In my second project, a viable extension of the Standard Model that incorporates vectorlike fermions near the electroweak scale is explored. Vectorlike quarks and leptons are exotic new fermions that transform in non-chiral representations of the unbroken Standard Model gauge group. Two models are considered, in which the vectorlike leptons are weak isosinglets or isodoublets. The vectorlike leptons decay to tau leptons. I have studied the prospects for excluding or discovering vectorlike leptons using multilepton events at the LHC. If the vectorlike leptons are weak isosinglets, then discovery in multilepton states is found to be extremely challenging
Progress Toward a Format Standard for Flight Dynamics Models
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Hildreth, Bruce L.
2006-01-01
In the beginning, there was FORTRAN, and it was... not so good. But it was universal, and all flight simulator equations of motion were coded with it. Then came ACSL, C, Ada, C++, C#, Java, FORTRAN-90, Matlab/Simulink, and a number of other programming languages. Since the halcyon punch-card days of 1968, models of aircraft flight dynamics have proliferated in training devices, desktop engineering and development computers, and control design textbooks. With the rise of industry teaming and increased reliance on simulation for procurement decisions, aircraft and missile simulation models are created, updated, and exchanged with increasing frequency. However, there is no real lingua franca to facilitate the exchange of models from one simulation user to another. The current state of the art is such that several staff-months, if not staff-years, are required to 'rehost' each release of a flight dynamics model from one simulation environment to another. If a standard data package or exchange format were universally adopted, the cost and time of sharing and updating aerodynamics, control laws, mass and inertia, and other flight dynamic components of the equations of motion of an aircraft or spacecraft simulation could be drastically reduced. A 2002 paper estimated that over $6 million in savings could be realized for one military aircraft type alone. This paper describes the efforts of the American Institute of Aeronautics and Astronautics (AIAA) to develop a flight dynamics model exchange standard based on the XML and HDF-5 data formats.
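An XML-based exchange format makes an import tool essentially an XML parse plus code generation. As a purely illustrative sketch (the element and attribute names below are hypothetical, not the actual DAVE-ML schema defined by the AIAA standard), Python's standard library is enough to pull variable definitions out of such a file:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML in the spirit of a flight-dynamics model exchange file;
# the real DAVE-ML schema is defined by the AIAA standard and differs in detail.
doc = """<flightModel name="exampleAero">
  <variableDef name="alpha" units="deg"/>
  <variableDef name="CL" units="nd"/>
</flightModel>"""

root = ET.fromstring(doc)
# Collect every declared variable and its units into a dictionary
variables = {v.get("name"): v.get("units") for v in root.iter("variableDef")}
```

A site-specific import tool would walk such a tree and emit source code or lookup tables for the target simulation environment.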
Tests and Problems of the Standard Model in Cosmology
NASA Astrophysics Data System (ADS)
López-Corredoira, Martín
2017-02-01
The main foundations of the standard ΛCDM model of cosmology are that: (1) the redshifts of the galaxies are due to the expansion of the Universe plus peculiar motions; (2) the cosmic microwave background radiation and its anisotropies derive from the high-energy primordial Universe, when matter and radiation became decoupled; (3) the abundance pattern of the light elements is explained in terms of primordial nucleosynthesis; and (4) the formation and evolution of galaxies can be explained only in terms of gravitation within an inflation + dark matter + dark energy scenario. Numerous tests have been carried out on these ideas and, although the standard model works pretty well in fitting many observations, there are also many data that present apparent difficulties for it. In this paper, I offer a review of these tests and problems, as well as some examples of alternative models.
Our sun. I. The standard model: Successes and failures
Sackmann, I. J.; Boothroyd, A. I.; Fowler, W. A.
1990-09-01
The results of computing a number of standard solar models are reported. A presolar helium content of Y = 0.278 is obtained, and a Cl-37 capture rate of 7.7 SNUs, consistently several times the observed rate of 2.1 SNUs, is determined. Thus, the solar neutrino problem remains. The solar Z value is determined primarily by the observed Z/X ratio and is affected very little by differences in solar models. Even large changes in the low-temperature molecular opacities have no effect on Y, nor even on conditions at the base of the convective envelope. Large molecular opacities do cause a large increase in the mixing-length parameter alpha but do not cause the convective envelope to reach deeper. The temperature remains too low for lithium burning, and there is no surface lithium depletion; thus, the lithium problem of the standard solar model remains. 103 refs.
Distinguishing standard model extensions using monotop chirality at the LHC
NASA Astrophysics Data System (ADS)
Allahverdi, Rouzbeh; Dalchenko, Mykhailo; Dutta, Bhaskar; Flórez, Andrés; Gao, Yu; Kamon, Teruki; Kolev, Nikolay; Mueller, Ryan; Segura, Manuel
2016-12-01
We present two minimal extensions of the standard model, each giving rise to baryogenesis. They include heavy color-triplet scalars interacting with a light Majorana fermion that can be the dark matter (DM) candidate. The electroweak charges of the new scalars govern their couplings to quarks of different chirality, which leads to different collider signals. These models predict monotop events at the LHC and the energy spectrum of decay products of highly polarized top quarks can be used to establish the chiral nature of the interactions involving the heavy scalars and the DM. Detailed simulation of signal and standard model background events is performed, showing that top quark chirality can be distinguished in hadronic and leptonic decays of the top quarks.
One Higgs and a Standard Model, No Need for Supersymmetry
NASA Astrophysics Data System (ADS)
Neto, David
2017-01-01
With the detection of a Higgs-like boson at 125 GeV in the summer of 2012, the Standard Model of particle physics was complete. However, theoretical problems remain with the SM, such as naturalness and the hierarchy problem, to name a few. For many years, extensions of the SM such as supersymmetry have provided interesting theoretical solutions to many of these problems, in addition to the wealth of beyond-the-Standard-Model physics these theories predict. Yet, with the latest LHC data, there is still no sign of SUSY, and the SM is holding up extremely well to the highest energies we have been able to probe experimentally. Here, we examine some of the ``theoretical shortcomings'' of the SM. We intend to question, based on LHC data, whether there is a need for SUSY, or whether the SM, with the single Higgs boson discovered so far, can remain a consistent model of particle physics at yet higher energies.
Rare B decays as tests of the Standard Model
NASA Astrophysics Data System (ADS)
Blake, Thomas; Lanfranchi, Gaia; Straub, David M.
2017-01-01
One of the most interesting puzzles in particle physics today is that new physics is expected at the TeV energy scale to solve the hierarchy problem and stabilise the Higgs mass, but so far no unambiguous signal of new physics has been found. Strong constraints on the energy scale of new physics can be derived from precision tests of the electroweak theory and from flavour-changing or CP-violating processes in strange, charm and beauty hadron decays. Decays that proceed via flavour-changing neutral-current processes are forbidden at the lowest perturbative order in the Standard Model and are, therefore, rare. Rare b hadron decays are playing a central role in the understanding of the underlying patterns of Standard Model physics and in setting up new directions in model building for new physics contributions. In this article the status and prospects of this field are reviewed.
Using geodetic VLBI to test Standard-Model Extension
NASA Astrophysics Data System (ADS)
Hees, Aurélien; Lambert, Sébastien; Le Poncin-Lafitte, Christophe
2016-04-01
Accurate modeling of the relativistic delay in geodetic techniques is essential for obtaining accurate geodetic products. Conversely, geodetic techniques can be used to measure the relativistic delay and constrain parameters describing the theory of relativity. The effective field theory framework called the Standard-Model Extension (SME) has been developed to systematically parametrize hypothetical violations of Lorentz symmetry (in the Standard Model and in the gravitational sector). For light deflection by a massive body like the Sun, one expects a dependence on the elongation angle different from that of GR. In this communication, we use geodetic VLBI observations of quasars made in the frame of the permanent geodetic VLBI monitoring program to constrain the first SME coefficient. Our results do not show any deviation from GR, and they improve current constraints on both GR and SME parameters.
Higgs decays in gauge extensions of the standard model
NASA Astrophysics Data System (ADS)
Bunk, Don; Hubisz, Jay; Jain, Bithika
2014-02-01
We explore the phenomenology of virtual spin-1 contributions to the h→γγ and h→Zγ decay rates in gauge extensions of the standard model. We consider generic Lorentz and gauge-invariant vector self-interactions, which can have nontrivial structure after diagonalizing the quadratic part of the action. Such features are phenomenologically relevant in models where the electroweak gauge bosons mix with additional spin-1 fields, such as occurs in little Higgs models, extra dimensional models, strongly coupled variants of electroweak symmetry breaking, and other gauge extensions of the standard model. In models where nonrenormalizable operators mix field strengths of gauge groups, the one-loop Higgs decay amplitudes can be logarithmically divergent, and we provide power counting for the size of the relevant counterterm. We provide an example calculation in a four-site moose model that contains degrees of freedom that model the effects of vector and axial-vector resonances arising from TeV scale strong dynamics.
Quantum gravity and Standard-Model-like fermions
NASA Astrophysics Data System (ADS)
Eichhorn, Astrid; Lippoldt, Stefan
2017-04-01
We discover that chiral symmetry does not act as an infrared attractor of the renormalization group flow under the impact of quantum gravity fluctuations. Thus, observationally viable quantum gravity models must respect chiral symmetry. In our truncation, asymptotically safe gravity does, as a chiral fixed point exists. A second non-chiral fixed point with massive fermions provides a template for models with dark matter. This fixed point disappears for more than 10 fermions, suggesting that an asymptotically safe ultraviolet completion for the standard model plus gravity enforces chiral symmetry.
Towards realistic standard model from D-brane configurations
Leontaris, G. K.; Tracas, N. D.; Korakianitis, O.; Vlachos, N. D.
2007-12-01
Effective low-energy models arising in the context of D-brane configurations with standard model (SM) gauge symmetry extended by several gauged Abelian factors are discussed. The models are classified according to their hypercharge embeddings consistent with the SM spectrum hypercharge assignment. Particular cases are analyzed according to their prospects and viability as low-energy effective field theory candidates. The resulting string scale is determined by means of a two-loop renormalization group calculation. Their implications for Yukawa couplings, neutrinos, and flavor-changing processes are also presented.
Expressing hNF-LE397K results in abnormal gaiting in a transgenic model of CMT2E
Dale, Jeffrey M.; Villalon, Eric; Shannon, Stephen G.; Barry, Devin M.; Markey, Rachel M.; Garcia, Virginia B.; Garcia, Michael L.
2012-01-01
Charcot-Marie-Tooth disease (CMT) is the most commonly inherited peripheral neuropathy. CMT disease signs include distal limb neuropathy, abnormal gait, exacerbation of neuropathy, sensory defects, and deafness. We generated a novel line of CMT2E mice expressing a hNF-LE397K transgene, which displayed muscle atrophy of the lower limbs without denervation, a proximal reduction in large-caliber axons, and decreased nerve conduction velocity. In this study, we demonstrated that hNF-LE397K mice developed abnormal gait of the hind limbs. The identification of severe gait defects, in combination with the previously observed muscle atrophy, reduced axon caliber, and decreased nerve conduction velocity, suggests that hNF-LE397K mice recapitulate many of the clinical signs associated with CMT2E. Therefore, hNF-LE397K mice provide a context for potential therapeutic intervention. PMID:22288874
Precision Electroweak Measurements and Constraints on the Standard Model
Not Available
2011-11-11
This note presents constraints on Standard Model parameters using published and preliminary precision electroweak results measured at the electron-positron colliders LEP and SLC. The results are compared with precise electroweak measurements from other experiments, notably CDF and D0 at the Tevatron. Constraints on the input parameters of the Standard Model are derived from the results obtained in high-Q{sup 2} interactions, and used to predict results in low-Q{sup 2} experiments, such as atomic parity violation, Moller scattering, and neutrino-nucleon scattering. The main changes with respect to the experimental results presented in 2007 are new combinations of results on the W-boson mass and width and the mass of the top quark.
Challenges to the standard model of Big Bang nucleosynthesis.
Steigman, G
1993-06-01
Big Bang nucleosynthesis provides a unique probe of the early evolution of the Universe and a crucial test of the consistency of the standard hot Big Bang cosmological model. Although the primordial abundances of 2H, 3He, 4He, and 7Li inferred from current observational data are in agreement with those predicted by Big Bang nucleosynthesis, recent analysis has severely restricted the consistent range for the nucleon-to-photon ratio (η₁₀ ≳ 3.7). These constraints challenge the standard model and suggest that no new light particles may be allowed (N_ν^BBN ≈ 3).
Conformal loop quantum gravity coupled to the standard model
NASA Astrophysics Data System (ADS)
Campiglia, Miguel; Gambini, Rodolfo; Pullin, Jorge
2017-01-01
We argue that a conformally invariant extension of general relativity coupled to the standard model is the fundamental theory that needs to be quantized. We show that it can be treated by loop quantum gravity techniques. Through a gauge fixing and a modified Higgs mechanism particles acquire mass and one recovers general relativity coupled to the standard model. The theory suggests new views with respect to the definition of the Hamiltonian constraint in loop quantum gravity, the semi-classical limit and the issue of finite renormalization in quantum field theory in quantum space-time. It also gives hints about the elimination of ambiguities that arise in quantum field theory in quantum space-time in the calculation of back-reaction.
Development of a standard documentation protocol for communicating exposure models.
Ciffroy, P; Altenpohl, A; Fait, G; Fransman, W; Paini, A; Radovnikovic, A; Simon-Cornu, M; Suciu, N; Verdonck, F
2016-10-15
An important step in building a computational model is its documentation; comprehensive and structured documentation can improve a model's applicability and transparency in science/research and for regulatory purposes. This is particularly crucial, and challenging, for environmental and/or human exposure models that aim to establish quantitative relationships between personal exposure levels and their determinants. Exposure models simulate the transport and fate of a contaminant from the source to the receptor and may involve a large set of entities (e.g. all the media the contaminants may pass through). Such complex models are difficult to describe in a comprehensive, unambiguous and accessible way. Poor communication of assumptions, theory, structure and/or parameterization can undermine user confidence and may be a source of errors. The goal of this paper is to propose a standard documentation protocol (SDP) for exposure models, i.e. a generic format and a standard structure by which all exposure models could be documented. For this purpose, a CEN (European Committee for Standardisation) workshop was set up with the objective of agreeing on minimum requirements for the amount and type of information to be provided in exposure model documentation, along with guidelines for the structure and presentation of the information. The resulting CEN workshop agreement (CWA) was expected to facilitate a more rigorous formulation of exposure model descriptions and better understanding by users. This paper describes the process followed for defining the SDP, the standardisation approach, as well as the main components of the SDP resulting from a wide consultation of interested stakeholders. The main outcome is a CEN CWA which establishes terms and definitions for exposure models and their elements, specifies minimum requirements for the amount and type of information to be documented, and proposes a structure for communicating the documentation to different
ERIC Educational Resources Information Center
Wisconsin Department of Public Instruction, 2011
2011-01-01
Wisconsin's adoption of the Common Core State Standards provides an excellent opportunity for Wisconsin school districts and communities to define expectations from birth through preparation for college and work. By aligning the existing Wisconsin Model Early Learning Standards with the Wisconsin Common Core State Standards, expectations can be…
NASA Astrophysics Data System (ADS)
Merino, Andres; Guerrero-Higueras, Angel Manuel; López, Laura; Gascón, Estibaliz; Sánchez, José Luis; Lorente, José Manuel; Marcos, José Luis; Matía, Pedro; Ortiz de Galisteo, José Pablo; Nafría, David; Fernández-González, Sergio; Weigand, Roberto; Hermida, Lucía; García-Ortega, Eduardo
2014-05-01
The integration of various public and private observation networks into the Observation Network of Castile-León (ONet_CyL), Spain, allows us to monitor risks in real time. One of the most frequent risks in this region is severe precipitation. The data from the network allow us to determine the area where precipitation was registered and to know, in real time, the areas where precipitation is occurring. The observation network is managed with a LINUX system. The observation platform makes it possible to consult the observation data at a specific point in the region, or to see the spatial distribution of the precipitation in a user-defined area and time interval. In this study, we compared several rainfall estimation models based on satellite data for Castile-León with precipitation data from the meteorological observation network. The rainfall estimation models obtained from meteorological satellite data provide a precipitation field covering a wide area, although operational use requires prior evaluation against ground truth data. The aim is to develop a real-time evaluation tool for rainfall estimation models that allows us to monitor forecast accuracy. This tool makes it possible to visualise different skill scores (Probability of Detection, False Alarm Ratio, and others) of each rainfall estimation model in real time, thereby allowing us not only to know the areas where the rainfall models indicate precipitation, but also to validate each model in real time for each specific meteorological situation. Acknowledgements The authors would like to thank the Regional Government of Castile-León for its financial support through the project LE220A11-2. This study was supported by the following grants: GRANIMETRO (CGL2010-15930); MICROMETEO (IPT-310000-2010-22).
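The skill scores mentioned above are standard categorical verification measures computed from a 2×2 contingency table of estimated versus observed rain occurrence. A minimal sketch with hypothetical event counts (the numbers are purely illustrative):

```python
def skill_scores(hits, misses, false_alarms):
    """Categorical verification scores from a 2x2 contingency table.

    hits: rain estimated by the model and observed by the network
    misses: rain observed but not estimated
    false_alarms: rain estimated but not observed
    """
    pod = hits / (hits + misses)                # Probability of Detection
    far = false_alarms / (hits + false_alarms)  # False Alarm Ratio
    return pod, far

# Hypothetical counts from comparing one satellite rainfall estimate
# against gauge observations over a single event
pod, far = skill_scores(hits=80, misses=20, false_alarms=40)
```

A perfect estimate has POD = 1 and FAR = 0; tracking both in real time shows whether a model over- or under-detects precipitation in a given meteorological situation.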
On the Standard Model prediction for RK and RK*
NASA Astrophysics Data System (ADS)
Pattori, A.
2016-11-01
In this article a recent work is reviewed, in which we evaluated the impact of radiative corrections on RK and RK*. We find that, with the cuts presently applied by the LHCb Collaboration, such corrections do not exceed a few percent. Moreover, their effect is well described (and corrected for) by existing Monte Carlo codes. Our analysis reinforces the interest in these observables as clean probes of physics beyond the Standard Model.
Naturalness and renormalization group in the standard model
NASA Astrophysics Data System (ADS)
Pivovarov, Grigorii B.
2016-10-01
I define a naturalness criterion formalizing the intuitive notion of naturalness discussed in the literature. After that, using ϕ4 as an example, I demonstrate that a theory may be natural in the MS-scheme and, at the same time, unnatural in the Gell-Mann-Low scheme. Finally, I discuss the prospects of using a version of the Gell-Mann-Low scheme in the Standard Model.
Naturalness and Renormalization Group in the Standard Model
NASA Astrophysics Data System (ADS)
Pivovarov, Grigorii B.
I define a naturalness criterion formalizing the intuitive notion of naturalness discussed in the literature. After that, using ϕ4 as an example, I demonstrate that a theory may be natural in the MS-scheme and, at the same time, unnatural in the Gell-Mann-Low scheme. Finally, I discuss the prospects of using a version of the Gell-Mann-Low scheme in the Standard Model.
Cosmology and the noncommutative approach to the standard model
Nelson, William; Sakellariadou, Mairi
2010-04-15
We study cosmological consequences of the noncommutative approach to the standard model of particle physics. Neglecting the nonminimal coupling of the Higgs field to the curvature, noncommutative corrections to Einstein's equations are present only for inhomogeneous and anisotropic space-times. Considering the nonminimal coupling however, corrections are obtained even for background cosmologies. Links with dilatonic gravity as well as chameleon cosmology are briefly discussed, and potential experimental consequences are mentioned.
Gold-standard performance for 2D hydrodynamic modeling
NASA Astrophysics Data System (ADS)
Pasternack, G. B.; MacVicar, B. J.
2013-12-01
Two-dimensional, depth-averaged hydrodynamic (2D) models are emerging as an increasingly useful tool for environmental water resources engineering. One of the remaining technical hurdles to the wider adoption and acceptance of 2D modeling is the lack of standards for 2D model performance evaluation when the riverbed undulates, causing lateral flow divergence and convergence. The goal of this study was to establish a gold-standard that quantifies the upper limit of model performance for 2D models of undulating riverbeds when topography is perfectly known and surface roughness is well constrained. A review was conducted of published model performance metrics and the value ranges exhibited by models thus far for each one. Typically predicted velocity differs from observed by 20 to 30 % and the coefficient of determination between the two ranges from 0.5 to 0.8, though there tends to be a bias toward overpredicting low velocity and underpredicting high velocity. To establish a gold standard as to the best performance possible for a 2D model of an undulating bed, two straight, rectangular-walled flume experiments were done with no bed slope and only different bed undulations and water surface slopes. One flume tested model performance in the presence of a porous, homogenous gravel bed with a long flat section, then a linear slope down to a flat pool bottom, and then the same linear slope back up to the flat bed. The other flume had a PVC plastic solid bed with a long flat section followed by a sequence of five identical riffle-pool pairs in close proximity, so it tested model performance given frequent undulations. Detailed water surface elevation and velocity measurements were made for both flumes. Comparing predicted versus observed velocity magnitude for 3 discharges with the gravel-bed flume and 1 discharge for the PVC-bed flume, the coefficient of determination ranged from 0.952 to 0.987 and the slope for the regression line was 0.957 to 1.02. Unsigned velocity
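The two headline metrics above (the slope of the observed-versus-predicted regression line and the coefficient of determination) can be computed directly from paired velocity measurements. A small sketch with hypothetical flume values (real evaluations would use the full measurement set):

```python
import statistics

def regression_performance(observed, predicted):
    """Slope of the observed~predicted regression line and coefficient of
    determination (squared Pearson correlation) for paired velocities."""
    n = len(observed)
    mean_o = statistics.fmean(observed)
    mean_p = statistics.fmean(predicted)
    cov = sum((o - mean_o) * (p - mean_p)
              for o, p in zip(observed, predicted)) / n
    var_o = sum((o - mean_o) ** 2 for o in observed) / n
    var_p = sum((p - mean_p) ** 2 for p in predicted) / n
    slope = cov / var_p                     # regression of observed on predicted
    r_squared = cov * cov / (var_o * var_p)
    return slope, r_squared

# Hypothetical velocity magnitudes (m/s) at three measurement points
slope, r2 = regression_performance([1.0, 2.0, 3.0], [1.1, 2.0, 2.9])
```

A slope near 1 with r² near 1 and no systematic bias at the high and low ends would indicate gold-standard performance by the criteria discussed above.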
Impersonating the Standard Model Higgs boson: Alignment without decoupling
Carena, Marcela; Low, Ian; Shah, Nausheen R.; Wagner, Carlos E. M.
2014-04-03
In models with an extended Higgs sector there exists an alignment limit, in which the lightest CP-even Higgs boson mimics the Standard Model Higgs. The alignment limit is commonly associated with the decoupling limit, where all non-standard scalars are significantly heavier than the Z boson. However, alignment can occur irrespective of the mass scale of the rest of the Higgs sector. In this work we discuss the general conditions that lead to “alignment without decoupling”, therefore allowing for the existence of additional non-standard Higgs bosons at the weak scale. The values of tan β for which this happens are derived in terms of the effective Higgs quartic couplings in general two-Higgs-doublet models as well as in supersymmetric theories, including the MSSM and the NMSSM. In addition, we study the information encoded in the variations of the SM Higgs-fermion couplings to explore regions in the m_{A} – tan β parameter space.
BOOK REVIEW: Supersymmetry and String Theory: Beyond the Standard Model
NASA Astrophysics Data System (ADS)
Rocek, Martin
2007-11-01
When I was asked to review Michael Dine's new book, 'Supersymmetry and String Theory', I was pleased to have a chance to read a book by such an established authority on how string theory might become testable. The book is most useful as a list of current topics of interest in modern theoretical physics. It gives a succinct summary of a huge variety of subjects, including the standard model, symmetry, Yang-Mills theory, quantization of gauge theories, the phenomenology of the standard model, the renormalization group, lattice gauge theory, effective field theories, anomalies, instantons, solitons, monopoles, dualities, technicolor, supersymmetry, the minimal supersymmetric standard model, dynamical supersymmetry breaking, extended supersymmetry, Seiberg-Witten theory, general relativity, cosmology, inflation, bosonic string theory, the superstring, the heterotic string, string compactifications, the quintic, string dualities, large extra dimensions, and, in the appendices, Goldstone's theorem, path integrals, and exact beta-functions in supersymmetric gauge theories. Its breadth is both its strength and its weakness: it is not (and could not possibly be) either a definitive reference for experts, where the details of thorny technical issues are carefully explored, or a textbook for graduate students, with detailed pedagogical expositions. As such, it complements rather than replaces the much narrower and more focussed String Theory I and II volumes by Polchinski, with their deep insights, as well as the two older volumes by Green, Schwarz, and Witten, which develop string theory pedagogically.
Elementary particles, dark matter candidate and new extended standard model
NASA Astrophysics Data System (ADS)
Hwang, Jaekwang
2017-01-01
Elementary particle decays and reactions are discussed in terms of the three-dimensional quantized space model beyond the standard model. Three generations of the leptons and quarks correspond to the lepton charges. Three heavy leptons and three heavy quarks are introduced, and the bastons (new particles) are proposed as a possible candidate for dark matter. Dark matter force, weak force and strong force are explained consistently. Possible rest masses of the new particles are tentatively proposed for the experimental searches. For more details, see the conference paper at https://www.researchgate.net/publication/308723916.
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.
2015-10-17
Genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.
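As the abstract notes, standardizing reaction and metabolite identifiers across models is what makes rapid cross-model comparison possible; once identifiers agree, comparison reduces to set operations. A minimal sketch, using hypothetical BiGG-style reaction IDs rather than actual model content:

```python
# With identifiers standardized across models, comparing two genome-scale
# models' reaction content is a set operation. The two tiny "models" below
# are hypothetical stand-ins, not actual BiGG models.

model_a = {"PFK", "FBA", "TPI", "GAPD", "PGK"}   # reaction IDs, model A
model_b = {"FBA", "TPI", "GAPD", "PGK", "PYK"}   # reaction IDs, model B

shared = model_a & model_b              # reactions present in both models
only_a = model_a - model_b              # reactions unique to model A
jaccard = len(shared) / len(model_a | model_b)  # overall similarity

print(sorted(shared), sorted(only_a), round(jaccard, 2))
```

Without shared identifier conventions, the same intersection would require name reconciliation between databases first, which is exactly the heterogeneity the BiGG standardization removes.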
FCNC decays of standard model fermions into a dark photon
NASA Astrophysics Data System (ADS)
Gabrielli, Emidio; Mele, Barbara; Raidal, Martti; Venturini, Elena
2016-12-01
We analyze a new class of FCNC processes, the f → f′γ̄ decays of a fermion f into a lighter (same-charge) fermion f′ plus a massless neutral vector boson, a dark photon γ̄. A massless dark photon does not interact at tree level with observable fields, and the f → f′γ̄ decay presents a characteristic signature where the final fermion f′ is balanced by a massless invisible system. Models recently proposed to explain the exponential spread in the standard-model Yukawa couplings can indeed foresee an extra unbroken dark U(1) gauge group, and the possibility to couple on-shell dark photons to standard-model fermions via one-loop magnetic-dipole kind of FCNC interactions. The latter are suppressed by the characteristic scale related to the mass of heavy messengers, connecting the standard model particles to the dark sector. We compute the corresponding decay rates for the top, bottom, and charm decays (t → cγ̄, uγ̄; b → sγ̄, dγ̄; and c → uγ̄), and for the charged-lepton decays (τ → μγ̄, eγ̄; and μ → eγ̄) in terms of model parameters. We find that large branching ratios for both quark and lepton decays are allowed in case the messenger masses are in the discovery range of the LHC. Implications of these new decay channels at present and future collider experiments are briefly discussed.
A unified model of the standard genetic code.
José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R
2017-03-01
The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.
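The six-dimensional hypercube in this abstract can be pictured as (Z_2)^6: each base maps to two bits, so a codon is a vertex of the 6-cube, and symmetries such as XOR translations act as isometries (they preserve Hamming distance). A minimal sketch of that picture, with an illustrative bit assignment that is an assumption here, not the encoding used by the authors:

```python
# Codons as vertices of the 6D hypercube (Z_2)^6: each base -> 2 bits,
# each codon -> 6 bits. XOR translation by a fixed vector is a symmetry
# of the cube and preserves Hamming distance. The bit assignment below
# is illustrative only.

BITS = {"C": (0, 0), "U": (0, 1), "G": (1, 0), "A": (1, 1)}

def codon_to_vertex(codon):
    """Map a 3-letter RNA codon to a 6-bit hypercube vertex."""
    return tuple(b for base in codon for b in BITS[base])

def hamming(u, v):
    """Number of coordinates in which two vertices differ."""
    return sum(a != b for a, b in zip(u, v))

def translate(u, t):
    """XOR translation by t -- an isometry of (Z_2)^6."""
    return tuple(a ^ b for a, b in zip(u, t))

u, v = codon_to_vertex("AUG"), codon_to_vertex("GUG")
t = codon_to_vertex("CCU")  # any vertex serves as a translation vector
# Translations preserve Hamming distance:
assert hamming(u, v) == hamming(translate(u, t), translate(v, t))
print(hamming(u, v))
```

The distance-preserving property is what the abstract's "isometric functions" rely on; the claim that such structure cannot be realized in two or three dimensions is specific to the paper and is not demonstrated by this toy encoding.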
Domain walls and gravitational waves in the Standard Model
NASA Astrophysics Data System (ADS)
Krajewski, Tomasz; Lalak, Zygmunt; Lewicki, Marek; Olszewski, Paweł
2016-12-01
We study domain walls which can be created in the Standard Model under the assumption that it is valid up to very high energy scales. We focus on domain walls interpolating between the physical electroweak vacuum and the global minimum appearing at very high field strengths. The creation of the network which ends up in the electroweak vacuum percolating through the Universe is not as difficult to obtain as one may expect, although it requires certain tuning of initial conditions. Our numerical simulations confirm that such domain walls would swiftly decay and thus cannot dominate the Universe. We discuss the possibility of detection of gravitational waves produced in this scenario. We have found that for the standard cosmology the energy density of these gravitational waves is too small to be observed in present and planned detectors.
On a radiative origin of the Standard Model from trinification
NASA Astrophysics Data System (ADS)
Camargo-Molina, José Eliel; Morais, António P.; Pasechnik, Roman; Wessén, Jonas
2016-09-01
In this work, we present a trinification-based grand unified theory incorporating a global SU(3) family symmetry that after a spontaneous breaking leads to a left-right symmetric model. Already at the classical level, this model can accommodate the matter content and the quark Cabibbo mixing in the Standard Model (SM) with only one Yukawa coupling at the unification scale. Considering the minimal low-energy scenario with the least amount of light states, we show that the resulting effective theory enables dynamical breaking of its gauge group down to that of the SM by means of radiative corrections accounted for by the renormalisation group evolution at one loop. This result paves the way for a consistent explanation of the SM breaking scale and fermion mass hierarchies.
Standardization of Thermo-Fluid Modeling in Modelica.Fluid
Franke, Rüdiger; Casella, Francesco; Sielemann, Michael; Prölß, Katrin; Otter, Martin; Wetter, Michael
2009-09-01
This article discusses the Modelica.Fluid library that has been included in the Modelica Standard Library 3.1. Modelica.Fluid provides interfaces and basic components for the device-oriented modeling of one-dimensional thermo-fluid flow in networks containing vessels, pipes, fluid machines, valves and fittings. A unique feature of Modelica.Fluid is that the component equations and the media models as well as pressure loss and heat transfer correlations are decoupled from each other. All components are implemented such that they can be used for media from the Modelica.Media library. This means that an incompressible or compressible medium, a single or a multiple substance medium with one or more phases might be used with one and the same model as long as the modeling assumptions made hold. Furthermore, trace substances are supported. Modeling assumptions can be configured globally in an outer System object. This covers in particular the initialization, uni- or bi-directional flow, and dynamic or steady-state formulation of mass, energy, and momentum balance. All assumptions can be locally refined for every component. While Modelica.Fluid contains a reasonable set of component models, the goal of the library is not to provide a comprehensive set of models, but rather to provide interfaces and best practices for the treatment of issues such as connector design and implementation of energy, mass and momentum balances. Applications from various domains are presented.
Toward Standardizing a Lexicon of Infectious Disease Modeling Terms.
Milwid, Rachael; Steriu, Andreea; Arino, Julien; Heffernan, Jane; Hyder, Ayaz; Schanzer, Dena; Gardner, Emma; Haworth-Brockman, Margaret; Isfeld-Kiely, Harpa; Langley, Joanne M; Moghadas, Seyed M
2016-01-01
Disease modeling is increasingly being used to evaluate the effect of health intervention strategies, particularly for infectious diseases. However, the utility and application of such models are hampered by the inconsistent use of infectious disease modeling terms between and within disciplines. We sought to standardize the lexicon of infectious disease modeling terms and develop a glossary of terms commonly used in describing models' assumptions, parameters, variables, and outcomes. We combined a comprehensive literature review of relevant terms with an online forum discussion in a virtual community of practice, mod4PH (Modeling for Public Health). Using a convergent discussion process and consensus amongst the members of mod4PH, a glossary of terms was developed as an online resource. We anticipate that the glossary will improve inter- and intradisciplinary communication and will result in a greater uptake and understanding of disease modeling outcomes in health policy decision-making. We highlight the role of the mod4PH community of practice and the methodologies used in this endeavor to link theory, policy, and practice in the public health domain.
The Beyond the standard model working group: Summary report
G. Azuelos et al.
2004-03-18
In this working group we have investigated a number of aspects of searches for new physics beyond the Standard Model (SM) at the running or planned TeV-scale colliders. For the most part, we have considered hadron colliders, as they will define particle physics at the energy frontier for the next ten years at least. The variety of models for Beyond the Standard Model (BSM) physics has grown immensely. It is clear that only future experiments can provide the needed direction to clarify the correct theory. Thus, our focus has been on exploring the extent to which hadron colliders can discover and study BSM physics in various models. We have placed special emphasis on scenarios in which the new signal might be difficult to find or of a very unexpected nature. For example, in the context of supersymmetry (SUSY), we have considered: how to make fully precise predictions for the Higgs bosons as well as the superparticles of the Minimal Supersymmetric Standard Model (MSSM) (parts III and IV); MSSM scenarios in which most or all SUSY particles have rather large masses (parts V and VI); the ability to sort out the many parameters of the MSSM using a variety of signals and study channels (part VII); whether the no-lose theorem for MSSM Higgs discovery can be extended to the next-to-minimal Supersymmetric Standard Model (NMSSM) in which an additional singlet superfield is added to the minimal collection of superfields, potentially providing a natural explanation of the electroweak value of the parameter μ (part VIII); sorting out the effects of CP violation using Higgs plus squark associate production (part IX); the impact of lepton flavor violation of various kinds (part X); experimental possibilities for the gravitino and its sgoldstino partner (part XI); what the implications for SUSY would be if the NuTeV signal for di-muon events were interpreted as a sign of R-parity violation (part XII). Our other main focus was on the phenomenological implications of extra
Dark matter candidates in the constrained exceptional supersymmetric standard model
NASA Astrophysics Data System (ADS)
Athron, P.; Thomas, A. W.; Underwood, S. J.; White, M. J.
2017-02-01
The exceptional supersymmetric standard model is a low energy alternative to the minimal supersymmetric standard model (MSSM) with an extra U(1) gauge symmetry and three generations of matter filling complete 27-plet representations of E6. This provides both new D and F term contributions that raise the Higgs mass at tree level, and a compelling solution to the μ-problem of the MSSM by forbidding such a term with the extra U(1) symmetry. Instead, an effective μ-term is generated from the vacuum expectation value of an SM singlet which breaks the extra U(1) symmetry at low energies, giving rise to a massive Z'. We explore the phenomenology of the constrained version of this model in substantially more detail than has been carried out previously, performing a ten-dimensional scan that reveals a large volume of viable parameter space. We classify the different mechanisms for generating the measured relic density of dark matter found in the scan, including the identification of a new mechanism involving mixed bino/inert-Higgsino dark matter. We show which mechanisms can evade the latest direct detection limits from the LUX 2016 experiment. Finally we present benchmarks consistent with all the experimental constraints and which could be discovered with the XENON1T experiment.
Electroweak baryogenesis in the exceptional supersymmetric standard model
Chao, Wei
2015-08-28
We study electroweak baryogenesis in the E6-inspired exceptional supersymmetric standard model (E6SSM). The relaxation coefficients driven by singlinos and the new gaugino, as well as the transport equation of the Higgs supermultiplet number density in the E6SSM, are calculated. Our numerical simulation shows that both CP-violating source terms from singlinos and the new gaugino can solely give rise to a correct baryon asymmetry of the Universe via the electroweak baryogenesis mechanism.
Dark Matter and Color Octets Beyond the Standard Model
Krnjaic, Gordan Zdenko
2012-07-01
Although the Standard Model (SM) of particles and interactions has survived forty years of experimental tests, it does not provide a complete description of nature. From cosmological and astrophysical observations, it is now clear that the majority of matter in the universe is not baryonic and interacts very weakly (if at all) via non-gravitational forces. The SM does not provide a dark matter candidate, so new particles must be introduced. Furthermore, recent Tevatron results suggest that SM predictions for benchmark collider observables are in tension with experimental observations. In this thesis, we will propose extensions to the SM that address each of these issues.
Searches for the standard model Higgs boson at the Tevatron
Dorigo, Tommaso (Padua U.)
2005-05-01
The CDF and D0 experiments at the Tevatron have searched for the Standard Model Higgs boson in data collected between 2001 and 2004. Upper limits have been placed on the production cross section times branching ratio to bb̄ pairs or W⁺W⁻ pairs as a function of the Higgs boson mass. Projections indicate that the Tevatron experiments have a chance of discovering a M_H = 115 GeV Higgs with the total dataset foreseen by 2009, or excluding it at 95% C.L. up to a mass of 135 GeV.
Quantum corrections in Higgs inflation: the Standard Model case
NASA Astrophysics Data System (ADS)
George, Damien P.; Mooij, Sander; Postma, Marieke
2016-04-01
We compute the one-loop renormalization group equations for Standard Model Higgs inflation. The calculation is done in the Einstein frame, using a covariant formalism for the multi-field system. All counterterms, and thus the beta functions, can be extracted from the radiative corrections to the two-point functions; the calculation of higher n-point functions then serves as a consistency check of the approach. We find that the theory is renormalizable in the effective field theory sense in the small, mid and large field regimes. In the large field regime our results differ slightly from those found in the literature, due to a different treatment of the Goldstone bosons.
Big bang nucleosynthesis: The standard model and alternatives
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
Big bang nucleosynthesis provides (with the microwave background radiation) one of the two quantitative experimental tests of the big bang cosmological model. This paper reviews the standard homogeneous-isotropic calculation and shows how it fits the light element abundances ranging from He-4 at 24% by mass through H-2 and He-3 at parts in 10^5 down to Li-7 at parts in 10^10. Furthermore, the recent Large Electron-Positron collider (LEP) (and Stanford Linear Collider (SLC)) results on the number of neutrinos are discussed as a positive laboratory test of the standard scenario. Discussion is presented on the improved observational data as well as the improved neutron lifetime data. Alternate scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the conclusion that Ω_b approximately equals 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible is less than Ω_b.
ERIC Educational Resources Information Center
Grosberg, Lawrence M.
2001-01-01
Describes how medical schools have successfully used the "standardized patient" teaching technique, and the use of "standardized clients" at New York Law School. Proposes establishing consortiums among small groups of law schools to implement the standardized client technique, and using the technique in high stakes testing. (EV)
Modeling the wet bulb globe temperature using standard meteorological measurements.
Liljegren, James C; Carhart, Richard A; Lawday, Philip; Tschopp, Stephen; Sharp, Robert
2008-10-01
The U.S. Army has a need for continuous, accurate estimates of the wet bulb globe temperature to protect soldiers and civilian workers from heat-related injuries, including those involved in the storage and destruction of aging chemical munitions at depots across the United States. At these depots, workers must don protective clothing that increases their risk of heat-related injury. Because of the difficulty in making continuous, accurate measurements of wet bulb globe temperature outdoors, the authors have developed a model of the wet bulb globe temperature that relies only on standard meteorological data available at each storage depot for input. The model is composed of separate submodels of the natural wet bulb and globe temperatures that are based on fundamental principles of heat and mass transfer, has no site-dependent parameters, and achieves an accuracy of better than 1 degree C based on comparisons with wet bulb globe temperature measurements at all depots.
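The submodels described above predict the natural wet bulb (Tnw) and globe (Tg) temperatures from standard meteorological data; the index itself then combines them with the dry bulb (air) temperature Ta using the standard outdoor weighting. Only that final, standard combination step is sketched here; the paper's heat- and mass-transfer submodels are not reproduced:

```python
# Standard outdoor wet bulb globe temperature combination (ISO-style
# 0.7/0.2/0.1 weighting of natural wet bulb, globe, and dry bulb
# temperatures). The Liljegren model's contribution -- estimating Tnw
# and Tg from routine meteorological data -- is not reproduced here.

def wbgt_outdoor(t_nw, t_g, t_a):
    """Outdoor WBGT in deg C from natural wet bulb, globe, and air temps."""
    return 0.7 * t_nw + 0.2 * t_g + 0.1 * t_a

# Hypothetical hot, sunny conditions (deg C):
print(round(wbgt_outdoor(t_nw=25.0, t_g=45.0, t_a=32.0), 1))
```

Because Tnw carries 70% of the weight, the hardest-to-measure quantity dominates the index, which is why a submodel accurate to better than 1 degree C, as reported above, is the critical piece.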
Penguin-like diagrams from the standard model
Ping, Chia Swee
2015-04-24
The Standard Model is highly successful in describing the interactions of leptons and quarks. There are, however, rare processes that involve higher order effects in electroweak interactions. One specific class of processes is the penguin-like diagram. This class of diagrams involves the neutral change of quark flavours accompanied by the emission of a gluon (gluon penguin), a photon (photon penguin), a gluon and a photon (gluon-photon penguin), a Z-boson (Z penguin), or a Higgs-boson (Higgs penguin). Such diagrams do not arise at the tree level in the Standard Model. They are, however, induced by one-loop effects. In this paper, we present an exact calculation of the penguin diagram vertices in the 't Hooft-Feynman gauge. Renormalization of the vertex is effected by a prescription by Chia and Chong which gives an expression for the counter term identical to that obtained by employing the Ward-Takahashi identity. The on-shell vertex functions for the penguin diagram vertices are obtained. The various penguin diagram vertex functions are related to one another via the Ward-Takahashi identity. From these, a set of relations is obtained connecting the vertex form factors of various penguin diagrams. Explicit expressions for the gluon-photon penguin vertex form factors are obtained, and their contributions to the flavour-changing processes estimated.
Using clinical element models for pharmacogenomic study data standardization.
Zhu, Qian; Freimuth, Robert R; Pathak, Jyotishman; Chute, Christopher G
2013-01-01
Standardized representations for pharmacogenomics data are seldom used, which leads to data heterogeneity and hinders data reuse and integration. In this study, we attempted to represent data elements from the Pharmacogenomics Research Network (PGRN) related to four categories (patient, drug, disease and laboratory) in a standard way using Clinical Element Models (CEMs), which have been adopted in the Strategic Health IT Advanced Research Project for secondary use of EHR data (SHARPn) as a library of common logical models that facilitate consistent data representation, interpretation, and exchange within and across heterogeneous sources and applications. This was accomplished by grouping PGRN data elements into categories based on UMLS semantic type, then mapping each to one or more CEM attributes using a web-based tool that was developed to support curation activities. This study demonstrates the successful application of SHARPn CEMs to the pharmacogenomic domain. It also identified several categories of data elements that are not currently supported by SHARPn CEMs, which represent opportunities for further development and collaboration.
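The curation workflow described above (group source data elements by UMLS semantic type, then route each group to CEM attributes) can be sketched in a few lines. All element names and the semantic-type-to-category table below are hypothetical stand-ins for illustration, not actual PGRN or SHARPn identifiers.

```python
from collections import defaultdict

# Hypothetical PGRN-style data elements tagged with a UMLS semantic type.
elements = [
    {"name": "warfarin_dose", "semantic_type": "Clinical Drug"},
    {"name": "inr_result",    "semantic_type": "Laboratory Procedure"},
    {"name": "atrial_fib",    "semantic_type": "Disease or Syndrome"},
    {"name": "aspirin_use",   "semantic_type": "Clinical Drug"},
]

# Illustrative semantic-type -> category routing (not the real curation table).
category_of = {
    "Clinical Drug": "drug",
    "Laboratory Procedure": "laboratory",
    "Disease or Syndrome": "disease",
}

# Group elements by category; each group would then be mapped by a curator
# to one or more CEM attributes.
grouped = defaultdict(list)
for el in elements:
    grouped[category_of[el["semantic_type"]]].append(el["name"])

print(sorted(grouped["drug"]))  # ['aspirin_use', 'warfarin_dose']
```

The point of the grouping step is that mapping decisions can then be made once per semantic category rather than once per element.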
Long-term archiving and data access: modelling and standardization
NASA Technical Reports Server (NTRS)
Hoc, Claude; Levoir, Thierry; Nonon-Latapie, Michel
1996-01-01
This paper reports on the multiple difficulties inherent in the long-term archiving of digital data, and in particular on the different possible causes of definitive data loss. It defines the basic principles which must be respected when creating long-term archives. Such principles concern both the archival systems and the data. The archival systems should have two primary qualities: independence of architecture with respect to technological evolution, and genericness, i.e., the capability of ensuring identical service for heterogeneous data. These characteristics are implicit in the Reference Model for Archival Services, currently being designed within an ISO-CCSDS framework. A system prototype has been developed at the French Space Agency (CNES) in conformance with these principles, and its main characteristics are discussed in this paper. Moreover, the data archived should be capable of abstract representation regardless of the technology used and should, to the extent possible, be organized, structured and described with the help of existing standards. The immediate advantage of standardization is illustrated by several concrete examples. Both the positive facets and the limitations of this approach are analyzed. The advantages of developing an object-oriented data model within this context are then examined.
Electric-Magnetic Duality and the Dualized Standard Model
NASA Astrophysics Data System (ADS)
Tsou, Sheung Tsun
In these lectures I shall explain how a new-found nonabelian duality can be used to solve some outstanding questions in particle physics. The first lecture introduces the concept of electromagnetic duality and goes on to present its nonabelian generalization in terms of loop space variables. The second lecture discusses certain puzzles that remain with the Standard Model of particle physics, particularly aimed at nonexperts. The third lecture presents a solution to these problems in the form of the Dualized Standard Model, first proposed by Chan and the author, using nonabelian dual symmetry. The fundamental particles exist in three generations, and if this is a manifestation of dual colour symmetry, which by 't Hooft's theorem is necessarily broken, then we have a natural explanation of the generation puzzle, together with tested and testable consequences not only in particle physics, but also in astrophysics, nuclear and atomic physics. The work reported is mainly that done in collaboration with Chan Hong-Mo, with various parts done together with Peter Scharbach, Jacqueline Faridani, José Bordes, Jakov Pfaudler, and Ricardo Gallego.
Vacuum stability in an extended standard model with a leptoquark
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Priyotosh; Mandal, Rusa
2017-02-01
We investigate the standard model with the extension of a charged scalar having fractional electromagnetic charge of -1/3 unit and with lepton and baryon number-violating couplings at tree level. Without directly taking part in electroweak (EW) symmetry breaking, this scalar can affect the stability of the EW vacuum via loop effects. The impact of such a scalar, i.e., a leptoquark, on the perturbativity of the standard model dimensionless couplings, as well as on the new physics couplings, has been studied at two-loop order. The vacuum stability of the Higgs potential is checked using the one-loop renormalization group-improved effective potential approach with two-loop beta functions for all the couplings. From the stability analysis, various bounds are drawn on the parameter space by identifying the regions corresponding to metastability and stability of the EW vacuum. Later, we also address the Higgs mass fine-tuning issue via the Veltman condition; the presence of such a scalar increases the scale up to which the theory can be considered reasonably fine-tuned. All these constraints give a very predictive parameter space for the leptoquark couplings, which can be tested at present and future colliders. In particular, a leptoquark with mass O(TeV) can give rise to lepton-quark flavor-violating signatures via decay into the tτ channel at tree level, which can be tested at the LHC or future colliders.
Generalized Pure Density Matrices and the Standard Model
NASA Astrophysics Data System (ADS)
Brannen, Carl
2015-04-01
We consider generalizations of pure density matrices that satisfy ρρ = ρ but give up the trace = 1 requirement. Given a representation of a quantum algebra in N × N complex matrices, the elements that satisfy ρρ = ρ can be taken to be pure density matrix states. In the Standard Model, particles from different "superselection sectors" cannot form linear superpositions. For example, it is impossible to form a linear superposition between an electron and a neutrino. We report that some quantum algebras give symmetry, particle and generation content, gauge freedom, and superselection sectors that are similar to those of the Standard Model. Our lecture will consider as an example the 4 × 4 complex matrices. There are 16 that are diagonal with ρρ = ρ. The 4 with trace = 1 give the usual pure density matrices. We will show that the 6 with trace = 2 form an SU(3) triplet of three superselection sectors, with each sector consisting of an SU(2) doublet. Considering one of these sectors, the mapping to SU(2) is not unique; there is an SU(2) gauge freedom. This gauge freedom is an analogy to the U(1) gauge freedom that arises when converting a pure density matrix to a state vector.
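The counting in this abstract is easy to verify directly: a diagonal matrix satisfies ρρ = ρ exactly when each diagonal entry d obeys d·d = d, i.e. d is 0 or 1, so the diagonal solutions can be enumerated. A minimal sketch:

```python
from itertools import product

# Each diagonal idempotent of a 4x4 matrix is a choice of 0 or 1 in each
# of the four diagonal slots; the trace counts the 1-entries.
diagonals = list(product([0, 1], repeat=4))

by_trace = {}
for d in diagonals:
    by_trace.setdefault(sum(d), []).append(d)

print(len(diagonals))    # 16 diagonal solutions of rho*rho = rho
print(len(by_trace[1]))  # 4  with trace 1: the usual pure density matrices
print(len(by_trace[2]))  # 6  with trace 2: the abstract's triplet of doublets
```

The counts 16, 4 and 6 are just 2^4, C(4,1) and C(4,2), matching the abstract.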
Wisconsin's Model Academic Standards for Art and Design Education.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
This Wisconsin academic standards guide for art and design explains what is meant by academic standards. The guide declares that academic standards specify what students should know and be able to do; what students might be asked to do to give evidence of standards; how well students must perform; and that content, performance, and proficiency…
Application of standards and models in body composition analysis.
Müller, Manfred J; Braun, Wiebke; Pourhassan, Maryam; Geisler, Corinna; Bosy-Westphal, Anja
2016-05-01
The aim of this review is to extend present concepts of body composition and to integrate them into physiology. In vivo body composition analysis (BCA) has a sound theoretical and methodological basis. Present methods used for BCA are reliable and valid. Individual data on body components, organs and tissues are included in different models, e.g. a 2-, 3-, 4- or multi-component model. Today the so-called 4-compartment model, as well as whole-body MRI (or computed tomography) scans, are considered the gold standards of BCA. In practice, the choice of method depends on the question of interest and the accuracy needed to address it. Body composition data are descriptive and used for normative analyses (e.g. generating normal values, centiles and cut-offs). Advanced models of BCA go beyond description and normative approaches. The concept of functional body composition (FBC) takes into account the relationships between individual body components, organs and tissues and related metabolic and physical functions. FBC can be further extended to the model of healthy body composition (HBC) based on horizontal (i.e. structural) and vertical (e.g. metabolism and its neuroendocrine control) relationships between individual components, as well as between components and body functions, using mathematical modelling with a hierarchical multi-level multi-scale approach at the software level. HBC integrates into whole-body systems of cardiovascular, respiratory, hepatic and renal functions. To conclude, BCA is a prerequisite for detailed phenotyping of individuals, providing a sound basis for in-depth biomedical research and clinical decision making.
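For orientation on how a component model turns a measurement into composition data: the classic 2-component densitometry model (Siri's equation) converts whole-body density into percent body fat by assuming fixed densities for the fat and fat-free compartments, which is exactly the assumption that the multi-component models discussed above relax. A textbook illustration, not part of the review itself:

```python
def siri_percent_fat(body_density_g_per_ml):
    """Siri (1961) 2-component model: %fat = 495/Db - 450.
    Assumes fixed densities for the fat (~0.90 g/ml) and fat-free
    (~1.10 g/ml) compartments."""
    return 495.0 / body_density_g_per_ml - 450.0

# Db = 1.10 g/ml (the assumed fat-free body density) gives 0% fat;
# a typical measured density of 1.05 g/ml gives about 21.4% fat.
print(round(siri_percent_fat(1.05), 1))  # 21.4
```

Multi-component models replace the fixed-density assumption with measured water and mineral fractions, which is why the 4-compartment model serves as a gold standard.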
Evolution of Climate Science Modelling Language within international standards frameworks
NASA Astrophysics Data System (ADS)
Lowe, Dominic; Woolf, Andrew
2010-05-01
The Climate Science Modelling Language (CSML) was originally developed as part of the NERC Data Grid (NDG) project in the UK. It was one of the first Geography Markup Language (GML) application schemas describing complex feature types for the metocean domain. CSML feature types can be used to describe typical climate products such as model runs or atmospheric profiles. CSML has been successfully used within NDG to provide harmonised access to a number of different data sources. For example, meteorological observations held in heterogeneous databases by the British Atmospheric Data Centre (BADC) and Centre for Ecology and Hydrology (CEH) were served uniformly as CSML features via Web Feature Service. CSML has now been substantially revised to harmonise it with the latest developments in OGC and ISO conceptual modelling for geographic information. In particular, CSML is now aligned with the near-final ISO 19156 Observations & Measurements (O&M) standard. CSML combines the O&M concept of 'sampling features' together with an observation result based on the coverage model (ISO 19123). This general pattern is specialised for particular data types of interest, classified on the basis of sampling geometry and topology. In parallel work, the OGC Met Ocean Domain Working Group has established a conceptual modelling activity. This is a cross-organisational effort aimed at reaching consensus on a common core data model that could be re-used in a number of met-related application areas: operational meteorology, aviation meteorology, climate studies, and the research community. It is significant to note that this group has also identified sampling geometry and topology as a key classification axis for data types. Using the Model Driven Architecture (MDA) approach as adopted by INSPIRE we demonstrate how the CSML application schema is derived from a formal UML conceptual model based on the ISO TC211 framework. By employing MDA tools which map consistently between UML and GML we
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
NASA Astrophysics Data System (ADS)
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increasing number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is a risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira (Greece) and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate numerically the evolution of the tsunami wave from the deep ocean to its target site. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have themselves been validated with laboratory data. Even fewer existing numerical models have been both validated with analytical solutions and verified with laboratory and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes
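The validation step described above amounts to comparing a numerical result against an analytical benchmark within a tolerance. Below is a minimal sketch of such a comparison harness; the stand-in benchmark profile and the 5% tolerance are illustrative assumptions, not values taken from any tsunami-modeling standard:

```python
import math

def l2_error(numerical, analytical):
    """Relative L2 misfit between a computed profile and a benchmark."""
    num = math.sqrt(sum((n - a) ** 2 for n, a in zip(numerical, analytical)))
    den = math.sqrt(sum(a ** 2 for a in analytical))
    return num / den

def passes_benchmark(numerical, analytical, tol=0.05):
    """A code 'passes' a benchmark if its relative misfit is under tol
    (5% here is an illustrative threshold, not from any standard)."""
    return l2_error(numerical, analytical) <= tol

# Toy check: a computed profile within ~1% of the benchmark curve passes.
analytical = [math.exp(-x * x) for x in range(5)]  # stand-in benchmark
numerical = [a * 1.01 for a in analytical]
print(passes_benchmark(numerical, analytical))  # True: ~1% misfit
```

A real validation suite would run such a comparison against each analytical solution and laboratory dataset in the benchmark set, not a single toy curve.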
New extended standard model, dark matters and relativity theory
NASA Astrophysics Data System (ADS)
Hwang, Jae-Kwang
2016-03-01
Three-dimensional quantized space model is newly introduced as the extended standard model. Four three-dimensional quantized spaces with total 12 dimensions are used to explain the universes including ours. Electric (EC), lepton (LC) and color (CC) charges are defined to be the charges of the x1x2x3, x4x5x6 and x7x8x9 warped spaces, respectively. Then, the lepton is the xi(EC)-xj(LC) correlated state, which makes 3×3 = 9 leptons, and the quark is the xi(EC)-xj(LC)-xk(CC) correlated state, which makes 3×3×3 = 27 quarks. The new three bastons with the xi(EC) state are proposed as the dark matters seen in the x1x2x3 space, too. The matter universe question, three generations of the leptons and quarks, dark matter and dark energy, hadronization, the big bang, quantum entanglement, quantum mechanics and general relativity are briefly discussed in terms of this new model. The details can be found in the article titled "Journey into the universe; three-dimensional quantized spaces, elementary particles and quantum mechanics" at https://www.researchgate.net/profile/J_Hwang2.
Delayed standard neural network models for control systems.
Liu, Meiqin
2007-09-01
In order to conveniently analyze the stability of recurrent neural networks (RNNs) and successfully synthesize controllers for nonlinear systems, a novel neural network model, named the delayed standard neural network model (DSNNM), is presented, analogous to the nominal model in linear robust control theory; it is the interconnection of a linear dynamic system and a bounded static delayed (or nondelayed) nonlinear operator. By combining a number of different Lyapunov functionals with the S-procedure, some useful criteria of global asymptotic stability and global exponential stability for the continuous-time DSNNMs (CDSNNMs) and discrete-time DSNNMs (DDSNNMs) are derived, whose conditions are formulated as linear matrix inequalities (LMIs). Based on the stability analysis, some state-feedback control laws for the DSNNM with input and output are designed to stabilize the closed-loop systems. Most RNNs and neurocontrol nonlinear systems with (or without) time delays can be transformed into DSNNMs to be stability-analyzed or stabilization-synthesized in a unified way. In this paper, the DSNNMs are applied to analyzing the stability of continuous-time and discrete-time RNNs with or without time delays, and to synthesizing state-feedback controllers for a chaotic neural network system and a discrete-time nonlinear system. It turns out that the DSNNM makes the stability conditions of the RNNs easy to verify, and provides a new idea for the synthesis of controllers for nonlinear systems.
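As a simplified illustration of the linear dynamic core that the DSNNM isolates: a discrete-time linear system x(k+1) = A x(k) is asymptotically stable when the spectral radius of A is below 1. The sketch below checks this for a 2×2 matrix via the quadratic formula; it is a toy eigenvalue test, not the paper's LMI machinery, and the example matrices are made up.

```python
import cmath

def spectral_radius_2x2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix from the characteristic polynomial
    lambda^2 - tr*lambda + det = 0; the discrete-time system
    x(k+1) = A x(k) is asymptotically stable iff max |eigenvalue| < 1."""
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

print(spectral_radius_2x2(0.5, 0.1, 0.0, 0.3) < 1)  # True: stable
print(spectral_radius_2x2(1.2, 0.0, 0.0, 0.3) < 1)  # False: unstable
```

The LMI approach in the paper generalizes this kind of check to interconnections with bounded nonlinear operators and time delays, where plain eigenvalue tests no longer suffice.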
Sakurai Prize: Beyond the Standard Model Higgs Boson
NASA Astrophysics Data System (ADS)
Haber, Howard
2017-01-01
The discovery of the Higgs boson strongly suggests that the first elementary spin-0 particle has been observed. Is the Higgs boson a solo act, or are there additional Higgs bosons to be discovered? Given that there are three generations of fundamental fermions, one might also expect the sector of fundamental scalars of nature to be non-minimal. However, there are already strong constraints on the possible structure of an extended Higgs sector. In this talk, I review the theoretical motivations that have been put forward for an extended Higgs sector and discuss its implications in light of the observation that the properties of the observed Higgs boson are close to those predicted by the Standard Model. Supported in part by the U.S. Department of Energy, Grant Number DE-SC0010107.
A Hierarchical Model for Accuracy and Choice on Standardized Tests.
Culpepper, Steven Andrew; Balamuta, James Joseph
2015-11-25
This paper assesses the psychometric value of allowing test-takers choice in standardized testing. New theoretical results examine the conditions where allowing choice improves score precision. A hierarchical framework is presented for jointly modeling the accuracy of cognitive responses and item choices. The statistical methodology is disseminated in the 'cIRT' R package. An 'answer two, choose one' (A2C1) test administration design is introduced to avoid challenges associated with nonignorable missing data. Experimental results suggest that the A2C1 design and payout structure encouraged subjects to choose items consistent with their cognitive trait levels. Substantively, the experimental data suggest that item choices yielded comparable information and discrimination ability as cognitive items. Given there are no clear guidelines for writing more or less discriminating items, one practical implication is that choice can serve as a mechanism to improve score precision.
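For orientation on why matching items to examinees can improve score precision, the standard 2PL item response model is a useful reference point: an item's Fisher information about the trait level peaks where the success probability is near one half, i.e. where item difficulty matches the examinee. This is a generic IRT sketch, not necessarily the exact hierarchical model of the paper.

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response model: P(correct | theta) = 1/(1+exp(-a(theta-b)))
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1-P). It is largest
    when theta is near b, which is why letting examinees choose items
    matched to their trait level can sharpen score precision."""
    p = p_correct_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# An item of difficulty b = 0 is more informative for an examinee at
# theta = 0 than for one at theta = 2.
print(item_information(0.0, 1.5, 0.0) > item_information(2.0, 1.5, 0.0))  # True
```

Under this view, well-calibrated choice acts like a crude form of adaptive testing: examinees route themselves toward items near their own trait level.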
Image contrast enhancement based on a local standard deviation model
Chang, Dah-Chung; Wu, Wen-Rong
1996-12-31
The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method which needs a contrast gain to adjust the high-frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant, but both choices cause problems in practical applications: noise overenhancement and ringing artifacts. In this paper a new gain is developed, based on Hunt's Gaussian image model, to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest X-ray images, and the simulations show the effectiveness of the proposed algorithm.
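A minimal sketch of LSD-based adaptive contrast enhancement: each pixel is replaced by x' = m + g(LSD)·(x − m), where m and LSD are the local window mean and standard deviation. Here the inverse-LSD gain is simply clipped at a maximum so that flat, noisy regions are not over-amplified; the window size, gain law and clipping value are illustrative assumptions, not the authors' nonlinear gain function.

```python
import math

def local_stats(img, r):
    """Mean and standard deviation over a (2r+1)x(2r+1) window (edge-clamped)."""
    h, w = len(img), len(img[0])
    mean = [[0.0] * w for _ in range(h)]
    std = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[max(0, min(h - 1, i + di))][max(0, min(w - 1, j + dj))]
                    for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            m = sum(vals) / len(vals)
            mean[i][j] = m
            std[i][j] = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return mean, std

def ace(img, r=1, alpha=20.0, gmax=3.0):
    """x' = m + g(LSD)*(x - m), with the inverse-LSD gain clipped at gmax
    so that low-LSD (flat, noisy) regions are not over-amplified."""
    mean, std = local_stats(img, r)
    return [[mean[i][j] + min(gmax, alpha / (std[i][j] + 1e-6)) * (v - mean[i][j])
             for j, v in enumerate(row)] for i, row in enumerate(img)]
```

On a flat patch the pixel equals the local mean, so the output is unchanged regardless of the gain; across an edge the deviation from the mean is amplified, stretching contrast.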
Quantum gravity and Lorentz invariance violation in the standard model.
Alfaro, Jorge
2005-06-10
The most important problem of fundamental physics is the quantization of the gravitational field. A main difficulty is the lack of available experimental tests that discriminate among the theories proposed to quantize gravity. Recently, Lorentz invariance violation by quantum gravity (QG) has been the source of growing interest. However, the predictions depend on an ad hoc hypothesis and too many arbitrary parameters. Here we show that the standard model itself contains tiny Lorentz invariance violation terms coming from QG. All terms depend on one arbitrary parameter α that sets the scale of QG effects. This parameter can be estimated using data from the ultrahigh energy cosmic ray spectrum to be |α| ≲ 10^-22 to 10^-23.
Baryon number dissipation at finite temperature in the standard model
Mottola, E.; Raby, S. (Dept. of Physics); Starkman, G. (Dept. of Astronomy)
1990-01-01
We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate γ is given in terms of real time correlation functions of the operator E·B, and is directly proportional to the sphaleron transition rate Γ: γ ≲ n_f Γ/T^3. Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs.
CosPA 2015 and the Standard Model
NASA Astrophysics Data System (ADS)
Pauchy Hwang, W.-Y.
2016-07-01
In this keynote speech, I describe briefly “The Universe”, a journal/newsletter launched by the APCosPA Organization, and my lifetime research on the Standard Model of particle physics. In this 21st century, we should declare that we live in the quantum 4-dimensional Minkowski space-time with the force-field gauge-group structure SU_c(3) × SU_L(2) × U(1) × SU_f(3) built in from the very beginning. This background can see the lepton world, of atomic sizes, and offers us the eyes to see other things. It also can see the quark world, of the Fermi sizes, and this fact makes this entire world much more interesting.
Theories beyond the standard model, one year before the LHC
NASA Astrophysics Data System (ADS)
Dimopoulos, Savas
2006-04-01
Next year the Large Hadron Collider at CERN will begin what may well be a new golden era of particle physics. I will discuss three theories that will be tested at the LHC. I will begin with the supersymmetric standard model, proposed with Howard Georgi in 1981. This theory made a precise quantitative prediction, the unification of couplings, that was experimentally confirmed in 1991 by experiments at CERN and SLAC. This established it as the leading theory for physics beyond the standard model. Its main prediction, the existence of supersymmetric particles, will be tested at the LHC. I will next overview theories with large new dimensions, proposed with Nima Arkani-Hamed and Gia Dvali in 1998. This framework links the weakness of gravity to the presence of sub-millimeter-size dimensions, which are presently searched for in experiments looking for deviations from Newton's law at short distances. In this framework quantum gravity, string theory, and black holes may be experimentally investigated at the LHC. I will end with the recent proposal of split supersymmetry with Nima Arkani-Hamed. This theory is motivated by the possible existence of an enormous number of ground states in the fundamental theory, as suggested by the cosmological constant problem and recent developments in string theory and cosmology. It can be tested at the LHC and, if confirmed, it will lend support to the idea that our universe and its laws are not unique and that there is an enormous variety of universes, each with its own distinct physical laws.
Physics Beyond the Standard Model from Molecular Hydrogen Spectroscopy
NASA Astrophysics Data System (ADS)
Ubachs, Wim; Salumbides, Edcel John; Bagdonaite, Julija
2015-06-01
The spectrum of molecular hydrogen can be measured in the laboratory to very high precision using advanced laser and molecular beam techniques, as well as frequency-comb based calibration [1,2]. The quantum level structure of this smallest neutral molecule can now be calculated to very high precision, based on a very accurate (10^-15 precision) Born-Oppenheimer potential [3] and including subtle non-adiabatic, relativistic and quantum electrodynamic effects [4]. Comparison between theory and experiment yields a test of QED, and in fact of the Standard Model of physics, since the weak, strong and gravitational forces have a negligible effect. Even fifth forces beyond the Standard Model can be searched for [5]. Astronomical observation of molecular hydrogen spectra, using the largest telescopes on Earth and in space, may reveal possible variations of fundamental constants on a cosmological time scale [6]. A study has been performed at a 'look-back' time of 12.5 billion years [7]. In addition, the possible dependence of a fundamental constant on a gravitational field has been investigated from observation of molecular hydrogen in the photospheres of white dwarfs [8]. The latter involves a test of Einstein's equivalence principle. [1] E.J. Salumbides et al., Phys. Rev. Lett. 107, 143005 (2011). [2] G. Dickenson et al., Phys. Rev. Lett. 110, 193601 (2013). [3] K. Pachucki, Phys. Rev. A 82, 032509 (2010). [4] J. Komasa et al., J. Chem. Theory Comput. 7, 3105 (2011). [5] E.J. Salumbides et al., Phys. Rev. D 87, 112008 (2013). [6] F. van Weerdenburg et al., Phys. Rev. Lett. 106, 180802 (2011). [7] J. Bagdonaite et al., Phys. Rev. Lett. 114, 071301 (2015). [8] J. Bagdonaite et al., Phys. Rev. Lett. 113, 123002 (2014).
Physics beyond the Standard Model from hydrogen spectroscopy
NASA Astrophysics Data System (ADS)
Ubachs, W.; Koelemeij, J. C. J.; Eikema, K. S. E.; Salumbides, E. J.
2016-02-01
Spectroscopy of hydrogen can be used for a search into physics beyond the Standard Model. Differences between the absorption spectra of the Lyman and Werner bands of H2 as observed at high redshift and those measured in the laboratory can be interpreted in terms of possible variations of the proton-electron mass ratio μ = m_p/m_e over cosmological history. Investigation of ten such absorbers in the redshift range z = 2.0-4.2 yields a constraint of |Δμ/μ| < 5×10^-6 at 3σ. Observation of H2 from the photospheres of white dwarf stars inside our Galaxy delivers a constraint of similar magnitude on a dependence of μ on a gravitational potential 10^4 times as strong as that at the Earth's surface. While such astronomical studies aim at finding quintessence in an indirect manner, laboratory precision measurements target such additional quantum fields in a direct manner. Laser-based precision measurements of dissociation energies, vibrational splittings and rotational level energies in H2 molecules and their deuterated isotopomers HD and D2 produce values for the rovibrational binding energies fully consistent with quantum ab initio calculations including relativistic and quantum electrodynamical (QED) effects. Similarly, precision measurements of high-overtone vibrational transitions of HD+ ions, captured in ion traps and sympathetically cooled to mK temperatures, also result in transition frequencies fully consistent with calculations including QED corrections. Precision measurements of inter-Rydberg transitions in H2 can be extrapolated to yield accurate values for level splittings in the H2+ ion. These comprehensive results of laboratory precision measurements on neutral and ionic hydrogen molecules can be interpreted to set bounds on the existence of possible fifth forces and of higher dimensions, phenomena describing physics beyond the Standard Model.
Standard Model in multiscale theories and observational constraints
NASA Astrophysics Data System (ADS)
Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David
2016-08-01
We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called the fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t*, ℓ* and E*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting the absolute upper bound t* < 10^-23 s. For the natural choice α0 = 1/2 of the fractional exponent in the measure, this bound is strengthened to t* < 10^-29 s, corresponding to ℓ* < 10^-20 m and E* > 28 TeV. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain the independent absolute upper bounds t* < 10^-13 s and E* > 35 MeV. For α0 = 1/2, the Lamb shift alone yields t* < 10^-27 s, ℓ* < 10^-19 m and E* > 450 GeV.
Observational consequences of the standard model Higgs inflation variants
Popa, L.A.
2011-10-01
We consider the possibility to observationally differentiate the Standard Model (SM) Higgs-driven inflation with non-minimal coupling to gravity from other variants of SM Higgs inflation based on scalar field theories with a non-canonical kinetic term, such as a Galileon-like kinetic term or a kinetic term with non-minimal derivative coupling to the Einstein tensor. In order to ensure consistent results, we study the SM Higgs inflation variants by using the same method, computing the full dynamics of the background and perturbations of the Higgs field during inflation at the quantum level. Assuming that all the SM Higgs inflation variants are consistent theories, we use the MCMC technique to derive constraints on the inflationary parameters and the Higgs boson mass from their fit to the WMAP7+SN+BAO data set. We conclude that a combination of the SM Higgs mass measurement by the LHC and accurate determination by the PLANCK satellite of the spectral index of curvature perturbations and the tensor-to-scalar ratio will make it possible to distinguish among these models. We also show that the consistency relations of the SM Higgs inflation variants are distinct enough to differentiate among them.
Setting, Evaluating, and Maintaining Certification Standards with the Rasch Model.
ERIC Educational Resources Information Center
Grosse, Martin E.; Wright, Benjamin D.
1986-01-01
Based on the standard setting procedures or the American Board of Preventive Medicine for their Core Test, this article describes how Rasch measurement can facilitate using test content judgments in setting a standard. Rasch measurement can then be used to evaluate and improve the precision of the standard and to hold it constant across time.…
Wisconsin's Model Academic Standards for Business. Bulletin No. 9004.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
This document contains standards for the academic content of the Wisconsin K-12 curriculum in the area of business. Developed by task forces of educators, parents, board of education members, and employers and employees, the standards cover content, performance, and proficiency areas. They are cross-referenced to the state standards for English…
Dark matter and color octets beyond the Standard Model
NASA Astrophysics Data System (ADS)
Krnjaic, Gordan Z.
Although the Standard Model (SM) of particles and interactions has survived forty years of experimental tests, it does not provide a complete description of nature. From cosmological and astrophysical observations, it is now clear that the majority of matter in the universe is not baryonic and interacts very weakly (if at all) via non-gravitational forces. The SM does not provide a dark matter candidate, so new particles must be introduced. Furthermore, recent Tevatron results suggest that SM predictions for benchmark collider observables are in tension with experimental observations. In this thesis, we will propose extensions to the SM that address each of these issues. Although there is abundant indirect evidence for the existence of dark matter, terrestrial efforts to observe its interactions have yielded conflicting results. We address this situation with a simple model of dark matter that features hydrogen-like bound states that scatter off SM nuclei by undergoing inelastic hyperfine transitions. We explore the available parameter space that results from demanding that DM self-interactions satisfy experimental bounds and ameliorate the tension between positive and null signals at the DAMA and CDMS experiments respectively. However, this simple model does not explain the cosmological abundance of dark matter and also encounters a Landau pole at a low energy scale. We, therefore, extend the field content and gauge group of the dark sector to resolve these issues with a renormalizable UV completion. We also explore the galactic dynamics of unbound dark matter and find that "dark ions" settle into a diffuse isothermal halo that differs from that of the bound states. This suppresses the local dark-ion density and expands the model's viable parameter space. We also consider the > 3σ excess in W plus dijet events recently observed at the Tevatron collider. We show that decays of a color-octet, electroweak-triplet scalar particle ("octo-triplet") can yield the
Challenging the standard model at the Tevatron collider
Filthaut, Frank (Nijmegen U.)
2011-03-01
Even at a time where the world's eyes are focused on the Large Hadron Collider at CERN, which has reached the energy frontier in 2010, many important results are still being obtained from data analyses performed at the Tevatron collider at Fermilab. This contribution discusses recent highlights in the areas of B hadron, electroweak, top quark, and Higgs boson physics. The standard model (SM) of particle physics forms the cornerstone of our understanding of elementary particles and their interactions, and many of its aspects have been investigated in great detail. Yet it is generally suspected to be incomplete (e.g. by not allowing for the incorporation of gravity in a field theoretical setting) and un-natural (e.g. the mass of the Higgs boson is not well protected against radiative corrections). In addition, it does not explain the dark matter and dark energy content of the Universe. It is therefore of eminent importance to test the limits of validity of the SM. In the decade since its upgrade to a centre-of-mass energy {radical}s = 1.96 TeV, the Tevatron p{bar p} collider has delivered an integrated luminosity of about 10 fb{sup -1}, up to 9 fb{sup -1} of which are available for analysis by its CDF and D0 collaborations. These large datasets allow for stringent tests of the SM in two areas: direct searches for particles or final states that are not very heavy but that suffer from small production cross sections (e.g. the Higgs boson), and searches for indirect manifestations of beyond-the-standard-model (BSM) effects through virtual effects. The latter searches can often be carried out by precise measurements of otherwise known processes. This contribution describes such tests of the SM carried out by the CDF and D0 collaborations. In particular, recent highlights in the areas of B hadron physics, electroweak physics, top quark physics, and Higgs boson physics are discussed. Recent results of tests of QCD and of direct searches for new phenomena are described in
The pion: an enigma within the Standard Model
NASA Astrophysics Data System (ADS)
Horn, Tanja; Roberts, Craig D.
2016-07-01
Quantum chromodynamics (QCD) is the strongly interacting part of the Standard Model. It is supposed to describe all of nuclear physics; and yet, almost 50 years after the discovery of gluons and quarks, we are only just beginning to understand how QCD builds the basic bricks for nuclei: neutrons and protons, and the pions that bind them together. QCD is characterised by two emergent phenomena: confinement and dynamical chiral symmetry breaking (DCSB). They have far-reaching consequences, expressed with great force in the character of the pion; and pion properties, in turn, suggest that confinement and DCSB are intimately connected. Indeed, since the pion is both a Nambu-Goldstone boson and a quark-antiquark bound-state, it holds a unique position in nature and, consequently, developing an understanding of its properties is critical to revealing some very basic features of the Standard Model. We describe experimental progress toward meeting this challenge that has been made using electromagnetic probes, highlighting both dramatic improvements in the precision of charged-pion form factor data that have been achieved in the past decade and new results on the neutral-pion transition form factor, both of which challenge existing notions of pion structure. We also provide a theoretical context for these empirical advances, which begins with an explanation of how DCSB works to guarantee that the pion is unnaturally light; but also, nevertheless, ensures that the pion is the best object to study in order to reveal the mechanisms that generate nearly all the mass of hadrons. In canvassing advances in these areas, our discussion unifies many aspects of pion structure and interactions, connecting the charged-pion elastic form factor, the neutral-pion transition form factor and the pion's leading-twist parton distribution amplitude. It also sketches novel ways in which experimental and theoretical studies of the charged-kaon electromagnetic form factor can provide
The pion: an enigma within the Standard Model
Horn, Tanja; Roberts, Craig D.
2016-05-27
Quantum chromodynamics (QCD) is the strongly interacting part of the Standard Model. It is supposed to describe all of nuclear physics; and yet, almost 50 years after the discovery of gluons and quarks, we are only just beginning to understand how QCD builds the basic bricks for nuclei: neutrons and protons, and the pions that bind them together. QCD is characterised by two emergent phenomena: confinement and dynamical chiral symmetry breaking (DCSB). They have far-reaching consequences, expressed with great force in the character of the pion; and pion properties, in turn, suggest that confinement and DCSB are intimately connected. Indeed, since the pion is both a Nambu–Goldstone boson and a quark–antiquark bound-state, it holds a unique position in nature and, consequently, developing an understanding of its properties is critical to revealing some very basic features of the Standard Model. We describe experimental progress toward meeting this challenge that has been made using electromagnetic probes, highlighting both dramatic improvements in the precision of charged-pion form factor data that have been achieved in the past decade and new results on the neutral-pion transition form factor, both of which challenge existing notions of pion structure. We also provide a theoretical context for these empirical advances, which begins with an explanation of how DCSB works to guarantee that the pion is unnaturally light; but also, nevertheless, ensures that the pion is the best object to study in order to reveal the mechanisms that generate nearly all the mass of hadrons. In canvassing advances in these areas, our discussion unifies many aspects of pion structure and interactions, connecting the charged-pion elastic form factor, the neutral-pion transition form factor and the pion's leading-twist parton distribution amplitude. It also sketches novel ways in which experimental and theoretical studies of the charged-kaon electromagnetic form factor can provide
On the fate of the Standard Model at finite temperature
NASA Astrophysics Data System (ADS)
Rose, Luigi Delle; Marzo, Carlo; Urbano, Alfredo
2016-05-01
In this paper we revisit and update the computation of thermal corrections to the stability of the electroweak vacuum in the Standard Model. At zero temperature, we make use of the full two-loop effective potential, improved by three-loop beta functions with two-loop matching conditions. At finite temperature, we include one-loop thermal corrections together with resummation of daisy diagrams. We solve numerically, both at zero and finite temperature, the bounce equation, thus providing an accurate description of the thermal tunneling. Assuming a maximum temperature in the early Universe of the order of 10^18 GeV, we find that the instability bound excludes values of the top mass M_t ≳ 173.6 GeV, with M_h ≃ 125 GeV and including uncertainties on the strong coupling. We discuss the validity and temperature-dependence of this bound in the early Universe, with a special focus on the reheating phase after inflation.
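The one-loop thermal correction with daisy resummation referred to in this abstract has a standard schematic form; the sketch below is a textbook-level summary under simplifying assumptions, not the paper's full two-loop computation:

```latex
\Delta V_T(\phi,T)=\frac{T^4}{2\pi^2}\Big[\sum_{b} n_b\, J_B\!\big(m_b^2(\phi)/T^2\big)
 \;-\; \sum_{f} n_f\, J_F\!\big(m_f^2(\phi)/T^2\big)\Big],
\qquad
J_{B/F}(y^2)=\int_0^\infty \mathrm{d}x\, x^2\,
 \ln\!\Big(1 \mp e^{-\sqrt{x^2+y^2}}\Big),
```

where the sums run over bosonic and fermionic degrees of freedom with field-dependent masses m(φ). Daisy resummation then amounts to shifting the bosonic masses by their thermal self-energies, m_b^2(φ) → m_b^2(φ) + Π_b(T), before evaluating the bosonic contribution.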
Effects of the Noncommutative Standard Model in WW Scattering
Conley, John A.; Hewett, JoAnne L.
2008-12-02
We examine W pair production in the Noncommutative Standard Model constructed with the Seiberg-Witten map. Consideration of partial wave unitarity in the reactions WW {yields} WW and e{sup +}e{sup -} {yields} WW shows that the latter process is more sensitive and that tree-level unitarity is violated when scattering energies are of order a TeV and the noncommutative scale is below about a TeV. We find that WW production at the LHC is not sensitive to scales above the unitarity bounds. WW production in e{sup +}e{sup -} annihilation, however, provides a good probe of such effects with noncommutative scales below 300-400 GeV being excluded at LEP-II, and the ILC being sensitive to scales up to 10-20 TeV. In addition, we find that the ability to measure the helicity states of the final state W bosons at the ILC provides a diagnostic tool to determine and disentangle the different possible noncommutative contributions.
Gravitational wave background from Standard Model physics: qualitative features
Ghiglieri, J.; Laine, M.
2015-07-01
Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.
CP violation outside the standard model phenomenology for pedestrians
Lipkin, H.J.
1993-09-23
So far the only experimental evidence for CP violation is the 1964 discovery of K{sub L}{yields}2{pi}, where the two mass eigenstates produced by neutral meson mixing both decay into the same CP eigenstate. This result is described by two parameters, {epsilon} and {epsilon}{prime}. Today {epsilon} {approx} its 1964 value, {epsilon}{prime} data are still inconclusive and there is no new evidence for CP violation. One might expect to observe similar phenomena in other systems and also direct CP violation as charge asymmetries between decays of charge conjugate hadrons H{sup {+-}} {yields} f{sup {+-}}. Why is it so hard to find CP violation? How can B Physics help? Does CP lead beyond the standard model? The author presents a pedestrian symmetry approach which exhibits the difficulties and future possibilities of these two types of CP-violation experiments, neutral meson mixing and direct charge asymmetry: what may work, what doesn't work and why.
Gravitational wave background from Standard Model physics: qualitative features
Ghiglieri, J.; Laine, M.
2015-07-16
Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T>160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.
On push-forward representations in the standard gyrokinetic model
Miyato, N.; Yagi, M.; Scott, B. D.
2015-01-15
Two representations of fluid moments in terms of a gyro-center distribution function and gyro-center coordinates, which are called push-forward representations, are compared in the standard electrostatic gyrokinetic model. In the representation conventionally used to derive the gyrokinetic Poisson equation, the pull-back transformation of the gyro-center distribution function contains effects of the gyro-center transformation and therefore electrostatic potential fluctuations, which is described by the Poisson brackets between the distribution function and scalar functions generating the gyro-center transformation. Usually, only the lowest order solution of the generating function at first order is considered to explicitly derive the gyrokinetic Poisson equation. This is true in explicitly deriving representations of scalar fluid moments with polarization terms. One also recovers the particle diamagnetic flux at this order because it is associated with the guiding-center transformation. However, higher-order solutions are needed to derive finite Larmor radius terms of particle flux including the polarization drift flux from the conventional representation. On the other hand, the lowest order solution is sufficient for the other representation, in which the gyro-center transformation part is combined with the guiding-center one and the pull-back transformation of the distribution function does not appear.
Affine group formulation of the Standard Model coupled to gravity
Chou, Ching-Yi; Ita, Eyo; Soo, Chopin
2014-04-15
In this work we apply the affine group formalism for four dimensional gravity of Lorentzian signature, which is based on Klauder’s affine algebraic program, to the formulation of the Hamiltonian constraint of the interaction of matter and all forces, including gravity with non-vanishing cosmological constant Λ, as an affine Lie algebra. We use the hermitian action of fermions coupled to gravitation and Yang–Mills theory to find the density weight one fermionic super-Hamiltonian constraint. This term, combined with the Yang–Mills and Higgs energy densities, are composed with York’s integrated time functional. The result, when combined with the imaginary part of the Chern–Simons functional Q, forms the affine commutation relation with the volume element V(x). Affine algebraic quantization of gravitation and matter on equal footing implies a fundamental uncertainty relation which is predicated upon a non-vanishing cosmological constant. -- Highlights: •Wheeler–DeWitt equation (WDW) quantized as affine algebra, realizing Klauder’s program. •WDW formulated for interaction of matter and all forces, including gravity, as affine algebra. •WDW features Hermitian generators in spite of fermionic content: Standard Model addressed. •Constructed a family of physical states for the full, coupled theory via affine coherent states. •Fundamental uncertainty relation, predicated on non-vanishing cosmological constant.
Flavor democracy in standard models at high energies
NASA Astrophysics Data System (ADS)
Cvetič, G.; Kim, C. S.
1993-10-01
It is possible that the standard model (SM) is replaced around some transition energy Λ by a new, possibly Higgsless, "flavor gauge theory" such that the Yukawa (running) parameters of the SM at E ~ Λ exhibit an (approximate) flavor democracy (FD). We investigate the latter possibility by studying the renormalization group equations for the Yukawa couplings of the SM with one and two Higgs doublets, evolving them from given physical values at low energies (E ≃ 1 GeV) to Λ (~ Λpole) and comparing the resulting fermion masses and CKM matrix elements at E ≃ Λ for various mt^phy and ratios νu/νd of vacuum expectation values. We find that the minimal SM and the closely related SM with two Higgs doublets (type I) show increasing deviation from FD as energy increases, but that the SM with two Higgs doublets (type II) clearly tends to FD with increasing energy, in both the quark and the leptonic sector (q-q and l-l FD). Furthermore, we find within the type-II model that, for Λpole ≪ ΛPlanck, mt^phy can be less than 200 GeV in most cases of chosen νu/νd. Under the assumption that the corresponding Yukawa couplings in the quark and the leptonic sector at E ≃ Λ are also equal (l-q FD), we derive estimates of bounds on the masses of the top quark and tau-neutrino, which are compatible with experimental bounds.
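The renormalization group evolution described in this abstract can be illustrated with a minimal numerical sketch: one-loop running of the top Yukawa coupling in the minimal SM, with the gauge couplings frozen at fixed values purely for simplicity (an assumption made here for illustration; the paper evolves the full coupled system of Yukawa and gauge couplings).

```python
import math

def run_top_yukawa(yt0, g3=1.2, g2=0.65, g1=0.36,
                   t_span=(0.0, 30.0), steps=3000):
    """Integrate the one-loop SM beta function for the top Yukawa yt.

    t = ln(E/E0). Gauge couplings g3, g2, g1 are held fixed here only to
    keep the sketch short; in a real analysis they run as well.
    One-loop beta: yt/(16 pi^2) * (9/2 yt^2 - 8 g3^2 - 9/4 g2^2 - 17/12 g1^2).
    """
    yt = yt0
    t0, t1 = t_span
    dt = (t1 - t0) / steps
    for _ in range(steps):
        beta = yt / (16 * math.pi**2) * (
            4.5 * yt**2 - 8 * g3**2 - 2.25 * g2**2 - (17.0 / 12.0) * g1**2
        )
        yt += dt * beta  # simple forward-Euler step
    return yt

# Evolving upward in energy from yt = 1: QCD dominates the beta function,
# so the coupling is driven slowly downward ("infrared-free" in yt alone).
yt_high = run_top_yukawa(1.0)
```

With these illustrative inputs the gauge terms dominate, so the Yukawa coupling decreases as the scale grows; whether couplings converge toward flavor democracy at Λ depends on the full matrix structure studied in the paper.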
Standard Model with a real singlet scalar and inflation
Enqvist, Kari; Nurmi, Sami; Tenkanen, Tommi; Tuominen, Kimmo
2014-08-01
We study the post-inflationary dynamics of the Standard Model Higgs and a real singlet scalar s, coupled together through a renormalizable coupling λ{sub sh}h{sup 2}s{sup 2}, in a Z{sub 2} symmetric model that may explain the observed dark matter abundance and/or the origin of baryon asymmetry. The initial values for the Higgs and s condensates are given by inflationary fluctuations, and we follow their dissipation and relaxation to the low energy vacua. We find that both the lowest order perturbative and the non-perturbative decays are blocked by thermal effects and large background fields and that the condensates decay by two-loop thermal effects. Assuming instant reheating at T=10{sup 16} GeV, the characteristic temperature for the Higgs condensate thermalization is found to be T{sub h} ∼ 10{sup 14} GeV, whereas s thermalizes typically around T{sub s} ∼ 10{sup 6} GeV. By that time, the amplitude of the singlet is driven very close to the vacuum value by the expansion of the universe, unless the portal coupling takes a value λ{sub sh} ≲ 10{sup -7} and the singlet s never thermalizes. With these values of the coupling, it is possible to slowly produce a sizeable fraction of the observed dark matter abundance via singlet condensate fragmentation and thermal Higgs scattering. Physics below the electroweak scale can therefore also be affected by the non-vacuum initial conditions generated by inflation.
Honrubia-Escribano, A.; Gomez Lazaro, E.; Jimenez-Buendia, F.; Muljadi, Eduard
2016-11-01
The International Electrotechnical Commission Standard 61400-27-1 was published in February 2015. This standard deals with the development of generic terms and parameters to specify the electrical characteristics of wind turbines. Generic models of very complex technological systems, such as wind turbines, are thus defined based on the four common configurations available in the market. Due to its recent publication, the comparison of the response of generic models with specific vendor models plays a key role in ensuring the widespread use of this standard. This paper compares the response of a specific Gamesa dynamic wind turbine model to the corresponding generic IEC Type III wind turbine model response when the wind turbine is subjected to a three-phase voltage dip. This Type III model represents the doubly-fed induction generator wind turbine, which is not only one of the most commonly sold and installed technologies in the current market but also a complex variable-speed operation implementation. In fact, active and reactive power transients are observed due to the voltage reduction. Special attention is given to the reactive power injection provided by the wind turbine models because it is a requirement of current grid codes. Further, the boundaries of the generic models associated with transient events that cannot be represented exactly are included in the paper.
ERIC Educational Resources Information Center
Li, Yuan H.; Schafer, William D.
An empirical study of the Yen (W. Yen, 1997) analytic formula for the standard error of a percent-above-cut [SE(PAC)] was conducted. This formula was derived from variance component information gathered in the context of generalizability theory. SE(PAC)s were estimated by different methods of estimating variance components (e.g., W. Yens…
Chang, Catherine S; Swanson, Jordan; Yu, Jason; Taylor, Jesse A
2017-04-11
Traditionally, maxillary hypoplasia in the setting of cleft lip and palate is treated via orthognathic surgery at skeletal maturity, which condemns these patients to abnormal facial proportions during adolescence. The authors sought to determine the safety profile of computer-aided design/computer-aided modeling (CAD/CAM) planned Le Fort I distraction osteogenesis with internal distractors in select patients presenting at a young age with severe maxillary retrusion. The authors retrospectively reviewed their "early" Le Fort I distraction osteogenesis experience, i.e. procedures performed for severe maxillary retrusion (≥12 mm underjet) after canine eruption but prior to skeletal maturity, at a single institution. Patient demographics, cleft characteristics, CAD/CAM operative plans, surgical complications, postoperative imaging, and outcomes were analyzed. Four patients were reviewed, with a median age of 12.8 years at surgery (range 8.6-16.1 years). Overall mean advancement was 17.95 ± 2.9 mm (range 13.7-19.9 mm), with mean SNA improved 18.4° to 87.4 ± 5.7°. Similarly, ANB improved 17.7° to a postoperative mean of 2.4 ± 3.1°. Mean follow-up was 100.7 weeks, with 3 of 4 patients in a Class I occlusion at moderate-term follow-up; 1 of 4 will need an additional maxillary advancement due to pseudo-relapse. In conclusion, Le Fort I distraction osteogenesis with internal distractors is a safe procedure to treat severe maxillary hypoplasia after canine eruption but before skeletal maturity. Short-term follow-up demonstrates safety of the procedure and relative stability of the advancement. Pseudo-relapse is a risk of the procedure that must be discussed at length with patients and families.
Wisconsin's Model Academic Standards for Marketing Education. Bulletin No. 9005.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
This document contains standards for the academic content of the Wisconsin K-12 curriculum in the area of marketing education. Developed by task forces of educators, parents, board of education members, and employers and employees, the standards cover content, performance, and proficiency areas. The first part of the guide is an introduction that…
Fourth standard model family neutrino at future linear colliders
Ciftci, A.K.; Ciftci, R.; Sultansoy, S.
2005-09-01
It is known that flavor democracy favors the existence of the fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair productions of the fourth SM family Dirac ({nu}{sub 4}) and Majorana (N{sub 1}) neutrinos at future linear colliders with {radical}(s)=500 GeV, 1 TeV, and 3 TeV are considered. The cross section for the process e{sup +}e{sup -}{yields}{nu}{sub 4}{nu}{sub 4}(N{sub 1}N{sub 1}) and the branching ratios for possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels ({nu}{sub 4}(N{sub 1}){yields}{mu}{sup {+-}}W{sup {+-}}) provide the cleanest signature at e{sup +}e{sup -} colliders. Meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at {radical}(s)=500 GeV linear colliders by taking into account di-muon plus four jet events as signatures.
The pion: an enigma within the Standard Model
Horn, Tanja; Roberts, Craig D.
2016-05-27
Almost 50 years after the discovery of gluons & quarks, we are only just beginning to understand how QCD builds the basic bricks for nuclei: neutrons, protons, and the pions that bind them. QCD is characterised by two emergent phenomena: confinement & dynamical chiral symmetry breaking (DCSB). They are expressed with great force in the character of the pion. In turn, pion properties suggest that confinement & DCSB are closely connected. As both a Nambu-Goldstone boson and a quark-antiquark bound-state, the pion is unique in Nature. Developing an understanding of its properties is thus critical to revealing basic features of the Standard Model. We describe experimental progress in this direction, made using electromagnetic probes, highlighting both improvements in the precision of charged-pion form factor data, achieved in the past decade, and new results on the neutral-pion transition form factor. Both challenge existing notions of pion structure. We also provide a theoretical context for these empirical advances, first explaining how DCSB works to guarantee that the pion is unnaturally light; but also, nevertheless, ensures the pion is key to revealing the mechanisms that generate nearly all the mass of hadrons. Our discussion unifies the charged-pion elastic and neutral-pion transition form factors, and the pion's twist-2 parton distribution amplitude. It also indicates how studies of the charged-kaon form factor can provide significant contributions. Importantly, recent predictions for the large-$Q^2$ behaviour of the pion form factor can be tested by experiments planned at JLab 12. Those experiments will extend precise charged-pion form factor data to momenta that can potentially serve in validating factorisation theorems in QCD, exposing the transition between the nonperturbative and perturbative domains, and thereby reaching a goal that has long driven hadro-particle physics.
NASA Technical Reports Server (NTRS)
Maddox, M.; Rastatter, L.; Hesse, M.
2005-01-01
The disparate nature of space weather model output provides many challenges with regard to the portability and reuse of not only the data itself, but also any tools that are developed for analysis and visualization. We are developing and implementing a comprehensive data format standardization methodology that allows heterogeneous model output data to be stored uniformly in any common science data format. We will discuss our approach to identifying core meta-data elements that can be used to supplement raw model output data, thus creating self-descriptive files. The meta-data should also contain information describing the simulation grid. This will ultimately assist in the development of efficient data access tools capable of extracting data at any given point and time. We will also discuss our experiences standardizing the output of two global magnetospheric models, and how we plan to apply similar procedures when standardizing the output of the solar, heliospheric, and ionospheric models that are also currently hosted at the Community Coordinated Modeling Center.
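The "self-descriptive file" idea above can be sketched minimally: raw model output is accompanied by a metadata header describing the grid and the variables, so generic tools can discover what a file contains without model-specific knowledge. All field names below are hypothetical illustrations, not the CCMC's actual metadata standard.

```python
import json

# Hypothetical core metadata supplementing raw model output; the schema
# shown here is illustrative only, not the actual CCMC standard.
metadata = {
    "model_name": "example_magnetosphere_model",
    "run_id": "run_0001",
    "grid": {"type": "cartesian", "dims": [64, 64, 64],
             "units": "Re", "spacing": "uniform"},
    "variables": [
        {"name": "Bx", "units": "nT", "description": "x component of B"},
        {"name": "rho", "units": "amu/cm^3", "description": "mass density"},
    ],
    "time": {"start": "2005-01-01T00:00:00Z", "step_seconds": 60.0},
}

def discover_variables(meta):
    """What a generic access tool can learn from the header alone."""
    return [v["name"] for v in meta["variables"]]

# Serialize the header alongside the data payload; any reader that
# understands the convention can interpret the file without the model.
header = json.dumps(metadata)
names = discover_variables(json.loads(header))
```

A tool reading such a header can locate the grid description and variable units generically, which is exactly what enables point-and-time extraction tools shared across models.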
Improved anatomy of ɛ'/ ɛ in the Standard Model
NASA Astrophysics Data System (ADS)
Buras, Andrzej J.; Gorbahn, Martin; Jäger, Sebastian; Jamin, Matthias
2015-11-01
We present a new analysis of the ratio ɛ'/ɛ within the Standard Model (SM) using a formalism that is manifestly independent of the values of the leading (V-A) ⊗ (V-A) QCD penguin and EW penguin hadronic matrix elements of the operators Q4, Q9, and Q10, and applies to the SM as well as extensions with the same operator structure. It is valid under the assumption that the SM exactly describes the data on CP-conserving K → ππ amplitudes. As a result of this and the high precision now available for CKM and quark mass parameters, to high accuracy ɛ'/ɛ depends only on two non-perturbative parameters, B6^(1/2) and B8^(3/2), and perturbatively calculable Wilson coefficients. Within the SM, we are separately able to determine the hadronic matrix element <Q4>_0 from CP-conserving data, significantly more precisely than presently possible with lattice QCD. Employing B6^(1/2) = 0.57 ± 0.19 and B8^(3/2) = 0.76 ± 0.05, extracted from recent results by the RBC-UKQCD collaboration, we obtain ɛ'/ɛ = (1.9 ± 4.5) × 10^-4, substantially more precise than the recent RBC-UKQCD prediction and 2.9σ below the experimental value (16.6 ± 2.3) × 10^-4, with the error being fully dominated by that on B6^(1/2). Even discarding lattice input completely, but employing the recently obtained bound B6^(1/2) ≤ B8^(3/2) ≤ 1 from the large-N approach, the SM value is found more than 2σ below the experimental value. At B6^(1/2) = B8^(3/2) = 1, varying all other parameters within one sigma, we find ɛ'/ɛ = (8.6 ± 3.2) × 10^-4. We present a detailed anatomy of the various SM uncertainties, including all sub-leading hadronic matrix elements, briefly commenting on the possibility of underestimated SM contributions as well as on the impact of our results on new physics models.
1980-08-01
…even at the leading edge and at the trailing edge. The Mach number M is therefore defined only between two limits, and only experiment can allow us to… agrees well with what is observed experimentally, but the correction is too large, mainly at the leading edge. One may therefore think… If one considers the values of the lift and of the quarter-chord moment for this case, the following table is obtained: [table residue: columns of Modulus / Phase pairs; remainder unrecoverable]
A Journey in Standard Development: The Core Manufacturing Simulation Data (CMSD) Information Model.
Lee, Yung-Tsun Tina
2015-01-01
This report documents a journey "from research to an approved standard" of a NIST-led standard development activity. That standard, Core Manufacturing Simulation Data (CMSD) information model, provides neutral structures for the efficient exchange of manufacturing data in a simulation environment. The model was standardized under the auspices of the international Simulation Interoperability Standards Organization (SISO). NIST started the research in 2001 and initiated the standardization effort in 2004. The CMSD standard was published in two SISO Products. In the first Product, the information model was defined in the Unified Modeling Language (UML) and published in 2010 as SISO-STD-008-2010. In the second Product, the information model was defined in Extensible Markup Language (XML) and published in 2013 as SISO-STD-008-01-2012. Both SISO-STD-008-2010 and SISO-STD-008-01-2012 are intended to be used together.
NASA Astrophysics Data System (ADS)
Peckham, Scott
2016-04-01
Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its time-stepping scheme. If the caller is a modeling framework, it can use the self-description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders
Topics in physics beyond the standard model with strong interactions
NASA Astrophysics Data System (ADS)
Gomez Sanchez, Catalina
In this thesis we study a few complementary topics related to some of the open questions in the Standard Model (SM). We first consider the scalar spectrum of gauge theories with walking dynamics. The question of whether or not a light pseudo-Nambu-Goldstone boson associated with the spontaneous breaking of approximate dilatation symmetry appears in these theories has long been outstanding. We derive an effective action for the scalars, including new terms not previously considered in the literature, and obtain solutions for the lightest scalar's momentum-dependent form factor, which determines the value of its pole mass. Our results for the lowest-lying state suggest that this scalar is never expected to be light, but it can have some properties that closely resemble the SM Higgs boson. We then propose a new leptonic charge-asymmetry observable well suited for the study of some Beyond the SM (BSM) physics objects at the LHC. New resonances decaying to one or many leptons could constitute the first signs of BSM physics that we observe at the LHC; if these new objects carry QCD charge, they may have an associated charge asymmetry in their daughter leptons. Our observable can be used in events with single or multiple leptons in the final state. We discuss this measurement in the context of coloured scalar diquarks, as well as that of top-antitop pairs. We argue that, although a fainter signal is expected relative to other charge-asymmetry observables, the low systematic uncertainties keep this particular observable relevant, especially in cases where reconstruction of the parent particle is not a viable strategy. Finally, we propose a simple dark-sector extension to the SM that communicates with ordinary quarks and leptons only through a small kinetic mixing of the dark photon and the photon. The dark sector is assumed to undergo a series of phase transitions such that monopoles and strings arise. These objects form long-lived states that eventually decay and can
Prototyping an online wetland ecosystem services model using open model sharing standards
Feng, M.; Liu, S.; Euliss, N.H.; Young, Caitlin; Mushet, D.M.
2011-01-01
Great interest currently exists for developing ecosystem models to forecast how ecosystem services may change under alternative land use and climate futures. Ecosystem services are diverse and include supporting services or functions (e.g., primary production, nutrient cycling), provisioning services (e.g., wildlife, groundwater), regulating services (e.g., water purification, floodwater retention), and even cultural services (e.g., ecotourism, cultural heritage). Hence, the knowledge base necessary to quantify ecosystem services is broad and derived from many diverse scientific disciplines. Building the required interdisciplinary models is especially challenging as modelers from different locations and times may develop the disciplinary models needed for ecosystem simulations, and these models must be identified and made accessible to the interdisciplinary simulation. Additional difficulties include inconsistent data structures, formats, and metadata required by geospatial models as well as limitations on computing, storage, and connectivity. Traditional standalone and closed network systems cannot fully support sharing and integrating interdisciplinary geospatial models from varied sources. To address this need, we developed an approach to openly share and access geospatial computational models using distributed Geographic Information System (GIS) techniques and open geospatial standards. We included a means to share computational models compliant with the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard to ensure modelers have an efficient and simplified means to publish new models. To demonstrate our approach, we developed five disciplinary models that can be integrated and shared to simulate a few of the ecosystem services (e.g., water storage, waterfowl breeding) that are provided by wetlands in the Prairie Pothole Region (PPR) of North America.
NASA Astrophysics Data System (ADS)
Varlet, Madeleine
The use of models and modelling is mentioned in the scientific literature as a way to promote the implementation of constructivist teaching-learning practices in order to overcome learning difficulties in science. A preliminary study of teachers' relationship to models and modelling is therefore relevant for understanding their teaching practices and for identifying elements which, if taken into account in initial and disciplinary training, can contribute to the development of constructivist science teaching. Several studies have examined these conceptions without distinguishing between the subjects taught, such as physics, chemistry or biology, even though models are not necessarily used or understood in the same way in these different disciplines. Our research focused on the conceptions of secondary-school biology teachers regarding scientific models, some forms of representation of these models, and the ways they are used in class. The results, which we obtained through a series of semi-structured interviews, indicate that overall their conceptions of models are compatible with the scientifically accepted one, but vary with respect to the forms of representation of models. An examination of these conceptions reveals a limited knowledge of models that varies with the subject taught. Level of education, prior training, teaching experience and a possible compartmentalization of subjects could explain the different conceptions identified. In addition, temporal, conceptual and technical difficulties can hamper their attempts at modelling with students. Nevertheless, our results support the hypothesis that the conceptions of the teachers themselves regarding models, their forms of representation and their approach
Implementing the Standards: Incorporating Mathematical Modeling into the Curriculum.
ERIC Educational Resources Information Center
Swetz, Frank
1991-01-01
Following a brief historical review of the mechanism of mathematical modeling, examples are included that associate a mathematical model with given data (changes in sea level) and that model a real-life situation (process of parallel parking). Also provided is the rationale for the curricular implementation of mathematical modeling. (JJK)
Massive neutrinos in the standard model and beyond
NASA Astrophysics Data System (ADS)
Thalapillil, Arun Madhav
The generation of the fermion mass hierarchy in the standard model of particle physics is a long-standing puzzle. Recent discoveries from neutrino physics suggest that the mixing in the lepton sector is large compared to the quark mixings. Understanding this asymmetry between the quark and lepton mixings is an important aim for particle physics. In this regard, two promising approaches from the theoretical side are grand unified theories and family symmetries. In the first part of my thesis we try to understand certain general features of grand unified theories with Abelian family symmetries by taking the simplest SU(5) grand unified theory as a prototype. We construct an SU(5) toy model with U(1)_F ⊗ Z′_2 ⊗ Z″_2 ⊗ Z‴_2 family symmetry that, in a natural way, duplicates the observed mass hierarchy and mixing matrices to lowest approximation. The mass hierarchy is generated through a Froggatt-Nielsen-type mechanism. One idea that we use in the model is that the quark and charged-lepton sectors are hierarchical with small mixing angles, while the light-neutrino sector is democratic with larger mixing angles. We also discuss some of the difficulties in incorporating finer details into the model without making further assumptions or adding a large scalar sector. In the second part of my thesis, the interaction of high-energy neutrinos with weak gravitational fields is explored. The form of the graviton-neutrino vertex is motivated from Lorentz and gauge invariance, and the non-relativistic interpretations of the neutrino gravitational form factors are obtained. We comment on the renormalization conditions, the preservation of the weak equivalence principle, and the definition of the neutrino mass radius. We associate the neutrino gravitational form factors with specific angular momentum states. Based on Feynman diagrams, spin-statistics, CP invariance, and symmetries of the angular momentum states in the neutrino-graviton vertex, we deduce
Colorado Model Content Standards for Dance: Suggested Grade Level Expectations.
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver.
The state of Colorado has set forth six content standards for dance education in its public schools: (1) students will understand and demonstrate dance skills; (2) students will understand and apply the principles of choreography; (3) students will create, communicate, and problem solve through dance; (4) students will understand and relate the…
Addressing Standardized Testing through a Novel Assessment Model
ERIC Educational Resources Information Center
Schifter, Catherine C.; Carey, Martha
2014-01-01
The No Child Left Behind (NCLB) legislation spawned a plethora of standardized testing services for all the high-stakes testing required by the law. We argue that one-size-fits-all assessments disadvantage students who are English Language Learners in the USA, as well as students with limited economic resources, special needs, and not reading on…
Extending the standard model effective field theory with the complete set of dimension-7 operators
NASA Astrophysics Data System (ADS)
Lehman, Landon
2014-12-01
We present a complete list of the independent dimension-7 operators that are constructed using the standard model degrees of freedom and are invariant under the standard model gauge group. This list contains only 20 independent operators, far fewer than the 63 operators available at dimension 6. All of these dimension-7 operators contain fermions and violate lepton number, and 7 of the 20 violate baryon number as well. This result extends the standard model effective field theory and allows a more detailed exploration of the structure and properties of possible deformations from the standard model Lagrangian.
ISO 9000 quality standards: a model for blood banking?
Nevalainen, D E; Lloyd, H L
1995-06-01
The recent American Association of Blood Banks publications Quality Program and Quality Systems in the Blood Bank and Laboratory Environment, the FDA's draft guidelines, and recent changes in the GMP regulations all discuss the benefits of implementing quality systems in blood center and/or manufacturing operations. While the medical device GMPs in the United States have been rewritten to accommodate a quality system approach similar to ISO 9000, the Center for Biologics Evaluation and Research of the FDA is also beginning to make moves toward adopting "quality systems audits" as an inspection process rather than using the historical approach of record reviews. The approach is one of prevention of errors rather than detection after the fact (Tourault MA, oral communication, November 1994). The ISO 9000 series of standards is a quality system that has worldwide scope and can be applied in any industry or service. The use of such international standards in blood banking should raise the level of quality within an organization, among organizations on a regional level, within a country, and among nations on a worldwide basis. Whether an organization wishes to become registered to a voluntary standard or not, the use of such standards to become ISO 9000-compliant would be a move in the right direction and would be a positive sign to the regulatory authorities and the public that blood banking is making a visible effort to implement world-class quality systems in its operations. Implementation of quality system standards such as the ISO 9000 series will provide an organized approach for blood banks and blood bank testing operations. With the continued trend toward consolidation and mergers, resulting in larger operational units with more complexity, quality systems will become even more important as the industry moves into the future.(ABSTRACT TRUNCATED AT 250 WORDS)
ERIC Educational Resources Information Center
Newton, Jill A.; Kasten, Sarah E.
2013-01-01
The release of the Common Core State Standards for Mathematics and their adoption across the United States calls for careful attention to the alignment between mathematics standards and assessments. This study investigates 2 models that measure alignment between standards and assessments, the Surveys of Enacted Curriculum (SEC) and the Webb…
Job Grading Standard for Model Maker, WG-4714.
ERIC Educational Resources Information Center
Civil Service Commission, Washington, DC. Bureau of Policies and Standards.
The pamphlet explains the different job requirements for different grades of model maker (WG-14 and WG-15) and contrasts them to the position of premium journeyman. It includes comment on what a model maker is (a nonsupervisory job involved in planning and fabricating complex research and prototype models which are made from a variety of materials…
Plot Scale Factor Models for Standard Orthographic Views
ERIC Educational Resources Information Center
Osakue, Edward E.
2007-01-01
Geometric modeling provides graphic representations of real or abstract objects. Realistic representation requires three dimensional (3D) attributes since natural objects have three principal dimensions. CAD software gives the user the ability to construct realistic 3D models of objects, but often prints of these models must be generated on two…
Battery Ownership Model - Medium Duty HEV Battery Leasing & Standardization
Kelly, Ken; Smith, Kandler; Cosgrove, Jon; Prohaska, Robert; Pesaran, Ahmad; Paul, James; Wiseman, Marc
2015-12-01
Prepared for the U.S. Department of Energy, this milestone report focuses on the economics of leasing versus owning batteries for medium-duty hybrid electric vehicles as well as various battery standardization scenarios. The work described in this report was performed by members of the Energy Storage Team and the Vehicle Simulation Team in NREL's Transportation and Hydrogen Systems Center along with members of the Vehicles Analysis Team at Ricardo.
Relating electrophotographic printing model and ISO13660 standard attributes
NASA Astrophysics Data System (ADS)
Barney Smith, Elisa H.
2010-01-01
A mathematical model of the electrophotographic printing process has been developed. This model can be used for analysis. From this, a print-simulation process has been developed to simulate the effects of the model components on toner-particle placement. A wide variety of simulated prints are produced from the model's three main inputs: laser spread, charge-to-toner proportionality factor, and toner particle size. While the exact placement of toner particles is a random process, the total effect is not. The effect of each model parameter on the ISO 13660 print-quality attributes line width, fill, raggedness, and blurriness is described.
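The ISO 13660 raggedness attribute referred to in this abstract is, roughly, the standard deviation of a printed line edge about a straight line fitted to it. A minimal pure-Python sketch of that idea follows; the `raggedness` helper and the Gaussian edge noise standing in for toner scatter are illustrative assumptions, not the paper's simulator.

```python
import random
import statistics

def raggedness(edge: list[float]) -> float:
    """Std. deviation of a line edge about its least-squares straight-line
    fit, in the spirit of the ISO 13660 raggedness attribute (toy version)."""
    n = len(edge)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(edge)
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (y - ybar) for x, y in enumerate(edge))
    slope = sxy / sxx
    residuals = [y - (ybar + slope * (x - xbar)) for x, y in enumerate(edge)]
    return statistics.pstdev(residuals)

random.seed(0)
ideal = [100.0] * 200                                        # perfectly straight edge
noisy = [100.0 + random.gauss(0, 0.8) for _ in range(200)]   # scattered toner edge
print(raggedness(ideal))   # 0.0
print(raggedness(noisy))   # roughly the 0.8 noise level
```

A smoother (blurrier) edge would first be low-pass filtered, which is why the standard treats raggedness and blurriness as distinct attributes.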
ERIC Educational Resources Information Center
Helfferich, Friedrich G.
1985-01-01
Presents a class exercise designed to find out how well students understand the nature and consequences of the mass action law and Le Chatelier's principle as applied to chemical equilibria. The exercise relates to a practical situation and provides simple relations for maximizing equilibrium quantities not found in standard textbooks. (JN)
New framework for standardized notation in wastewater treatment modelling.
Corominas, L L; Rieger, L; Takács, I; Ekama, G; Hauduc, H; Vanrolleghem, P A; Oehmen, A; Gernaey, K V; van Loosdrecht, M C M; Comeau, Y
2010-01-01
Many unit process models are available in the field of wastewater treatment. All of these models use their own notation, causing problems for documentation, implementation and connection of different models (using different sets of state variables). The main goal of this paper is to propose a new notational framework which allows unique and systematic naming of state variables and parameters of biokinetic models in the wastewater treatment field. The symbols are based on one main letter that gives a general description of the state variable or parameter and several subscript levels that provide greater specification. Only those levels that make the name unique within the model context are needed in creating the symbol. The paper describes specific problems encountered with the currently used notation, presents the proposed framework and provides additional practical examples. The overall result is a framework that can be used in whole plant modelling, which consists of different fields such as activated sludge, anaerobic digestion, sidestream treatment, membrane bioreactors, metabolic approaches, fate of micropollutants and biofilm processes. The main objective of this consensus building paper is to establish a consistent set of rules that can be applied to existing and most importantly, future models. Applying the proposed notation should make it easier for everyone active in the wastewater treatment field to read, write and review documents describing modelling projects.
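The naming scheme described above, one main letter plus only as many subscript levels as are needed for uniqueness, can be sketched in a few lines; the `symbol` helper and the example variable names below are hypothetical illustrations, not taken from the paper.

```python
def symbol(main: str, *subscripts: str) -> str:
    """Compose a state-variable symbol from one main letter plus as many
    subscript levels as are needed to make it unique within the model
    context (a hypothetical rendering of the proposed naming rules)."""
    return main if not subscripts else f"{main}_{{{','.join(subscripts)}}}"

# Hypothetical examples in the soluble (S) / particulate (X) convention:
print(symbol("S", "NHx"))    # S_{NHx}
print(symbol("X", "OHO"))    # X_{OHO}
print(symbol("Q"))           # a symbol needing no subscript at all
```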
Why does the Standard GARCH(1, 1) Model Work Well?
NASA Astrophysics Data System (ADS)
Jafari, G. R.; Bahraminasab, A.; Norouzzadeh, P.
The AutoRegressive Conditional Heteroskedasticity (ARCH) model and its generalized version (GARCH) have grown into a family encompassing a wide range of specifications, each designed to enhance the model's ability to capture the characteristics of stochastic data such as financial time series. The existing literature provides little guidance on how to select optimal parameters, which are critical to the efficiency of the model, from the infinite range of available parameters. We introduce a new criterion for finding suitable parameters in GARCH models by using the Markov length, the minimum time interval over which the data can be considered as constituting a Markov process. This criterion is applied to various time series, and its results support the known idea that the GARCH(1, 1) model works well.
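For readers unfamiliar with the model this abstract discusses, the standard GARCH(1,1) recursion sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2 can be simulated in a few lines. The parameter values below are illustrative assumptions, and the paper's Markov-length criterion itself is not implemented here.

```python
import random

def simulate_garch11(n, omega=1e-5, alpha=0.09, beta=0.90, seed=42):
    """Simulate returns r_t = sigma_t * z_t with the standard GARCH(1,1)
    variance recursion sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
    Requires alpha + beta < 1 for a finite unconditional variance."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = (var ** 0.5) * rng.gauss(0.0, 1.0)
        returns.append(r)
        var = omega + alpha * r * r + beta * var
    return returns

rets = simulate_garch11(10_000)
sample_var = sum(r * r for r in rets) / len(rets)
print(sample_var)  # long-run target: omega / (1 - alpha - beta) = 1e-3
```

The high persistence alpha + beta = 0.99 is what produces the volatility clustering that makes GARCH(1,1) fit financial returns well.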
The Model Standards Project: Creating Inclusive Systems for LGBT Youth in Out-of-Home Care
ERIC Educational Resources Information Center
Wilber, Shannan; Reyes, Carolyn; Marksamer, Jody
2006-01-01
This article describes the Model Standards Project (MSP), a collaboration of Legal Services for Children and the National Center for Lesbian Rights. The MSP developed a set of model professional standards governing the care of lesbian, gay, bisexual and transgender (LGBT) youth in out-of-home care. This article provides an overview of the…
Physical Education Teachers' Fidelity to and Perspectives of a Standardized Curricular Model
ERIC Educational Resources Information Center
Kloeppel, Tiffany; Stylianou, Michalis; Kulinna, Pamela Hodges
2014-01-01
Relatively little is known about the use of standardized physical education curricular models and teachers' perceptions of and fidelity to such curricula. The purpose of this study was to examine teachers' perceptions of and fidelity to a standardized physical education curricular model (i.e., Dynamic Physical Education [DPE]). Participants for this…
Higgs boson mass in the standard model at two-loop order and beyond
Martin, Stephen P.; Robertson, David G.
2014-10-01
We calculate the mass of the Higgs boson in the standard model in terms of the underlying Lagrangian parameters at complete 2-loop order with leading 3-loop corrections. A computer program implementing the results is provided. The program also computes and minimizes the standard model effective potential in Landau gauge at 2-loop order with leading 3-loop corrections.
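As a tree-level orientation for the quantities this report computes at 2-loop order (the sketch below is my illustration, not the paper's calculation), minimizing the Higgs potential V(h) = -mu^2 h^2 / 2 + lambda h^4 / 4 gives the vacuum value v = sqrt(mu^2/lambda) and the mass relation m_H^2 = 2*lambda*v^2; the familiar approximate inputs v ≈ 246 GeV and m_H ≈ 125 GeV fix lambda ≈ 0.13.

```python
import math

def potential(h, mu2, lam):
    """Tree-level Higgs potential V(h) = -mu2/2 * h^2 + lam/4 * h^4."""
    return -0.5 * mu2 * h * h + 0.25 * lam * h ** 4

v_target, mh_target = 246.0, 125.0            # GeV, approximate
lam = mh_target ** 2 / (2.0 * v_target ** 2)  # from m_H^2 = 2*lam*v^2
mu2 = lam * v_target ** 2                     # puts the minimum at v_target

# Locate the minimum by a coarse numerical scan, as a sanity check.
v_min = min((h * 0.01 for h in range(1, 50_000)),
            key=lambda h: potential(h, mu2, lam))
print(round(v_min, 1))                 # ~246.0, the vacuum expectation value
print(round(math.sqrt(2 * mu2), 1))    # ~125.0, the tree-level Higgs mass
```

The 2-loop effective potential adds momentum-dependent and gauge-fixing-dependent corrections on top of this, which is why the report's program works in Landau gauge.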
Needed: A Standard Information Processing Model of Learning and Learning Processes.
ERIC Educational Resources Information Center
Carifio, James
One strategy to prevent confusion as new paradigms emerge is to have professionals in the area develop and use a standard model of the phenomenon in question. The development and use of standard models in physics, genetics, archaeology, and cosmology have been very productive. The cognitive revolution in psychology and education has produced a…
Standard Codon Substitution Models Overestimate Purifying Selection for Nonstationary Data
Yap, Von Bing; Huttley, Gavin A.
2017-01-01
Estimation of natural selection on protein-coding sequences is a key comparative genomics approach for de novo prediction of lineage-specific adaptations. Selective pressure is measured on a per-gene basis by comparing the rate of nonsynonymous substitutions to the rate of synonymous substitutions. All published codon substitution models have been time-reversible and thus assume that sequence composition does not change over time. We previously demonstrated that if time-reversible DNA substitution models are applied in the presence of changing sequence composition, the number of substitutions is systematically biased towards overestimation. We extend these findings to the case of codon substitution models and further demonstrate that the ratio of nonsynonymous to synonymous rates of substitution tends to be underestimated over three data sets of mammals, vertebrates, and insects. Our basis for comparison is a nonstationary codon substitution model that allows sequence composition to change. Goodness-of-fit results demonstrate that our new model tends to fit the data better. Direct measurement of nonstationarity shows that bias in estimates of natural selection and genetic distance increases with the degree of violation of the stationarity assumption. Additionally, inferences drawn under time-reversible models are systematically affected by compositional divergence. As genomic sequences accumulate at an accelerating rate, the importance of accurate de novo estimation of natural selection increases. Our results establish that our new model provides a more robust perspective on this fundamental quantity. PMID:28175284
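The quantity at stake in the abstract above, the relative rate of nonsynonymous versus synonymous change, can be illustrated with a crude codon-by-codon count under the standard genetic code. This sketch does no site normalization or likelihood modelling and is not the nonstationary model of the paper; it only shows what the two substitution classes are.

```python
# Standard genetic code (NCBI translation table 1); the first base varies
# slowest over "TCAG", so codon TTT maps to index 0, TTC to 1, and so on.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    a + b + c: AA[16 * i + 4 * j + k]
    for i, a in enumerate(BASES)
    for j, b in enumerate(BASES)
    for k, c in enumerate(BASES)
}

def classify_differences(seq1: str, seq2: str):
    """Count synonymous vs nonsynonymous codon differences between two
    aligned coding sequences (a crude count: real dN/dS estimation also
    normalizes by the numbers of synonymous and nonsynonymous sites)."""
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODON_TABLE[c1] == CODON_TABLE[c2]:
            syn += 1
        else:
            nonsyn += 1
    return syn, nonsyn

# TTT->TTC is synonymous (both Phe); ATG->ACG is nonsynonymous (Met->Thr).
print(classify_differences("TTTATG", "TTCACG"))  # (1, 1)
```

A nonstationary composition, e.g. GC content drifting over time, changes how often each class of substitution is observed, which is the bias the paper quantifies.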
Physics beyond the standard model: Focusing on the muon anomaly
Chavez, Helder; Ferreira, Cristine N.; Helayel-Neto, Jose A.
2006-08-01
We present a model based on the implications of an exceptional E_6 GUT symmetry for the anomalous magnetic moment of the muon. We follow a particular chain of breakings with Higgses in the 78 and 351 representations. We analyze the radiative-correction contributions to the muon mass and the effects of the breaking of the so-called Weinberg symmetry. We also estimate the range of values of the parameters of our model.
Testing the Standard Model with the Primordial Inflation Explorer
NASA Technical Reports Server (NTRS)
Kogut, Alan J.
2011-01-01
The Primordial Inflation Explorer is an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10^−3 at 5 standard deviations. The rich PIXIE data set will also constrain physical processes ranging from Big Bang cosmology to the nature of the first stars to physical conditions within the interstellar medium of the Galaxy. I describe the PIXIE instrument and mission architecture needed to detect the inflationary signature using only 4 semiconductor bolometers.
ERIC Educational Resources Information Center
Vickner, Edward Henry, Jr.
An electronic simulation model was designed, constructed, and then field tested to determine student opinion of its effectiveness as an instructional aid. The model was designated as the Equilibrium System Simulator (ESS). The model was built on the principle of electrical symmetry applied to the Wheatstone bridge and was constructed from readily…
The SLq(2) extension of the standard model
NASA Astrophysics Data System (ADS)
Finkelstein, Robert J.
2015-06-01
The idea that the elementary particles might have the symmetry of knots has had a long history. In any modern formulation of this idea, however, the knot must be quantized. The present review is a summary of a small set of papers that began as an attempt to correlate the properties of quantized knots with empirical properties of the elementary particles. As the ideas behind these papers have developed over a number of years, the model has evolved, and this review is intended to present the model in its current form. The original picture of an elementary fermion as a solitonic knot of field, described by the trefoil representation of SUq(2), has expanded into its present form in which a knotted field is complementary to a composite structure composed of three preons that in turn are described by the fundamental representation of SLq(2). Higher representations of SLq(2) are interpreted as describing composite particles composed of three or more preons bound by a knotted field. This preon model unexpectedly agrees in important detail with the Harari-Shupe model. There is an associated Lagrangian dynamics capable in principle of describing the interactions and masses of the particles generated by the model.
Dimensional reduction of the Standard Model coupled to a new singlet scalar field
NASA Astrophysics Data System (ADS)
Brauner, Tomáš; Tenkanen, Tuomas V. I.; Tranberg, Anders; Vuorinen, Aleksi; Weir, David J.
2017-03-01
We derive an effective dimensionally reduced theory for the Standard Model augmented by a real singlet scalar. We treat the singlet as a superheavy field and integrate it out, leaving an effective theory involving only the Higgs and SU(2)_L × U(1)_Y gauge fields, identical to the one studied previously for the Standard Model. This opens up the possibility of efficiently computing the order and strength of the electroweak phase transition, numerically and nonperturbatively, in this extension of the Standard Model. Understanding the phase diagram is crucial for models of electroweak baryogenesis and for studying the production of gravitational waves at thermal phase transitions.
Ex-Nihilo: Obstacles Surrounding Teaching the Standard Model
ERIC Educational Resources Information Center
Pimbblet, Kevin A.
2002-01-01
The model of the Big Bang is an integral part of the national curricula in England and Wales. Previous work (e.g. Baxter 1989) has shown that pupils often come into education with many and varied prior misconceptions emanating from both internal and external sources. Whilst virtually all of these misconceptions can be remedied, there will remain…
Beyond the standard gauging: gauge symmetries of Dirac sigma models
NASA Astrophysics Data System (ADS)
Chatzistavrakidis, Athanasios; Deser, Andreas; Jonke, Larisa; Strobl, Thomas
2016-08-01
In this paper we study the general conditions that have to be met for a gauged extension of a two-dimensional bosonic σ-model to exist. In an inversion of the usual approach of identifying a global symmetry and then promoting it to a local one, we focus directly on the gauge symmetries of the theory. This allows for action functionals which are gauge invariant for rather general background fields in the sense that their invariance conditions are milder than the usual case. In particular, the vector fields that control the gauging need not be Killing. The relaxation of isometry for the background fields is controlled by two connections on a Lie algebroid L in which the gauge fields take values, in a generalization of the common Lie-algebraic picture. Here we show that these connections can always be determined when L is a Dirac structure in the H-twisted Courant algebroid. This also leads us to a derivation of the general form for the gauge symmetries of a wide class of two-dimensional topological field theories called Dirac σ-models, which interpolate between the G/G Wess-Zumino-Witten model and the (Wess-Zumino-term twisted) Poisson sigma model.
Research and development of the evolving architecture for beyond the Standard Model
NASA Astrophysics Data System (ADS)
Cho, Kihyeon; Kim, Jangho; Kim, Junghyun
2015-12-01
The Standard Model (SM) has been successfully validated with the discovery of the Higgs boson. However, the model is not yet regarded as a complete description. There are efforts to develop phenomenological models that are collectively termed beyond the standard model (BSM). BSM studies require several orders of magnitude more simulations than are needed for Higgs boson events. At the same time, particle physics research involves major investments in hardware coupled with large-scale theoretical and computational efforts along with experiments. These efforts include simulation toolkits built on an evolving computing architecture. Using these simulation toolkits, we study particle physics beyond the standard model. Here, we describe the state of this research and development effort on evolving computing architectures of high-throughput computing (HTC) and graphics processing units (GPUs) for searches beyond the standard model.
ERIC Educational Resources Information Center
Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon
2016-01-01
This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…
Solving the standard model problems in softened gravity
NASA Astrophysics Data System (ADS)
Salvio, Alberto
2016-11-01
The Higgs naturalness problem is solved if the growth of Einstein's gravitational interaction is softened at an energy ≲10¹¹ GeV (softened gravity). We work here within an explicit realization where the Einstein-Hilbert Lagrangian is extended to include terms quadratic in the curvature and a nonminimal coupling with the Higgs. We show that this solution is preserved by adding three right-handed neutrinos with masses below the electroweak scale, accounting for neutrino oscillations, dark matter and the baryon asymmetry. The smallness of the right-handed neutrino masses (compared to the Planck scale) and the QCD θ-term are also shown to be natural. We prove that a possible gravitational source of CP violation cannot spoil the model, thanks to the presence of right-handed neutrinos. Inflation is approximately described by the Starobinsky model in this context and can occur even if we live in a metastable vacuum.
Beyond the Standard Model Searches at the Tevatron
Sajot, G.
2007-11-20
Recent searches for non-SUSY exotics in pp̄ collisions at a center-of-mass energy of 1.96 TeV at the Tevatron Run II are reported. The emphasis is put on the results of model-driven analyses which were updated to the full Run IIa datasets, corresponding to integrated luminosities of about 1 fb⁻¹.
Metabolomics, Standards, and Metabolic Modeling for Synthetic Biology in Plants
Hill, Camilla Beate; Czauderna, Tobias; Klapperstück, Matthias; Roessner, Ute; Schreiber, Falk
2015-01-01
Life on earth depends on dynamic chemical transformations that enable cellular functions, including electron transfer reactions, as well as synthesis and degradation of biomolecules. Biochemical reactions are coordinated in metabolic pathways that interact in a complex way to allow adequate regulation. Biotechnology, food, biofuel, agricultural, and pharmaceutical industries are highly interested in metabolic engineering as an enabling technology of synthetic biology to exploit cells for the controlled production of metabolites of interest. These approaches have only recently been extended to plants due to their greater metabolic complexity (such as primary and secondary metabolism) and highly compartmentalized cellular structures and functions (including plant-specific organelles) compared with bacteria and other microorganisms. Technological advances in analytical instrumentation in combination with advances in data analysis and modeling have opened up new approaches to engineer plant metabolic pathways and allow the impact of modifications to be predicted more accurately. In this article, we review challenges in the integration and analysis of large-scale metabolic data, present an overview of current bioinformatics methods for the modeling and visualization of metabolic networks, and discuss approaches for interfacing bioinformatics approaches with metabolic models of cellular processes and flux distributions in order to predict phenotypes derived from specific genetic modifications or subjected to different environmental conditions. PMID:26557642
ERIC Educational Resources Information Center
Laija-Rodriguez, Wilda; Grites, Karen; Bouman, Doug; Pohlman, Craig; Goldman, Richard L.
2013-01-01
Current assessments in the schools are based on a deficit model (Epstein, 1998). "The National Association of School Psychologists (NASP) Model for Comprehensive and Integrated School Psychological Services" (2010), federal initiatives and mandates, and experts in the field of assessment have highlighted the need for the comprehensive…
New perspectives in physics beyond the standard model
Weiner, Neal Jonathan
2000-09-01
In 1934 Fermi postulated a theory for weak interactions containing a dimensionful coupling with a size of roughly 250 GeV. Only now are we finally exploring this energy regime. What arises at this scale is an open question: supersymmetry and large extra dimensions are two possible scenarios. Meanwhile, other experiments will begin providing definitive information on the nature of neutrino masses and CP violation. In this paper, we explore features of possible theoretical scenarios and study the phenomenological implications of various models addressing the open questions surrounding these issues.
Cosmic strings in hidden sectors: 1. Radiation of standard model particles
Long, Andrew J.; Hyde, Jeffrey M.; Vachaspati, Tanmay
2014-09-01
In hidden sector models with an extra U(1) gauge group, new fields can interact with the Standard Model only through gauge kinetic mixing and the Higgs portal. After the U(1) is spontaneously broken, these interactions couple the resultant cosmic strings to Standard Model particles. We calculate the spectrum of radiation emitted by these "dark strings" in the form of Higgs bosons, Z bosons, and Standard Model fermions, assuming that the string tension is above the TeV scale. We also calculate the scattering cross sections of Standard Model fermions on dark strings due to the Aharonov-Bohm interaction. These radiation and scattering calculations will be applied in a subsequent paper to study the cosmological evolution and observational signatures of dark strings.
O(θ) Feynman rules for quadrilinear gauge boson couplings in the noncommutative standard model
NASA Astrophysics Data System (ADS)
Sajadi, Seyed Shams; Boroun, G. R.
2017-02-01
We examine the electroweak gauge sector of the noncommutative standard model and, in particular, obtain the O(θ) Feynman rules for all quadrilinear gauge boson couplings. Surprisingly, an electroweak-chromodynamics mixing appears in the gauge sector of the noncommutative standard model, where the photon as well as the neutral weak boson couples directly to three gluons. The phenomenological perspectives of the model in W⁻W⁺ → ZZ scattering are studied, and it is shown that there is a characteristic oscillatory behavior in the azimuthal distribution of the scattering cross sections that can be interpreted as a direct signal of the noncommutative standard model. Assuming an integrated luminosity of 100 fb⁻¹, the number of W⁻W⁺ → ZZ subprocesses is estimated for some values of the noncommutative scale Λ_NC at different center-of-mass energies, and the results are compared with predictions of the standard model.
Search for the Standard Model Higgs Boson Produced in Association with Top Quarks
Wilson, Jonathan Samuel
2011-01-01
We have performed a search for the Standard Model Higgs boson produced in association with top quarks in the lepton plus jets channel. We impose no constraints on the decay of the Higgs boson. We employ ensembles of neural networks to discriminate events containing a Higgs boson from the dominant tt̄ background, and set upper bounds on the Higgs production cross section. At a Higgs boson mass m_H = 120 GeV/c², we expect to exclude a cross section 12.7 times the Standard Model prediction, and we observe an exclusion 27.4 times the Standard Model prediction with 95% confidence.
Exploring New Physics Beyond the Standard Model: Final Technical Report
Wang, Liantao
2016-10-17
This grant in 2015 to 2016 supported research in theoretical High Energy Physics. The research focused mainly on the energy frontier, but it also has connections to both the cosmic and intensity frontiers. Lian-Tao Wang (PI) focused mainly on signals of new physics at colliders. The period 2015-2016, covered by this grant, was an exciting one for digesting the influx of LHC data, understanding its meaning, and using it to refine strategies for deeper exploration. The PI proposed new methods of searching for new physics at the LHC, such as for compressed stops. He also investigated in detail the signals of composite Higgs models, focusing on spin-1 composite resonances in the di-boson channel, and considered the di-photon channel as a probe of such models. He made contributions to formulating search strategies for dark matter at the LHC, resulting in two documents with recommendations. The PI has also been active in studying the physics potential of future colliders, including Higgs factories and 100 TeV pp colliders: he has given a comprehensive overview of the physics potential of the high-energy proton collider and outlined its luminosity targets, and he has studied the use of lepton colliders to probe the fermionic Higgs portal and bottom quark couplings to the Z boson.
Ex-nihilo: obstacles surrounding teaching the Standard Model
NASA Astrophysics Data System (ADS)
Pimbblet, Kevin A.
2002-11-01
The model of the Big Bang is an integral part of the national curricula in England and Wales. Previous work (e.g. Baxter 1989) has shown that pupils often come into education with many and varied prior misconceptions emanating from both internal and external sources. Whilst virtually all of these misconceptions can be remedied, there will remain (by its very nature) the obstacle of ex-nihilo, as characterized by the question `how do you get something from nothing?' There are two origins of this obstacle: conceptual (i.e. knowledge-based) and cultural (e.g. deeply held religious viewpoints). This article shows how the citizenship section of the national curriculum, which came `online' in England from September 2002, presents a new opportunity for exploiting these.
Can Cognitive Writing Models Inform the Design of the Common Core State Standards?
ERIC Educational Resources Information Center
Hayes, John R.; Olinghouse, Natalie G.
2015-01-01
In this article, we compare the Common Core State Standards in Writing to the Hayes cognitive model of writing, adapted to describe the performance of young and developing writers. Based on the comparison, we propose the inclusion of standards for motivation, goal setting, writing strategies, and attention by writers to the text they have just…
Diagnostic Profiles: A Standard Setting Method for Use with a Cognitive Diagnostic Model
ERIC Educational Resources Information Center
Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M.
2016-01-01
This article introduces the Diagnostic Profiles (DP) standard setting method for setting a performance standard on a test developed from a cognitive diagnostic model (CDM), the outcome of which is a profile of mastered and not-mastered skills or attributes rather than a single test score. In the DP method, the key judgment task for panelists is a…
CP violation in neutrino oscillations in Minimal Supersymmetric extension of the Standard Model
Delepine, David; Gonzalez Macias, Vannia
2008-07-02
In this talk, we estimate the size of lepton flavor and CP violation in neutrino oscillations in the framework of the Minimal Supersymmetric extension of the Standard Model (MSSM). We find that we may have significant CP-violating contributions up to an order of magnitude (∼10⁻²) smaller than the standard four-Fermi couplings.
Model Core Teaching Standards: A Resource for State Dialogue. (Draft for Public Comment)
ERIC Educational Resources Information Center
Council of Chief State School Officers, 2010
2010-01-01
With this document, the Council of Chief State School Officers (CCSSO) offers for public dialogue and comment a set of model core teaching standards that outline what teachers should know and be able to do to help all students reach the goal of being college- and career-ready in today's world. These standards are an update of the 1992 Interstate…
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
78 FR 19152 - Revisions to Modeling, Data, and Analysis Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-29
From the Federal Register Online via the Government Publishing Office. DEPARTMENT OF ENERGY, Federal Energy Regulatory Commission, 18 CFR Part 40: Revisions to Modeling, Data, and Analysis Reliability Standard. AGENCY: Federal Energy Regulatory Commission, DOE. ACTION: Notice of proposed rulemaking.
Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.
Shao, Lijing
2014-03-21
The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe the dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. This constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits on the LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They all improve on existing limits by significant factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.
Classical conformality in the Standard Model from Coleman’s theory
NASA Astrophysics Data System (ADS)
Kawana, Kiyoharu
2016-09-01
Classical conformality (CC) is one of the possible candidates for explaining the gauge hierarchy of the Standard Model (SM). We show that it is naturally obtained from Coleman's theory of baby universes.
An extended standard model and its Higgs geometry from the matrix model
NASA Astrophysics Data System (ADS)
Steinacker, Harold C.; Zahn, Jochen
2014-08-01
We find a simple brane configuration in the IKKT matrix model which resembles the standard model at low energies, with a second Higgs doublet and right-handed neutrinos. The electroweak sector is realized geometrically in terms of two minimal fuzzy ellipsoids, which can be interpreted in terms of four point-branes in the extra dimensions. The electroweak Higgs connects these branes and is an indispensable part of the geometry. Fermionic would-be zero modes arise at the intersections with two larger branes, leading precisely to the correct chiral matter fields at low energy, along with right-handed neutrinos which can acquire a Majorana mass due to a Higgs singlet. The larger branes give rise to SU(3)_c, extended by U(1)_B and another U(1) which are anomalous at low energies and expected to disappear. At higher energies, mirror fermions and additional fields arise, completing the full N = 4 supersymmetry. The brane configuration is a solution of the model, assuming a suitable effective potential and a non-linear stabilization of the singlet Higgs. The basic results can be carried over to N = 4 SU(N) super Yang-Mills on ordinary Minkowski space with sufficiently large N.
From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions
Venugopalan, R.
2010-07-22
We discuss a standard model of heavy ion collisions that has emerged from both the experimental results of the RHIC program and associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC and a future Electron-Ion Collider.
Search for New Physics Beyond the Standard Model at BaBar
Barrett, Matthew (Brunel U.)
2008-04-16
A review of selected recent BaBar results is presented that illustrates the ability of the experiment to search for physics beyond the standard model. The decays B → τν and B → sγ provide constraints on the mass of a charged Higgs. Searches for lepton flavour violation could provide a clear signal of beyond-the-standard-model physics. BaBar does not observe any signal for new physics with the current dataset.
ERIC Educational Resources Information Center
Crawford, Linda
These instructional materials are designed for students with some French reading skills and vocabulary in late beginning or early intermediate senior high school French. The objectives are to introduce students to a French newspaper, "Le Figaro," and develop reading skills for skimming, gathering specific information, and relying on cognates. The…
The model standards project: creating inclusive systems for LGBT youth in out-of-home care.
Wilber, Shannan; Reyes, Carolyn; Marksamer, Jody
2006-01-01
This article describes the Model Standards Project (MSP), a collaboration of Legal Services for Children and the National Center for Lesbian Rights. The MSP developed a set of model professional standards governing the care of lesbian, gay, bisexual and transgender (LGBT) youth in out-of-home care. This article provides an overview of the experiences of LGBT youth in state custody, drawing from existing research as well as the actual experiences of youth who participated in the project or spoke with project staff. It describes existing professional standards applicable to child welfare and juvenile justice systems, and the need for standards specifically focused on serving LGBT youth. The article concludes with recommendations for implementation of the standards in local jurisdictions.
Search for Standard Model ZH → ℓ⁺ℓ⁻bb̄ at DØ
Jiang, Peng
2014-07-01
We present a search for the Standard Model Higgs boson in the ZH → ℓ⁺ℓ⁻bb̄ channel, using data collected with the DØ detector at the Fermilab Tevatron Collider. This analysis is based on a sample of reprocessed data incorporating several improvements relative to a previously published result, and a modified multivariate analysis strategy. For a Standard Model Higgs boson of mass 125 GeV, the expected cross section limit over the Standard Model prediction is improved by about 5% compared to the previously published results in this channel from the DØ Collaboration.
Hucka, Michael; Nickerson, David P.; Bader, Gary D.; Bergmann, Frank T.; Cooper, Jonathan; Demir, Emek; Garny, Alan; Golebiewski, Martin; Myers, Chris J.; Schreiber, Falk; Waltemath, Dagmar; Le Novère, Nicolas
2015-01-01
The Computational Modeling in Biology Network (COMBINE) is a consortium of groups involved in the development of open community standards and formats used in computational modeling in biology. COMBINE’s aim is to act as a coordinator, facilitator, and resource for different standardization efforts whose domains of use cover related areas of the computational biology space. In this perspective article, we summarize COMBINE, its general organization, and the community standards and other efforts involved in it. Our goals are to help guide readers toward standards that may be suitable for their research activities, as well as to direct interested readers to relevant communities where they can best expect to receive assistance in how to develop interoperable computational models. PMID:25759811
NASA Astrophysics Data System (ADS)
Bizouard, Christian
2012-03-01
The variations of the Earth's rotation. Because it conditions our daily life, our perception of the sky, and a good number of geophysical phenomena such as the formation of cyclones, the rotation of the Earth lies at the crossroads of several disciplines. If the phenomenon were uniform, the subject would quickly be settled; it is because the Earth's rotation varies, even if imperceptibly to our senses, both in its angular velocity and in the direction of its axis, that it arouses great interest. First for practical reasons: the vagaries of the Earth's rotation not only modify, in the long run, astrometric pointings at a given time of day, but also influence the measurements made by space techniques; consequently, the exploitation of these measurements, for example to determine the orbits of the satellites involved or to perform positioning on the ground, requires precise knowledge of these variations. More fundamentally, they reflect the global properties of the Earth as well as the physical processes taking place within it, so that by analyzing the causes of the observed fluctuations we have a means of knowing our globe better. The progressive discovery of the fluctuations of the Earth's rotation has a long history. From the standpoint of observing techniques, three eras stand out: first, that of astrometric pointing with the naked eye, using wooden or metal instruments (mural quadrants, for example). From the 17th century onwards began telescopic astrometry, whose pointings were complemented by ever more precise timings thanks to the invention of pendulum-regulated clocks. This second era ended around 1960, with the advent of space techniques: astrometric pointings were abandoned in favor of ultra-precise measurements of durations or frequencies of electromagnetic signals, thanks to the invention of clocks
NASA Technical Reports Server (NTRS)
1981-01-01
The use of an International Standards Organization (ISO) Open Systems Interconnection (OSI) Reference Model and its relevance to interconnecting an Applications Data Service (ADS) pilot program for data sharing is discussed. A top level mapping between the conjectured ADS requirements and identified layers within the OSI Reference Model was performed. It was concluded that the OSI model represents an orderly architecture for the ADS networking planning and that the protocols being developed by the National Bureau of Standards offer the best available implementation approach.
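The record describes a top-level mapping between the conjectured ADS requirements and identified layers within the OSI Reference Model. For orientation, the seven layers defined by that model can be enumerated as follows (this listing is standard ISO/IEC 7498 material, not details taken from the report itself):

```python
# The seven layers of the ISO OSI Reference Model, bottom (1) to top (7).
# Any mapping of application requirements, such as the ADS pilot's, assigns
# each requirement to one or more of these layers.
OSI_LAYERS = [
    (1, "Physical"),
    (2, "Data Link"),
    (3, "Network"),
    (4, "Transport"),
    (5, "Session"),
    (6, "Presentation"),
    (7, "Application"),
]

for number, name in OSI_LAYERS:
    print(f"Layer {number}: {name}")
```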
ERIC Educational Resources Information Center
Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.
2016-01-01
Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…
Stationary distribution and extinction of a stochastic SIRS epidemic model with standard incidence
NASA Astrophysics Data System (ADS)
Liu, Qun; Jiang, Daqing; Shi, Ningzhong; Hayat, Tasawar; Alsaedi, Ahmed
2017-03-01
In this paper, we consider a stochastic SIRS epidemic model with standard incidence. By constructing a suitable stochastic Lyapunov function, we establish sufficient conditions for the existence of an ergodic stationary distribution of the model. Moreover, we also establish sufficient conditions for extinction of the disease.
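The abstract does not spell out the equations; a representative stochastic SIRS system with standard (frequency-dependent) incidence, perturbed by independent Brownian motions, takes the form (a plausible sketch of the model class, not necessarily the authors' exact parameterization):

```latex
\begin{aligned}
dS &= \Big[\Lambda - \frac{\beta S I}{N} - \mu S + \delta R\Big]\,dt + \sigma_1 S\,dB_1(t),\\
dI &= \Big[\frac{\beta S I}{N} - (\mu + \gamma + \alpha) I\Big]\,dt + \sigma_2 I\,dB_2(t),\\
dR &= \big[\gamma I - (\mu + \delta) R\big]\,dt + \sigma_3 R\,dB_3(t),
\end{aligned}
```

where N = S + I + R, Λ is the recruitment rate, β the transmission rate, μ the natural death rate, γ the recovery rate, δ the rate of immunity loss, α the disease-induced death rate, and σᵢ the noise intensities; the ergodic stationary distribution is established via a stochastic Lyapunov function for such systems.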
Standardization Process for Space Radiation Models Used for Space System Design
NASA Technical Reports Server (NTRS)
Barth, Janet; Daly, Eamonn; Brautigam, Donald
2005-01-01
The space system design community has three concerns related to models of the radiation belts and plasma: 1) AP-8 and AE-8 models are not adequate for modern applications; 2) Data that have become available since the creation of AP-8 and AE-8 are not being fully exploited for modeling purposes; 3) When new models are produced, there is no authorizing organization identified to evaluate the models or their datasets for accuracy and robustness. This viewgraph presentation provided an overview of the roadmap adopted by the Working Group Meeting on New Standard Radiation Belt and Space Plasma Models.
Assessment of the Draft AIAA S-119 Flight Dynamic Model Exchange Standard
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Murri, Daniel G.; Hill, Melissa A.; Jessick, Matthew V.; Penn, John M.; Hasan, David A.; Crues, Edwin Z.; Falck, Robert D.; McCarthy, Thomas G.; Vuong, Nghia; Zimmerman, Curtis
2011-01-01
An assessment of a draft AIAA standard for flight dynamics model exchange, ANSI/AIAA S-119-2011, was conducted on behalf of NASA by a team from the NASA Engineering and Safety Center. The assessment included adding the capability of importing standard models into real-time simulation facilities at several NASA Centers as well as into analysis simulation tools. All participants were successful at importing two example models into their respective simulation frameworks by using existing software libraries or by writing new import tools. Deficiencies in the libraries and format documentation were identified and fixed; suggestions for improvements to the standard were provided to the AIAA. An innovative tool to generate C code directly from such a model was developed. Performance of the software libraries compared favorably with compiled code. As a result of this assessment, several NASA Centers can now import standard models directly into their simulations. NASA is considering adopting the now-published S-119 standard as an internal recommended practice.
A Lifecycle Approach to Brokered Data Management for Hydrologic Modeling Data Using Open Standards.
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Booth, N.; Kunicki, T.; Walker, J.
2012-12-01
The U.S. Geological Survey Center for Integrated Data Analytics has formalized an information-management architecture to facilitate hydrologic modeling and subsequent decision support throughout a project's lifecycle. The architecture is based on open standards and open source software to decrease the adoption barrier and to build on existing, community-supported software. The components of this system have been developed and evaluated to support data management activities of the interagency Great Lakes Restoration Initiative, the Department of the Interior's Climate Science Centers and the WaterSmart National Water Census. Much of the research and development of this system has been in cooperation with international interoperability experiments conducted within the Open Geospatial Consortium. Community-developed standards and software, implemented to meet the unique requirements of specific disciplines, are used as a system of interoperable, discipline-specific data types and interfaces. This approach has allowed adoption of existing software that satisfies the majority of system requirements. Four major features of the system include: 1) assistance in model parameter and forcing creation from large enterprise data sources; 2) conversion of model results and calibrated parameters to standard formats, making them available via standard web services; 3) tracking a model's processes, inputs, and outputs as a cohesive metadata record, allowing provenance tracking via reference to web services; and 4) generalized decision support tools which rely on a suite of standard data types and interfaces, rather than particular manually curated model-derived datasets. Recent progress made in data and web service standards related to sensor and/or model-derived station time series, dynamic web processing, and metadata management is central to this system's function and will be presented briefly along with a functional overview of the applications that make up the system. As the separate
NASA Astrophysics Data System (ADS)
García-Alegre, Ana; Sánchez, Francisco; Gómez-Ballesteros, María; Hinz, Hilmar; Serrano, Alberto; Parra, Santiago
2014-08-01
The management and protection of potentially vulnerable species and habitats require the availability of detailed spatial data. However, such data are often not readily available, in particular for areas that are challenging to sample with traditional techniques, for example seamounts. Within this study, habitat modelling techniques were used to create predictive maps of six species of conservation concern for the Le Danois Bank (El Cachucho Marine Protected Area in the south of the Bay of Biscay). The study used data from ECOMARG multidisciplinary surveys that aimed to create a representative picture of the physical and biological composition of the area. Classical fishing gear (otter trawl and beam trawl) was used to sample benthic communities that inhabit sedimentary areas, and non-destructive visual sampling techniques (ROV and photogrammetric sled) were used to determine the presence of epibenthic macrofauna in complex and vulnerable habitats. Multibeam echosounder data, high-resolution seismic profiles (TOPAS system) and geological data from box-corer sampling were used to characterize the benthic terrain. ArcGIS software was used to produce high-resolution maps (75×75 m²) of such variables for the entire area. The Maximum Entropy (MAXENT) technique was used to process these data and create habitat suitability maps for six species of special conservation interest. The model used seven environmental variables (depth, rugosity, aspect, slope, Bathymetric Position Index (BPI) at fine and broad scale, and morphosedimentary characteristics) to identify the most suitable habitats for such species and to indicate which environmental factors determine their distribution. The six species models performed highly significantly better than random (p<0.0001; Mann-Whitney test) when Area Under the Curve (AUC) values were tested. This indicates that the environmental variables chosen are relevant to distinguishing the distribution of these species. The Jackknife test estimated depth
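The abstract judges model performance by testing AUC values against random with a Mann-Whitney test. The two are linked by a well-known identity: AUC equals the Mann-Whitney U statistic divided by the number of presence-background pairs. A minimal sketch of that identity, with invented suitability scores purely for illustration:

```python
# AUC via the Mann-Whitney U identity: AUC = U / (n_presence * n_background),
# where U counts presence-background pairs in which the presence point
# receives the higher habitat-suitability score (ties count as 1/2).

def auc_mann_whitney(presence_scores, background_scores):
    """AUC = P(score at a presence site > score at a background site)."""
    wins = 0.0
    for p in presence_scores:
        for b in background_scores:
            if p > b:
                wins += 1.0
            elif p == b:
                wins += 0.5
    return wins / (len(presence_scores) * len(background_scores))

# Hypothetical suitability scores (e.g. MAXENT output in [0, 1]):
presence = [0.9, 0.8, 0.75, 0.6]
background = [0.5, 0.4, 0.3, 0.6, 0.2]

auc = auc_mann_whitney(presence, background)
print(round(auc, 3))  # an AUC of 0.5 would mean no better than random
```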
Exploring the Standard Model with the High Luminosity, Polarized Electron-Ion Collider
Milner, Richard G.
2009-08-04
The Standard Model is only a few decades old and has been successfully confirmed by experiment, particularly at the high energy frontier. This will continue with renewed vigor at the LHC. However, many important elements of the Standard Model remain poorly understood. In particular, the exploration of the strong interaction theory Quantum Chromodynamics is in its infancy. How does the spin-1/2 of the proton arise from the fundamental quark and gluon constituents? Can we understand the new QCD world of virtual quarks and gluons in the nucleon? Using precision measurements can we test the limits of the Standard Model and look for new physics? To address these and other important questions, physicists have developed a concept for a new type of accelerator, namely a high luminosity, polarized electron-ion collider. Here the scientific motivation is summarized and the accelerator concepts are outlined.
B → K*ℓ⁺ℓ⁻ decays at large recoil in the Standard Model: a theoretical reappraisal
NASA Astrophysics Data System (ADS)
Ciuchini, Marco; Fedele, Marco; Franco, Enrico; Mishima, Satoshi; Paul, Ayan; Silvestrini, Luca; Valli, Mauro
2016-06-01
We critically reassess the theoretical uncertainties in the Standard Model calculation of the B → K*ℓ⁺ℓ⁻ observables, focusing on the low-q² region. We point out that even optimized observables are affected by sizable uncertainties, since hadronic contributions generated by current-current operators with charm are difficult to estimate, especially for q² ∼ 4m_c² ≃ 6.8 GeV². We perform a detailed numerical analysis and present both predictions and results from the fit obtained using the most recent data. We find that non-factorizable power corrections of the expected order of magnitude are sufficient to give a good description of current experimental data within the Standard Model. We discuss in detail the q² dependence of the corrections and their possible interpretation as shifts of the Standard Model Wilson coefficients.
Search for mono-Higgs signals at the LHC in the B−L supersymmetric standard model
NASA Astrophysics Data System (ADS)
Abdallah, W.; Hammad, A.; Khalil, S.; Moretti, S.
2017-03-01
We study mono-Higgs signatures emerging in the B−L supersymmetric standard model, induced by new channels not present in the minimal supersymmetric standard model, i.e., via topologies in which the mediator is either a heavy Z′, with mass of O(2 TeV), or an intermediate h′ (the lightest CP-even Higgs state of B−L origin), with a mass of O(0.2 TeV). The mono-Higgs probe considered is the standard-model-like Higgs state recently discovered at the Large Hadron Collider, so as to enforce its mass reconstruction for background reduction purposes. With this in mind, its two cleanest signatures are selected: γγ and ZZ* → 4ℓ (ℓ = e, μ). We show how both of these can be accessed with foreseen energy and luminosity options using a dedicated kinematic analysis performed in the presence of partonic, showering, hadronization and detector effects.
40 CFR 1036.620 - Alternate CO2 standards based on model year 2011 compression-ignition engines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Alternate CO2 standards based on model... HEAVY-DUTY HIGHWAY ENGINES Special Compliance Provisions § 1036.620 Alternate CO2 standards based on... compression-ignition engines to the CO2 standards of this section instead of the CO2 standards in §...
Blowout Jets: Hinode X-Ray Jets that Don't Fit the Standard Model
NASA Technical Reports Server (NTRS)
Moore, Ronald L.; Cirtain, Jonathan W.; Sterling, Alphonse C.; Falconer, David A.
2010-01-01
Nearly half of all H-alpha macrospicules in polar coronal holes appear to be miniature filament eruptions. This suggests that there is a large class of X-ray jets in which the jet-base magnetic arcade undergoes a blowout eruption as in a CME, instead of remaining static as in most solar X-ray jets, the standard jets that fit the model advocated by Shibata. Along with a cartoon depicting the standard model, we present a cartoon depicting the signatures expected of blowout jets in coronal X-ray images. From Hinode/XRT movies and STEREO/EUVI snapshots in polar coronal holes, we present examples of (1) X-ray jets that fit the standard model, and (2) X-ray jets that do not fit the standard model but do have features appropriate for blowout jets. These features are (1) a flare arcade inside the jet-base arcade in addition to the small flare arcade (bright point) outside that standard jets have, (2) a filament of cool (T is approximately 80,000 K) plasma that erupts from the core of the jet-base arcade, and (3) an extra jet strand that should not be made by the reconnection for standard jets but could be made by reconnection between the ambient unipolar open field and the opposite-polarity leg of the filament-carrying flux-rope core field of the erupting jet-base arcade. We therefore infer that these non-standard jets are blowout jets, jets made by miniature versions of the sheared-core-arcade eruptions that make CMEs.
Testing non-standard inflationary models with the cosmic microwave background
NASA Astrophysics Data System (ADS)
Landau, Susana J.
2015-03-01
The emergence of the seeds of cosmic structure from an isotropic and homogeneous universe has not been clearly explained by the standard version of inflationary models. We review a proposal that attempts to deal with this problem by introducing "the self-induced collapse hypothesis". As a consequence of this modification of standard inflationary scenarios, the predicted primordial power spectrum and the CMB spectrum are modified. We show the results of statistical analyses comparing the predictions of these models with recent CMB observations and the matter power spectrum from galaxy surveys.
Beyond Standard Model Physics: At the Frontiers of Cosmology and Particle Physics
NASA Astrophysics Data System (ADS)
Lopez-Suarez, Alejandro O.
I begin to write this thesis at a time of great excitement in the fields of cosmology and particle physics. The aim of this thesis is to study and search for beyond the standard model (BSM) physics in the cosmological and high-energy particle fields. There are two main questions which this thesis aims to address: 1) what can we learn about the inflationary epoch using the pioneering gravitational-wave detector Advanced LIGO? and 2) what are the dark matter particle properties and interactions with standard model particles? This thesis will focus on advances in answering both questions.
Les Houches 2015: Physics at TeV Colliders Standard Model Working Group Report
Andersen, J.R.; et al.
2016-05-16
This Report summarizes the proceedings of the 2015 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) the new PDF4LHC parton distributions, (III) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (IV) a host of phenomenological studies essential for comparing LHC data from Run I with theoretical predictions and projections for future measurements in Run II, and (V) new developments in Monte Carlo event generators.
Performance of preproduction model cesium beam frequency standards for spacecraft applications
NASA Technical Reports Server (NTRS)
Levine, M. W.
1978-01-01
A cesium beam frequency standard for spaceflight application on Navigation Development Satellites was designed and fabricated, and preliminary testing was completed. The cesium standard evolved from an earlier prototype model launched aboard NTS-2 and from the engineering development model to be launched aboard NTS satellites during 1979. A number of design innovations, including a hybrid analog/digital integrator and the replacement of analog filters and phase detectors by clocked digital sampling techniques, are discussed. Thermal and thermal-vacuum testing was concluded, and test data are presented. Stability data for 10 to 10,000 second averaging intervals, measured under laboratory conditions, are shown.
Standard Model Extension and Casimir effect for fermions at finite temperature
NASA Astrophysics Data System (ADS)
Santos, A. F.; Khanna, Faqir C.
2016-11-01
Lorentz and CPT symmetries are foundations for important processes in particle physics. Recent studies of the Standard Model Extension (SME) at high energy indicate that these symmetries may be violated. Modifications to the Lagrangian are necessary to achieve a Hermitian Hamiltonian. The fermion sector of the Standard Model Extension is used to calculate the effects of Lorentz and CPT violation on the Casimir effect at zero and finite temperature. The Casimir effect and the Stefan-Boltzmann law at finite temperature are calculated using the thermo field dynamics formalism.
Distinguishing Standard Model Extensions using MonoTop Chirality at the LHC
NASA Astrophysics Data System (ADS)
Mueller, Ryan; Allahverdi, Rouzbeh; Dalchenko, Mykhailo; Dutta, Bhaskar; Flórez, Andrés; Gao, Yu; Kamon, Teruki; Kolev, Nikolay; Segura, Manuel
2017-01-01
Spectral analysis of top quark final states is a promising method to distinguish physics beyond the standard model (BSM) from the SM. Many BSM scenarios with top quark final states feature top quarks with right- or left-handed polarization. The energy spectrum of the top quark decay products can be used to determine the top quark helicity. A Delphes simulation of a minimal standard model extension, featuring a color scalar triplet that decays into a left-handed top and a dark matter (DM) candidate, is compared with a right-handed model to demonstrate how such an energy spectrum varies and differentiates models. Both the hadronic and leptonic decay channels of the top quark are considered in the analysis. In the hadronic channel the right- and left-handed models are separated at 95% CL with a production cross section of 20 fb and 100 fb⁻¹ of integrated luminosity in 13 TeV proton-proton collisions at the LHC.
Adventures in model-building beyond the Standard Model and esoterica in six dimensions
NASA Astrophysics Data System (ADS)
Stone, David C.
This dissertation is most easily understood as two distinct periods of research. The first three chapters are dedicated to phenomenological interests in physics. An anomalous measurement of the top quark forward-backward asymmetry in both detectors at the Tevatron collider is explained by particle content from beyond the Standard Model. The extra field content is assumed to have originated from a grand unified group SU(5), and so only specific content may be added. Methods for spontaneously breaking the R-symmetry of supersymmetric theories, of phenomenological interest for any realistic supersymmetric model, are studied in the context of two-loop Coleman-Weinberg potentials. For a superpotential with a certain structure, which must include two different couplings, a robust method of spontaneously breaking the R-symmetry is established. The phenomenological studies conclude with an isospin analysis of B decays to kaons and pions. When the parameters of the analysis are fit to data, an enhancement of matrix elements in certain representations of isospin emerges. This is highly reminiscent of the infamous and unexplained enhancements seen in the K → ππ system. We conjecture that this enhancement may be a universal feature of the flavor group, isospin in this case, rather than of just the K → ππ system. The final two chapters approach the problem of counting degrees of freedom in quantum field theories. We examine the form of the Weyl anomaly in six dimensions with the Weyl consistency conditions. These consistency conditions impose constraints that lead to a candidate for the a-theorem in six dimensions. This candidate has all the properties that the equivalent theorems in two and four dimensions did, and, in fact, we show that in an even number of dimensions the form of the Euler density, the generalized Einstein tensor, and the Weyl transformations guarantee such a candidate exists. We go on to show that, unlike in two and four dimensions
²³⁰Th–²³⁴U Model-Ages of Some Uranium Standard Reference Materials
Williams, R W; Gaffney, A M; Kristo, M J; Hutcheon, I D
2009-05-28
The 'age' of a sample of uranium is an important aspect of a nuclear forensic investigation and of the attribution of the material to its source. To the extent that the sample obeys the standard rules of radiochronometry, the production ages of even very recent material can be determined using the ²³⁰Th–²³⁴U chronometer. These standard rules may be summarized as (a) the daughter/parent ratio at time zero must be known, and (b) there has been no daughter/parent fractionation since production. For most samples of uranium, the 'ages' determined using this chronometer are semantically 'model-ages' because (a) some assumption of the initial ²³⁰Th content in the sample is required and (b) closed-system behavior is assumed. The uranium standard reference materials originally prepared and distributed by the former US National Bureau of Standards and now distributed by New Brunswick Laboratory as certified reference materials (NBS SRM = NBL CRM) are good candidates for samples where both rules are met. The U isotopic standards have known purification and production dates, and closed-system behavior in the solid form (U₃O₈) may be assumed with confidence. We present here ²³⁰Th–²³⁴U model-ages for several of these standards, determined by isotope dilution mass spectrometry using a multicollector ICP-MS, and compare these ages with their known production history.
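Under the two rules stated in the abstract (zero initial ²³⁰Th, closed-system behavior), the model age follows from the standard daughter-ingrowth law. A minimal sketch of that inversion; the half-life values are rounded literature numbers for illustration, not taken from this work:

```python
import math

# Decay constants (per year) from half-lives of ~75,584 yr (230Th)
# and ~245,620 yr (234U); rounded literature values, illustrative only.
LAMBDA_230 = math.log(2) / 75_584
LAMBDA_234 = math.log(2) / 245_620

def activity_ratio(t_years):
    """(230Th/234U) activity ratio after t years of closed-system
    ingrowth, assuming zero initial 230Th (daughter-ingrowth law)."""
    dl = LAMBDA_230 - LAMBDA_234
    return (LAMBDA_230 / dl) * (1.0 - math.exp(-dl * t_years))

def model_age(ar):
    """Invert the ingrowth law: model age in years for a measured
    (230Th/234U) activity ratio."""
    dl = LAMBDA_230 - LAMBDA_234
    return -math.log(1.0 - ar * dl / LAMBDA_230) / dl

print(model_age(activity_ratio(50.0)))  # round-trips to ~50 years
```

The closed-form inversion is exact for this two-member decay chain; real forensic work propagates measurement and decay-constant uncertainties on top of it.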
Standards in Modeling and Simulation: The Next Ten Years MODSIM World Paper 2010
NASA Technical Reports Server (NTRS)
Collins, Andrew J.; Diallo, Saikou; Sherfey, Solomon R.; Tolk, Andreas; Turnitsa, Charles D.; Petty, Mikel; Wiesel, Eric
2011-01-01
The world has moved on since the introduction of the Distributed Interactive Simulation (DIS) standard in the early 1980s. The Cold War may be over, but there is still a requirement to train for and analyze the next generation of threats that face the free world. The emergence of new and more powerful computer technology and techniques means that modeling and simulation (M&S) has become an important and growing part of satisfying this requirement. As an industry grows, the benefits from standardization within that industry grow with it. For example, it is difficult to imagine what the USA would be like without the 110-volt standard for domestic electricity supply. This paper contains an overview of the outcomes from a recent workshop to investigate the possible future of M&S standards within the federal government.
NASA Technical Reports Server (NTRS)
Avila, Arturo
2011-01-01
Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperature. Thus, these simulations represent the upper and lower bounds. This, effectively, represents the JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study for determining whether JPL practice is in line with the standard and for identifying areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
Geometrically engineering the standard model: Locally unfolding three families out of E₈
Bourjaily, Jacob L.
2007-08-15
This paper extends and builds upon the results of [J. L. Bourjaily, arXiv:0704.0444], in which we described how to use the tools of geometrical engineering to deform geometrically engineered grand unified models into ones with lower symmetry. This top-down unfolding has the advantage that the relative positions of singularities giving rise to the many 'low-energy' matter fields are related by only a few parameters which deform the geometry of the unified model. And because the relative positions of singularities are necessary to compute the superpotential, for example, this is a framework in which the arbitrariness of geometrically engineered models can be greatly reduced. In [J. L. Bourjaily, arXiv:0704.0444], this picture was made concrete for the case of deforming the representations of an SU(5) model into their standard model content. In this paper we continue that discussion to show how a geometrically engineered 16 of SO(10) can be unfolded into the standard model, and how the three families of the standard model uniquely emerge from the unfolding of a single, isolated E₈ singularity.
Standard cosmological evolution in a wide range of f(R) models
Evans, Jonathan D.; Hall, Lisa M. H.; Caillol, Philippe
2008-04-15
Using techniques from singular perturbation theory, we explicitly calculate the cosmological evolution in a class of modified gravity models. By considering both the CDTT and modified CDTT (mCDTT) models, which aim to explain the current acceleration of the universe with a modification of gravity, we show that Einstein evolution can be recovered for most of cosmic history in at least one f(R) model. We show that a standard epoch of matter domination can be obtained in the mCDTT model, providing a sufficiently long epoch to satisfy observations. We note that the additional inverse term will not significantly alter standard evolution until today and that the solution lies well within present constraints from big bang nucleosynthesis. For the CDTT model, we analyze the 'recent radiation epoch' behavior (a ∝ t^{1/2}) found by previous authors. We finally generalize our findings to the class of inverse power-law models. Even in this class of models, we expect a standard cosmological evolution, with a sufficient matter domination era, although the sign of the additional term is crucial.
Streptococcus pneumoniae, le transformiste.
Johnston, Calum; Campo, Nathalie; Bergé, Matthieu J; Polard, Patrice; Claverys, Jean-Pierre
2014-03-01
Streptococcus pneumoniae (the pneumococcus) is an important human pathogen. Natural genetic transformation, which was discovered in this species, involves internalization of exogenous single-stranded DNA and its incorporation into the chromosome. It allows acquisition of pathogenicity islands and antibiotic resistance and promotes vaccine escape via capsule switching. This opinion article discusses how recent advances regarding several facets of pneumococcal transformation support the view that the process has evolved to maximize plasticity potential in this species, making the pneumococcus le transformiste of the bacterial kingdom and providing an advantage in the constant struggle between this pathogen and its host.
Johnsen, David C; Williams, John N; Baughman, Pauletta Gay; Roesch, Darren M; Feldman, Cecile A
2015-10-01
This opinion article applauds the recent introduction of a new dental accreditation standard addressing critical thinking and problem-solving, but expresses a need for additional means for dental schools to demonstrate they are meeting the new standard because articulated outcomes, learning models, and assessments of competence are still being developed. Validated, research-based learning models are needed to define reference points against which schools can design and assess the education they provide to their students. This article presents one possible learning model for this purpose and calls for national experts from within and outside dental education to develop models that will help schools define outcomes and assess performance in educating their students to become practitioners who are effective critical thinkers and problem-solvers.
Search for a Standard Model Higgs Boson with a Dilepton and Missing Energy Signature
Gerbaudo, Davide
2011-09-01
The subject of this thesis is the search for a standard model Higgs boson decaying to a pair of W bosons that in turn decay leptonically, H → W⁺W⁻ → ℓ⁺ν ℓ⁻ν̄. This search is performed considering events produced in pp̄ collisions at √s = 1.96 TeV, where two oppositely charged lepton candidates (e⁺e⁻, e±μ∓, or μ⁺μ⁻) and missing transverse energy have been reconstructed. The data were collected with the D0 detector at the Fermilab Tevatron collider, and are tested against the standard model predictions computed for a Higgs boson with mass in the range 115-200 GeV. No excess of events over background is observed, and limits on Standard Model Higgs boson production are determined. An interpretation of these limits within the hypothesis of a fourth-generation extension to the standard model is also given. The overall analysis scheme is the same for the three dilepton pairs being considered; this thesis, however, describes in detail the study of the dimuon final state.
Existence of standard models of conic fibrations over non-algebraically-closed fields
Avilov, A A
2014-12-31
We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.
Standard pre-main sequence models of low-mass stars
Prada Moroni, P. G.; Degl'Innocenti, S.; Tognelli, E.
2014-05-09
The main characteristics of standard pre-main sequence (PMS) models are described. A discussion of the uncertainties affecting the current generation of PMS evolutionary tracks and isochrones is also provided. In particular, the impact of the uncertainties in the adopted equation of state, radiative opacity, nuclear cross sections, and initial chemical abundances are analysed.
Beyond standard model searches in the MiniBooNE experiment
Katori, Teppei; Conrad, Janet M.
2014-08-05
The MiniBooNE experiment has contributed substantially to beyond standard model searches in the neutrino sector. The experiment was originally designed to test the Δm² ~ 1 eV² region of the sterile neutrino hypothesis by observing νe (ν̄e) charged current quasielastic signals from a νμ (ν̄μ) beam. MiniBooNE observed excesses of νe and ν̄e candidate events in neutrino and antineutrino mode, respectively. To date, these excesses have not been explained within the neutrino standard model (νSM), the standard model extended for three massive neutrinos. Confirmation is required by future experiments such as MicroBooNE. MiniBooNE also provided an opportunity for precision studies of Lorentz violation. The results set strict limits for the first time on several parameters of the standard-model extension, the generic formalism for considering Lorentz violation. Most recently, an extension of MiniBooNE running, with a beam tuned in beam-dump mode, is being performed to search for dark sector particles. This review describes these studies, demonstrating that short baseline neutrino experiments are rich environments for new physics searches.
Neutron Spin Structure Studies and Low-Energy Tests of the Standard Model at JLab
Jager, Kees de
2008-10-13
The most recent results on the spin structure of the neutron from Hall A are presented and discussed. Then, an overview is given of various experiments planned with the 12 GeV upgrade at Jefferson Lab to provide sensitive tests of the Standard Model at relatively low energies.
78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-29
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission 18 CFR Part 40 Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy Regulatory Commission. ACTION: Final rule. SUMMARY: In this Final...
Analogous behavior in the quantum hall effect, anyon superconductivity, and the standard model
Laughlin, R. B.; Libby, S. B.
1991-07-01
Similarities between physical behavior known to occur, or suspected of occurring, in simple condensed matter systems and behavior postulated by the standard model are identified and discussed. Particular emphasis is given to quantum number fractionalization, spontaneous occurrence of gauge forces, spontaneous violation of P and T, and anomaly cancellation. 46 refs.
W / Z + heavy flavor production and the standard model Higgs searches at the Tevatron
Choi, S. Y. (UC, Riverside)
2004-08-01
Searches for the Standard Model Higgs in WH and H → WW channels by the CDF and D0 collaborations are presented. The preliminary results are based on < 180 pb⁻¹ of data analyzed by each experiment. Important backgrounds to Higgs searches, such as heavy flavor production in association with massive vector bosons (W and Z), are studied in the process.
ERIC Educational Resources Information Center
Kulgemeyer, Christoph; Schecker, Horst
2014-01-01
This paper gives an overview of research on modelling science competence in German science education. Since the first national German educational standards for physics, chemistry and biology education were released in 2004, research projects dealing with competences have become prominent strands. Most of this research is about the structure of…
New exact solutions of the standard pairing model for well-deformed nuclei
Pan Feng; Xie Mingxia; Guan Xin; Dai Lianrong; Draayer, J. P.
2009-10-15
A new step-by-step diagonalization procedure for evaluating exact solutions of the nuclear deformed mean-field plus pairing interaction model is proposed via a simple Bethe ansatz in each step from which the eigenvalues and corresponding eigenstates can be obtained progressively. This new approach draws upon an observation that the original one- plus two-body problem in a k-particle Hilbert subspace can be mapped onto a one-body grand hard-core boson picture that can be solved step by step with a simple Bethe ansatz known from earlier work. Based on this new procedure, it is further shown that the extended pairing model for deformed nuclei [Feng Pan, V. G. Gueorguiev, and J. P. Draayer, Phys. Rev. Lett. 92, 112503 (2004)] is similar to the standard pairing model with the first step approximation, in which only the lowest energy eigenstate of the standard pure pairing interaction part is taken into consideration. Our analysis shows that the standard pairing model with the first step approximation displays similar pair structures of the first few exact low-lying states of the model, which, therefore, provides a link between the two models.
The Standard Model in the history of the Natural Sciences, Econometrics, and the social sciences
NASA Astrophysics Data System (ADS)
Fisher, W. P., Jr.
2010-07-01
In the late 18th and early 19th centuries, scientists appropriated Newton's laws of motion as a model for the conduct of any other field of investigation that would purport to be a science. This early form of a Standard Model eventually informed the basis of analogies for the mathematical expression of phenomena previously studied qualitatively, such as cohesion, affinity, heat, light, electricity, and magnetism. James Clerk Maxwell is known for his repeated use of a formalized version of this method of analogy in lectures, teaching, and the design of experiments. Economists transferring skills learned in physics made use of the Standard Model, especially after Maxwell demonstrated the value of conceiving it in abstract mathematics instead of as a concrete and literal mechanical analogy. Haavelmo's probability approach in econometrics and R. Fisher's Statistical Methods for Research Workers brought a statistical approach to bear on the Standard Model, quietly reversing the perspective of economics and the social sciences relative to that of physics. Where physicists, and Maxwell in particular, intuited scientific method as imposing stringent demands on the quality and interrelations of data, instruments, and theory in the name of inferential and comparative stability, statistical models and methods disconnected theory from data by removing the instrument as an essential component. New possibilities for reconnecting economics and the social sciences to Maxwell's sense of the method of analogy are found in Rasch's probabilistic models for measurement.
Search for the standard model Higgs boson in association with a W boson at D0.
Shaw, Savanna Marie
2013-01-01
I present a search for the standard model Higgs boson, H, produced in association with a W boson in data events containing a charged lepton (electron or muon), missing energy, and two or three jets. The data analysed correspond to 9.7 fb⁻¹ of integrated luminosity collected at a center-of-momentum energy of √s = 1.96 TeV with the D0 detector at the Fermilab Tevatron pp̄ collider. This search uses algorithms to identify the signature of bottom quark production and multivariate techniques to improve the purity of H → bb̄ production. We validate our methodology by measuring WZ and ZZ production with Z → bb̄ and find production rates consistent with the standard model prediction. For a Higgs boson mass of 125 GeV, we determine a 95% C.L. upper limit on the production of a standard model Higgs boson of 4.8 times the standard model Higgs boson production cross section, while the expected limit is 4.7 times the standard model production cross section. I also present a novel method for improving the energy resolution for charged particles within hadronic signatures. This is achieved by replacing the calorimeter energy measurement for charged particles within a hadronic signature with the tracking momentum measurement. This technique leads to a ~20% improvement in the jet energy resolution, which yields a ~7% improvement in the reconstructed dijet mass width for H → bb̄ events. The improved energy calculation leads to a ~5% improvement in our expected 95% C.L. upper limit on the Higgs boson production cross section.
Rübel, Oliver; Dougherty, Max; Prabhat; Denes, Peter; Conant, David; Chang, Edward F.; Bouchard, Kristofer
2016-01-01
Neuroscience continues to experience a tremendous growth in data: in terms of the volume and variety of data, the velocity at which data is acquired, and in turn the veracity of data. These challenges are a serious impediment to sharing of data, analyses, and tools within and across labs. Here, we introduce BRAINformat, a novel data standardization framework for the design and management of scientific data formats. The BRAINformat library defines application-independent design concepts and modules that together create a general framework for standardization of scientific data. We describe the formal specification of scientific data standards, which facilitates sharing and verification of data and formats. We introduce the concept of Managed Objects, enabling semantic components of data formats to be specified as self-contained units, supporting modular and reusable design of data format components and file storage. We also introduce the novel concept of Relationship Attributes for modeling and use of semantic relationships between data objects. Based on these concepts we demonstrate the application of our framework to design and implement a standard format for electrophysiology data and show how data standardization and relationship modeling facilitate data analysis and sharing. The format uses HDF5, enabling portable, scalable, and self-describing data storage and integration with modern high-performance computing for data-driven discovery. The BRAINformat library is open source and easy to use, provides detailed user and developer documentation, and is freely available at: https://bitbucket.org/oruebel/brainformat. PMID:27867355
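The Managed Object and Relationship Attribute concepts can be illustrated schematically. The classes and names below are a hypothetical pure-Python sketch of the idea only; the actual BRAINformat library (at the Bitbucket URL above) defines its own API and stores these structures in HDF5:

```python
class ManagedObject:
    """A self-contained semantic unit of a data format: data plus
    metadata attributes, addressable by name (illustrative sketch,
    not the BRAINformat API)."""
    def __init__(self, name, data=None):
        self.name = name
        self.data = data
        self.attributes = {}     # plain metadata key/value pairs
        self.relationships = {}  # relationship name -> target object

    def add_relationship(self, rel_name, target):
        """Relationship Attribute: a named semantic link to another
        managed object (e.g., a processed signal -> its raw source)."""
        self.relationships[rel_name] = target

# Hypothetical electrophysiology objects:
raw = ManagedObject("ecephys_raw", data=[0.1, 0.2, 0.4])
filtered = ManagedObject("ecephys_filtered", data=[0.1, 0.2, 0.3])
filtered.add_relationship("derived_from", raw)
print(filtered.relationships["derived_from"].name)
```

The point of making relationships first-class, as the abstract argues, is that provenance links like `derived_from` travel with the file rather than living in lab-specific conventions.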
Standard plane localization in ultrasound by radial component model and selective search.
Ni, Dong; Yang, Xin; Chen, Xin; Chin, Chien-Ting; Chen, Siping; Heng, Pheng Ann; Li, Shengli; Qin, Jing; Wang, Tianfu
2014-11-01
Acquisition of the standard plane is crucial for medical ultrasound diagnosis. However, this process requires substantial experience and a thorough knowledge of human anatomy; it is therefore very challenging for novices and time consuming even for experienced examiners. We proposed a hierarchical, supervised learning framework for automatically detecting the standard plane from consecutive 2-D ultrasound images. We tested this technique by developing a system that localizes the fetal abdominal standard plane from ultrasound video by detecting three key anatomical structures: the stomach bubble, umbilical vein and spine. We first proposed a novel radial component-based model to describe the geometric constraints of these key anatomical structures. We then introduced a novel selective search method that exploits the vessel probability algorithm to produce probable locations for the spine and umbilical vein. Next, using component classifiers trained by random forests, we detected the key anatomical structures at their probable locations within the regions constrained by the radial component-based model. Finally, a second-level classifier combined the results of the component detection to identify an ultrasound image as either a "fetal abdominal standard plane" or a "non-fetal abdominal standard plane." Experimental results on 223 fetal abdomen videos showed that the detection accuracy of our method was as high as 85.6%, significantly outperforming both the full-abdomen and the separate-anatomy detection methods without geometric constraints. The experimental results demonstrate that our system shows great promise for application in clinical practice.
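The two-level scheme above can be sketched in code. This is a synthetic illustration (not the authors' trained system): random-forest component classifiers score candidate regions for each anatomical structure, and a second-level classifier combines those scores into a plane-level decision. All features and labels here are random stand-ins.

```python
# Hierarchical detection sketch: per-component random forests, then a
# second-level classifier over the component scores. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_component_data(n=200, d=8):
    # Toy region features and ground truth for one anatomical structure.
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

components = ["stomach_bubble", "umbilical_vein", "spine"]
detectors = {}
for c in components:
    X, y = make_component_data()
    detectors[c] = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def component_scores(region_feats):
    # Probability that each candidate region contains its structure.
    return [detectors[c].predict_proba(region_feats[c].reshape(1, -1))[0, 1]
            for c in components]

# Second level: combine the three component scores per frame.
Xp = np.stack([component_scores({c: rng.normal(size=8) for c in components})
               for _ in range(100)])
yp = (Xp.mean(axis=1) > 0.5).astype(int)       # toy "standard plane" labels
plane_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xp, yp)
print(plane_clf.predict(Xp[:1]))
```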
Ecotoxicological models generally have large data requirements and are frequently based on existing information from diverse sources. Standardizing data for toxicological models may be necessary to reduce extraneous variation and to ensure models reflect intrinsic relationships. ...
NASA Astrophysics Data System (ADS)
Mirvis, E.; Iredell, M.
2015-12-01
The operational (OPS) NOAA National Centers for Environmental Prediction (NCEP) suite traditionally consists of a large set of multi-scale HPC models, workflows, scripts, tools and utilities that depend heavily on a variety of additional components. Namely, the suite relies on a unique collection of more than 20 in-house shared libraries (NCEPLIBS), on specific versions of third-party libraries (netcdf, HDF, ESMF, jasper, xml, etc.), and on an HPC workflow tool within a dedicated (sometimes vendor-customized) homogeneous HPC system environment. This domain and site specificity, combined with NCEP's product-driven, large-scale real-time data operations, greatly complicates NCEP collaborative development by reducing the chances of replicating the OPS environment anywhere else. The mission of NOAA/NCEP's Environmental Modeling Center (EMC) is to develop and improve numerical weather, climate, hydrological and ocean prediction through partnership with the research community. Recognizing these difficulties, EMC has lately taken an innovative approach to improving the flexibility of the HPC environment by building the elements of, and a foundation for, an NCEP OPS functionally equivalent environment (FEE), which can also be used to ease external interface constructs. Aiming to reduce the turnaround time of community code enhancements via the Research-to-Operations (R2O) cycle, EMC has developed and deployed several project subset standards that have already paved the road to NCEP OPS implementation standards. In this talk we discuss the EMC FEE for O2R requirements and approaches in collaborative standardization, including NCEPLIBS FEE and model code version control paired with the models' derived customized HPC modules and FEE footprints. We share NCEP/EMC experience and potential in refactoring EMC development processes and legacy codes and in securing model source code quality standards by using a combination of the Eclipse IDE, integrated with the
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Luckring, James M.; Morrison, Joseph H.; Blattnig, Steve R.; Green, Lawrence L.; Tripathi, Ram K.
2007-01-01
The National Aeronautics and Space Administration (NASA) recently issued an interim version of the Standard for Models and Simulations (M&S Standard) [1]. The action to develop the M&S Standard was identified in an internal assessment [2] of agency-wide changes needed in the wake of the Columbia Accident [3]. The primary goal of this standard is to ensure that the credibility of M&S results is properly conveyed to those making decisions affecting human safety or mission success criteria. The secondary goal is to assure that the credibility of the results from models and simulations meets the project requirements (for credibility). This presentation explains the motivation and key aspects of the M&S Standard, with a special focus on the requirements for verification, validation and uncertainty quantification. Some pilot applications of this standard to computational fluid dynamics applications will be provided as illustrations. The authors of this paper are the members of the team that developed the initial three drafts of the standard, the last of which benefited from extensive comments from most of the NASA Centers. The current version (number 4) incorporates modifications made by a team representing 9 of the 10 NASA Centers. A permanent version of the M&S Standard is expected by December 2007. The scope of the M&S Standard is confined to those uses of M&S that support program and project decisions that may affect human safety or mission success criteria. Such decisions occur, in decreasing order of importance, in the operations, the test & evaluation, and the design & analysis phases. Requirements are placed on (1) program and project management, (2) models, (3) simulations and analyses, (4) verification, validation and uncertainty quantification (VV&UQ), (5) recommended practices, (6) training, (7) credibility assessment, and (8) reporting results to decision makers. A key component of (7) and (8) is the use of a Credibility Assessment Scale, some of the details
ERIC Educational Resources Information Center
Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary
2012-01-01
The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…
Standard model explanations for the NuTeV electroweak measurements
R. H. Bernstein
2003-12-23
The NuTeV Collaboration has measured the electroweak parameters sin²θ_W and ρ in neutrino-nucleon deep-inelastic scattering using a sign-selected beam. The nearly pure ν or ν̄ beams that result provide many of the cancellations of systematics associated with the Paschos-Wolfenstein relation. The extracted result for sin²θ_W (on-shell) = 1 − M_W²/M_Z² is three standard deviations from prediction. We discuss Standard Model explanations for the puzzle.
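A quick numerical check of the on-shell definition quoted in the abstract, sin²θ_W = 1 − M_W²/M_Z². The boson masses below are approximate present-day PDG values for illustration, not the NuTeV inputs.

```python
# On-shell weak mixing angle from the W and Z masses (approximate values).
M_W = 80.379   # GeV, approx
M_Z = 91.188   # GeV, approx

sin2_theta_w = 1.0 - (M_W / M_Z) ** 2
print(round(sin2_theta_w, 3))
```

The result, about 0.22, sets the scale against which NuTeV's three-standard-deviation pull is measured.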
Paul, R.A.; Johnson, T.
1985-04-01
The report describes the results obtained when the carbon monoxide (CO) version of NEM is used to estimate national exposures associated with attaining the current CO standard (9 ppm, one observed exceedance). This standard was not analyzed in the basic report of the same title (EPA-450/5-83-003). NEM is a model that simulates the intersection of a population with pollutant concentrations over space and time, to estimate the exposures that would result if various alternative NAAQS were just met. Estimates are presented for adults with cardiovascular disease in four urban study areas and for a nationwide extrapolation.
NASA Standard for Models and Simulations (M and S): Development Process and Rationale
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Blattnig, Steve R.; Green, Lawrence L.; Hemsch, Michael J.; Luckring, James M.; Morison, Joseph H.; Tripathi, Ram K.
2009-01-01
After the Columbia Accident Investigation Board (CAIB) report, the NASA Administrator at that time chartered an executive team (known as the Diaz Team) to identify the CAIB report elements with Agency-wide applicability, and to develop corrective measures to address each element. This report documents the chronological development and release of an Agency-wide Standard for Models and Simulations (M&S) (NASA Standard 7009) in response to Action #4 from the report, "A Renewed Commitment to Excellence: An Assessment of the NASA Agency-wide Applicability of the Columbia Accident Investigation Board Report, January 30, 2004".
Flaxion: a minimal extension to solve puzzles in the standard model
NASA Astrophysics Data System (ADS)
Ema, Yohei; Hamaguchi, Koichi; Moroi, Takeo; Nakayama, Kazunori
2017-01-01
We propose a minimal extension of the standard model which includes only one additional complex scalar field, a flavon, with a flavor-dependent global U(1) symmetry. It not only explains the hierarchical flavor structure in the quark and lepton sectors (including the neutrino sector), but also solves the strong CP problem by identifying the CP-odd component of the flavon as the QCD axion, which we call the flaxion. Furthermore, the flaxion model solves the cosmological puzzles of the standard model, i.e., the origin of dark matter, the baryon asymmetry of the universe, and inflation. We show that the radial component of the flavon can play the role of the inflaton without isocurvature or domain wall problems. The dark matter abundance can be explained by the flaxion coherent oscillation, while the baryon asymmetry of the universe is generated through leptogenesis.
VandeVord, Pamela J; Leonardi, Alessandra Dal Cengio; Ritzel, David
2016-01-01
Recent military combat has heightened awareness of the complexity of blast-related traumatic brain injuries (bTBI). Experiments using animal, cadaver, or biofidelic physical models remain the primary means of investigating injury biomechanics and of validating computational simulations, medical diagnostics and therapies, and protection technologies. However, blast injury research has seen a range of irregular and inconsistent experimental methods for simulating blast insults, generating results that may be misleading, cannot be cross-correlated between laboratories, or are not referenced to any standard for exposure. Both the US Army Medical Research and Materiel Command and the National Institutes of Health have noted the lack of standardized preclinical models of TBI. It is recommended that the blast injury research community converge on a consistent set of experimental procedures and on consistent reporting of blast test conditions. This chapter describes the blast conditions that can be recreated in a laboratory setting and the methodology for testing in vivo models within the appropriate environment.
Laomettachit, Teeraphan; Chen, Katherine C.; Baumann, William T.
2016-01-01
To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a “standard component” modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with “standard components” can capture in quantitative detail many essential properties of cell cycle control in budding yeast. PMID:27187804
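The abstract's point about combining deterministic and stochastic versions of one mechanism can be illustrated on a toy scale. This sketch (not the authors' budding-yeast model; rates are arbitrary) simulates a single birth-death "component" both as an ODE and with Gillespie's stochastic algorithm, showing the two versions agree on the mean.

```python
# One reaction mechanism, two simulation modes: deterministic ODE vs
# Gillespie stochastic simulation of synthesis (rate k_syn) and
# degradation (rate k_deg * n).
import numpy as np

k_syn, k_deg = 10.0, 0.1          # arbitrary synthesis/degradation rates
# Deterministic steady state of dx/dt = k_syn - k_deg*x is k_syn/k_deg = 100.

# Deterministic version: forward-Euler integration.
x, dt = 0.0, 0.01
for _ in range(20000):            # integrate to t = 200 (>> relaxation time 1/k_deg)
    x += dt * (k_syn - k_deg * x)

# Stochastic version: Gillespie simulation of the same two reactions.
rng = np.random.default_rng(4)
n, t, samples = 0, 0.0, []
while t < 500.0:
    rates = np.array([k_syn, k_deg * n])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    n += 1 if rng.random() < rates[0] / total else -1
    if t > 100.0:                 # record only after burn-in
        samples.append(n)

print(round(x), round(float(np.mean(samples))))
```

The ODE settles at the steady state while the stochastic trace fluctuates around it, which is the cell-to-cell variability the stochastic model version captures.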
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because spectra may be measured on different instruments, and the differences between instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the same samples measured on two instruments, referred to as the master and slave instruments. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated; as a result, the coefficients of the linear models constructed from spectra measured on different instruments have similar profiles. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets, of corn and plant leaf samples measured with different instruments, are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not needed, the method may be especially useful in practical applications.
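A minimal numpy sketch of the transfer idea (an assumed form, not the authors' exact LMC algorithm): solve a regularized least-squares problem that fits a few slave-measured spectra while keeping the slave coefficients close in profile to the master's. Ordinary least squares stands in for PLS, and the "spectra" are synthetic.

```python
# Coefficient transfer sketch: min_b ||X_s b - y_s||^2 + lam * ||b - b_master||^2
import numpy as np

rng = np.random.default_rng(1)
n_master, n_slave, p = 100, 5, 50              # many master spectra, few slave spectra

true_b = np.sin(np.linspace(0, 3, p))          # smooth coefficient profile
X_m = rng.normal(size=(n_master, p))
y_m = X_m @ true_b + 0.01 * rng.normal(size=n_master)

# Slave instrument measures linearly distorted spectra of the same samples.
X_true = rng.normal(size=(n_slave, p))
y_s = X_true @ true_b
X_s = 0.9 * X_true + 0.05                      # simulated instrument difference

# Master model (OLS stands in for PLS here).
b_master = np.linalg.lstsq(X_m, y_m, rcond=None)[0]

# Transfer with a profile-similarity penalty, solved in closed form.
lam = 1.0
b_slave = np.linalg.solve(X_s.T @ X_s + lam * np.eye(p),
                          X_s.T @ y_s + lam * b_master)

rmse_master = float(np.sqrt(np.mean((X_s @ b_master - y_s) ** 2)))
rmse_slave = float(np.sqrt(np.mean((X_s @ b_slave - y_s) ** 2)))
print(rmse_slave <= rmse_master)
```

By construction the transferred coefficients fit the slave spectra at least as well as the untransferred master model, while staying close to the master profile.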
Laomettachit, Teeraphan; Chen, Katherine C; Baumann, William T; Tyson, John J
2016-01-01
To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a "standard component" modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with "standard components" can capture in quantitative detail many essential properties of cell cycle control in budding yeast.
NASA Astrophysics Data System (ADS)
Cao, Zhenggang; Ding, Zengqian; Hu, Zhixiong; Wen, Tao; Qiao, Wen; Liu, Wenli
2016-10-01
Optical coherence tomography (OCT) has been widely applied in the diagnosis of eye diseases over the last 20 years. Unlike traditional two-dimensional imaging technologies, OCT also provides cross-sectional information about target tissues, simultaneously and precisely. As is well known, axial resolution is one of the most critical parameters affecting OCT image quality, determining whether an accurate diagnosis can be obtained; it is therefore important to evaluate the axial resolution of OCT equipment. Phantoms play an important role in the standardization and validation process. Here, a standard model eye with a micro-scale multilayer structure was custom designed and manufactured. To mimic a real human eye, the physical characteristics of the layered structures of the retina and cornea were analyzed in depth, and appropriate materials were selected by testing the scattering coefficients of PDMS phantoms with different concentrations of TiO2 or BaSO4 particles. An artificial retina and cornea with multilayer films, each layer 10 to 60 micrometers thick, were fabricated using spin-coating technology. Because key parameters of the standard model eye need to be traceable as well as accurate, the optical refractive index and layer thicknesses of the phantoms were verified using a thickness monitoring system. A standard OCT model eye was then obtained by embedding the retinal or corneal phantom into a water-filled model eye, fabricated by 3D printing, that simulates ocular dispersion and emmetropic refraction. The eye model was manufactured from a transparent resin to simulate a realistic ophthalmic testing environment, and most key optical elements, including the cornea, lens and vitreous body, were realized. Investigations with a research and a clinical OCT system, respectively, demonstrated that the OCT model eye has physical properties similar to those of the natural eye, and the multilayer film measurement
NASA Astrophysics Data System (ADS)
Eliassen, Lene; Andersen, Søren
2016-09-01
The wind turbine design standards recommend two different methods to generate turbulent wind for design load analysis: the Kaimal spectra combined with an exponential coherence function, and the Mann turbulence model. The two turbulence models can give very different estimates of fatigue life, especially for offshore floating wind turbines. In this study the spatial distributions of the two turbulence models are investigated using Proper Orthogonal Decomposition (POD), which characterizes large coherent structures. The main focus is on the structures that contain the most energy, i.e. the lowest POD modes. The Mann turbulence model generates coherent structures that stretch in the horizontal direction for the longitudinal component, while the structures found in the Kaimal model are more random in shape. These differences in the coherent structures at lower frequencies may explain the differences in fatigue life estimates for wind turbines.
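The POD analysis named above amounts to a singular value decomposition of a snapshot matrix. This sketch applies it to a synthetic wind field (one planted coherent structure plus noise), illustrating the method rather than the authors' turbulence data.

```python
# POD via SVD: the columns of U are the POD modes, and the squared singular
# values measure each mode's share of the fluctuation energy.
import numpy as np

rng = np.random.default_rng(2)
n_points, n_snapshots = 64, 200

# Synthetic snapshots: one dominant large-scale structure plus noise.
x = np.linspace(0, 2 * np.pi, n_points)
structure = np.sin(x)
snapshots = (np.outer(structure, 3.0 * rng.normal(size=n_snapshots))
             + 0.3 * rng.normal(size=(n_points, n_snapshots)))

# Subtract the mean field, then decompose the fluctuations.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)

print(f"lowest mode captures {energy[0]:.0%} of the fluctuation energy")
```

The lowest mode recovers the planted structure and dominates the energy, which is exactly the property the study uses to compare the Mann and Kaimal fields.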
A standard protocol for describing individual-based and agent-based models
Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.
2006-01-01
Simulation models that describe autonomous individual organisms (individual-based models, IBMs) or agents (agent-based models, ABMs) have become a widely used tool, not only in ecology but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. The protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD a first step toward establishing a more detailed common format for the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it is used by a sufficiently large proportion of modellers.
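The three-block, seven-element structure described above can be encoded as a simple completeness checklist. This is an informal sketch, not part of the ODD protocol itself; only the element names come from the abstract.

```python
# ODD structure as a checklist: three blocks subdivided into seven elements.
ODD = {
    "Overview": ["Purpose", "State variables and scales",
                 "Process overview and scheduling"],
    "Design concepts": ["Design concepts"],
    "Details": ["Initialization", "Input", "Submodels"],
}

def missing_elements(description: dict) -> list:
    """Return the ODD elements absent from a model description."""
    required = [e for block in ODD.values() for e in block]
    return [e for e in required if e not in description]

# A draft IBM description covering only two of the seven elements.
draft = {"Purpose": "Explore flocking", "Submodels": "Alignment rule"}
print(missing_elements(draft))
```

Running the check on a partial description immediately lists which of the seven elements still need to be written.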
NASA Astrophysics Data System (ADS)
Sharapatov, Abish
2015-04-01
Standard models of skarn-magnetite deposits in the folded regions of Kazakhstan are constructed using generalized geological and geophysical parameters of similar existing deposits, such as the Sarybay, Sokolovskoe and other deposits of the Valeryanovskaya structural-facies zone (SFZ) in the Torgay paleorift structure, located in the north of the SFZ. The forecasting area lies in the south of the SFZ, in the northern Aral Sea region. These models are derived from a study of the deep structure of the region using geophysical data. The upper and deep zones were studied by separating the gravity and magnetic fields into regional and local components; seismic and geoelectric data for the region were used in the interpretation. The similarity between the northern and southern parts of the SFZ has thus been identified in geophysical terms, in both regional and local characteristics. Creating standard models of skarn-magnetite deposits for a GIS database allows forecast criteria for this deposit type to be highlighted. These include: the presence of fault zones; a volcanic strata thickness of about 2 km or more, with a total thickness of circum-ore metasomatic rocks of about 1.5 km or more; the spatial positions and geometric data of the ore bodies (steeply dipping bodies within medium gabbroic intrusions and at their contacts with carbonate-dolomitic strata); the presence in the geological section of a near-surface zone with an electrical resistivity of 200 Ohm·m, corresponding to Devonian and Early Carboniferous volcanic sediments and volcanics associated with subvolcanic bodies and intrusions; a relatively shallow depth of the zone with Vp = 6.4-6.8 km/s (an uplifted Conrad boundary and a thickened granulite-basic layer); and positive, high-amplitude magnetic and gravity fields. A geological forecast model is produced by structuring geodata based on detailed analysis and aggregation of geological and formal knowledge bases on standard targets. Aggregation method of
Standardized 3D Bioprinting of Soft Tissue Models with Human Primary Cells.
Rimann, Markus; Bono, Epifania; Annaheim, Helene; Bleisch, Matthias; Graf-Hausner, Ursula
2016-08-01
Cells grown in 3D are more physiologically relevant than cells cultured in 2D. To use 3D models in substance testing and regenerative medicine, reproducibility and standardization are important. Bioprinting offers not only automated, standardizable processes but also the production of complex tissue-like structures in an additive manner. We developed an all-in-one bioprinting solution to produce soft tissue models. The holistic approach included (1) a bioprinter in a sterile environment, (2) a light-induced bioink polymerization unit, (3) user-friendly software, (4) the capability to print in standard labware for high-throughput screening, (5) cell-compatible inkjet-based printheads, (6) a cell-compatible ready-to-use BioInk, and (7) standard operating procedures. In a proof-of-concept study, skin was printed as a reference soft tissue model. To produce dermal equivalents, primary human dermal fibroblasts were printed in alternating layers with BioInk and cultured for up to 7 weeks. During long-term culture, the models were remodeled and fully populated with viable, spread fibroblasts. Primary human dermal keratinocytes were seeded on top of the dermal equivalents, and epidermis-like structures formed, as verified with hematoxylin and eosin staining and immunostaining. However, a fully stratified epidermis was not achieved. Nevertheless, this is one of the first reports of an integrative bioprinting strategy for routine industrial application.
Aspects of nonrenormalizable terms in a superstring derived standard-like model
NASA Astrophysics Data System (ADS)
Faraggi, Alon E.
1992-06-01
I investigate the role of nonrenormalizable terms, up to order N = 8, in a superstring derived standard-like model. I argue that nonrenormalizable terms restrict the gauge symmetry, at the Planck scale, to be SU(3)×SU(2)×U(1)_{B-L}×U(1)_{T3R} rather than SU(3)×SU(2)×U(1)_Y. I show that breaking the gauge symmetry directly to the Standard Model leads to breaking of supersymmetry at the Planck scale, or to dimension-four baryon- and lepton-violating operators. I show that if the gauge symmetry is broken directly to the Standard Model, the cubic-level solution to the F and D flatness constraints is violated by higher-order terms, while if U(1)_{Z'} remains unbroken at the Planck scale, the cubic-level solution is valid to all orders of nonrenormalizable terms. I discuss the Higgs and fermion mass spectrum. I demonstrate that a realistic, hierarchical fermion mass spectrum can be generated in this model.
Aspects of non-renormalizable terms in a superstring derived standard-like model
NASA Astrophysics Data System (ADS)
Faraggi, Alon E.
1993-08-01
I investigate the role of non-renormalizable terms, up to order N = 8, in a superstring derived standard-like model. I argue that non-renormalizable terms restrict the gauge symmetry, at the Planck scale, to be SU(3)×SU(2)×U(1)_{B-L}×U(1)_{T3R} rather than SU(3)×SU(2)×U(1)_Y. I show that breaking the gauge symmetry directly to the Standard Model leads to breaking of supersymmetry at the Planck scale, or to dimension four, baryon and lepton violating, operators. I show that if the gauge symmetry is broken directly to the Standard Model the cubic-level solution to the F and D flatness constraints is violated by higher-order terms, while if U(1)_{Z'} remains unbroken at the Planck scale, the cubic-level solution is valid to all orders of non-renormalizable terms. I discuss the Higgs and fermion mass spectrum. I demonstrate that a realistic, hierarchical fermion mass spectrum can be generated in this model.
3D Building Modeling in LoD2 Using the CityGML Standard
NASA Astrophysics Data System (ADS)
Preka, D.; Doulamis, A.
2016-10-01
Over the last decade, scientific research has increasingly focused on the third dimension in all fields, especially in the sciences related to geographic information, the visualization of natural phenomena and the visualization of the complex urban reality. The field of 3D visualization has seen rapid development and dynamic progress, especially in urban applications, while technical restrictions on the use of 3D information are subsiding thanks to advances in technology. A variety of 3D modeling techniques and standards have already been developed and are gaining traction in a wide range of applications. One such modern standard is CityGML, which is open and allows 3D city models to be shared and exchanged. Within the scope of this study, key issues in the 3D modeling of spatial objects and cities are considered, specifically the key elements and capabilities of the CityGML standard, which is used to produce a 3D model of the 14 buildings that constitute a block in the municipality of Kaisariani, Athens, at Level of Detail 2 (LoD2), along with the corresponding relational database. The proposed tool is based on the 3DCityDB package in tandem with a geospatial database (PostgreSQL with the PostGIS 2.0 extension), which allows the execution of complex queries regarding the spatial distribution of the data. The system is implemented to address a real-life scenario in a suburb of Athens.
Creating Better Child Care Jobs: Model Work Standards for Teaching Staff in Center-Based Child Care.
ERIC Educational Resources Information Center
Center for the Child Care Workforce, Washington, DC.
This document presents model work standards articulating components of the child care center-based work environment that enable teachers to do their jobs well. These standards establish criteria to assess child care work environments and identify areas to improve in order to assure good jobs for adults and good care for children. The standards are…
NASA Astrophysics Data System (ADS)
Bermudez, L. E.; Percivall, G.; Idol, T. A.
2015-12-01
Experts in climate modeling, remote sensing of the Earth, and cyberinfrastructure must work together to make climate predictions available to decision makers. Such experts and decision makers worked together in the Open Geospatial Consortium's (OGC) Testbed 11 to address a scenario of population displacement by coastal inundation due to the predicted sea level rise. In a Policy Fact Sheet, "Harnessing Climate Data to Boost Ecosystem & Water Resilience," issued by the White House Office of Science and Technology Policy (OSTP) in December 2014, OGC committed to increasing access to climate change information using open standards. In July 2015, the OGC Testbed 11 Urban Climate Resilience activity delivered on that commitment with open-standards-based support for climate-change preparedness. Using open standards such as the OGC Web Coverage Service and Web Processing Service and the NetCDF and GMLJP2 encoding standards, Testbed 11 deployed an interoperable high-resolution flood model to bring climate model outputs together with global change assessment models and other remote sensing data for decision support. Methods to confirm model predictions and to allow what-if scenarios included in-situ sensor webs and crowdsourcing. The scenario was set in two locations: the San Francisco Bay Area and Mozambique. The scenarios demonstrated the interoperation and capabilities of open geospatial specifications in supporting data services and processing services. The resulting High Resolution Flood Information System addressed access and control of simulation models and high-resolution data in an open, worldwide, collaborative Web environment. The scenarios examined the feasibility and capability of existing OGC geospatial Web service specifications in supporting the on-demand, dynamic serving of flood information from models with forecasting capacity. Results of this testbed included the identification of standards and best practices that help researchers and cities deal with climate-related issues.
Cai, Longyan; He, Hong S; Wu, Zhiwei; Lewis, Benard L; Liang, Yu
2014-01-01
Understanding the fire prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed in the Great Xing'an Mountains in Northeast China. However, the performances of these fuel models have not been tested against historical occurrences of wildfires. Consequently, the applicability of these models requires further investigation. Thus, this paper aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel model parameter sensitivity was analyzed by the Morris screening method. Results showed that the fuel model parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (surface-area-to-volume ratio), live shrub SAV, and fuel bed depth have high sensitivity. Two main sensitive fuel parameters, 1-hour time-lag loading and fuel bed depth, were chosen as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire prediction capabilities of the combined (uncalibrated) fuel models. FARSITE was shown to yield an unrealistic prediction of the historical fire. However, the calibrated fuel models significantly improved the capabilities of the fuel models to predict the actual fire, with an accuracy of 89%. Validation results also showed that the model can estimate the actual fires with an accuracy exceeding 56% when using the calibrated fuel models. Therefore, these fuel models can be efficiently used to calculate fire behaviors, which can be helpful in forest fire management.
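The Morris screening step described above can be illustrated with a minimal one-at-a-time sketch. The `toy_fire_model` function and all numbers below are invented stand-ins, not the paper's actual FARSITE fuel parameters; the point is only the mechanics of elementary effects and the mean-absolute-effect ranking.

```python
import numpy as np

def morris_elementary_effects(f, n_params, n_traj=50, delta=0.25, seed=0):
    """One-at-a-time Morris screening: for each random base point,
    perturb each parameter once and record the elementary effect."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_params)  # base point in the unit cube
        for i in range(n_params):
            x_step = x.copy()
            x_step[i] += delta
            effects[i].append((f(x_step) - f(x)) / delta)
    # mu* (mean absolute elementary effect) is the usual ranking statistic
    return np.array([np.mean(np.abs(e)) for e in effects])

# Toy stand-in for a fire-behavior model: strongly driven by
# "fuel load" (x0) and "bed depth" (x1), weakly by the rest.
def toy_fire_model(x):
    return 10 * x[0] * x[1] + 0.1 * x[2] + 0.05 * x[3]

mu_star = morris_elementary_effects(toy_fire_model, n_params=4)
ranking = np.argsort(mu_star)[::-1]
print(ranking[:2])  # indices of the two dominant parameters
```

Run on the toy function, the two interacting parameters dominate the mu* ranking, mirroring how 1-hour time-lag loading and fuel bed depth emerged as the adjustment parameters.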
The Standard Model Results from a Scheme to Protect the Mass of the Scalar Bosons
NASA Astrophysics Data System (ADS)
Chaves, M.
2005-08-01
The masses of phenomenological scalar bosons are not protected against large contributions from quantum loops involving the hypothetical very-high-energy theory that we assume is the true theory. Here I present a generalization of Yang-Mills theories that treats vector and scalar bosons on the same footing and results in Ward identities and current conservations that protect the masses of the scalars. It is remarkable that the Standard Model is obtained immediately upon choosing an appropriate gauge symmetry and allowing one scalar field and one vector field to possess large vacuum expectation values. The Standard Model then appears as the unitary gauge of the chosen gauge symmetry. Due to the vectorial VEV there is a small breaking of Lorentz invariance.
Francium spectroscopy: Towards a low energy test of the standard model
Orozco, L. A.; Simsarian, J. E.; Sprouse, G. D.; Zhao, W. Z.
1997-03-15
An atomic parity non-conservation measurement can test the predictions of the standard model for the electron-quark coupling constants. The measurements, performed at very low energies compared to the Z⁰ pole, can be sensitive to physics beyond the standard model. Francium, the heaviest alkali, is a viable candidate for atomic parity violation measurements. The extraction of weak interaction parameters requires a detailed knowledge of the electronic wavefunctions of the atom. Measurements of atomic properties of francium provide data for careful comparisons with ab initio calculations of its atomic structure. The spectroscopy, including energy level location and atomic lifetimes, is carried out using the recently developed techniques of laser cooling and trapping of atoms.
Remarks on the Standard Model predictions for R (D ) and R (D*)
NASA Astrophysics Data System (ADS)
Kim, C. S.; Lopez-Castro, G.; Tostado, S. L.; Vicente, A.
2017-01-01
Semileptonic b → c transitions, and in particular the ratios R(D^(*)) = Γ(B → D^(*)τν)/Γ(B → D^(*)ℓν), can be used to test the universality of the weak interactions. In light of the recent discrepancies between the experimental measurements of these observables by the BABAR, Belle, and LHCb collaborations and the Standard Model predicted values, we study the robustness of the latter. Our analysis reveals that R(D) might be enhanced by lepton mass effects associated with the mostly unknown scalar form factor. In contrast, the Standard Model prediction for R(D*) is found to be more robust, because possible pollutions from B* contributions turn out to be negligibly small; this indicates that R(D*) is a promising observable for searches of new physics.
Conceptual explanation for the algebra in the noncommutative approach to the standard model.
Chamseddine, Ali H; Connes, Alain
2007-11-09
The purpose of this Letter is to remove the arbitrariness of the ad hoc choice of the algebra and its representation in the noncommutative approach to the standard model, which was begging for a conceptual explanation. We assume as before that space-time is the product of a four-dimensional manifold by a finite noncommutative space F. The spectral action is the pure gravitational action for the product space. To remove the above arbitrariness, we classify the irreducible geometries F consistent with imposing reality and chiral conditions on spinors, to avoid the fermion doubling problem, which amounts to a total dimension of 10 (in the K-theoretic sense). This gives, almost uniquely, the standard model with all its details, predicting the number of fermions per generation to be 16, their representations, and the Higgs breaking mechanism, with very little input.
Search for the standard model Higgs boson in tau lepton final states
Abazov, Victor Mukhamedovich; et al.
2012-08-01
We present a search for the standard model Higgs boson in final states with an electron or muon and a hadronically decaying tau lepton, in association with zero, one, or two or more jets, using data corresponding to an integrated luminosity of up to 7.3 fb⁻¹ collected with the D0 detector at the Fermilab Tevatron collider. The analysis is sensitive to Higgs boson production via gluon-gluon fusion, associated vector boson production, and vector boson fusion, and to Higgs boson decays to tau lepton pairs or W boson pairs. Observed (expected) 95% C.L. upper limits on the cross section times branching ratio, relative to the Standard Model prediction, are set at 14 (22) for a Higgs boson mass of 115 GeV and 7.7 (6.8) at 165 GeV.
The Higgs Boson as a Window to Beyond the Standard Model
Vega-Morales, Roberto
2013-08-01
The recent discovery of a Higgs boson at the LHC with properties resembling those predicted by the Standard Model (SM) gives strong indication that the final missing piece of the SM is now in place. In particular, the mechanism responsible for Electroweak Symmetry Breaking (EWSB) and generating masses for the Z and W vector bosons appears to have been established. Even with this amazing discovery there are still many outstanding theoretical and phenomenological questions which suggest that there must be physics Beyond the Standard Model (BSM). As we investigate in this thesis, the Higgs boson offers the exciting possibility of acting as a window to this new physics through various avenues which are experimentally testable in the coming years. We investigate a subset of these possibilities and begin by discussing them briefly below before a detailed examination in the following chapters.
Search for the standard model Higgs boson in tau final states.
Abazov, V M; Abbott, B; Abolins, M; Acharya, B S; Adams, M; Adams, T; Aguilo, E; Ahsan, M; Alexeev, G D; Alkhazov, G; Alton, A; Alverson, G; Alves, G A; Ancu, L S; Andeen, T; Anzelc, M S; Aoki, M; Arnoud, Y; Arov, M; Arthaud, M; Askew, A; Asman, B; Atramentov, O; Avila, C; Backusmayes, J; Badaud, F; Bagby, L; Baldin, B; Bandurin, D V; Banerjee, S; Barberis, E; Barfuss, A-F; Bargassa, P; Baringer, P; Barreto, J; Bartlett, J F; Bassler, U; Bauer, D; Beale, S; Bean, A; Begalli, M; Begel, M; Belanger-Champagne, C; Bellantoni, L; Bellavance, A; Benitez, J A; Beri, S B; Bernardi, G; Bernhard, R; Bertram, I; Besançon, M; Beuselinck, R; Bezzubov, V A; Bhat, P C; Bhatnagar, V; Blazey, G; Blessing, S; Bloom, K; Boehnlein, A; Boline, D; Bolton, T A; Boos, E E; Borissov, G; Bose, T; Brandt, A; Brock, R; Brooijmans, G; Bross, A; Brown, D; Bu, X B; Buchholz, D; Buehler, M; Buescher, V; Bunichev, V; Burdin, S; Burnett, T H; Buszello, C P; Calfayan, P; Calpas, B; Calvet, S; Cammin, J; Carrasco-Lizarraga, M A; Carrera, E; Carvalho, W; Casey, B C K; Castilla-Valdez, H; Chakrabarti, S; Chakraborty, D; Chan, K M; Chandra, A; Cheu, E; Cho, D K; Choi, S; Choudhary, B; Christoudias, T; Cihangir, S; Claes, D; Clutter, J; Cooke, M; Cooper, W E; Corcoran, M; Couderc, F; Cousinou, M-C; Crépé-Renaudin, S; Cuplov, V; Cutts, D; Cwiok, M; Das, A; Davies, G; De, K; de Jong, S J; De La Cruz-Burelo, E; DeVaughan, K; Déliot, F; Demarteau, M; Demina, R; Denisov, D; Denisov, S P; Desai, S; Diehl, H T; Diesburg, M; Dominguez, A; Dorland, T; Dubey, A; Dudko, L V; Duflot, L; Duggan, D; Duperrin, A; Dutt, S; Dyshkant, A; Eads, M; Edmunds, D; Ellison, J; Elvira, V D; Enari, Y; Eno, S; Ermolov, P; Escalier, M; Evans, H; Evdokimov, A; Evdokimov, V N; Facini, G; Ferapontov, A V; Ferbel, T; Fiedler, F; Filthaut, F; Fisher, W; Fisk, H E; Fortner, M; Fox, H; Fu, S; Fuess, S; Gadfort, T; Galea, C F; Garcia-Bellido, A; Gavrilov, V; Gay, P; Geist, W; Geng, W; Gerber, C E; Gershtein, Y; Gillberg, D; Ginther, G; 
Gómez, B; Goussiou, A; Grannis, P D; Greder, S; Greenlee, H; Greenwood, Z D; Gregores, E M; Grenier, G; Gris, Ph; Grivaz, J-F; Grohsjean, A; Grünendahl, S; Grünewald, M W; Guo, F; Guo, J; Gutierrez, G; Gutierrez, P; Haas, A; Hadley, N J; Haefner, P; Hagopian, S; Haley, J; Hall, I; Hall, R E; Han, L; Harder, K; Harel, A; Hauptman, J M; Hays, J; Hebbeker, T; Hedin, D; Hegeman, J G; Heinson, A P; Heintz, U; Hensel, C; Heredia-De La Cruz, I; Herner, K; Hesketh, G; Hildreth, M D; Hirosky, R; Hoang, T; Hobbs, J D; Hoeneisen, B; Hohlfeld, M; Hossain, S; Houben, P; Hu, Y; Hubacek, Z; Huske, N; Hynek, V; Iashvili, I; Illingworth, R; Ito, A S; Jabeen, S; Jaffré, M; Jain, S; Jakobs, K; Jamin, D; Jarvis, C; Jesik, R; Johns, K; Johnson, C; Johnson, M; Johnston, D; Jonckheere, A; Jonsson, P; Juste, A; Kajfasz, E; Karmanov, D; Kasper, P A; Katsanos, I; Kaushik, V; Kehoe, R; Kermiche, S; Khalatyan, N; Khanov, A; Kharchilava, A; Kharzheev, Y N; Khatidze, D; Kim, T J; Kirby, M H; Kirsch, M; Klima, B; Kohli, J M; Konrath, J-P; Kozelov, A V; Kraus, J; Kuhl, T; Kumar, A; Kupco, A; Kurca, T; Kuzmin, V A; Kvita, J; Lacroix, F; Lam, D; Lammers, S; Landsberg, G; Lebrun, P; Lee, W M; Leflat, A; Lellouch, J; Li, J; Li, L; Li, Q Z; Lietti, S M; Lim, J K; Lincoln, D; Linnemann, J; Lipaev, V V; Lipton, R; Liu, Y; Liu, Z; Lobodenko, A; Lokajicek, M; Love, P; Lubatti, H J; Luna-Garcia, R; Lyon, A L; Maciel, A K A; Mackin, D; Mättig, P; Magerkurth, A; Mal, P K; Malbouisson, H B; Malik, S; Malyshev, V L; Maravin, Y; Martin, B; McCarthy, R; McGivern, C L; Meijer, M M; Melnitchouk, A; Mendoza, L; Menezes, D; Mercadante, P G; Merkin, M; Merritt, K W; Meyer, A; Meyer, J; Mitrevski, J; Mommsen, R K; Mondal, N K; Moore, R W; Moulik, T; Muanza, G S; Mulhearn, M; Mundal, O; Mundim, L; Nagy, E; Naimuddin, M; Narain, M; Neal, H A; Negret, J P; Neustroev, P; Nilsen, H; Nogima, H; Novaes, S F; Nunnemann, T; Obrant, G; Ochando, C; Onoprienko, D; Orduna, J; Oshima, N; Osman, N; Osta, J; Otec, R; Otero Y Garzón, 
G J; Owen, M; Padilla, M; Padley, P; Pangilinan, M; Parashar, N; Park, S-J; Park, S K; Parsons, J; Partridge, R; Parua, N; Patwa, A; Pawloski, G; Penning, B; Perfilov, M; Peters, K; Peters, Y; Pétroff, P; Piegaia, R; Piper, J; Pleier, M-A; Podesta-Lerma, P L M; Podstavkov, V M; Pogorelov, Y; Pol, M-E; Polozov, P; Popov, A V; Potter, C; Prado da Silva, W L; Protopopescu, S; Qian, J; Quadt, A; Quinn, B; Rakitine, A; Rangel, M S; Ranjan, K; Ratoff, P N; Renkel, P; Rich, P; Rijssenbeek, M; Ripp-Baudot, I; Rizatdinova, F; Robinson, S; Rodrigues, R F; Rominsky, M; Royon, C; Rubinov, P; Ruchti, R; Safronov, G; Sajot, G; Sánchez-Hernández, A; Sanders, M P; Sanghi, B; Savage, G; Sawyer, L; Scanlon, T; Schaile, D; Schamberger, R D; Scheglov, Y; Schellman, H; Schliephake, T; Schlobohm, S; Schwanenberger, C; Schwienhorst, R; Sekaric, J; Severini, H; Shabalina, E; Shamim, M; Shary, V; Shchukin, A A; Shivpuri, R K; Siccardi, V; Simak, V; Sirotenko, V; Skubic, P; Slattery, P; Smirnov, D; Snow, G R; Snow, J; Snyder, S; Söldner-Rembold, S; Sonnenschein, L; Sopczak, A; Sosebee, M; Soustruznik, K; Spurlock, B; Stark, J; Stolin, V; Stoyanova, D A; Strandberg, J; Strandberg, S; Strang, M A; Strauss, E; Strauss, M; Ströhmer, R; Strom, D; Stutte, L; Sumowidagdo, S; Svoisky, P; Takahashi, M; Tanasijczuk, A; Taylor, W; Tiller, B; Tissandier, F; Titov, M; Tokmenin, V V; Torchiani, I; Tsybychev, D; Tuchming, B; Tully, C; Tuts, P M; Unalan, R; Uvarov, L; Uvarov, S; Uzunyan, S; Vachon, B; van den Berg, P J; Van Kooten, R; van Leeuwen, W M; Varelas, N; Varnes, E W; Vasilyev, I A; Verdier, P; Vertogradov, L S; Verzocchi, M; Vilanova, D; Vint, P; Vokac, P; Voutilainen, M; Wagner, R; Wahl, H D; Wang, M H L S; Warchol, J; Watts, G; Wayne, M; Weber, G; Weber, M; Welty-Rieger, L; Wenger, A; Wetstein, M; White, A; Wicke, D; Williams, M R J; Wilson, G W; Wimpenny, S J; Wobisch, M; Wood, D R; Wyatt, T R; Xie, Y; Xu, C; Yacoob, S; Yamada, R; Yang, W-C; Yasuda, T; Yatsunenko, Y A; Ye, Z; Yin, H; Yip, K; 
Yoo, H D; Youn, S W; Yu, J; Zeitnitz, C; Zelitch, S; Zhao, T; Zhou, B; Zhu, J; Zielinski, M; Zieminska, D; Zivkovic, L; Zutshi, V; Zverev, E G
2009-06-26
We present a search for the standard model Higgs boson using hadronically decaying tau leptons, in 1 fb⁻¹ of data collected with the D0 detector at the Fermilab Tevatron pp̄ collider. We select two final states: τ± plus missing transverse energy and b jets, and τ⁺τ⁻ plus jets. These final states are sensitive to a combination of associated W/Z boson plus Higgs boson production, vector boson fusion, and gluon-gluon fusion production processes. The observed ratio of the combined limit on the Higgs production cross section at the 95% C.L. to the standard model expectation is 29 for a Higgs boson mass of 115 GeV.
Automated Verification of Design Patterns with LePUS3
NASA Technical Reports Server (NTRS)
Nicholson, Jonathan; Gasparis, Epameinondas; Eden, Ammon H.; Kazman, Rick
2009-01-01
Specification and [visual] modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove (verify) whether any Java 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.
Precision Higgs Boson Physics and Implications for Beyond the Standard Model Physics Theories
Wells, James
2015-06-10
The discovery of the Higgs boson is one of science's most impressive recent achievements. We have taken a leap forward in understanding what is at the heart of elementary particle mass generation. We now have a significant opportunity to develop even deeper understanding of how the fundamental laws of nature are constructed. As such, we need intense focus from the scientific community to put this discovery in its proper context, to realign and narrow our understanding of viable theory based on this positive discovery, and to detail the implications the discovery has for theories that attempt to answer questions beyond what the Standard Model can explain. This project's first main objective is to develop a state-of-the-art analysis of precision Higgs boson physics. This is to be done in the tradition of the electroweak precision measurements of the LEP/SLC era. Indeed, the electroweak precision studies of the past are necessary inputs to the full precision Higgs program. Calculations will be presented to the community of Higgs boson observables that detail just how well various couplings of the Higgs boson can be measured, and more. These will be carried out using state-of-the-art theory computations coupled with the new experimental results coming in from the LHC. The project's second main objective is to utilize the results obtained from LHC Higgs boson experiments and the precision analysis, along with the direct search studies at LHC, and discern viable theories of physics beyond the Standard Model that unify physics to a deeper level. Studies will be performed on supersymmetric theories, theories of extra spatial dimensions (and related theories, such as compositeness), and theories that contain hidden sector states uniquely accessible to the Higgs boson. In addition, if data becomes incompatible with the Standard Model's low-energy effective lagrangian, new physics theories will be developed that explain the anomaly and put it into a more unified framework beyond
Aerodynamic characteristics of the standard dynamics model in coning motion at Mach 0.6
NASA Technical Reports Server (NTRS)
Jermey, C.; Schiff, L. B.
1985-01-01
A wind tunnel test was conducted on the Standard Dynamics Model (a simplified generic fighter aircraft shape) undergoing coning motion at Mach 0.6. Six-component force and moment data are presented for a range of angles of attack, sideslip, and coning rates. At the relatively low non-dimensional coning rate employed (ωb/2V ≤ 0.04), the lateral aerodynamic characteristics generally show a linear variation with coning rate.
New Path for Building Compliance in ASHRAE Model Energy Efficiency Standard
Rosenberg, Michael I.
2016-02-09
This article describes a new path for performance-based compliance with ASHRAE Standard 90.1. Using Appendix G, the Performance Rating Method, this path allows the same simulated baseline model to be used for both code compliance and beyond-code programs. The baseline is fixed at approximately the 90.1-2004 level and is independent of the designer's chosen design solutions to the greatest extent possible.
Observation of an excess in the search for the Standard Model Higgs boson at ALEPH
NASA Astrophysics Data System (ADS)
ALEPH Collaboration; Barate, R.; De Bonis, I.; Decamp, D.; Ghez, P.; Goy, C.; Jezequel, S.; Lees, J.-P.; Martin, F.; Merle, E.; Minard, M.-N.; Pietrzyk, B.; Bravo, S.; Casado, M. P.; Chmeissani, M.; Crespo, J. M.; Fernandez, E.; Fernandez-Bosman, M.; Garrido, Ll.; Graugés, E.; Lopez, J.; Martinez, M.; Merino, G.; Miquel, R.; Mir, Ll. M.; Pacheco, A.; Paneque, D.; Ruiz, H.; Colaleo, A.; Creanza, D.; De Filippis, N.; de Palma, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Nuzzo, S.; Ranieri, A.; Raso, G.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Tricomi, A.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Abbaneo, D.; Azzurri, P.; Barklow, T.; Boix, G.; Buchmüller, O.; Cattaneo, M.; Cerutti, F.; Clerbaux, B.; Dissertori, G.; Drevermann, H.; Forty, R. W.; Frank, M.; Gianotti, F.; Greening, T. C.; Hansen, J. B.; Harvey, J.; Hutchcroft, D. E.; Janot, P.; Jost, B.; Kado, M.; Lemaitre, V.; Maley, P.; Mato, P.; Minten, A.; Moutoussi, A.; Ranjard, F.; Rolandi, L.; Schlatter, D.; Schmitt, M.; Schneider, O.; Spagnolo, P.; Tejessy, W.; Teubert, F.; Tournefier, E.; Valassi, A.; Ward, J. J.; Wright, A. E.; Ajaltouni, Z.; Badaud, F.; Dessagne, S.; Falvard, A.; Fayolle, D.; Gay, P.; Henrard, P.; Jousset, J.; Michel, B.; Monteil, S.; Montret, J.-C.; Pallin, D.; Pascolo, J. M.; Perret, P.; Podlyski, F.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Nilsson, B. S.; Wäänänen, A.; Daskalakis, G.; Kyriakis, A.; Markou, C.; Simopoulou, E.; Vayaki, A.; Blondel, A.; Brient, J.-C.; Machefert, F.; Rougé, A.; Swynghedauw, M.; Tanaka, R.; Videau, H.; Focardi, E.; Parrini, G.; Zachariadou, K.; Antonelli, A.; Antonelli, M.; Bencivenni, G.; Bologna, G.; Bossi, F.; Campana, P.; Capon, G.; Chiarella, V.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Passalacqua, L.; Pepe-Altarelli, M.; Chalmers, M.; Halley, A. W.; Kennedy, J.; Lynch, J. G.; Negus, P.; O'Shea, V.; Raeven, B.; Smith, D.; Teixeira-Dias, P.; Thompson, A. 
S.; Cavanaugh, R.; Dhamotharan, S.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Leibenguth, G.; Putzer, A.; Tittel, K.; Werner, S.; Wunsch, M.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Davies, G.; Dornan, P. J.; Girone, M.; Marinelli, N.; Nowell, J.; Przysiezniak, H.; Sedgbeer, J. K.; Thompson, J. C.; Thomson, E.; White, R.; Ghete, V. M.; Girtler, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bouhova-Thacker, E.; Bowdery, C. K.; Clarke, D. P.; Ellis, G.; Finch, A. J.; Foster, F.; Hughes, G.; Jones, R. W. L.; Pearson, M. R.; Robertson, N. A.; Smizanska, M.; Giehl, I.; Hölldorfer, F.; Jakobs, K.; Kleinknecht, K.; Kröcker, M.; Müller, A.-S.; Nürnberger, H.-A.; Quast, G.; Renk, B.; Rohne, E.; Sander, H.-G.; Schmeling, S.; Wachsmuth, H.; Zeitnitz, C.; Ziegler, T.; Bonissent, A.; Carr, J.; Coyle, P.; Curtil, C.; Ealet, A.; Fouchez, D.; Leroy, O.; Kachelhoffer, T.; Payre, P.; Rousseau, D.; Tilquin, A.; Aleppo, M.; Gilardoni, S.; Ragusa, F.; David, A.; Dietl, H.; Ganis, G.; Heister, A.; Hüttmann, K.; Lütjens, G.; Mannert, C.; Männer, W.; Moser, H.-G.; Schael, S.; Settles, R.; Stenzel, H.; Wolf, G.; Boucrot, J.; Callot, O.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, Ph.; Jacholkowska, A.; Serin, L.; Veillet, J.-J.; Videau, I.; de Vivie de Régie, J.-B.; Yuan, C.; Zerwas, D.; Bagliesi, G.; Boccali, T.; Calderini, G.; Ciulli, V.; Foà, L.; Giammanco, A.; Giassi, A.; Ligabue, F.; Messineo, A.; Palla, F.; Rizzo, G.; Sanguinetti, G.; Sciabà, A.; Sguazzoni, G.; Steinberger, J.; Tenchini, R.; Venturi, A.; Verdini, P. G.; Blair, G. A.; Coles, J.; Cowan, G.; Green, M. G.; Jones, L. T.; Medcalf, T.; Strong, J. A.; Clifft, R. W.; Edgecock, T. R.; Norton, P. R.; Tomalin, I. R.; Bloch-Devaux, B.; Boumediene, D.; Colas, P.; Fabbro, B.; Lançon, E.; Lemaire, M.-C.; Locci, E.; Perez, P.; Rander, J.; Renardy, J.-F.; Rosowsky, A.; Seager, P.; Trabelsi, A.; Tuchming, B.; Vallage, B.; Konstantinidis, N.; Loomis, C.; Litke, A. M.; Taylor, G.; Booth, C. 
N.; Cartwright, S.; Combley, F.; Hodgson, P. N.; Lehto, M.; Thompson, L. F.; Affholderbach, K.; Böhrer, A.; Brandt, S.; Grupen, C.; Hess, J.; Misiejuk, A.; Prange, G.; Sieler, U.; Borean, C.; Giannini, G.; Gobbo, B.; He, H.; Putz, J.; Rothberg, J.; Wasserbaech, S.; Armstrong, S. R.; Cranmer, K.; Elmer, P.; Ferguson, D. P. S.; Gao, Y.; González, S.; Hayes, O. J.; Hu, H.; Jin, S.; Kile, J.; McNamara, P. A.; Nielsen, J.; Orejudos, W.; Pan, Y. B.; Saadi, Y.; Scott, I. J.; Shao, N.; von Wimmersperg-Toeller, J. H.; Walsh, J.; Wiedenmann, W.; Wu, J.; Wu, S. L.; Wu, X.; Zobernig, G.
2000-12-01
A search has been performed for the Standard Model Higgs boson in the data sample collected with the ALEPH detector at LEP, at centre-of-mass energies up to 209 GeV. An excess of 3σ beyond the background expectation is found, consistent with the production of the Higgs boson with a mass near 114 GeV/c². Much of this excess is seen in the four-jet analyses, where three high purity events are selected.
Two-loop results for MW in the standard model and the MSSM
Ayres Freitas; Sven Heinemeyer; Georg Weiglein
2002-12-09
Recent higher-order results for the prediction of the W-boson mass, M_W, within the Standard Model are reviewed, and an estimate of the remaining theoretical uncertainties of the electroweak precision observables is given. An updated version of a simple numerical parameterization of the result for M_W is presented. Furthermore, leading electroweak two-loop contributions to the precision observables within the MSSM are discussed.
Mass and mixing angle patterns in the Standard Model and its material Supersymmetric Extension
Ramond, P.
1992-01-01
Using renormalization group techniques, we examine several interesting relations among masses and mixing angles of quarks and leptons in the Standard Model of Elementary Particle Interactions as a function of scale. We extend the analysis to the minimal Supersymmetric Extension to determine its effect on these mass relations. For a heavy top quark and minimal supersymmetry, most of these relations can be made to agree at a single unification scale.
Search for the Standard Model Higgs Boson in the $WH \to \ell\nu b\bar{b}$ Channel
Nagai, Yoshikazu
2010-02-01
We have searched for the Standard Model Higgs boson in the WH → ℓνbb̄ channel in 1.96 TeV pp̄ collisions at CDF. This search is based on data collected through March 2009, corresponding to an integrated luminosity of 4.3 fb⁻¹. The WH channel is one of the most promising channels for the Higgs boson search at the Tevatron in the low Higgs boson mass region.
Status of searches for Higgs and physics beyond the standard model at CDF
Tsybychev, D.; /Florida U.
2004-12-01
This article presents selected experimental results on searches for Higgs bosons and physics beyond the standard model (BSM) at the Collider Detector at Fermilab (CDF). The results are based on about 350 pb⁻¹ of proton-antiproton collision data at √s = 1.96 TeV, collected during Run II of the Tevatron. No evidence of signal was found, and limits on the production cross sections of various BSM physics processes are derived.
Wang Lei; Han Xiaofang
2010-11-01
In the framework of the simplest little Higgs model, we perform a comprehensive study of the pair production of the pseudoscalar boson η and the standard-model-like Higgs boson h at the LHC, namely gg(bb̄) → ηη, gg(qq̄) → ηh, and gg(bb̄) → hh. These production processes provide a way to probe the couplings between Higgs bosons. We find that the cross section of gg → ηη always dominates over that of bb̄ → ηη. When the Higgs boson h which mediates these two processes is on-shell, their cross sections can reach several thousand fb and several hundred fb, respectively. When the intermediate state h is off-shell, the two cross sections are each reduced by two orders of magnitude. The cross sections of gg → ηh and qq̄ → ηh are of the same order of magnitude, and can reach O(10² fb) for a light η boson. Besides, compared with the standard model prediction, the cross section for production of a pair of standard-model-like Higgs bosons at the LHC can be enhanced sizably. Finally, we briefly discuss the observable signatures of ηη, ηh, and hh at the LHC.
Revisiting the global electroweak fit of the Standard Model and beyond with Gfitter
NASA Astrophysics Data System (ADS)
Flächer, H.; Goebel, M.; Haller, J.; Hoecker, A.; Mönig, K.; Stelzer, J.
2009-04-01
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing for flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plug-ins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model (SM), and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. In the SM fit including the direct Higgs searches, we find M_H = 116.4^{+18.3}_{-1.3} GeV, the 2σ allowed region [114,145] GeV, and the 3σ allowed regions [113,168] and [180,225] GeV.
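The toy-Monte-Carlo statistical analysis mentioned above can be sketched in miniature. The linear model, pseudo-measurements, and uncertainties below are invented for illustration and are unrelated to Gfitter's actual electroweak observables; the sketch only shows the generic recipe of refitting pseudo-data thrown around the best-fit point to obtain a goodness-of-fit p-value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented pseudo-measurements y_i ≈ a * x_i with Gaussian errors
x = np.array([1.0, 2.0, 3.0, 4.0])
sigma = np.array([0.1, 0.1, 0.2, 0.2])
y = np.array([0.98, 2.05, 2.90, 4.10])

def chi2_min(y_obs):
    """Weighted least-squares fit of a line through the origin."""
    w = 1.0 / sigma**2
    a_hat = np.sum(w * x * y_obs) / np.sum(w * x**2)
    return np.sum(w * (y_obs - a_hat * x) ** 2), a_hat

chi2_data, a_fit = chi2_min(y)

# Toy MC: throw pseudo-data around the best-fit model and refit each toy;
# the p-value is the fraction of toys with chi2 at least the observed one.
n_toys = 2000
chi2_toys = np.array([chi2_min(a_fit * x + rng.normal(0, sigma))[0]
                      for _ in range(n_toys)])
p_value = np.mean(chi2_toys >= chi2_data)
print(round(p_value, 2))
```

Because the pseudo-data are generated from the model itself, the resulting p-value is comfortably large; a small p-value on real data would instead signal tension with the model.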
RiverML: Standardizing the Communication of River Model Data (Invited)
NASA Astrophysics Data System (ADS)
Jackson, S.; Maidment, D. R.; Arctur, D. K.
2013-12-01
RiverML is a proposed language for conveying a description of river channel and floodplain geometry and flow characteristics through the internet in a standardized way. A key goal of the RiverML project is to allow interoperability between all hydraulic and hydrologic models, whether they are industry standard software packages or custom-built research tools. By providing a common transfer format for common model inputs and outputs, RiverML can shorten the development time and enhance the immediate utility of innovative river modeling tools. RiverML will provide descriptions of cross sections and multiple flow lines, allowing the construction of wireframe representations. In addition, RiverML will support descriptions of network connectivity, properties such as roughness coefficients, and time series observations such as water surface elevation and flow rate. The language is constructed in a modular fashion such that the geometry information, network information, and time series observations can be communicated independently of each other, allowing an arbitrary suite of software packages to contribute to a coherently modeled scenario. Funding for the development of RiverML is provided through an NSF grant to CUAHSI HydroShare project, a web-based collaborative environment for sharing data & models. While RiverML is geared toward the transfer of data, HydroShare will serve as a repository for storing water-related data and models of any format, while providing enhanced functionality for standardized formats such as RiverML, WaterML, and shapefiles. RiverML is a joint effort between the CUAHSI HydroShare development team, the Open Geospatial Consortium (OGC) Hydrology Domain Working Group, and an international community of data providers, data users, and software developers.
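Since RiverML was still a proposed language at the time of this abstract, no schema can be quoted. The fragment below is a purely hypothetical sketch, built with Python's `xml.etree.ElementTree`, of how a modular cross-section record carrying geometry and a roughness property might be serialized; every element and attribute name here is invented, not actual RiverML.

```python
import xml.etree.ElementTree as ET

# Hypothetical illustration only: these element/attribute names are
# invented and do NOT reflect the actual RiverML schema.
root = ET.Element("riverml")
xs = ET.SubElement(root, "crossSection", id="XS-001")
ET.SubElement(xs, "roughness", method="manning").text = "0.035"

geom = ET.SubElement(xs, "geometry")
for station, elevation in [(0.0, 5.0), (10.0, 1.0), (20.0, 5.2)]:
    pt = ET.SubElement(geom, "point")
    pt.set("station", str(station))
    pt.set("elevation", str(elevation))

doc = ET.tostring(root, encoding="unicode")
print(doc)
```

The modular idea described in the abstract corresponds to being able to ship the `geometry` block, network topology, and time series as independent documents that reference each other.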
2009-09-01
technology for the detection, classification and localization of torpedoes using multiple sensors (DCLTCM), which aims to improve the... technology demonstration project (TDP) on the detection, classification and localization of torpedoes using multiple sensors (DCLTCM... the development of a capability for predicting the performance of underwater acoustic sensors in near real time. Earlier, as part of the
Testing the Standard Model by precision measurement of the weak charges of quarks
Ross Young; Roger Carlini; Anthony Thomas; Julie Roche
2007-05-01
In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, limits the magnitude of possible contributions from physics beyond the Standard Model, setting a model-independent lower bound on the scale of new physics at ~1 TeV.
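The couplings constrained by such analyses can be made concrete at tree level. This sketch assumes the conventional definitions C_1q = 2 g_A^e g_V^q and the textbook relation Q_W^p = -2(2C_1u + C_1d) = 1 - 4 sin²θ_W for the proton weak charge; the numerical value of sin²θ_W used is an approximate low-energy effective value, not the paper's fitted result.

```python
# Tree-level Standard Model electron-quark couplings and the proton
# weak charge. sin^2(theta_W) below is an illustrative low-energy value.
sin2_theta_w = 0.2312

g_a_e = -0.5                                  # electron axial coupling
g_v_u = 0.5 - (4.0 / 3.0) * sin2_theta_w      # up-quark vector coupling
g_v_d = -0.5 + (2.0 / 3.0) * sin2_theta_w     # down-quark vector coupling

c1u = 2 * g_a_e * g_v_u
c1d = 2 * g_a_e * g_v_d

# Q_W^p = -2(2 C_1u + C_1d) reduces algebraically to 1 - 4 sin^2(theta_W)
q_w_proton = -2 * (2 * c1u + c1d)
print(round(q_w_proton, 4))  # → 0.0752
```

The accidental suppression visible here (sin²θ_W is close to 1/4, so Q_W^p is small) is precisely why the proton weak charge is such a sensitive probe of new physics.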
Phenomenology of the minimal B-L extension of the standard model: The Higgs sector
Basso, Lorenzo; Moretti, Stefano; Pruna, Giovanni Marco
2011-03-01
We investigate the phenomenology of the Higgs sector of the minimal B-L extension of the standard model. We present results for both the foreseen energy stages of the Large Hadron Collider (√s = 7 and 14 TeV). We show that in such a scenario several novel production and decay channels involving the two physical Higgs states could be accessed at such a machine. Amongst these, several Higgs signatures have very distinctive features with respect to those of other models with an enlarged Higgs sector, as they involve interactions of Higgs bosons between themselves, with Z′ bosons as well as with heavy neutrinos.
Searches for Higgs bosons beyond the Standard Model at the Tevatron
Biscarat, Catherine; /Lancaster U.
2004-08-01
Preliminary results from the CDF and D0 Collaborations on searches for Higgs bosons beyond the Standard Model at the Run II Tevatron are reviewed. These results are based on datasets corresponding to an integrated luminosity of 100-200 pb⁻¹ collected from proton-antiproton collisions at a center-of-mass energy of 1.96 TeV. No evidence of a signal is observed, and limits are set on Higgs boson production cross sections times branching ratios, couplings, and masses for various models.
The effect of an organisational model on the standard of care.
Gullick, Janice; Shepherd, Mark; Ronald, Tracey
An Australian hospital was experiencing a long-term nursing staff shortage (in common with many hospitals throughout the world). The shortage led to concerns that patient care and the supervision of less-experienced nursing staff were compromised. A survey was undertaken to ascertain which organisational models were used in the hospital, and how well these enabled nurses to provide a high standard of care. The findings suggest that the patient-allocation model should be maintained where practical and that team nursing should be trialled where low staff numbers and skill mix demand a greater degree of supervision and support.
A refined 'standard' thermal model for asteroids based on observations of 1 Ceres and 2 Pallas
NASA Technical Reports Server (NTRS)
Lebofsky, Larry A.; Sykes, Mark V.; Tedesco, Edward F.; Veeder, Glenn J.; Matson, Dennis L.
1986-01-01
An analysis of ground-based thermal IR observations of 1 Ceres and 2 Pallas in light of their recently determined occultation diameters and small amplitude light curves has yielded a new value for the IR beaming parameter employed in the standard asteroid thermal emission model which is significantly lower than the previous one. When applied to the reduction of thermal IR observations of other asteroids, this new value is expected to yield model diameters closer to actual values. The present formulation incorporates the IAU magnitude convention for asteroids that employs zero-phase magnitudes, including the opposition effect.
Standard model extended by a heavy singlet: Linear vs. nonlinear EFT
NASA Astrophysics Data System (ADS)
Buchalla, G.; Catà, O.; Celis, A.; Krause, C.
2017-04-01
We consider the Standard Model extended by a heavy scalar singlet in different regions of parameter space and construct the appropriate low-energy effective field theories up to first nontrivial order. This top-down exercise in effective field theory is meant primarily to illustrate with a simple example the systematics of the linear and nonlinear electroweak effective Lagrangians and to clarify the relation between them. We discuss power-counting aspects and the transition between both effective theories on the basis of the model, confirming in all cases the rules and procedures derived in previous works from a bottom-up approach.
Unification and Dark Matter in a Minimal Scalar Extension of the Standard Model
Lisanti, Mariangela; Wacker, Jay G.
2007-04-25
The six Higgs doublet model is a minimal extension of the Standard Model (SM) that addresses dark matter and gauge coupling unification. Another Higgs doublet in the 5 representation of a discrete symmetry group, such as S₆, is added to the SM. The lightest components of the 5-Higgs are neutral, stable, and serve as dark matter so long as the discrete symmetry is not broken. Direct and indirect detection signals, as well as collider signatures, are discussed. The five-fold multiplicity of the dark matter decreases its mass and typically helps make the dark matter more visible in upcoming experiments.
Higgs masses and stability in the standard and the two Higgs doublet models
Juarez W, S. R.; Morales C, D.; Kielanowski, P.
2010-07-29
Within the framework of the standard model (SM) of elementary particles and the two Higgs doublet extension of this model (2HDM), we obtained analytical and numerical solutions for the gauge couplings, the vacuum expectation values (VEV) of the Higgs fields, the quark Yukawa couplings and quark masses, the quartic Higgs couplings, and the running Higgs masses, using the renormalization group equations. The bounds on the SM Higgs running mass were fixed, and through them the region of validity of the SM was determined at the one- and two-loop approximations, using the triviality and stability conditions for the Higgs quartic coupling λ_H.
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Yang, C.; Huang, Q.
2010-12-01
The availability of high-speed research networks such as the US National Lambda Rail and the GÉANT network, scalable on-demand commodity computing resources provided by public and private "cloud" computing systems, and increasing demand for rapid access to the products of environmental models for both research and public policy development contribute to a growing need for the evaluation and development of environmental modeling systems that distribute processing, storage, and data delivery capabilities between network connected systems. In an effort to address the feasibility of developing a standards-based distributed modeling system in which model execution systems are physically separate from data storage and delivery systems, the research project presented in this paper developed a distributed dust forecasting system in which two nested atmospheric dust models are executed at George Mason University (GMU, in Fairfax, VA) while data and model output processing services are hosted at the University of New Mexico (UNM, in Albuquerque, NM). Exchange of model initialization and boundary condition parameters between the servers at UNM and the model execution systems at GMU is accomplished through Open Geospatial Consortium (OGC) Web Coverage Services (WCS) and Web Feature Services (WFS) while model outputs are pushed from GMU systems back to UNM using a REST web service interface. In addition to OGC and non-OGC web services for exchange between UNM and GMU, the servers at UNM also provide access to the input meteorological model products, intermediate and final dust model outputs, and other products derived from model outputs through OGC WCS, WFS, and OGC Web Map Services (WMS). The performance of the nested versus non-nested models is assessed in this research, with the results of the performance analysis providing the core content of the produced feasibility study.
Behavioral modeling and simulation of multi-standard RF receivers using MATLAB/SIMULINK
NASA Astrophysics Data System (ADS)
Morgado, Alonso; del Río, Rocío; de la Rosa, José M.
2007-05-01
This paper presents a SIMULINK block set for the behavioral modeling and high-level simulation of RF receiver frontends. The toolbox includes a library with the main RF circuit models that are needed to implement wireless transceivers, namely: low noise amplifiers, mixers, oscillators, filters and programmable gain amplifiers. There is also a library including other blocks like the antenna, duplexer filter and switches, required to implement reconfigurable architectures. Behavioral models of building blocks include the main ideal functionality as well as the following non-idealities: thermal noise, characterized by the Noise Figure (NF) and the Signal-to-Noise Ratio (SNR), and nonlinearity, expressed by the input-referred 2nd- and 3rd-order intercept points, IIP2 and IIP3, respectively. In addition to these general parameters, some block-specific errors have also been included, like oscillator phase noise and mixer offset. These models have been incorporated into the SIMULINK environment making an extensive use of C-coded S-functions and reducing the number of library block elements. This approach reduces the simulation time while maintaining high accuracy, which makes the proposed toolbox well suited for combination with an optimizer for the automated high-level synthesis of radio receivers. As an application of the capabilities of the presented toolbox, a multi-standard Direct-Conversion Receiver (DCR) intended for 4G telecom systems is modeled and simulated considering the building-block requirements for the different standards.
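The cascade behavior of the NF and IIP3 parameters named above follows standard formulas: the Friis noise-figure formula and the usual input-referred IIP3 combination rule. The toolbox itself is MATLAB/SIMULINK, so the following Python sketch is only an independent illustration of the arithmetic, with illustrative stage values:

```python
import math

def cascade_nf_db(nf_db, gain_db):
    """Cascade noise figure via the Friis formula.
    nf_db, gain_db: per-stage noise figures and gains in dB, front-end first."""
    f_total = 0.0
    g_prod = 1.0  # product of linear gains of preceding stages
    for i, (nf, g) in enumerate(zip(nf_db, gain_db)):
        f = 10 ** (nf / 10)  # linear noise factor
        f_total = f if i == 0 else f_total + (f - 1) / g_prod
        g_prod *= 10 ** (g / 10)
    return 10 * math.log10(f_total)

def cascade_iip3_dbm(iip3_dbm, gain_db):
    """Cascade input-referred IIP3 (dBm), assuming uncorrelated stage products:
    1/IIP3_tot = 1/IIP3_1 + G1/IIP3_2 + G1*G2/IIP3_3 + ... (linear units)."""
    inv_sum = 0.0
    g_prod = 1.0
    for iip3, g in zip(iip3_dbm, gain_db):
        inv_sum += g_prod / (10 ** (iip3 / 10))  # referred to the input
        g_prod *= 10 ** (g / 10)
    return 10 * math.log10(1.0 / inv_sum)
```

For example, an LNA (NF 2 dB, gain 15 dB) followed by a mixer (NF 10 dB, gain 10 dB) yields a cascade NF of about 2.7 dB, showing how the first stage dominates.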
A heterotic standard model with B - L symmetry and a stable proton
NASA Astrophysics Data System (ADS)
Buchbinder, Evgeny I.; Constantin, Andrei; Lukas, Andre
2014-06-01
We consider heterotic Calabi-Yau compactifications with S(U(4) × U(1)) background gauge fields. These models lead to gauge groups with an additional U(1) factor which, under certain conditions, can combine with hypercharge to a B - L symmetry. The associated gauge boson is automatically super-massive and, hence, does not constitute a phenomenological problem. We illustrate this class of compactifications with a model based on the monad construction, which leads to a supersymmetric standard model with three families of quarks and leptons, one pair of Higgs doublets, three right-handed neutrinos and no exotics charged under the standard model group. The presence of the B - L symmetry means that the model is safe from proton decay induced by dimension four operators. Due to the presence of a special locus in moduli space where the bundle structure group is Abelian and the low-energy symmetry enhances we can also show the absence of dimension five proton-decay inducing operators.
Comparison of distributed acceleration and standard models of cosmic-ray transport
NASA Technical Reports Server (NTRS)
Letaw, J. R.; Silberberg, R.; Tsao, C. H.
1995-01-01
Recent cosmic-ray abundance measurements for elements in the range 3 ≤ Z ≤ 28 and energies 10 MeV/n ≤ E ≤ 1 TeV/n have been analyzed with computer transport modeling. About 500 elemental and isotopic measurements were explored in this analysis. The transport code includes the effects of ionization losses, nuclear spallation reactions (including those of secondaries), all nuclear decay modes, stripping and attachment of electrons, escape from the Galaxy, weak reacceleration, and solar modulation. Four models of reacceleration (with several submodels of various reacceleration strengths) were explored. A χ² analysis shows that the reacceleration models yield at least equally good fits to the data as the standard propagation model. However, with reacceleration, the ad hoc assumptions of the standard model regarding discontinuities in the energy dependence of the mean path length traversed by cosmic rays, and in the momentum spectrum of the cosmic-ray source, are eliminated. Furthermore, the tension between rigidity-dependent leakage and energy-independent anisotropy below energies of 10¹⁴ eV is alleviated.
Application of TDCR-Geant4 modeling to standardization of 63Ni.
Thiam, C; Bobin, C; Chauvenet, B; Bouchard, J
2012-09-01
As an alternative to the classical TDCR model applied to liquid scintillation (LS) counting, a stochastic approach based on the Geant4 toolkit is presented for the simulation of light emission inside the dedicated three-photomultiplier detection system. To this end, the Geant4 modeling includes a comprehensive description of the optical properties associated with each material constituting the optical chamber. The objective is to simulate the propagation of optical photons from their creation in the LS cocktail to the production of photoelectrons in the photomultipliers. First validated for the case of radionuclide standardization based on Cerenkov emission, the scintillation process has been added to the TDCR-Geant4 modeling using the Birks expression in order to account for the light-emission nonlinearity owing to ionization quenching. The scintillation yield of the commercial Ultima Gold LS cocktail has been determined from double-coincidence detection efficiencies obtained for ⁶⁰Co and ⁵⁴Mn with the 4π(LS)β-γ coincidence method. In this paper, the stochastic TDCR modeling is applied to the standardization of ⁶³Ni (pure β⁻ emitter; E_max = 66.98 keV) and the activity concentration is compared with the result given by the classical model.
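The Birks expression mentioned above relates emitted light to deposited energy through dL = S·dE / (1 + kB·dE/dx), which suppresses light yield where the stopping power is high. A minimal numeric sketch of that integral follows; the S and kB values are illustrative placeholders, not the paper's fitted scintillation parameters:

```python
import numpy as np

def birks_light_yield(e_kev, stopping_power, S=1.0, kB=0.01):
    """Integrate dL = S dE / (1 + kB * dE/dx) along an energy grid.

    e_kev: increasing energy grid (keV); stopping_power: dE/dx on that grid
    (units consistent with kB). S and kB are illustrative values."""
    e = np.asarray(e_kev, dtype=float)
    quench = 1.0 / (1.0 + kB * np.asarray(stopping_power, dtype=float))
    # trapezoidal integration of the quenched energy deposit
    return float(S * np.sum(0.5 * (quench[1:] + quench[:-1]) * np.diff(e)))
```

With kB = 0 the quenching factor is unity and the light yield reduces to S times the deposited energy; any positive kB strictly lowers it, reproducing the nonlinearity the Birks law is meant to capture.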
Wengier, Diego; Valsecchi, Isabel; Cabanas, María Laura; Tang, Wei-hua; McCormick, Sheila; Muschietti, Jorge
2003-01-01
After pollen grains germinate on the stigma, pollen tubes traverse the extracellular matrix of the style on their way to the ovules. We previously characterized two pollen-specific, receptor-like kinases, LePRK1 and LePRK2, from tomato (Lycopersicon esculentum). Their structure and immunolocalization pattern and the specific dephosphorylation of LePRK2 suggested that these kinases might interact with signaling molecules in the style extracellular matrix. Here, we show that LePRK1 and LePRK2 can be coimmunoprecipitated from pollen or when expressed together in yeast. In yeast, their association requires LePRK2 kinase activity. In pollen, LePRK1 and LePRK2 are found in an ≈400-kDa protein complex that persists on pollen germination, but this complex is disrupted when pollen is germinated in vitro in the presence of style extract. In yeast, the addition of style extract also disrupts the interaction between LePRK1 and LePRK2. Fractionation of the style extract reveals that the disruption activity is enriched in the 3- to 10-kDa fraction. A component(s) in this fraction also is responsible for the specific dephosphorylation of LePRK2. The style component(s) that dephosphorylates LePRK2 is likely to be a heat-stable peptide that is present in exudate from the style. The generally accepted model of receptor kinase signaling involves binding of a ligand to extracellular domains of receptor kinases and subsequent activation of the signaling pathway by receptor autophosphorylation. In contrast to this typical scenario, we propose that a putative style ligand transduces the signal in pollen tubes by triggering the specific dephosphorylation of LePRK2, followed by dissociation of the LePRK complex. PMID:12748390
English, Sinéad; Bateman, Andrew W; Clutton-Brock, Tim H
2012-05-01
Lifetime records of changes in individual size or mass in wild animals are scarce and, as such, few studies have attempted to model variation in these traits across the lifespan or to assess the factors that affect them. However, quantifying lifetime growth is essential for understanding trade-offs between growth and other life history parameters, such as reproductive performance or survival. Here, we used model selection based on information theory to measure changes in body mass over the lifespan of wild meerkats, and compared the relative fits of several standard growth models (monomolecular, von Bertalanffy, Gompertz, logistic and Richards). We found that meerkats exhibit monomolecular growth, with the best model incorporating separate growth rates before and after nutritional independence, as well as effects of season and total rainfall in the previous nine months. Our study demonstrates how simple growth curves may be improved by considering life history and environmental factors, which may be particularly relevant when quantifying growth patterns in wild populations.
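The monomolecular model selected above has the form m(t) = A − (A − m0)·e^(−kt), which is linear in A and B = A − m0 once the rate k is fixed, so it can be fitted by profiling k over a grid and solving each linear fit directly. A sketch on synthetic data follows; all parameter values are illustrative, not the meerkat estimates:

```python
import numpy as np

def monomolecular(t, A, m0, k):
    """Monomolecular growth: asymptote A, initial mass m0, rate k."""
    return A - (A - m0) * np.exp(-k * t)

# Synthetic "mass vs age" data with measurement noise (illustrative values).
rng = np.random.default_rng(0)
t = np.linspace(0, 1000, 200)
mass = monomolecular(t, 700.0, 100.0, 0.004) + rng.normal(0, 10, t.size)

# m(t) = A - B exp(-k t) is linear in (A, B) for fixed k,
# so profile k on a grid and solve each linear least-squares problem.
best = None
for k in np.linspace(1e-4, 2e-2, 400):
    X = np.column_stack([np.ones_like(t), -np.exp(-k * t)])
    coef, res, *_ = np.linalg.lstsq(X, mass, rcond=None)
    sse = float(res[0]) if res.size else float(np.sum((X @ coef - mass) ** 2))
    if best is None or sse < best[0]:
        best = (sse, k, coef)

sse, k_hat, (A_hat, B_hat) = best
m0_hat = A_hat - B_hat
```

The same profiling loop extends naturally to model comparison: fitting each candidate curve (von Bertalanffy, Gompertz, logistic, ...) and comparing information criteria is the model-selection step the study describes.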
HIV AND POPULATION DYNAMICS: A GENERAL MODEL AND MAXIMUM-LIKELIHOOD STANDARDS FOR EAST AFRICA*
HEUVELINE, PATRICK
2014-01-01
In high-prevalence populations, the HIV epidemic undermines the validity of past empirical models and related demographic techniques. A parsimonious model of HIV and population dynamics is presented here and fit to 46,000 observations, gathered from 11 East African populations. The fitted model simulates HIV and population dynamics with standard demographic inputs and only two additional parameters for the onset and scale of the epidemic. The underestimation of the general prevalence of HIV in samples of pregnant women and the fertility impact of HIV are examples of the dynamic interactions that demographic models must reproduce and are shown here to increase over time even with constant prevalence levels. As a result, the impact of HIV on population growth appears to have been underestimated by current population projections that ignore this dynamic. PMID:12846130
NASA Astrophysics Data System (ADS)
Martiny, N.; Santer, R.
Over ocean, the total radiance measured by satellite sensors at the top of the atmosphere is mainly atmospheric. In order to obtain the water-leaving radiance, directly related to the concentrations of the different components of the water, we need to correct the satellite measurements for the large atmospheric contribution. In the atmosphere, the light emitted by the sun is scattered by the molecules, absorbed by the gases, and both scattered and absorbed in unknown proportions by the aerosols, particles confined to the first layer of the atmosphere due to their large size. The remote sensing of the aerosols is therefore a complex step in the atmospheric correction scheme. Over ocean, the principle of aerosol remote sensing relies on the assumption that the water is absorbent in the red and the near-infrared. The aerosol model is then deduced from these spectral bands and used to extrapolate the aerosol optical properties to the visible wavelengths. For ocean color sensors such as CZCS, OCTS, POLDER, SeaWiFS or MODIS, the atmospheric correction algorithms use standard aerosol models defined by Shettle & Fenn for their look-up tables. Over coastal waters, are these models still suitable? The goal of this work is to validate the standard aerosol models used in the atmospheric correction algorithms over coastal zones. For this work, we use ground-based in-situ measurements from the CIMEL sunphotometer instrument. Using the extinction measurements, we can deduce the aerosol spectral dependency, which falls between the spectral dependencies of two standard Shettle & Fenn aerosol models. After the interpolation of the aerosol model, we can use it to extrapolate into the visible the optical parameters needed for the atmospheric correction scheme: Latm, the atmospheric radiance, and T, the atmospheric transmittance. The simulations are done using a radiative transfer code based on the successive orders of scattering. Latm and T are then used for
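The spectral dependency deduced from the extinction measurements is conventionally summarized by the Ångström exponent α, obtained from the aerosol optical thickness in two red/near-infrared bands and then used to extrapolate into the visible under a τ ∝ λ^(−α) law. A minimal sketch, with illustrative band wavelengths and optical-thickness values:

```python
import math

def angstrom_exponent(tau1, tau2, lam1, lam2):
    """Angstrom exponent from aerosol optical thickness at two wavelengths (nm)."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

def extrapolate_tau(tau_ref, lam_ref, lam, alpha):
    """Extrapolate optical thickness to another wavelength via tau ~ lambda^-alpha."""
    return tau_ref * (lam / lam_ref) ** (-alpha)

# Illustrative red/NIR retrieval, then extrapolation toward a visible band.
alpha = angstrom_exponent(0.10, 0.08, 670.0, 865.0)
tau_green = extrapolate_tau(0.08, 865.0, 550.0, alpha)
```

Matching the retrieved α against the α values of two bracketing Shettle & Fenn models is the interpolation step the abstract describes.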
1983-06-16
AD 3 020 DETERMINING THE NUMBER OF COMPONENT CLUSTERS IN THE STANDARD MULTIVARIATE ... (U) ILLINOIS UNIV AT CHICAGO CIRCLE DEPT OF QUANTITATIVE METHODS ... ARMY RESEARCH OFFICE UNDER CONTRACT DAAG29-82-K-0155 with the University of Illinois at Chicago. Statistical Models and Methods for Cluster Analysis and ... CRITERIA. HAMPARSUM BOZDOGAN, Department of Quantitative Methods, University of Illinois ... CONTENTS: Abstract; 1. Introduction; 2. The Standard
NASA Astrophysics Data System (ADS)
Kopylov, A. V.
1993-01-01
The ratios of the fluxes of solar neutrinos from the CNO cycle to those of boron neutrinos are less model-dependent than the fluxes themselves in the standard Bahcall-Ulrich solar model. The uncertainties for these ratios are calculated at the level of three standard deviations. Their importance in the overall formulation of the problem of detecting solar neutrinos is discussed.
40 CFR 85.2204 - Short test standards for 1981 and later model year light-duty trucks.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Short test standards for 1981 and... Control System Performance Warranty Short Tests § 85.2204 Short test standards for 1981 and later model... Warranty eligibility (that is, 1981 and later model year light-duty trucks at low altitude and 1982...
40 CFR 85.2204 - Short test standards for 1981 and later model year light-duty trucks.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Short test standards for 1981 and... Control System Performance Warranty Short Tests § 85.2204 Short test standards for 1981 and later model... Warranty eligibility (that is, 1981 and later model year light-duty trucks at low altitude and 1982...
40 CFR 85.2203 - Short test standards for 1981 and later model year light-duty vehicles.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Short test standards for 1981 and... Control System Performance Warranty Short Tests § 85.2203 Short test standards for 1981 and later model... Performance Warranty eligibility (that is, 1981 and later model year light-duty vehicles at low altitude...
40 CFR 85.2203 - Short test standards for 1981 and later model year light-duty vehicles.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Short test standards for 1981 and... Control System Performance Warranty Short Tests § 85.2203 Short test standards for 1981 and later model... Performance Warranty eligibility (that is, 1981 and later model year light-duty vehicles at low altitude...
40 CFR 85.2204 - Short test standards for 1981 and later model year light-duty trucks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Short test standards for 1981 and... Control System Performance Warranty Short Tests § 85.2204 Short test standards for 1981 and later model... Warranty eligibility (that is, 1981 and later model year light-duty trucks at low altitude and 1982...
40 CFR 85.2203 - Short test standards for 1981 and later model year light-duty vehicles.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Short test standards for 1981 and... Control System Performance Warranty Short Tests § 85.2203 Short test standards for 1981 and later model... Performance Warranty eligibility (that is, 1981 and later model year light-duty vehicles at low altitude...
A simple modelling approach for prediction of standard state real gas entropy of pure materials.
Bagheri, M; Borhani, T N G; Gandomi, A H; Manan, Z A
2014-01-01
The performance of an energy conversion system depends on exergy analysis and entropy generation minimisation. A new simple four-parameter equation is presented in this paper to predict the standard state absolute entropy of real gases (SSTD). The model development and validation were accomplished using the Linear Genetic Programming (LGP) method and a comprehensive dataset of 1727 widely used materials. The proposed model was compared with the results obtained using a three-layer feed-forward neural network model (FFNN model). The root-mean-square error (RMSE) and the coefficient of determination (r²) of all data obtained for the LGP model were 52.24 J/(mol K) and 0.885, respectively. Several statistical assessments were used to evaluate the predictive power of the model. In addition, this study provides an appropriate understanding of the most important molecular variables for exergy analysis. Compared with the LGP based model, the application of FFNN improved the r² to 0.914. The developed model is useful in the design of materials to achieve a desired entropy value.
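The RMSE and coefficient of determination quoted above are the standard regression metrics; for reference, a minimal stdlib-only implementation of both:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect predictor gives RMSE 0 and r² of 1; a predictor that always returns the mean of the observations gives r² of 0, which is why r² values of 0.885 and 0.914 indicate a substantial fraction of explained variance.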
Analysis and modeling of zero-threshold voltage native devices with industry standard BSIM6 model
NASA Astrophysics Data System (ADS)
Gupta, Chetan; Agarwal, Harshit; Lin, Y. K.; Ito, Akira; Hu, Chenming; Singh Chauhan, Yogesh
2017-04-01
In this paper, we present the modeling of zero-threshold-voltage (V_TH) bulk MOSFETs, also called native devices, using the enhanced BSIM6 model. The devices under study show abnormally high leakage current in weak inversion, leading to a degraded subthreshold slope. The reasons for this abnormal behavior are identified using technology computer-aided design (TCAD) simulations. Since the zero-V_TH transistors have quite low doping, the depletion layer from the drain may extend up to the source (at some non-zero value of V_DS), which leads to the punch-through phenomenon. This source-drain leakage current adds to the main channel current, causing the unexpected current characteristics in these devices. TCAD simulations show that, as the channel length (L_eff) and channel doping (N_SUB) increase, the source-drain leakage due to punch-through decreases. We propose a model to capture the source-drain leakage in these devices. The model incorporates the gate, drain, and body bias dependence, as well as the channel-length and channel-doping dependence. The proposed model is validated against measured data from production-level devices over various bias conditions and channel lengths.
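The additive-leakage picture described above (a channel subthreshold current plus a punch-through component that grows with V_DS and is suppressed by longer L_eff and heavier N_SUB) can be sketched in compact-model style. The functional forms and coefficients below are illustrative assumptions for exposition only, not the actual BSIM6 equations:

```python
import math

VT = 0.0259  # thermal voltage at room temperature (V)

def subthreshold_current(vgs, vth=0.0, i0=1e-7, n=1.3):
    """Main-channel weak-inversion current (A): exponential in VGS."""
    return i0 * math.exp((vgs - vth) / (n * VT))

def punchthrough_current(vds, leff_um, nsub_rel, ip0=1e-9, v0=0.5):
    """Illustrative punch-through leakage: grows with VDS, suppressed by
    longer channels and heavier doping (assumed exponential dependences)."""
    return ip0 * math.exp(vds / v0) * math.exp(-leff_um * nsub_rel)

def drain_current(vgs, vds, leff_um, nsub_rel):
    # Total current: channel current plus the source-drain leakage floor.
    return subthreshold_current(vgs) + punchthrough_current(vds, leff_um, nsub_rel)
```

The qualitative behavior matches the abstract: for VGS well below threshold the punch-through term dominates and flattens the subthreshold characteristic, and lengthening the channel or raising the doping pushes that floor down.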
1997-10-01
It is widely recognized that cascade models are potentially effective and powerful tools for interpreting and predicting multi-particle observables in heavy ion physics. However, the lack of common standards, documentation, version control, and accessibility has made it difficult to apply objective scientific criteria for evaluating the many physical and algorithmic assumptions, or even to reproduce some published results. The first RIKEN Research Center workshop was proposed by Yang Pang to address this problem by establishing open standards for original codes for applications to nuclear collisions at RHIC energies. The aims of this first workshop are: (1) to prepare a WWW depository site for original source codes and detailed documentation with examples; (2) to develop and perform standardized tests for the models, such as Lorentz invariance, kinetic theory comparisons, and thermodynamic simulations; (3) to publish a compilation of results of the above work in a journal, e.g., Heavy Ion Physics; and (4) to establish a policy statement on a set of minimal requirements for inclusion in the OSCAR-WWW depository.
A Search for the Standard Model Higgs Boson Produced in Association with a $W$ Boson
Frank, Martin Johannes
2011-05-01
We present a search for a standard model Higgs boson produced in association with a W boson using data collected with the CDF II detector from pp̄ collisions at √s = 1.96 TeV. The search is performed in the WH → ℓνbb̄ channel. The two quarks usually fragment into two jets, but sometimes a third jet can be produced via gluon radiation, so we have increased the standard two-jet sample by including events that contain three jets. We reconstruct the Higgs boson using two or three jets depending on the kinematics of the event. We find an improvement in our search sensitivity using the larger sample together with this multijet reconstruction technique. Our data show no evidence of a Higgs boson, so we set 95% confidence level upper limits on the WH production rate. We set limits between 3.36 and 28.7 times the standard model prediction for Higgs boson masses ranging from 100 to 150 GeV/c².
The framed Standard Model (I) — A physics case for framing the Yang-Mills theory?
NASA Astrophysics Data System (ADS)
Chan, Hong-Mo; Tsou, Sheung Tsun
2015-10-01
Introducing, in the underlying gauge theory of the Standard Model, the frame vectors in internal space as field variables (framons), in addition to the usual gauge boson and matter fermion fields, one obtains: the standard Higgs scalar as the framon in the electroweak sector; a global \widetilde{su}(3) symmetry dual to colour to play the role of fermion generations. Renormalization via framon loops changes the orientation in generation space of the vacuum, hence also of the mass matrices of leptons and quarks, thus making them rotate with changing scale μ. From previous work, it is known already that a rotating mass matrix will lead automatically to: CKM mixing and neutrino oscillations, hierarchical masses for quarks and leptons, a solution to the strong-CP problem transforming the theta-angle into a Kobayashi-Maskawa phase. Here in the framed standard model (FSM), the renormalization group equation has some special properties which explain the main qualitative features seen in experiment both for mixing matrices of quarks and leptons, and for their mass spectrum. Quantitative results will be given in Paper II. The present paper ends with some tentative predictions on Higgs decay, and with some speculations on the origin of dark matter.
Electric currents in flare ribbons: Observations and three-dimensional standard model
Janvier, M.; Aulanier, G.; Bommier, V.; Schmieder, B.; Démoulin, P.; Pariat, E.
2014-06-10
We present for the first time the evolution of the photospheric electric currents during an eruptive X-class flare, accurately predicted by the standard three-dimensional (3D) flare model. We analyze this evolution for the 2011 February 15 flare using Helioseismic and Magnetic Imager/Solar Dynamics Observatory magnetic observations and find that localized currents in J-shaped ribbons increase to double their pre-flare intensity. Our 3D flare model, developed with the OHM code, suggests that these current ribbons, which develop at the location of extreme ultraviolet brightenings seen with Atmospheric Imaging Assembly imagery, are driven by the collapse of the flare's coronal current layer. These findings of increased currents restricted in localized ribbons are consistent with the overall free energy decrease during a flare, and the shapes of these ribbons also give an indication of how twisted the erupting flux rope is. Finally, this study further enhances the close correspondence obtained between the theoretical predictions of the standard 3D model and flare observations, indicating that the main key physical elements are incorporated in the model.
V3885 SAGITTARIUS: A COMPARISON WITH A RANGE OF STANDARD MODEL ACCRETION DISKS
Linnell, Albert P.; Szkody, Paula; Godon, Patrick; Sion, Edward M.; Hubeny, Ivan; Barrett, Paul E.
2009-10-01
A χ̃² analysis of standard model accretion disk synthetic spectrum fits to combined Far Ultraviolet Spectroscopic Explorer and Space Telescope Imaging Spectrograph spectra of V3885 Sagittarius, on an absolute flux basis, selects a model that accurately represents the observed spectral energy distribution. Calculation of the synthetic spectrum requires the following system parameters. The cataclysmic variable secondary star period-mass relation calibrated by Knigge in 2006 and 2007 sets the secondary component mass. A mean white dwarf (WD) mass from the same study, which is consistent with an observationally determined mass ratio, sets the adopted WD mass of 0.7 M_⊙, and the WD radius follows from standard theoretical models. The adopted inclination, i = 65°, is a literature consensus, and is subsequently supported by the χ̃² analysis. The mass transfer rate is the remaining parameter to set the accretion disk T_eff profile, and the Hipparcos parallax constrains that parameter to Ṁ = (5.0 ± 2.0) × 10⁻⁹ M_⊙ yr⁻¹ by comparison with observed spectra. The fit to the observed spectra adopts the contribution of a 57,000 ± 5000 K WD. The model thus provides realistic constraints on Ṁ and T_eff for a large-Ṁ system above the period gap.
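The model-selection step described above, picking the synthetic-spectrum grid entry that best matches the observed fluxes, reduces to a reduced chi-squared comparison. A generic sketch of that computation (not the authors' code; flux arrays and parameter counts are illustrative):

```python
import numpy as np

def reduced_chi2(flux_obs, sigma, flux_model, n_params):
    """Reduced chi-squared of one synthetic spectrum against observed fluxes."""
    chi2 = np.sum(((flux_obs - flux_model) / sigma) ** 2)
    return chi2 / (flux_obs.size - n_params)  # divide by degrees of freedom

def select_model(flux_obs, sigma, models, n_params):
    """Return the index of the candidate model with the lowest reduced chi2,
    together with all the scores."""
    scores = [reduced_chi2(flux_obs, sigma, m, n_params) for m in models]
    return int(np.argmin(scores)), scores
```

In the study the candidates are disk spectra computed over a grid of mass transfer rates (plus the fixed WD contribution), so the winning index directly yields the quoted Ṁ constraint.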
V3885 Sagittarius: A Comparison With a Range of Standard Model Accretion Disks
NASA Technical Reports Server (NTRS)
Linnell, Albert P.; Godon, Patrick; Hubeny, Ivan; Sion, Edward M; Szkody, Paula; Barrett, Paul E.
2009-01-01
A chi-squared analysis of standard model accretion disk synthetic spectrum fits to combined Far Ultraviolet Spectroscopic Explorer and Space Telescope Imaging Spectrograph spectra of V3885 Sagittarius, on an absolute flux basis, selects a model that accurately represents the observed spectral energy distribution. Calculation of the synthetic spectrum requires the following system parameters. The cataclysmic variable secondary star period-mass relation calibrated by Knigge in 2006 and 2007 sets the secondary component mass. A mean white dwarf (WD) mass from the same study, which is consistent with an observationally determined mass ratio, sets the adopted WD mass of 0.7 M_sun, and the WD radius follows from standard theoretical models. The adopted inclination, i = 65 deg, is a literature consensus, and is subsequently supported by the chi-squared analysis. The mass transfer rate is the remaining parameter needed to set the accretion disk T_eff profile, and the Hipparcos parallax constrains that parameter to a mass transfer rate of (5.0 +/- 2.0) x 10^-9 M_sun/yr by comparison with observed spectra. The fit to the observed spectra adopts the contribution of a 57,000 +/- 5000 K WD. The model thus provides realistic constraints on the mass transfer rate and T_eff for a high mass transfer rate system above the period gap.
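The model selection step described above — scoring a grid of synthetic disk spectra against the observed spectral energy distribution by chi-squared and keeping the best — can be sketched as follows. This is a generic illustration with made-up flux arrays and a hypothetical helper name, not the actual TLUSTY/SYNSPEC grid or the V3885 Sgr data:

```python
import numpy as np

def reduced_chi2(observed, model, sigma):
    """Reduced chi-squared of one synthetic spectrum against the observed fluxes."""
    return float(np.sum(((observed - model) / sigma) ** 2) / (observed.size - 1))

def select_best_model(observed, sigma, models):
    """Pick the grid entry (e.g. keyed by mass transfer rate) with the lowest chi-squared."""
    scores = {name: reduced_chi2(observed, flux, sigma) for name, flux in models.items()}
    best = min(scores, key=scores.get)
    return best, scores
```

In practice the grid key would encode the parameter being varied (here the mass transfer rate), and the flux arrays would be synthetic spectra evaluated on the observed wavelength grid.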
NASA Astrophysics Data System (ADS)
Godon, Patrick
Many ultraviolet spectra of cataclysmic variables (CVs) with a white dwarf (WD) accreting at a high rate have been difficult, even impossible, to model with standard disk models. The standard disk models appear to be too blue in comparison to the observed spectra. We propose to carry out a systematic and consistent analysis of archival ultraviolet spectra of 90 CVs using a truncated inner disk model (based on and backed by observational data and theoretical results). We use the synthetic stellar spectra codes TLUSTY and SYNSPEC to generate these synthetic spectra. Deriving mass accretion rates for CVs will advance the theories of CV evolution as well as shed light on the physics of accretion disks. As a by-product, we will make our theoretical spectra publicly available online. This will be of invaluable importance to future NASA UV missions. The WD is the most common end-product of stellar evolution and the accretion disk is the most common universal structure resulting from mass transfer with angular momentum, and both can be observed in CVs in the UV. As a consequence, an understanding of accretion in CV systems is the first step toward a global understanding of accretion in other systems throughout the universe, ranging from young stellar objects and galactic binaries to AGN. This ADP proposal addresses the NASA Strategic Goals and Science Outcomes 3D: Discover the origin, structure, evolution, and destiny of the universe, and search for Earth-like planets.
Twisted Spectral Triple for the Standard Model and Spontaneous Breaking of the Grand Symmetry
NASA Astrophysics Data System (ADS)
Devastato, Agostino; Martinetti, Pierre
2017-03-01
Grand symmetry models in noncommutative geometry, characterized by a non-trivial action of functions on spinors, have been introduced to generate minimally (i.e. without adding new fermions), and in agreement with the first-order condition, an extra scalar field beyond the standard model, which both stabilizes the electroweak vacuum and makes the computation of the mass of the Higgs compatible with its experimental value. In this paper, we use a twist in the sense of Connes-Moscovici to cure a technical problem due to the non-trivial action on spinors, namely the appearance, together with the extra scalar field, of unbounded vectorial terms. The twist makes these terms bounded and, thanks to a twisted version of the first-order condition that we introduce here, also makes it possible to understand the breaking to the standard model as a dynamical process induced by the spectral action, as conjectured in [24]. This is a spontaneous breaking from a pre-geometric Pati-Salam model to the almost-commutative geometry of the standard model, with two Higgs-like fields: one scalar and one vector.
NASA Astrophysics Data System (ADS)
Signell, R. P.; Camossi, E.
2015-11-01
Work over the last decade has resulted in standardized web-services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by: (1) making it simple for providers to enable web service access to existing output files; (2) using technology that is free, and that is easy to deploy and configure; and (3) providing tools to communicate with web services that work in existing research environments. We present a simple, local brokering approach that lets modelers continue producing custom data, but virtually aggregates and standardizes the data using NetCDF Markup Language. The THREDDS Data Server is used for data delivery, pycsw for data search, NCTOOLBOX (Matlab®1) and Iris (Python) for data access, and Ocean Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.1 Mention of trade names or commercial products does not constitute endorsement or recommendation for use by the US Government.
b --> s l+ l- Decays in and Beyond the Standard Model
Hiller, Gudrun
2000-08-11
The authors briefly review the status of rare radiative and semileptonic b --> s (gamma, l+ l-), (l = e, mu) decays. They discuss possible signatures of new physics in these modes and emphasize the role of the exclusive channels. In particular, measurements of the forward-backward asymmetry in B --> K* l+ l- decays, and of its zero, provide a clean test of the Standard Model, complementary to studies in b --> s gamma decays. Further, the forward-backward CP asymmetry in B --> K* l+ l- decays is sensitive to possible non-standard sources of CP violation mediated by flavor-changing neutral current Z-penguins.
Standard-Model Tests with Superallowed beta Decay: Nuclear Data Applied to Fundamental Physics
Hardy, J.C.
2005-05-24
The study of superallowed nuclear beta decay currently provides the most precise and convincing confirmation of the conservation of the vector current (CVC) and is a key component of the most demanding available test of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, a basic pillar of the Electroweak Standard Model. Experimentally, the Q-value, half-life, and branching ratio for superallowed transitions must be determined with a precision better than 0.1%. This demands that metrological techniques be applied to short-lived (~1 s) activities and that strict standards be employed in surveying the body of world data. The status of these fundamental studies is summarized and recent work described.
NASA Technical Reports Server (NTRS)
Hildreth, Bruce L.; Jackson, E. Bruce
2009-01-01
The American Institute of Aeronautics and Astronautics (AIAA) Modeling and Simulation Technical Committee is in final preparation of a new standard for the exchange of flight dynamics models. The standard will become an ANSI standard and is under consideration for submission to ISO for acceptance by the international community. The standard has some aspects that should provide benefits to the simulation training community. Use of the new standard by the training simulation community will reduce development, maintenance and technical refresh investment on each device. Furthermore, it will significantly lower the cost of performing model updates to improve fidelity or expand the envelope of the training device. Higher flight fidelity should result in better transfer of training, a direct benefit to the pilots under instruction. Costs of adopting the standard are minimal and should be paid back within the cost of the first use for that training device. The standard achieves these advantages by making it easier to update the aerodynamic model. It provides a standard format for the model in a custom eXtensible Markup Language (XML) grammar, the Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML). It employs an existing XML grammar, MathML, to describe the aerodynamic model in an input data file, eliminating the requirement for actual software compilation. The major components of the aero model become simply an input data file, and updates are simply new XML input files. It includes naming and axis system conventions to further simplify the exchange of information.
A Simple Mathematical Model for Standard Model of Elementary Particles and Extension Thereof
NASA Astrophysics Data System (ADS)
Sinha, Ashok
2016-03-01
An algebraically (and geometrically) simple model representing the masses of the elementary particles in terms of the interaction (strong, weak, electromagnetic) constants is developed, including the Higgs bosons. The predicted Higgs boson mass is identical to that discovered by LHC experimental programs; while possibility of additional Higgs bosons (and their masses) is indicated. The model can be analyzed to explain and resolve many puzzles of particle physics and cosmology including the neutrino masses and mixing; origin of the proton mass and the mass-difference between the proton and the neutron; the big bang and cosmological Inflation; the Hubble expansion; etc. A novel interpretation of the model in terms of quaternion and rotation in the six-dimensional space of the elementary particle interaction-space - or, equivalently, in six-dimensional spacetime - is presented. Interrelations among particle masses are derived theoretically. A new approach for defining the interaction parameters leading to an elegant and symmetrical diagram is delineated. Generalization of the model to include supersymmetry is illustrated without recourse to complex mathematical formulation and free from any ambiguity. This Abstract represents some results of the Author's Independent Theoretical Research in Particle Physics, with possible connection to the Superstring Theory. However, only very elementary mathematics and physics is used in my presentation.
NASA Technical Reports Server (NTRS)
Vetrone, Robert H.
1993-01-01
The topics are presented in viewgraph form and include the following: the Electric Propulsion Research Building (No. 16); the Electric Power Laboratory (Bldg. 301); the Tank 6 Vacuum Facility; and test facilities for electric propulsion at LeRC.
Perez, Hector R.; Stoeckle, James H.
2016-01-01
Abstract — Objective: To provide an update on the epidemiology, heritability, pathophysiology, diagnosis, and treatment of developmental stuttering. Quality of evidence: The MEDLINE and Cochrane databases were searched for recent and older studies on the epidemiology, heritability, pathophysiology, diagnosis, and treatment of developmental stuttering. Most recommendations are based on small studies, evidence of limited quality, or consensus. Main message: Stuttering is a common speech disorder in people of all ages that impairs normal verbal fluency and the flow of speech. Stuttering has been linked to differences in brain anatomy, function, and dopaminergic regulation that are thought to be of genetic origin. Careful diagnosis and appropriate referral of children are important, as there is growing consensus that early intervention with speech therapy is crucial for children who stutter. In adults, stuttering is associated with substantial psychosocial morbidity, including social anxiety and poor quality of life. Pharmacologic treatments have drawn interest in recent years, but clinical evidence is limited. Treatment for both children and adults rests on speech therapy. Conclusion: A growing body of research has attempted to uncover the pathophysiology of stuttering. Referral for speech therapy remains the best option for children and adults who stutter.
Mid-infrared interferometry of Seyfert galaxies: Challenging the Standard Model
NASA Astrophysics Data System (ADS)
López-Gonzaga, N.; Jaffe, W.
2016-06-01
Aims: We aim to find torus models that explain the observed high-resolution mid-infrared (MIR) measurements of active galactic nuclei (AGN). Our goal is to determine the general properties of the circumnuclear dusty environments. Methods: We used the MIR interferometric data of a sample of AGNs provided by the instrument MIDI/VLTI and followed a statistical approach to compare the observed distribution of the interferometric measurements with the distributions computed from clumpy torus models. We mainly tested whether the diversity of Seyfert galaxies can be described using the Standard Model idea, where differences are solely due to a line-of-sight (LOS) effect. In addition to the LOS effects, we performed different realizations of the same model to include possible variations that are caused by the stochastic nature of the dusty models. Results: We find that our entire sample of AGNs, which contains both Seyfert types, cannot be explained merely by an inclination effect and by including random variations of the clouds. Instead, we find that each subset of Seyfert type can be explained by different models, where the filling factor at the inner radius seems to be the largest difference. For the type 1 objects we find that about two thirds of our objects could also be described using a dusty torus similar to the type 2 objects. For the remaining third, it was not possible to find a good description using models with high filling factors, while we found good fits with models with low filling factors. Conclusions: Within our model assumptions, we did not find one single set of model parameters that could simultaneously explain the MIR data of all 21 AGN with LOS effects and random variations alone. We conclude that at least two distinct cloud configurations are required to model the differences in Seyfert galaxies, with volume-filling factors differing by a factor of about 5-10. A continuous transition between the two types cannot be excluded.
Resonant leptogenesis in the minimal B-L extended standard model at TeV
Iso, Satoshi; Orikasa, Yuta; Okada, Nobuchika
2011-05-01
We investigate the resonant leptogenesis scenario in the minimal B-L extended standard model with the B-L symmetry breaking at the TeV scale. Through detailed analysis of the Boltzmann equations, we show how much the resultant baryon asymmetry via leptogenesis is enhanced or suppressed, depending on the model parameters, in particular, the neutrino Dirac-Yukawa couplings and the TeV scale Majorana masses of heavy degenerate neutrinos. In order to consider a realistic case, we impose a simple ansatz for the model parameters and analyze the neutrino oscillation parameters and the baryon asymmetry via leptogenesis as a function of only a single CP phase. We find that for a fixed CP phase all neutrino oscillation data and the observed baryon asymmetry of the present Universe can be simultaneously reproduced.
Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases
NASA Technical Reports Server (NTRS)
Woodruff, Stephen
2016-01-01
NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.
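The peak-Reynolds-shear-stress measure used above amounts to, at each stream-wise station, locating the wall-normal maximum of -<u'v'> and tracking it from station to station. A minimal sketch with a synthetic profile (hypothetical helper name, not the VULCAN-CFD post-processing):

```python
import numpy as np

def peak_reynolds_shear_stress(y, uv):
    """Wall-normal location and magnitude of the peak of -<u'v'> in one profile.
    Applying this station by station gives the stream-wise evolution of the peak."""
    i = int(np.argmax(-uv))
    return float(y[i]), float(-uv[i])
```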
Criticality and the onset of ordering in the standard Vicsek model
Baglietto, Gabriel; Albano, Ezequiel V.; Candia, Julián
2012-01-01
Experimental observations of animal collective behaviour have shown stunning evidence for the emergence of large-scale cooperative phenomena resembling phase transitions in physical systems. Indeed, quantitative studies have found scale-free correlations and critical behaviour consistent with the occurrence of continuous, second-order phase transitions. The standard Vicsek model (SVM), a minimal model of self-propelled particles in which their tendency to align with each other competes with perturbations controlled by a noise term, appears to capture the essential ingredients of critical flocking phenomena. In this paper, we review recent finite-size scaling and dynamical studies of the SVM, which present a full characterization of the continuous phase transition through dynamical and critical exponents. We also present a complex network analysis of SVM flocks and discuss the onset of ordering in connection with XY-like spin models. PMID:24312724
A perturbative approach to the redshift space power spectrum: beyond the Standard Model
NASA Astrophysics Data System (ADS)
Bose, Benjamin; Koyama, Kazuya
2016-08-01
We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A.Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to the third order. Focus is on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at relevant scales for the SPT framework, as well as capturing relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application within the light of upcoming high precision RSD data.
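On large scales, any such redshift-space code must reduce to the linear-theory (Kaiser) limit, whose multipoles are simple algebraic factors times the linear power spectrum. A sketch of that limit only (illustrative parameter values; this is not the 1-loop SPT kernel machinery the abstract describes):

```python
import numpy as np

def kaiser_multipoles(p_lin, b=2.0, f=0.5):
    """Monopole, quadrupole and hexadecapole of the redshift-space power
    spectrum in the linear (Kaiser) limit, for linear bias b and growth rate f."""
    beta = f / b
    p0 = (1.0 + 2.0 * beta / 3.0 + beta ** 2 / 5.0) * b ** 2 * p_lin
    p2 = (4.0 * beta / 3.0 + 4.0 * beta ** 2 / 7.0) * b ** 2 * p_lin
    p4 = (8.0 * beta ** 2 / 35.0) * b ** 2 * p_lin
    return p0, p2, p4
```

Setting f = 0 switches off redshift-space distortions, leaving only the real-space biased spectrum in the monopole — a useful consistency check of the kind mentioned in the abstract.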
Criticality and the onset of ordering in the standard Vicsek model.
Baglietto, Gabriel; Albano, Ezequiel V; Candia, Julián
2012-12-06
Experimental observations of animal collective behaviour have shown stunning evidence for the emergence of large-scale cooperative phenomena resembling phase transitions in physical systems. Indeed, quantitative studies have found scale-free correlations and critical behaviour consistent with the occurrence of continuous, second-order phase transitions. The standard Vicsek model (SVM), a minimal model of self-propelled particles in which their tendency to align with each other competes with perturbations controlled by a noise term, appears to capture the essential ingredients of critical flocking phenomena. In this paper, we review recent finite-size scaling and dynamical studies of the SVM, which present a full characterization of the continuous phase transition through dynamical and critical exponents. We also present a complex network analysis of SVM flocks and discuss the onset of ordering in connection with XY-like spin models.
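A minimal sketch of one SVM update — each particle adopting the mean heading of neighbours within radius r, perturbed by a scalar noise term, on a periodic box — together with the polar order parameter used to characterize the transition. Parameter values and function names are ours, for illustration only:

```python
import numpy as np

def vicsek_step(pos, theta, L, r=1.0, eta=0.1, v0=0.03, rng=None):
    """One synchronous update of the standard Vicsek model on a periodic L x L box."""
    rng = np.random.default_rng() if rng is None else rng
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                      # minimum-image convention
    neigh = (d ** 2).sum(-1) <= r ** 2            # neighbour mask (includes self)
    sin_sum = (neigh * np.sin(theta)[None, :]).sum(1)
    cos_sum = (neigh * np.cos(theta)[None, :]).sum(1)
    # mean neighbour heading plus scalar noise uniform in [-eta/2, eta/2]
    theta_new = np.arctan2(sin_sum, cos_sum) + eta * (rng.random(theta.size) - 0.5)
    pos_new = (pos + v0 * np.stack([np.cos(theta_new), np.sin(theta_new)], axis=1)) % L
    return pos_new, theta_new

def order_parameter(theta):
    """Polar order parameter: ~1 for an aligned flock, ~0 for a disordered gas."""
    return float(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```

Scanning eta (or density) while monitoring the order parameter over long runs is the basic numerical experiment behind the finite-size scaling studies reviewed in the abstract.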
Special Mixing of the Neutral Higgs Bosons States in the Standard Model with Two Higgs Doublets
Juarez W, S. R.; Morales C, D.
2008-07-02
The Higgs sector of the Standard Model (SM) requires careful investigation in order to look for new physics. In the SM the masses of the physical particles arise, after the spontaneous symmetry breaking (SSB), through their couplings with a single Higgs doublet. In the simplest extension of the SM, called the Two Higgs Doublet Model (2HDM), a second Higgs doublet is introduced, with a potential dependent on seven parameters, which are related to five Higgs bosons whose existence is predicted by the model. In this context, we obtain the masses and the physical eigenstates of the new scalar particles: two charged ones (H+, H-) and three neutral ones (A0, h0, H0). We explore a particular situation in which very simple relations between the parameters and the masses are satisfied.
Ignition-and-Growth Modeling of NASA Standard Detonator and a Linear Shaped Charge
NASA Technical Reports Server (NTRS)
Oguz, Sirri
2010-01-01
The main objective of this study is to quantitatively investigate the ignition and shock sensitivity of NASA Standard Detonator (NSD) and the shock wave propagation of a linear shaped charge (LSC) after being shocked by NSD flyer plate. This combined explosive train was modeled as a coupled Arbitrary Lagrangian-Eulerian (ALE) model with LS-DYNA hydro code. An ignition-and-growth (I&G) reactive model based on unreacted and reacted Jones-Wilkins-Lee (JWL) equations of state was used to simulate the shock initiation. Various NSD-to-LSC stand-off distances were analyzed to calculate the shock initiation (or failure to initiate) and detonation wave propagation along the shaped charge. Simulation results were verified by experimental data which included VISAR tests for NSD flyer plate velocity measurement and an aluminum target severance test for LSC performance verification. Parameters used for the analysis were obtained from various published data or by using CHEETAH thermo-chemical code.
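In an ignition-and-growth model, the unreacted explosive and the detonation products are each described by a Jones-Wilkins-Lee equation of state, P(V, E) = A(1 - w/(R1 V)) e^{-R1 V} + B(1 - w/(R2 V)) e^{-R2 V} + wE/V, with V the relative volume. A sketch of evaluating that form, with illustrative constants (not the calibrated parameters of the NSD or LSC explosives in this study):

```python
import math

def jwl_pressure(v, e, a=611.0, b=10.7, r1=4.4, r2=1.2, omega=0.32):
    """JWL equation-of-state pressure in GPa, for relative volume v and
    internal energy per unit initial volume e (GPa). The default constants
    are generic illustrative values for a secondary explosive."""
    return (a * (1.0 - omega / (r1 * v)) * math.exp(-r1 * v)
            + b * (1.0 - omega / (r2 * v)) * math.exp(-r2 * v)
            + omega * e / v)
```

The hydrocode evaluates one such surface for the unreacted solid and one for the products, mixing them according to the reacted mass fraction computed by the ignition-and-growth rate law.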
NASA Astrophysics Data System (ADS)
Gronewold, A. D.; Ritzenthaler, A.; Fry, L. M.; Anderson, E. J.
2012-12-01
There is a clear need in the water resource and public health management communities to develop and test modeling systems which provide robust predictions of water quality and water quality standard violations, particularly in coastal communities. These predictions have the potential to supplement, or even replace, conventional human health protection strategies which (in the case of controlling public access to beaches, for example) are often based on day-old fecal indicator bacteria monitoring results. Here, we present a coupled modeling system which builds upon recent advancements in watershed-scale hydrological modeling and coastal hydrodynamic modeling, including the evolution of the Huron-Erie Connecting Waterways Forecasting System (HECWFS), developed through a partnership between NOAA's Great Lakes Environmental Research Laboratory (GLERL) and the University of Michigan Cooperative Institute for Limnology and Ecosystems Research (CILER). Our study is based on applying the modeling system to a popular beach in the metro-Detroit (Michigan, USA) area and implementing a routine shoreline monitoring program to help assess model forecasting skill. This research presents an important stepping stone towards the application of similar modeling systems in frequently-closed beaches throughout the Great Lakes region.
NASA Astrophysics Data System (ADS)
Meisel, David D.; Szasz, Csilla; Kero, Johan
2008-06-01
The Arecibo UHF radar is able to detect the head-echos of micron-sized meteoroids up to velocities of 75 km/s over a height range of 80-140 km. Because of their small size there are many uncertainties involved in calculating their above-atmosphere properties as needed for orbit determination. An ab initio model of meteor ablation has been devised that should work over the mass range 10^-16 kg to 10^-7 kg, but the faint end of this range cannot be observed by any other method and so direct verification is not possible. On the other hand, the EISCAT UHF radar system detects micrometeors in the high-mass part of this range and its observations can be fit to a "standard" ablation model and calibrated to optical observations (Szasz et al. 2007). In this paper, we present a preliminary comparison of the two models, one observationally confirmable. Among the features of the ab initio model that differ from the "standard" model are: (1) it uses the experimentally based low-pressure vaporization theory of O'Hanlon (A User's Guide to Vacuum Technology, 2003) for ablation; (2) it uses velocity-dependent functions fit from experimental data on heat transfer, luminosity and ionization efficiencies measured by Friichtenicht and Becker (NASA Special Publication 319: 53, 1973) for micron-sized particles; (3) it assumes a density and temperature dependence of the specific heats of the micrometeoroids and their ablation products; (4) it assumes a density- and size-dependent value for the thermal emissivity; and (5) it uses a unified synthesis of experimental data for the most important meteoroid elements and their oxides through least-squares fits (as functions of temperature, density, and/or melting point) of the tables of thermodynamic parameters given in Weast (CRC Handbook of Physics and Chemistry, 1984), Gray (American Institute of Physics Handbook, 1972), and Cox (Allen's Astrophysical Quantities, 2000). This utilization of mostly experimentally determined data is the main reason for
Chatrchyan, Serguei; et al.,
2014-01-21
A search for the standard model Higgs boson (H) decaying to b b-bar when produced in association with a weak vector boson (V) is reported for the following channels: W(mu nu)H, W(e nu)H, W(tau nu)H, Z(mu mu)H, Z(e e)H, and Z(nu nu)H. The search is performed in data samples corresponding to integrated luminosities of up to 5.1 inverse femtobarns at sqrt(s) = 7 TeV and up to 18.9 inverse femtobarns at sqrt(s) = 8 TeV, recorded by the CMS experiment at the LHC. An excess of events is observed above the expected background with a local significance of 2.1 standard deviations for a Higgs boson mass of 125 GeV, consistent with the expectation from the production of the standard model Higgs boson. The signal strength corresponding to this excess, relative to that of the standard model Higgs boson, is 1.0 +/- 0.5.
Charge quantization and the Standard Model from the CP2 and CP3 nonlinear σ-models
NASA Astrophysics Data System (ADS)
Hellerman, Simeon; Kehayias, John; Yanagida, Tsutomu T.
2014-04-01
We investigate charge quantization in the Standard Model (SM) through a CP2 nonlinear sigma model (NLSM), SU(3)/(SU(2)×U(1)), and a CP3 model, SU(4)/(SU(3)×U(1)). We also generalize to any CPk model. Charge quantization follows from the consistency and dynamics of the NLSM, without a monopole or Grand Unified Theory, as shown in our earlier work on the CP1 model (arXiv:1309.0692). We find that representations of the matter fields under the unbroken non-abelian subgroup dictate their charge quantization under the U(1) factor. In the CP2 model the unbroken group is identified with the weak and hypercharge groups of the SM, and the Nambu-Goldstone boson (NGB) has the quantum numbers of a SM Higgs. There is the intriguing possibility of a connection with the vanishing of the Higgs self-coupling at the Planck scale. Interestingly, with some minor assumptions (no vector-like matter and minimal representations) and starting with a single quark doublet, anomaly cancellation requires the matter structure of a generation in the SM. A similar analysis holds in the CP3 model, with the unbroken group identified with QCD and hypercharge, and the NGB having the up quark as a partner in a supersymmetric model. This can motivate solving the strong CP problem with a vanishing up quark mass.
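The anomaly-cancellation statement above — that the hypercharges of one SM generation, written as left-handed Weyl fermions, sum to zero both linearly and cubically — is easy to verify directly with exact rational arithmetic. A sketch using the standard SM hypercharge assignments (textbook values, not numbers taken from this paper):

```python
from fractions import Fraction as F

# One SM generation as left-handed Weyl fermions: (hypercharge Y, multiplicity)
GENERATION = [
    (F(1, 6), 6),    # quark doublet Q (3 colours x 2 isospin)
    (F(-2, 3), 3),   # up antiquark u^c
    (F(1, 3), 3),    # down antiquark d^c
    (F(-1, 2), 2),   # lepton doublet L
    (F(1, 1), 1),    # positron e^c
]

def anomaly_sums(fields):
    """Mixed gravitational (sum Y) and cubic (sum Y^3) U(1)_Y anomaly coefficients."""
    return (sum(n * y for y, n in fields),
            sum(n * y ** 3 for y, n in fields))
```

Both sums vanish only for the full generation; dropping any field leaves a nonzero coefficient, which is the sense in which anomaly cancellation "requires" the SM matter structure.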
NASA Astrophysics Data System (ADS)
Peng, Qiu-He; Liu, Jing-Jing; Chou, Chi-Kang
2016-12-01
Recent observational evidence indicates that the center of our Milky Way galaxy harbors a super-massive object with an ultra-strong radial magnetic field (Eatough et al. in Nature 591:391, 2013). Here we demonstrate that the radiations observed in the vicinity of the Galactic Center (GC) (Falcke and Markoff in arXiv:1311.1841v1, 2013) cannot be emitted by the gas of the accretion disk, since the accreting plasma is prevented from approaching the GC by the abnormally strong radial magnetic field. These fields obstruct the infalling accretion flow from the inner region of the disk and the central massive black hole in the standard model. It is therefore expected that the observed radiations near the GC cannot be generated by the central black hole. We also demonstrate that the observed ultra-strong radial magnetic field near the GC (Eatough et al. in Nature 591:391, 2013) cannot be generated by the generalized α-turbulence type dynamo mechanism, since a preliminary qualitative estimate in terms of this mechanism gives a magnetic field strength six orders of magnitude smaller than the observed field strength at r = 0.12 pc. However, both these difficulties of the standard model can be overcome if the central black hole in the standard model is replaced by a model of a super-massive star with magnetic monopoles (SMSMM) (Peng and Chou in Astrophys. J. Lett. 551:23, 2001). Five predictions about the GC have been proposed in the SMSMM model, and three of them are quantitatively consistent with the observations. They are: (1) plenty of positrons are produced, at a rate of about 6×10^42 e+ s^-1; this prediction is confirmed by observation (Knödlseder et al. 2003); (2) the lower limit of the observed ultra-strong radial magnetic field near the GC (Eatough et al. in Nature 591:391, 2013) is in good agreement with the radial magnetic field predicted by the SMSMM model, which is an exclusive and key prediction; (3) The
NASA Technical Reports Server (NTRS)
Sakuraba, K.; Tsuruda, Y.; Hanada, T.; Liou, J.-C.; Akahoshi, Y.
2007-01-01
This paper summarizes two new satellite impact tests conducted in order to investigate the outcome of low- and hyper-velocity impacts on two identical target satellites. The first experiment was performed at a low velocity of 1.5 km/s using a 40-gram aluminum alloy sphere, whereas the second experiment was performed at a hyper-velocity of 4.4 km/s using a 4-gram aluminum alloy sphere, with a two-stage light gas gun at the Kyushu Institute of Technology. To date, approximately 1,500 fragments from each impact test have been collected for detailed analysis. Each piece was analyzed based on the method used in the NASA Standard Breakup Model 2000 revision. The detailed analysis will conclude: 1) the similarity in mass distribution of fragments between low- and hyper-velocity impacts encourages the development of a general-purpose distribution model applicable over a wide impact velocity range, and 2) the difference in area-to-mass ratio distribution between the impact experiments and the NASA standard breakup model suggests describing the area-to-mass ratio by a bi-normal distribution.
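For reference, the NASA standard breakup model describes catastrophic-collision fragments with the power-law cumulative size distribution N(>Lc) = 0.1 M^0.75 Lc^-1.71, where M is the total mass involved (kg) and Lc the characteristic length (m). A sketch of that size law alone (our helper name; the model's area-to-mass and delta-velocity distributions are omitted):

```python
def fragments_larger_than(l_c, m_tot):
    """Cumulative number of fragments with characteristic length > l_c metres
    from a catastrophic collision involving total mass m_tot (kg), per the
    NASA standard breakup model power law N(>Lc) = 0.1 M^0.75 Lc^-1.71."""
    return 0.1 * m_tot ** 0.75 * l_c ** -1.71
```

Comparing laboratory fragment counts against this cumulative law is essentially the kind of check the analysis above performs.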
Ferrer, Francesc; Krauss, Lawrence M.; Profumo, Stefano
2006-12-01
We explore the prospects for indirect detection of neutralino dark matter in supersymmetric models with an extended Higgs sector (next-to-minimal supersymmetric standard model, or NMSSM). We compute, for the first time, one-loop amplitudes for NMSSM neutralino pair annihilation into two photons and two gluons, and point out that extra diagrams (with respect to the minimal supersymmetric standard model, or MSSM), featuring a potentially light CP-odd Higgs boson exchange, can strongly enhance these radiative modes. Expected signals in neutrino telescopes due to the annihilation of relic neutralinos in the Sun and in the Earth are evaluated, as well as the prospects of detection of a neutralino annihilation signal in space-based gamma-ray, antiproton and positron search experiments, and at low-energy antideuteron searches. We find that in the low mass regime the signals from capture in the Earth are enhanced compared to the MSSM, and that NMSSM neutralinos have a remote possibility of affecting solar dynamics. Also, antimatter experiments are an excellent probe of galactic NMSSM dark matter. We also find enhanced two-photon decay modes that make the possibility of the detection of a monochromatic gamma-ray line within the NMSSM more promising than in the MSSM, although likely below the sensitivity of next generation gamma-ray telescopes.
VoICE: A semi-automated pipeline for standardizing vocal analysis across models
Burkett, Zachary D.; Day, Nancy F.; Peñagarikano, Olga; Geschwind, Daniel H.; White, Stephanie A.
2015-01-01
The study of vocal communication in animal models provides key insight to the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization “types” by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories. PMID:26018425
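The clustering step described above can be sketched in a few lines. The following is a minimal, self-contained illustration of the idea (pairwise similarity scores grouped into vocalization "types" by a similarity cutoff); it is not the actual VoICE code, and the similarity matrix and cutoff are invented for illustration:

```python
# Toy sketch of the VoICE-style idea: score pairwise similarity between
# vocal elements, cluster, then "prune" into types by a similarity cutoff.
# Pure-Python single linkage; a real pipeline would use a library such as
# scipy.cluster.hierarchy. All data and thresholds here are made up.

def single_linkage_clusters(sim, cutoff):
    """Merge items whose similarity >= cutoff (single linkage).
    sim: symmetric matrix with sim[i][j] in [0, 1]. Returns cluster labels."""
    n = len(sim)
    parent = list(range(n))          # each element starts as its own type
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i][j] >= cutoff:  # similar enough -> same vocal "type"
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri
    return [find(i) for i in range(n)]

# Five hypothetical syllables: 0-2 mutually similar, 3-4 similar.
sim = [
    [1.0, 0.9, 0.8, 0.1, 0.2],
    [0.9, 1.0, 0.85, 0.15, 0.1],
    [0.8, 0.85, 1.0, 0.2, 0.1],
    [0.1, 0.15, 0.2, 1.0, 0.95],
    [0.2, 0.1, 0.1, 0.95, 1.0],
]
types = single_linkage_clusters(sim, cutoff=0.7)
# Syllables 0-2 end up sharing one label and 3-4 another.
```

The cutoff plays the role of the automated dendrogram-pruning step: raising it splits the repertoire into more, tighter types.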
Rouvroye, Jan L; Wiegerinck, Jan A M
2006-10-01
In industry, potentially hazardous (technical) structures are equipped with safety systems in order to protect people, the environment, and assets from the consequences of accidents by reducing the probability of incidents occurring. Not only companies but also society will want to know what the effect of these safety measures is: society in terms of "likelihood of undesired events," and companies in addition in terms of "value for money," the expected benefits per dollar or euro invested that these systems provide. As a compromise between the demands of society (the safer the better) and industry (but at what cost), in many countries the government has decided to impose standards on industry with respect to safety requirements. These standards use the average probability of failure on demand as the main performance indicator for these systems and require, for the societal reason given before, that this probability remain below a certain value depending on a given risk. The main factor commonly used in industry to "fine-tune" the average probability of failure on demand for a given system configuration, in order to comply with these standards while limiting financial risk to the company, is "optimizing" the test strategy (interval, coverage, and procedure). In industry, meeting the criterion on the average probability of failure on demand is often demonstrated by using well-accepted mathematical models, such as Markov models from the literature, and adapting them to the actual situation. This paper shows the implications and potential pitfalls of using this common practical approach in a situation where the test strategy is changed. Adapting an existing Markov model can lead to unexpected results, and this paper will demonstrate that a different model has to be developed. In addition, the authors propose an approach that can be applied in industry without suffering from the problems mentioned above.
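For intuition about the performance indicator discussed above, here is a minimal numerical sketch of the simplest possible case: a single channel that fails undetected at a constant rate and is restored only at perfect periodic proof tests. Standards-compliant Markov models track many more states (detected vs. undetected failures, common cause, repair); the failure rate and test interval below are purely illustrative:

```python
# Minimal sketch of the "average probability of failure on demand" (PFDavg)
# computation behind such Markov models: a safety function fails undetected
# at rate lambda_du and is restored only at periodic proof tests. For small
# lambda_du * TI this approaches the textbook result PFDavg ~ lambda_du*TI/2.
# Rates and intervals are illustrative, not taken from any standard.

def pfd_avg(lambda_du, test_interval_h, steps=10000):
    """Time-average unavailability over one test interval (perfect tests)."""
    dt = test_interval_h / steps
    p_failed = 0.0
    acc = 0.0
    for _ in range(steps):
        # two-state Markov step: OK -> failed-undetected at rate lambda_du
        p_failed += (1.0 - p_failed) * lambda_du * dt
        acc += p_failed * dt
    return acc / test_interval_h

lam = 1e-6           # dangerous undetected failures per hour (hypothetical)
ti = 8760.0          # yearly proof test (hours)
estimate = pfd_avg(lam, ti)
approx = lam * ti / 2    # classic small-lambda approximation
# estimate agrees with lambda*TI/2 to within a few percent
```

Changing the test interval `ti` shows directly why "optimizing the test strategy" is the main tuning knob: PFDavg scales roughly linearly with the interval.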
Results on the search for the standard model Higgs boson at CMS
NASA Astrophysics Data System (ADS)
Fabozzi, Francesco; CMS Collaboration
2012-10-01
A summary of the results from searches for the Standard Model Higgs boson with the CMS experiment at the LHC, using data collected from proton-proton collisions at √s = 7 TeV, is presented. The Higgs boson is searched for in a multiplicity of decay channels using data samples corresponding to integrated luminosities in the range 4.6-4.8 fb^-1. The investigated mass range is 110-600 GeV. Results are reported for each channel as well as for their combination.
Gravitational waves from domain walls in the next-to-minimal supersymmetric standard model
Kadota, Kenji; Kawasaki, Masahiro; Saikawa, Ken'ichi E-mail: kawasaki@icrr.u-tokyo.ac.jp
2015-10-01
The next-to-minimal supersymmetric standard model predicts the formation of domain walls due to the spontaneous breaking of the discrete Z_3 symmetry at the electroweak phase transition, and they collapse before the epoch of big bang nucleosynthesis if there exists a small bias term in the potential which explicitly breaks the discrete symmetry. Signatures of gravitational waves produced from these unstable domain walls are estimated and their parameter dependence is investigated. It is shown that the amplitude of gravitational waves becomes generically large in the decoupling limit, and that their frequency is low enough to be probed in future pulsar timing observations.
mr: A C++ library for the matching and running of the Standard Model parameters
NASA Astrophysics Data System (ADS)
Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.
2016-09-01
We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
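As a hedged illustration of what "running" means (not the mr library's API, which performs full three-loop evolution of all Standard Model parameters), the one-loop renormalization group equation for the strong coupling can be solved in closed form:

```python
# One-loop illustration of coupling-constant running: alpha_s evolves with
# scale mu according to d(alpha_s)/d(ln mu) = -b0/(2*pi) * alpha_s^2, with
# b0 = 11 - 2*nf/3. This is far cruder than the three-loop evolution the mr
# library performs; it only shows the qualitative behavior.

import math

def alpha_s(mu, mu0=91.1876, alpha0=0.1181, nf=5):
    """One-loop analytic solution starting from alpha_s(M_Z)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha0 / (1.0 + b0 * alpha0 / (2.0 * math.pi) * math.log(mu / mu0))

# Asymptotic freedom: the coupling shrinks at higher scales.
a_mz = alpha_s(91.1876)   # equals 0.1181 by construction
a_tev = alpha_s(1000.0)   # smaller than at M_Z
```

The library's actual task, matching the MS-bar parameters to low-energy observables before evolving them, has no analog in this tiny sketch.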
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα^2) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by less than about 1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
On the standard model predictions for R_K and R_{K^*}
NASA Astrophysics Data System (ADS)
Bordone, Marzia; Isidori, Gino; Pattori, Andrea
2016-08-01
We evaluate the impact of radiative corrections on the ratios Γ[B → M μ^+μ^-]/Γ[B → M e^+e^-] when the meson M is a K or a K^*. Employing the cuts on m^2_{ℓℓ} and the reconstructed B-meson mass presently applied by the LHCb Collaboration, such corrections do not exceed a few percent. Moreover, their effect is well described (and corrected for) by existing Monte Carlo codes. Our analysis reinforces the interest of these observables as clean probes of physics beyond the Standard Model.
Beyond standard model searches in the MiniBooNE experiment
Katori, Teppei; Conrad, Janet M.
2014-08-05
The MiniBooNE experiment has contributed substantially to beyond-standard-model searches in the neutrino sector. The experiment was originally designed to test the Δm² ~ 1 eV² region of the sterile neutrino hypothesis by observing ν_e (ν̄_e) charged-current quasielastic signals from a ν_μ (ν̄_μ) beam. MiniBooNE observed excesses of ν_e and ν̄_e candidate events in neutrino and antineutrino mode, respectively. To date, these excesses have not been explained within the neutrino standard model (νSM), the standard model extended to include three massive neutrinos; confirmation is required by future experiments such as MicroBooNE. MiniBooNE also provided an opportunity for precision studies of Lorentz violation. The results set strict limits for the first time on several parameters of the standard-model extension, the generic formalism for considering Lorentz violation. Most recently, an extension to MiniBooNE running, with a beam tuned in beam-dump mode, is being performed to search for dark sector particles. This review describes these studies, demonstrating the breadth of beyond-standard-model physics accessible to short-baseline neutrino experiments.
$B^0_s$ and $B^0$ Mixing in the Standard Model and Beyond: A Progress Report
Bouchard, C.; El-Khadra, A.X.; Freeland, E.D.; Gamiz, E.; Kronfeld, A.S.; /Fermilab
2010-11-01
We give a progress report on the calculation of B meson mixing matrix elements, focusing on contributions that could arise beyond the Standard Model. The calculation uses asqtad (light quark) and Fermilab (heavy quark) valence actions and MILC ensembles with 2+1 flavors of asqtad sea quarks. We report preliminary B_s^0 fit results, at a lattice spacing of 0.12 fm, for the SUSY basis of effective four-quark mixing operators and include an estimate for the final error budget.
Group Signatures with Verifier-Local Revocation and Backward Unlinkability in the Standard Model
NASA Astrophysics Data System (ADS)
Libert, Benoît; Vergnaud, Damien
Group signatures allow users to anonymously sign messages in the name of a group. Membership revocation has always been a critical issue in such systems. In 2004, Boneh and Shacham formalized the concept of group signatures with verifier-local revocation where revocation messages are only sent to signature verifiers (as opposed to both signers and verifiers). This paper presents an efficient verifier-local revocation group signature (VLR-GS) providing backward unlinkability (i.e. previously issued signatures remain anonymous even after the signer's revocation) with a security proof in the standard model (i.e. without resorting to the random oracle heuristic).
Standard model anatomy of WIMP dark matter direct detection. I. Weak-scale matching
NASA Astrophysics Data System (ADS)
Hill, Richard J.; Solon, Mikhail P.
2015-02-01
We present formalism necessary to determine weak-scale matching coefficients in the computation of scattering cross sections for putative dark matter candidates interacting with the Standard Model. We pay particular attention to the heavy-particle limit. A consistent renormalization scheme in the presence of nontrivial residual masses is implemented. Two-loop diagrams appearing in the matching to gluon operators are evaluated. Details are given for the computation of matching coefficients in the universal limit of WIMP-nucleon scattering for pure states of arbitrary quantum numbers, and for singlet-doublet and doublet-triplet mixed states.
The early universe history from contraction-deformation of the Standard Model
NASA Astrophysics Data System (ADS)
Gromov, N. A.
2017-03-01
The evolution of elementary particles in the early Universe, from the Planck time up to several milliseconds, is presented. The developed theory is based on the high-temperature (high-energy) limit of the Standard Model, which is generated by contractions of its gauge groups. At infinite temperature all particles lose their masses; only massless neutral bosons, massless Z-quarks, neutrinos and photons survive in this limit. The weak interactions become long-range and are mediated by neutral currents, and quarks have only one color degree of freedom.
Constraints on extra dimensions within the framework of the Standard Model Extension
NASA Astrophysics Data System (ADS)
Ali, Hamna; Overduin, James
2017-01-01
We consider Kaluza-Klein-type extensions of General Relativity in which extra dimensions may be large but do not necessarily have units of length. Additional coordinates of this kind necessarily violate Lorentz symmetry in principle, but whether or not the violations are detectable in practice depends on the dimension-transposing constants that convert them into lengths. We parametrize these violations in terms of coefficients associated with the matter sector of the Standard Model Extension, and show that the associated variation in fundamental quantities, such as rest mass or charge, must occur slowly, on cosmological scales.
Order g^2 susceptibilities in the symmetric phase of the Standard Model
Bödeker, D.; Sangel, M.
2015-04-23
Susceptibilities of conserved charges such as baryon minus lepton number enter baryogenesis computations, since they provide the relationship between conserved charges and chemical potentials. Their next-to-leading order corrections are of order g, where g is a generic Standard Model coupling. They are due to soft Higgs boson exchange, and have been calculated recently, together with some order g^2 corrections. Here we compute the complete g^2 contributions. Close to the electroweak crossover the soft Higgs contribution is of order g^2, and is determined by the non-perturbative physics at the magnetic screening scale.
Search for the Standard Model Higgs Boson with the Atlas Detector
NASA Astrophysics Data System (ADS)
Orestano, Domizia
2015-01-01
This document presents a brief overview of some of the experimental techniques employed by the ATLAS experiment at the CERN Large Hadron Collider in the search for the Higgs boson predicted by the standard model of particle physics. The data and the statistical analyses that allowed, in July 2012, only a few days before this presentation at the Marcel Grossmann Meeting, the observation of a new particle to be firmly established are described. The additional studies needed to check the consistency between the newly discovered particle and the Higgs boson are also discussed.
Search for the Standard Model Higgs Boson with the Atlas Detector
NASA Astrophysics Data System (ADS)
Orestano, Domizia
2013-06-01
This document presents a brief overview of some of the experimental techniques employed by the ATLAS experiment at the CERN Large Hadron Collider (LHC) in the search for the Higgs boson predicted by the standard model (SM) of particle physics. The data and the statistical analyses that allowed, in July 2012, only a few days before this presentation at the Marcel Grossmann Meeting, the observation of a new particle to be firmly established are described. The additional studies needed to check the consistency between the newly discovered particle and the Higgs boson are also discussed.
New Identity-Based Blind Signature and Blind Decryption Scheme in the Standard Model
NASA Astrophysics Data System (ADS)
Phong, Le Trieu; Ogata, Wakaha
We explicitly describe and analyse blind hierarchical identity-based encryption (blind HIBE) schemes, which are natural generalizations of blind IBE schemes [20]. We then use the blind HIBE schemes to construct: (1) an identity-based blind signature scheme secure in the standard model, under the computational Diffie-Hellman (CDH) assumption, and with much shorter signature size and lower communication cost compared to existing proposals; and (2) a new mechanism supporting a user to buy digital information over the Internet without revealing what he/she has bought, while protecting the providers from cheating users.
Klise, Geoffrey T.; Hill, Roger; Walker, Andy; Dobos, Aron; Freeman, Janine
2016-11-21
The use of the term 'availability' to describe a photovoltaic (PV) system and power plant has been fraught with confusion for many years. A term that is meant to describe equipment operational status is often omitted, misapplied, or inaccurately combined with PV performance metrics due to attempts to measure performance and reliability through the lens of traditional power plant language. This paper discusses three areas where current research in standards, contract language, and performance modeling is improving the way availability is used with regard to photovoltaic systems and power plants.
Three-loop Standard Model effective potential at leading order in strong and top Yukawa couplings
Martin, Stephen P.
2014-01-08
I find the three-loop contribution to the effective potential for the Standard Model Higgs field, in the approximation that the strong and top Yukawa couplings are large compared to all other couplings, using dimensional regularization with modified minimal subtraction. Checks follow from gauge invariance and renormalization group invariance. I also briefly comment on the special problems posed by Goldstone boson contributions to the effective potential, and on the numerical impact of the result on the relations between the Higgs vacuum expectation value, mass, and self-interaction coupling.
Long-range magnetic fields in the ground state of the Standard Model plasma.
Boyarsky, Alexey; Ruchayskiy, Oleg; Shaposhnikov, Mikhail
2012-09-14
In thermal equilibrium the ground state of the plasma of Standard Model particles is determined by temperature and exactly conserved combinations of baryon and lepton numbers. We show that at nonzero values of the global charges a translation invariant and homogeneous state of the plasma becomes unstable and the system transits into a new equilibrium state, containing a large-scale magnetic field. The origin of this effect is the parity-breaking character of weak interactions and chiral anomaly. This situation could occur in the early Universe and may play an important role in its subsequent evolution.
A note on the dimensional regularization of the Standard Model coupled with quantum gravity
NASA Astrophysics Data System (ADS)
Anselmi, Damiano
2004-08-01
In flat space, γ5 and the epsilon tensor break the dimensionally continued Lorentz symmetry, but propagators have fully Lorentz invariant denominators. When the Standard Model is coupled with quantum gravity γ5 breaks the continued local Lorentz symmetry. I show how to deform the Einstein Lagrangian and gauge-fix the residual local Lorentz symmetry so that the propagators of the graviton, the ghosts and the BRST auxiliary fields have fully Lorentz invariant denominators. This makes the calculation of Feynman diagrams more efficient.
NASA Astrophysics Data System (ADS)
Schreck, M.
2016-05-01
This article is devoted to finding classical point-particle equivalents for the fermion sector of the nonminimal standard model extension (SME). For a series of nonminimal operators, such Lagrangians are derived at first order in Lorentz violation using the algebraic concept of Gröbner bases. Subsequently, the Lagrangians serve as a basis for reanalyzing the results of certain kinematic tests of special relativity that were carried out in the past century. Thereby, a number of new constraints on coefficients of the nonminimal SME is obtained. In the last part of the paper we point out connections to Finsler geometry.
Casimir effect at finite temperature for pure-photon sector of the minimal Standard Model Extension
NASA Astrophysics Data System (ADS)
Santos, A. F.; Khanna, Faqir C.
2016-12-01
The dynamics between particles is governed by Lorentz and CPT symmetry. Parity (P) and CP symmetry are violated at some level. The unified theory that includes particle physics and quantum gravity may be expected to be covariant with Lorentz and CPT symmetry. At high enough energies, will the unified theory display violation of any symmetry? The Standard Model Extension (SME), with Lorentz- and CPT-violating terms, has been suggested to include particle dynamics. The minimal SME in the pure-photon sector is considered in order to calculate the Casimir effect at finite temperature.
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since the usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra- and inter-prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective mapping of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments, blockiness, blur, and jerkiness, in contrast to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
NASA Technical Reports Server (NTRS)
Albus, James S.; Mccain, Harry G.; Lumia, Ronald
1989-01-01
The document describes the NASA Standard Reference Model (NASREM) architecture for the Space Station telerobot control system. It defines the functional requirements and high-level specifications of the control system for the NASA Space Station IOC Flight Telerobot Servicer, serving both as a functional specification and as a guideline for the development of the control system architecture. The NASREM telerobot control system architecture defines a set of standard modules and interfaces which facilitates software design, development, validation, and test, and makes possible the integration of telerobotics software from a wide variety of sources. Standard interfaces also provide the software hooks necessary to incrementally upgrade future Flight Telerobot Systems as new capabilities develop in computer science, robotics, and autonomous system control.
Sunaga, Masayo; Minabe, Masato; Inagaki, Koji; Kinoshita, Atsuhiro
2016-12-01
The aim of this study was to evaluate the effectiveness of a dental model for training, evaluation, and standardization of examiners in pocket probing, and to determine the appropriate thresholds of accuracy and measuring time when using this model to evaluate probing skills without measuring patients' pockets repeatedly. In 2011-12, a total of 66 dental professionals and 20 dental students in Japan measured the probing depths of 24 artificial teeth using the six-point method on a dental model. All examiners measured the probing depths of six tooth groups and then checked the correct depths in each group. Each examiner measured four groups in a group-by-group manner. For each group, the measuring time and the examiner's accuracy were recorded. Receiver operating characteristic (ROC) curves were drawn for various thresholds of measuring time and accuracy to determine the passing mark for a skilled examiner. The accuracy significantly increased from the first to the fourth measurement, and the measuring time was significantly reduced for both the professionals and the students. The total measuring time was significantly longer for the students than the professionals. The students' accuracy was significantly lower than that of the professionals in the first measurement group. The rate of increase in accuracy was significantly higher for the students than the professionals. These results and the ROC curves suggest that the dental model is effective for periodontal pocket probing training and for the evaluation and standardization of examiners' probing skill at a preclinical level. An examiner achieving accuracy ≥80% within four minutes for six tooth measurements on this model can be considered a skilled examiner.
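The ROC construction mentioned above can be illustrated with a minimal sketch: treat "skilled" as the positive class and sweep a candidate threshold on a score such as probing accuracy. The scores and labels below are synthetic, not the study's data:

```python
# Sketch of the ROC construction used to pick a pass mark: sweep a candidate
# threshold on a score (here, probing accuracy in percent) and trace the
# resulting (false-positive rate, true-positive rate) points. The data are
# synthetic; the study's real measurements are not reproduced.

def roc_points(scores, is_skilled):
    pts = []
    p = sum(is_skilled)                 # number of truly skilled examiners
    n = len(is_skilled) - p             # number of unskilled examiners
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, is_skilled) if s >= thr and y)
        fp = sum(1 for s, y in zip(scores, is_skilled) if s >= thr and not y)
        pts.append((fp / n, tp / p))    # (FPR, TPR) at this threshold
    return pts

accuracy = [95, 90, 85, 80, 75, 70, 60, 55]   # hypothetical scores (%)
skilled  = [1, 1, 1, 1, 0, 0, 0, 0]           # hypothetical labels
pts = roc_points(accuracy, skilled)
# With this synthetic data the 80% threshold separates the groups perfectly,
# giving the corner point (0.0, 1.0).
```

A passing mark is then chosen at the threshold whose point lies closest to the ideal corner (0, 1), optionally repeated for a time threshold.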
Leveraging Open Standard Interfaces in Accessing and Processing NASA Data Model Outputs
NASA Astrophysics Data System (ADS)
Falke, S. R.; Alameh, N. S.; Hoijarvi, K.; de La Beaujardiere, J.; Bambacus, M. J.
2006-12-01
An objective of NASA's Earth Science Division is to develop advanced information technologies for processing, archiving, accessing, visualizing, and communicating Earth Science data. To this end, NASA and other federal agencies have collaborated with the Open Geospatial Consortium (OGC) to research, develop, and test interoperability specifications within projects and testbeds benefiting the government, industry, and the public. This paper summarizes the results of a recent effort under the auspices of the OGC Web Services testbed phase 4 (OWS-4) to explore standardization approaches for accessing and processing the outputs of NASA models of physical phenomena. Within the OWS-4 context, experiments were designed to leverage the emerging OGC Web Processing Service (WPS) and Web Coverage Service (WCS) specifications to access, filter, and manipulate the outputs of the NASA Goddard Earth Observing System (GEOS) and Goddard Chemistry Aerosol Radiation and Transport (GOCART) forecast models. In OWS-4, the intent is to give users more control over the subsets of data they can extract from the model results as well as over the final portrayal of that data. To meet that goal, experiments were designed to test the suitability of OGC's Web Processing Service (WPS) and Web Coverage Service (WCS) for filtering, processing, and portraying the model results (including slices by height or by time), and to identify any enhancements to the specifications needed to meet the desired objectives. This paper summarizes the findings of the experiments, highlighting the value of the Web Processing Service in providing standard interfaces for accessing and manipulating model data within spatial and temporal frameworks. The paper also points out the key shortcomings of the WPS, especially in comparison with a SOAP/WSDL approach to solving the same problem.
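As a hedged sketch of what a WCS client in such an experiment might issue, the following builds a WCS 1.0-style key-value-pair GetCoverage request. The endpoint and coverage identifier are hypothetical, since the actual OWS-4 service endpoints are not public:

```python
# Sketch of the key-value-pair request an OGC WCS client issues to subset
# model output by space and time (WCS 1.0 KVP style). The base URL and
# coverage name below are hypothetical placeholders.

from urllib.parse import urlencode

def wcs_getcoverage_url(base, coverage, bbox, time, fmt="NetCDF"):
    params = {
        "service": "WCS",
        "version": "1.0.0",
        "request": "GetCoverage",
        "coverage": coverage,
        "crs": "EPSG:4326",
        "bbox": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "time": time,
        "format": fmt,
    }
    return base + "?" + urlencode(params)

url = wcs_getcoverage_url(
    "http://example.org/wcs",                 # hypothetical endpoint
    coverage="GOCART_aerosol_optical_depth",  # hypothetical coverage id
    bbox=(-125.0, 24.0, -66.0, 50.0),
    time="2006-10-01T00:00:00Z",
)
# url now carries the spatial/temporal subset as standard KVP parameters
```

The appeal noted in the abstract is visible even here: the subset (bounding box, time slice, output format) is expressed entirely in standard parameters, so any conforming server can answer it.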
Estimating Loss-of-Coolant Accident Frequencies for the Standardized Plant Analysis Risk Models
S. A. Eide; D. M. Rasmuson; C. L. Atwood
2008-09-01
The U.S. Nuclear Regulatory Commission maintains a set of risk models covering the U.S. commercial nuclear power plants. These standardized plant analysis risk (SPAR) models include several loss-of-coolant accident (LOCA) initiating events such as small (SLOCA), medium (MLOCA), and large (LLOCA). All of these events involve a loss of coolant inventory from the reactor coolant system. In order to maintain a level of consistency across these models, initiating event frequencies generally are based on plant-type average performance, where the plant types are boiling water reactors and pressurized water reactors. For certain risk analyses, these plant-type initiating event frequencies may be replaced by plant-specific estimates. Frequencies for SPAR LOCA initiating events previously were based on results presented in NUREG/CR-5750, but the newest models use results documented in NUREG/CR-6928. The estimates in NUREG/CR-6928 are based on historical data from the initiating events database for pressurized water reactor SLOCA or an interpretation of results presented in the draft version of NUREG-1829. The information in NUREG-1829 can be used several ways, resulting in different estimates for the various LOCA frequencies. Various ways NUREG-1829 information can be used to estimate LOCA frequencies were investigated and this paper presents two methods for the SPAR model standard inputs, which differ from the method used in NUREG/CR-6928. In addition, results obtained from NUREG-1829 are compared with actual operating experience as contained in the initiating events database.
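As an illustration of the kind of estimate involved, a common Bayesian treatment of initiating-event frequencies (used, for example, in NUREG/CR-6928) combines a Jeffreys prior with Poisson count data; the event counts below are hypothetical:

```python
# Sketch of a Jeffreys-prior Bayesian update for an initiating-event
# frequency: with Poisson data (n events observed in T reactor-years),
# the posterior is Gamma(n + 0.5, T), whose mean is (n + 0.5)/T.
# The counts below are hypothetical, not from the initiating events database.

def jeffreys_posterior_mean(n_events, reactor_years):
    """Posterior mean frequency (events per reactor-year)."""
    return (n_events + 0.5) / reactor_years

# Hypothetical experience: 2 SLOCA events in 2500 reactor-years.
freq = jeffreys_posterior_mean(2, 2500.0)
# -> 1e-3 per reactor-year
```

Comparing such operating-experience estimates against expert-elicitation results like those in NUREG-1829 is exactly the kind of consistency check the paper describes.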
Ellis, John; Olive, Keith A.; Savage, Christopher; Spanos, Vassilis C.
2010-04-15
We evaluate the neutrino fluxes to be expected from neutralino lightest supersymmetric particle (LSP) annihilations inside the Sun, within the minimal supersymmetric extension of the standard model with supersymmetry-breaking scalar and gaugino masses constrained to be universal at the grand unified theory scale [the constrained minimal supersymmetric standard model (CMSSM)]. We find that there are large regions of typical CMSSM (m_{1/2}, m_0) planes where the LSP density inside the Sun is not in equilibrium, so that the annihilation rate may be far below the capture rate. We show that neutrino fluxes are dependent on the solar model at the 20% level, and adopt the AGSS09 model of Serenelli et al. for our detailed studies. We find that there are large regions of the CMSSM (m_{1/2}, m_0) planes where the capture rate is not dominated by spin-dependent LSP-proton scattering, e.g., at large m_{1/2} along the CMSSM coannihilation strip. We calculate neutrino fluxes above various threshold energies for points along the coannihilation/rapid-annihilation and focus-point strips where the CMSSM yields the correct cosmological relic density for tan β = 10 and 55 for μ > 0, exploring their sensitivities to uncertainties in the spin-dependent and -independent scattering matrix elements. We also present detailed neutrino spectra for four benchmark models that illustrate generic possibilities within the CMSSM. Scanning the cosmologically favored parts of the parameter space of the CMSSM, we find that the IceCube/DeepCore detector can probe at best only parts of this parameter space, notably the focus-point region and possibly also at the low-mass tip of the coannihilation strip.
Reifenrath, Janin; Angrisani, Nina; Lalk, Mareike; Besdo, Silke
2014-08-01
In the field of fracture healing it is essential to know the impacts of new materials. Fracture healing of long bones is studied in various animal models and extrapolated for use in humans, although there are differences between the micro- and macrostructure of human versus animal bone. Unfortunately, recommended standardized models for fracture repair studies do not exist. Many different study designs with various animal models are used. Concerning the general principles of replacement, refinement and reduction in animal experiments (three "Rs"), a standardization would be desirable to facilitate better comparisons between different studies. In addition, standardized methods allow better prediction of bone healing properties and implant requirements with computational models. In this review, the principles of bone fracture healing and differences between osteotomy and artificial fracture models as well as influences of fixation devices are summarized. Fundamental considerations regarding animal model choice are discussed, as it is very important to know the limitations of the chosen model. In addition, a compendium of common animal models is assembled with special focus on rats, rabbits, and sheep as most common fracture models. Fracture healing simulation is a basic tool in reducing the number of experimental animals, so its progress is also presented here. In particular, simulation of different animal models is presented. In conclusion, a standardized fracture model is of utmost importance for the best adaption of simulation to experimental setups and comparison between different studies. One of the basic goals should be to reach a consensus for standardized fracture models.
Choi, Jeeyae; Jansen, Kay; Coenen, Amy
2015-01-01
In recent years, Decision Support Systems (DSSs) have been developed and used to achieve "meaningful use". One approach to developing DSSs is to translate clinical guidelines into a computer-interpretable format. However, there is no specific guideline modeling approach for translating nursing guidelines into computer-interpretable guidelines, which has resulted in limited use of DSSs in nursing. The Unified Modeling Language (UML) is a software modeling language known to accurately represent the end-users' perspective, due to its expressive characteristics. Furthermore, standard-terminology-enabled DSSs have been shown to integrate smoothly into existing health information systems. In order to facilitate the development of nursing DSSs, the UML was used to represent a guideline for medication management for older adults encoded with the International Classification for Nursing Practice (ICNP®). The UML was found to be a useful and sufficient tool for modeling a nursing guideline for a DSS. PMID:26958174
Le pompage optique naturel dans le milieu astrophysique
NASA Astrophysics Data System (ADS)
Pecker, J.-C.
The title of this lecture covers only a part of it: the study of non-LTE situations has become considerably important in astrophysics, both in stellar atmospheres and, still more, in fortuitous coincidences as a mechanism for the formation of nebular emission-line spectra and of interstellar molecular « masers ». Another part of this talk underlines the role of Kastler in his time, and describes his warm personality through his public stands on nuclear armament, the Vietnam and Algerian wars, and the problems of political refugees... Kastler was a great scientist; he was also a courageous humanist. 1976: Les accords nucléaires du Brésil, allocution d'ouverture (19 mars), colloque sur le sujet ci-dessus. 1976: La promotion de la culture dans le nouvel ordre économique international, allocution à l'occasion d'une table ronde sur ce thème par l'UNESCO (23-27 juin 1976); « Sciences et Techniques », octobre 1976. 1979: La bête immonde (avec J.-C. Pecker), « Le Matin », 20 mars. 1979: Appel à nos ministres (avec J.-C. Pecker), « Le Monde », 13 décembre. 1979: Le flou, le ténébreux, l'irrationnel (avec J.-C. Pecker), « Le Monde », 14 septembre. 1980: Education à la paix, préface, in: Publ. UNESCO. 1981: Le vrai danger, « Le Monde », 6 août 1981. 1982: Nucléaire civil et militaire, « Le Monde », 1er juin 1982. 1982: Les scientifiques face à la perspective d'holocauste nucléaire (texte inédit).
Use of Open Standards and Technologies at the Lunar Mapping and Modeling Project
NASA Astrophysics Data System (ADS)
Law, E.; Malhotra, S.; Bui, B.; Chang, G.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Sadaqathulla, S.; Rodriguez, L.
2011-12-01
The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is tasked by NASA with developing an information system to support lunar exploration activities. It provides lunar explorers a set of tools and lunar map and model products that are predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). At the Jet Propulsion Laboratory (JPL), we have built the LMMP interoperable geospatial information system's underlying infrastructure and a single point of entry - the LMMP Portal - by employing a number of open standards and technologies. The Portal exposes a set of services that allow users to search, visualize, subset, and download lunar data managed by the system. Users also have access to a set of tools that visualize, analyze, and annotate the data. The infrastructure and Portal are based on a web-service-oriented architecture, and we designed the system to support solar system bodies in general, including asteroids, the Earth, and the planets. We employed a combination of custom software, commercial and open-source components, off-the-shelf hardware, and pay-by-use cloud computing services. The use of open standards and web service interfaces facilitates platform- and application-independent access to the services and data, offering, for instance, iPad and Android mobile applications and large-screen multi-touch displays with 3-D terrain viewing functions, for a rich browsing and analysis experience from a variety of platforms. The web services make use of open standards including Representational State Transfer (REST) and the Open Geospatial Consortium (OGC)'s Web Map Service (WMS), Web Coverage Service (WCS), and Web Feature Service (WFS). Its data management services have been built on top of a set of open technologies including Object Oriented Data Technology (OODT) - an open source data catalog, archive, file management, and data grid framework
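As a sketch of how the OGC interfaces mentioned above are typically exercised, the snippet below assembles a WMS 1.3.0 GetMap request URL with Python's standard library. The endpoint URL, layer name, and CRS code are hypothetical placeholders, not actual LMMP identifiers.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=512, height=512,
                   fmt="image/png", crs="IAU2000:30100"):
    """Build an OGC WMS 1.3.0 GetMap request URL (no request is sent)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only
url = wms_getmap_url("https://example.org/lmmp/wms",
                     layer="LRO_WAC_Mosaic",
                     bbox=(-45.0, 0.0, -40.0, 5.0))
print(url)
```

A client such as a mobile or browser map viewer would issue this URL over HTTP and receive a rendered map tile; the same pattern extends to WCS and WFS with different REQUEST and parameter sets.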
Parker, Albert E; Hamilton, Martin A; Tomasino, Stephen F
2014-01-01
A performance standard for a disinfectant test method can be evaluated by quantifying the (Type I) pass-error rate for ineffective products and the (Type II) fail-error rate for highly effective products. This paper shows how to calculate these error rates for test methods where the log reduction in a microbial population is used as the measure of antimicrobial efficacy. The calculations can be used to assess performance standards that may require multiple tests of multiple microbes at multiple laboratories. Notably, the error rates account for the among-laboratory variance of the log reductions estimated from a multilaboratory data set and the correlation among tests of different microbes conducted in the same laboratory. Performance standards that require a disinfectant product to pass all tests, or to pass multiple tests on average, are considered. The proposed statistical methodology is flexible and allows for a different acceptable outcome for each microbe tested, since, for example, variability may differ between microbes. The approach can also be applied to semiquantitative methods for which product efficacy is reported as the number of positive carriers out of a treated set and the density of the microbes on control carriers is quantified, thereby allowing a log reduction to be calculated. Therefore, using the approach described in this paper, the error rates can also be calculated for semiquantitative method performance standards specified solely in terms of the maximum allowable number of positive carriers per test. The calculations are demonstrated in a case study of the current performance standard for the semiquantitative AOAC Use-Dilution Methods for Pseudomonas aeruginosa (964.02) and Staphylococcus aureus (955.15), which allow up to one positive carrier out of a set of 60 inoculated and treated carriers in each test. A simulation study was also conducted to verify the validity of the model's assumptions and accuracy. Our approach, easily implemented
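To illustrate the kind of error-rate calculation described, here is a minimal sketch for the single-test, single-laboratory case: if each treated carrier is assumed independently positive with probability q, the probability of passing the "at most 1 positive out of 60" standard is a binomial tail sum. This simplification deliberately omits the among-laboratory variance and between-microbe correlation that the paper's full methodology accounts for, and the q values are illustrative.

```python
from math import comb

def pass_probability(q, n=60, max_pos=1):
    """P(product passes) = P(at most max_pos positive carriers out of n),
    assuming carriers are independent, each positive with probability q."""
    return sum(comb(n, k) * q**k * (1 - q)**(n - k)
               for k in range(max_pos + 1))

# Highly effective product (q = 0.1%): Type II fail-error rate
print(1 - pass_probability(0.001))
# Ineffective product (q = 10%): Type I pass-error rate
print(pass_probability(0.10))
```

With these assumptions a product leaving 10% of carriers positive still passes about 1.4% of the time, while a product leaving 0.1% positive fails only about 0.2% of the time; the paper's framework generalizes this to multiple microbes and laboratories.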
Standard solar models, with and without helium diffusion, and the solar neutrino problem
NASA Astrophysics Data System (ADS)
Bahcall, J. N.; Pinsonneault, M. H.
1992-10-01
We first show that, with the same input parameters, the standard solar models of Bahcall and Ulrich; of Sienkiewicz, Bahcall, and Paczyński; of Turck-Chièze, Cahen, Cassé, and Doom; and of the current Yale code all predict event rates for the chlorine experiment that are the same within +/-0.1 SNU (solar neutrino units), i.e., approximately 1% of the total calculated rate. We then construct new standard solar models using the Yale stellar evolution computer code supplemented with a more accurate (exportable) nuclear energy generation routine, an improved equation of state, recent determinations of element abundances, and the new Livermore (OPAL) opacity calculations. We evaluate the individual effects of the different improvements by calculating a series of precise models, changing only one aspect of the solar model at a time. We next add a new subroutine that calculates the diffusion of helium with respect to hydrogen with the aid of the Bahcall-Loeb formalism. Finally, we compare the neutrino fluxes computed from our best solar models constructed with and without helium diffusion. We find that helium diffusion increases the predicted event rates by about 0.8 SNU, or 11% of the total rate, in the chlorine experiment; by about 3.5 SNU, or 3%, in the gallium experiments; and by about 12% in the Kamiokande and SNO experiments. The best standard solar model including helium diffusion and the most accurate nuclear parameters, element abundances, radiative opacity, and equation of state predicts a value of 8.0+/-3.0 SNU for the 37Cl experiment and 132 +21/-17 SNU for the 71Ga experiment. The quoted errors represent the total theoretical range and include the effects on the model predictions of 3σ errors in measured input parameters. All 15 calculations since 1968 of the predicted rate in the chlorine experiment given in this series of papers are consistent with both the range estimated in the present work and the 1968 best-estimate value of 7.5+/-2.3 SNU. Including the
Nicolini, Paolo; Guàrdia, Elvira; Masia, Marco
2013-11-14
In this work, ab initio parametrization of a water force field is used to gain insight into the functional form of empirical potentials needed to properly model the physics underlying dispersion interactions. We exploited the force-matching algorithm to fit the interaction forces obtained from dispersion-corrected density functional theory based molecular dynamics simulations. We found that the standard Lennard-Jones interaction potentials poorly reproduce the attractive character of dispersion forces. This drawback can be resolved by accounting for the distinctive short-range behavior of dispersion interactions, multiplying the r^-6 term by a damping function. We propose two novel parametrizations of the force field using different damping functions. Structural and dynamical properties of the new models are computed and compared with those obtained from the undamped force field, showing improved agreement with reference first-principles calculations.
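The short-range damping described above can be sketched with the Tang-Toennies form, one common choice of damping function; the paper proposes its own parametrizations, so the function and the parameter values below are illustrative, not the fitted ones.

```python
from math import exp, factorial

def tang_toennies_damping(r, b, n=6):
    """Tang-Toennies damping f_n(r) = 1 - exp(-b*r) * sum_{k=0}^{n} (b*r)^k / k!
    Tends to 0 as r -> 0 (killing the r^-6 divergence) and to 1 at large r."""
    br = b * r
    return 1.0 - exp(-br) * sum(br**k / factorial(k) for k in range(n + 1))

def damped_dispersion(r, c6, b):
    """Damped dispersion energy -f_6(r) * C6 / r^6 (illustrative parameters)."""
    return -tang_toennies_damping(r, b) * c6 / r**6
```

At large separations the damped term recovers the plain -C6/r^6 attraction, while at short range the damping removes the unphysical divergence that an undamped r^-6 term would produce.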
Review of searches for rare processes and physics beyond the Standard Model at HERA
NASA Astrophysics Data System (ADS)
South, David M.; Turcato, Monica
2016-06-01
The electron-proton collisions collected by the H1 and ZEUS experiments at HERA comprise a unique particle physics data set, and a comprehensive range of measurements has been performed to provide new insight into the structure of the proton. The high centre of mass energy at HERA has also allowed rare processes to be studied, including the production of W and Z0 bosons and events with multiple leptons in the final state. The data have also opened up a new domain to searches for physics beyond the Standard Model including contact interactions, leptoquarks, excited fermions and a number of supersymmetric models. This review presents a summary of such results, where the analyses reported correspond to an integrated luminosity of up to 1 fb^{-1}, representing the complete data set recorded by the H1 and ZEUS experiments.
Marsh, Herbert W; Trautwein, Ulrich; Lüdtke, Oliver; Köller, Olaf; Baumert, Jürgen
2005-01-01
Reciprocal effects models of longitudinal data show that academic self-concept is both a cause and an effect of achievement. In this study this model was extended to juxtapose self-concept with academic interest. Based on longitudinal data from 2 nationally representative samples of German 7th-grade students (Study 1: N = 5,649, M age = 13.4; Study 2: N = 2,264, M age = 13.7 years), prior self-concept significantly affected subsequent math interest, school grades, and standardized test scores, whereas prior math interest had only a small effect on subsequent math self-concept. Despite stereotypic gender differences in means, linkages relating these constructs were invariant over gender. These results demonstrate the positive effects of academic self-concept on a variety of academic outcomes and integrate self-concept with the developmental motivation literature.
Standard model predictions and new physics sensitivity in B → D D̄ decays
NASA Astrophysics Data System (ADS)
Jung, Martin; Schacht, Stefan
2015-02-01
An extensive model-independent analysis of B → D D̄ decays is carried out employing SU(3) flavor symmetry, including symmetry-breaking corrections. Several theoretically clean observables are identified which allow for testing the standard model. These include the known time-dependent CP asymmetries, the penguin pollution of which can be controlled in this framework, but notably also quasi-isospin relations which are experimentally well accessible and unaffected by symmetry-breaking corrections. Theoretical assumptions can be kept to a minimum and controlled by additional sum rules. Available data are used in global fits to predict the branching ratio for the B0 → Ds+ Ds- decay as well as several CP asymmetries which have not been measured so far, and future prospects are analyzed.
NASA Astrophysics Data System (ADS)
Iacobellis, Giuseppe; Masina, Isabella
2016-10-01
We study the gauge-independent observables associated with two interesting stationary configurations of the Standard Model Higgs potential (extrapolated to high energy according to the present state of the art, namely at next-to-next-to-leading order): i) the value of the top mass ensuring the stability of the SM electroweak minimum, and ii) the value of the Higgs potential at a rising inflection point. We examine in detail and reappraise the experimental and theoretical uncertainties which plague their determination, finding that i) the stability of the SM is compatible with the present data at the 1.5σ level, and ii) despite the large theoretical error plaguing the value of the Higgs potential at a rising inflection point, the application of such a configuration to models of primordial inflation displays a 3σ tension with the recent bounds on the tensor-to-scalar ratio of cosmological perturbations.
The Brown Muck of $B^0$ and $B^0_s$ Mixing: Beyond the Standard Model
Bouchard, Christopher Michael
2011-01-01
Standard Model contributions to neutral $B$ meson mixing begin at the one loop level where they are further suppressed by a combination of the GIM mechanism and Cabibbo suppression. This combination makes $B$ meson mixing a promising probe of new physics, where as yet undiscovered particles and/or interactions can participate in the virtual loops. Relating underlying interactions of the mixing process to experimental observation requires a precise calculation of the non-perturbative process of hadronization, characterized by hadronic mixing matrix elements. This thesis describes a calculation of the hadronic mixing matrix elements relevant to a large class of new physics models. The calculation is performed via lattice QCD using the MILC collaboration's gauge configurations with $2+1$ dynamical sea quarks.
Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D
2012-08-01
An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications.
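A minimal illustration of the standoff idea, annotations kept separate from the text and referencing character offsets, can be built with Python's standard XML tooling. The element and attribute names here are simplified stand-ins chosen for this sketch, not the actual CDA or GrAF schemata.

```python
import xml.etree.ElementTree as ET

text = "Patient denies chest pain."

# Wrapper document holding separate CDA-like and GrAF-like parts as
# standoff annotation (illustrative element names, not the real schemata)
root = ET.Element("annotations")
cda = ET.SubElement(root, "cda_part")
graf = ET.SubElement(root, "graf_part")
node = ET.SubElement(graf, "node", {"id": "n1"})
# Character offsets of "chest pain" in the annotated text
ET.SubElement(node, "region", {"anchors": "15 25"})
ET.SubElement(node, "annotation", {"label": "problem", "assertion": "absent"})

xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)
```

Because the annotations point into the text by offset rather than embedding markup in it, the same clinical document can carry layers from several NLP tools side by side, which is what makes direct comparison and "plug-and-play" combination of modules feasible.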
Brodin, N. Patrik; Vogelius, Ivan R.; Björk-Eriksson, Thomas; Munck af Rosenschöld, Per; Bentzen, Søren M.
2013-10-01
Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern-of-failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern-of-relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used to model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in the input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy, while maintaining 54 Gy to the posterior fossa, was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%), respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimating tumor control and time-to-progression for standard-risk MB provides a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk stratification, for example based on molecular biomarkers, when the necessary data become available.
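The Monte Carlo propagation of parameter uncertainty into an FFP confidence interval can be sketched as follows, with a generic logistic dose-response and hypothetical parameter distributions; these are illustrative stand-ins, not the fitted values from this study.

```python
import math
import random
import statistics

def ffp_logistic(dose, d50, gamma):
    """Illustrative logistic dose-response for freedom from progression."""
    return 1.0 / (1.0 + math.exp(-gamma * (dose - d50)))

random.seed(0)
# Hypothetical parameter uncertainty (means and spreads are assumptions)
samples = [ffp_logistic(18.0, random.gauss(5.0, 2.0), random.gauss(0.15, 0.03))
           for _ in range(10000)]
samples.sort()
# Percentile-based 95% interval around the median estimate
lo, mid, hi = samples[250], statistics.median(samples), samples[9750]
print(f"FFP at 18 Gy: {mid:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Repeating this at several dose levels reproduces the qualitative behavior reported above: the interval widens where the dose-response parameters are least constrained by data.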
Baryogenesis in the two doublet and inert singlet extension of the Standard Model
Alanne, Tommi; Kainulainen, Kimmo; Tuominen, Kimmo; Vaskonen, Ville
2016-08-25
We investigate an extension of the Standard Model containing two Higgs doublets and a singlet scalar field (2HDSM). We show that the model can have a strongly first-order phase transition and give rise to the observed baryon asymmetry of the Universe, consistent with all experimental constraints. In particular, the constraints from the electron and neutron electric dipole moments are less constraining here than in the pure two-Higgs-doublet model (2HDM). The two-step, first-order transition in the 2HDSM, induced by the singlet field, may lead to strong supercooling and nucleation temperatures low in comparison with the critical temperature, T_n ≪ T_c, which can significantly alter the usual phase-transition pattern in 2HD models, where T_n ≈ T_c. Furthermore, the singlet field can be the dark matter particle. However, in models with a strong first-order transition its abundance is typically only about a thousandth of the observed dark matter abundance.
Human experimental pain models: A review of standardized methods in drug development
Reddy, K. Sunil kumar; Naidu, M. U. R.; Rani, P. Usha; Rao, T. Ramesh Kumar
2012-01-01
Human experimental pain models are essential in understanding pain mechanisms and appear to be ideally suited to test analgesic compounds. The challenge that confronts both the clinician and the scientist is to match specific treatments to different pain-generating mechanisms and hence reach a pain treatment tailored to each individual patient. Experimental pain models offer the possibility to explore the pain system under controlled settings. Standardized stimuli of different modalities (i.e., mechanical, thermal, electrical, or chemical) can be applied to the skin, muscles, and viscera for a differentiated and comprehensive assessment of various pain pathways and mechanisms. Using multimodal, multistructure testing, the nociception arising from different body structures can be explored and the modulation of specific biomarkers by new and existing analgesic drugs can be profiled. The value of human experimental pain models is to link animal and clinical pain studies, providing new possibilities for designing successful clinical trials. Spontaneous pain is the main complaint of neuropathic patients, but currently there is no human model available that mimics chronic pain. Therefore, current human pain models cannot replace patient studies for establishing the efficacy of analgesic compounds, although they are helpful for proof-of-concept studies and dose finding. PMID:23626642
Shen, Jiacheng; Wyman, Charles E
2012-01-01
A kinetic model was applied to improve determination of the sugar recovery standard (SRS) for biomass analysis. Three sets of xylose (0.10-1.00 g/L and 0.999-19.995 g/L) and glucose (0.206-1.602 g/L) concentrations were measured by HPLC following reaction of each for 1 h. Then, parameters in a kinetic model were fit to the resulting sugar concentration data, and the model was applied to predict the initial sugar concentrations and the best SRS value (SRS_p). The initial sugar concentrations predicted by the model agreed with the actual initial sugar concentrations. Although the SRS_e calculated directly from experimental data oscillated considerably with sugar concentration, the SRS_p trend was smooth. Statistical analysis of errors and application of the F-test confirmed that application of the model reduced experimental errors in SRS_e. Reference SRS_e values are reported for the three series of concentrations.
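The back-calculation idea can be sketched with a simple first-order loss model: if sugars degrade as C(t) = C0·exp(-k·t) during the 1-h treatment, both the initial concentration and the recovery standard follow in closed form. The rate constant below is an assumed illustrative value; the paper's actual kinetic model and fitted parameters may differ.

```python
from math import exp

def predict_initial(c_measured, k, t=1.0):
    """Back-calculate the initial sugar concentration (g/L), assuming
    first-order loss C(t) = C0 * exp(-k*t); k (1/h) would be fitted to data."""
    return c_measured * exp(k * t)

def sugar_recovery_standard(k, t=1.0):
    """SRS = C(t)/C0 = exp(-k*t); concentration-independent in this model."""
    return exp(-k * t)

k = 0.02  # illustrative rate constant in 1/h (an assumption, not a fit)
c_meas = 0.98  # g/L measured after 1 h of treatment
print(predict_initial(c_meas, k))       # recovered initial concentration
print(sugar_recovery_standard(k))       # recovery fraction
```

A concentration-independent SRS is what produces the smooth predicted trend; the oscillation of the directly computed SRS_e with concentration is then attributable to measurement error rather than to the underlying kinetics.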
Latent class models in diagnostic studies when there is no reference standard--a systematic review.
van Smeden, Maarten; Naaktgeboren, Christiana A; Reitsma, Johannes B; Moons, Karel G M; de Groot, Joris A H
2014-02-15
Latent class models (LCMs) combine the results of multiple diagnostic tests through a statistical model to obtain estimates of disease prevalence and diagnostic test accuracy in situations where there is no single, accurate reference standard. We performed a systematic review of the methodology and reporting of LCMs in diagnostic accuracy studies. This review shows that the use of LCMs in such studies increased sharply in the past decade, notably in the domain of infectious diseases (overall contribution: 59%). The 64 reviewed studies used a range of differently specified parametric latent variable models, applying Bayesian and frequentist methods. The critical assumption underlying the majority of LCM applications (61%) is that the test observations must be conditionally independent within the 2 latent classes. Because violations of this assumption can lead to biased estimates of accuracy and prevalence, performing and reporting checks of whether assumptions are met is essential. Unfortunately, our review shows that 28% of the included studies failed to report any information that enables verification of model assumptions or performance. Because of the lack of information on model fit and adequate evidence "external" to the LCMs, it is often difficult for readers to judge the validity of LCM-based inferences and conclusions reached.
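The core of a 2-class LCM under the conditional-independence assumption can be sketched with a short EM routine; this is a frequentist toy version for simulated data, whereas the reviewed studies use a variety of more elaborate parametric and Bayesian formulations.

```python
import random

def em_lcm(data, n_iter=200):
    """EM for a 2-class latent class model assuming the binary tests are
    conditionally independent given the class; returns the estimated
    prevalence and per-class P(test positive)."""
    n_tests = len(data[0])
    prev = 0.5
    p_pos = [[0.8] * n_tests, [0.2] * n_tests]  # class 0 starts as "diseased"
    for _ in range(n_iter):
        # E-step: posterior probability of class 0 for each subject
        post = []
        for x in data:
            like = [prev, 1.0 - prev]
            for c in (0, 1):
                for j, xj in enumerate(x):
                    like[c] *= p_pos[c][j] if xj else 1.0 - p_pos[c][j]
            post.append(like[0] / (like[0] + like[1]))
        # M-step: update prevalence and conditional positivity rates
        prev = sum(post) / len(post)
        for c, w in ((0, post), (1, [1.0 - p for p in post])):
            tot = sum(w)
            for j in range(n_tests):
                p_pos[c][j] = sum(wi for wi, x in zip(w, data) if x[j]) / tot
    return prev, p_pos

# Simulate 3 conditionally independent tests at true prevalence 0.3
random.seed(1)
sens, spec = [0.9, 0.85, 0.8], [0.95, 0.9, 0.92]
data = [tuple(int(random.random() < (sens[j] if d else 1.0 - spec[j]))
              for j in range(3))
        for d in (random.random() < 0.3 for _ in range(5000))]
prev_hat, _ = em_lcm(data)
print(round(prev_hat, 2))
```

When the simulated tests are made correlated within a class (violating the assumption), the same routine returns biased prevalence and accuracy estimates, which is exactly why the review stresses checking and reporting model assumptions.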
NASA Technical Reports Server (NTRS)
Guenther, D. B.
1994-01-01
The nonadiabatic frequencies of a standard solar model and of a solar model that includes helium diffusion are discussed. The nonadiabatic pulsation calculation includes physics that describes the losses and gains due to radiation. Radiative gains and losses are modeled both in the diffusion approximation, which is only valid in optically thick regions, and in the Eddington approximation, which is valid in both optically thin and thick regions. The calculated pulsation frequencies for modes with l less than or equal to 1320 are compared to the observed spectrum of the Sun. Compared to a strictly adiabatic calculation, the nonadiabatic calculation of p-mode frequencies improves the agreement between model and observation. When helium diffusion is included in the model, the frequencies of the modes that are sensitive to regions near the base of the convection zone are improved (i.e., brought into closer agreement with observation), but the agreement is made worse for other modes. Cyclic variations in the frequency spacings of the Sun as a function of frequency and of n are presented as evidence for a discontinuity in the structure of the Sun, possibly located near the base of the convection zone.
Creating NDA working standards through high-fidelity spent fuel modeling
Skutnik, Steven E; Gauld, Ian C; Romano, Catherine E; Trellue, Holly
2012-01-01
The Next Generation Safeguards Initiative (NGSI) is developing advanced non-destructive assay (NDA) techniques for spent nuclear fuel assemblies to advance the state of the art in safeguards measurements. These measurements aim to go beyond the capabilities of existing methods by including the evaluation of plutonium and fissile material inventory, independent of operator declarations. Testing and evaluation of advanced NDA performance will require reference assemblies with well-characterized compositions to serve as working standards against which the NDA methods can be benchmarked and uncertainties quantified. To support the development of standards for the NGSI spent fuel NDA project, high-fidelity modeling of irradiated fuel assemblies is being performed to characterize fuel compositions and radiation emission data. The assembly depletion simulations apply detailed operating history information and core simulation data, as available, to perform high-fidelity axial and pin-by-pin fuel characterization for more than 1600 nuclides. The resulting pin-by-pin isotopic inventories are used to optimize the NDA measurements and provide the information necessary to unfold and interpret the measurement data, e.g., passive gamma emitters, neutron emitters, neutron absorbers, and fissile content. A key requirement of this study is the analysis of uncertainties associated with the calculated compositions and signatures for the standard assemblies: uncertainties introduced by the calculation methods, nuclear data, and operating information. An integral part of this assessment involves the application of experimental data from destructive radiochemical assay to assess the uncertainty and bias in computed inventories, the impact of parameters such as assembly burnup gradients and burnable poisons, and the influence of neighboring assemblies on periphery rods. This paper will present the results of high fidelity assembly depletion modeling and uncertainty analysis from independent
Precision experiments to test the Standard Model at the University of Notre Dame
NASA Astrophysics Data System (ADS)
Brodeur, Maxime
2017-01-01
The Standard Model of particle physics as a description of matter in the universe contains many unexplained features. One way to search for physics beyond the Standard Model (SM) is to test the unitarity of the Cabibbo-Kobayashi-Maskawa matrix. Such a unitarity test requires a precise and accurate determination of the Vud matrix element, which is currently achieved via the precise determination of the comparative half-life of superallowed beta decays. While Vud is determined mostly from an ensemble of precise experimental quantities of superallowed pure Fermi transitions, there is a growing interest in obtaining Vud from superallowed mixed transitions, to test the accuracy of Vud and the calculation of the isospin-symmetry-breaking theoretical correction. In the past year our group has performed several half-life measurements of mirror decay transitions using radioactive ion beams produced by the TwinSol facility of the Nuclear Science Laboratory at Notre Dame. In the future we also plan to build an ion trapping system to measure the Fermi to Gamow-Teller mixing ratio in many mirror decays for the first time. This work is supported in part by the National Science Foundation and the University of Notre Dame.
Comparing the Rξ gauge and the unitary gauge for the standard model: An example
NASA Astrophysics Data System (ADS)
Wu, Tai Tsun; Wu, Sau Lan
2017-01-01
For gauge theory, the matrix element for any physical process is independent of the gauge used. However, since this is a formal statement, it does not guarantee this gauge independence in every case. An example is given here where, for a physical process in the standard model, the matrix elements calculated with two different gauges - the Rξ gauge and the unitary gauge - are explicitly verified to be different. This is accomplished by subtracting one matrix element from the other. This non-zero difference turns out to have a subtle origin: two simple operators are found not to commute with each other; in one gauge these two operations are carried out in one order, while in the other gauge the same two operations are carried out in the opposite order. Because of this result, a series of questions is raised whose answers may lead to a deeper understanding of Yang-Mills non-Abelian gauge theory in general and the standard model in particular.
Lifetime of Cosmic-Ray Muons and the Standard Model of Fundamental Particles
NASA Astrophysics Data System (ADS)
Mukherji, Sahansha; Shevde, Yash; Majewski, Walerian
2015-04-01
The muon is one of the twelve fundamental particles of matter, and has the longest free-particle lifetime. It decays into three other leptons through an exchange of the weak vector bosons W+/W-. Muons are present in the secondary cosmic-ray showers in the atmosphere, reaching sea level. By detecting the time delay between the arrival of a muon and the appearance of the decay electron in our single scintillation detector (donated by the Thomas Jefferson National Accelerator Facility, Newport News, VA), we measured the muon's lifetime at rest. It compares well with the value predicted by the Standard Model. From the lifetime we were able to calculate the ratio gw/MW of the weak coupling constant gw (an analog of the electric charge) to the mass of the W boson, MW. Using further Standard Model relations and an experimental value for MW, we calculated the weak coupling constant, the electric charge of the muon, and the vacuum expectation value of the Higgs field. We also determined the sea-level flux of cosmic muons.
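The chain of relations used here can be checked numerically: at tree level the Fermi constant encodes the ratio gw/MW via G_F/(ħc)^3 = √2 gw^2/(8 MW^2 c^4), and the leading-order muon decay width Γ = G_F^2 m_μ^5/(192 π^3) (natural units) reproduces the ~2.2 μs lifetime. The sketch below uses rounded PDG-style constants and the tree-level formula without radiative corrections.

```python
from math import pi

# Rounded physical constants
G_F = 1.1664e-5    # Fermi constant / (hbar c)^3, in GeV^-2
m_mu = 0.10566     # muon mass, in GeV
hbar = 6.5821e-25  # reduced Planck constant, in GeV * s

# Leading-order muon decay width, Gamma = G_F^2 m_mu^5 / (192 pi^3), in GeV
gamma = G_F**2 * m_mu**5 / (192 * pi**3)

# Lifetime tau = hbar / Gamma, in seconds
tau = hbar / gamma
print(tau)  # close to the measured ~2.197e-6 s
```

Inverting the same formula turns a measured lifetime into G_F, and hence into the ratio gw/MW quoted in the abstract.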
Standardization of a method to study angiogenesis in a mouse model.
Feder, David; Perrazo, Fabio F; Pereira, Edimar C; Forsait, Silvana; Feder, Cecília K R; Junqueira, Paulo E B; Junqueira, Virginia B C; Azzalis, Ligia A; Fonseca, Fernando L A
2013-01-01
In the adult organism, angiogenesis is restricted to a few physiological conditions. On the other hand, uncontrolled angiogenesis has often been associated with angiogenesis-dependent pathologies. A variety of animal models have been described to provide more quantitative analysis of in vivo angiogenesis and to characterize pro- and antiangiogenic molecules. However, it is still necessary to establish a quantitative, reproducible and specific method for studying angiogenesis factors and inhibitors. This work aimed to standardize a method for the study of angiogenesis and to investigate the effects of thalidomide on angiogenesis. Sponges of 0.5 x 0.5 x 0.5 cm were implanted in the backs of two groups of mice, control and experimental (thalidomide 200 mg/kg/day by gavage). After seven days, the sponges were removed. The hemoglobin content in the sponge and in circulation was measured, and the ratio between the values was compared using the nonparametric Mann-Whitney test. Results showed that sponge-induced angiogenesis, quantified by the ratio between hemoglobin content in serum and in sponge, is a helpful model for in vivo studies of angiogenesis. Moreover, it was observed that sponge-induced angiogenesis can be suppressed by thalidomide, corroborating the validity of the standardized method.
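The group comparison described in this abstract can be sketched with SciPy's Mann-Whitney U test. The hemoglobin-ratio values and group sizes below are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical sponge/serum hemoglobin ratios for six mice per group
# (illustrative numbers only, not data from the study).
control = np.array([0.42, 0.55, 0.48, 0.61, 0.50, 0.46])
thalidomide = np.array([0.21, 0.30, 0.18, 0.27, 0.24, 0.33])

# Two-sided nonparametric comparison of the two groups
stat, p = mannwhitneyu(control, thalidomide, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

With small samples and no ties, SciPy computes the exact null distribution, which is appropriate for group sizes like those in this kind of experiment.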
Extension of Standard Model in Multi-spinor Field Formalism - Visible and Dark Sectors
NASA Astrophysics Data System (ADS)
Sogami, Ikuo S.
With multi-spinor fields which behave as triple-tensor products of the Dirac spinors, the Standard Model is extended so as to embrace three families of ordinary quarks and leptons in the visible sector and an additional family of exotic quarks and leptons in the dark sector of our Universe. Apart from the gauge and Higgs fields of the Standard Model symmetry G, new gauge and Higgs fields of a symmetry isomorphic to G are postulated to exist in the dark sector. It is the bi-quadratic interaction between visible and dark Higgs fields that opens a main portal to the dark sector. Breakdowns of the visible and dark electroweak symmetries result in the Higgs boson with mass 125 GeV and a new boson which can be related to the diphoton excess around 750 GeV. Subsequent to a common inflationary phase and a reheating period, the visible and dark sectors follow weakly-interacting paths of thermal histories. We propose scenarios for dark matter in which no dark nuclear reaction takes place. A candidate for the main component of the dark matter is a stable dark hadron with spin 3/2, and the upper limit of its mass is estimated to be 15.1 GeV/c2.
Standardization of an experimental model of human taeniosis for oral vaccination.
León-Cabrera, Sonia; Cruz-Rivera, Mayra; Mendlovic, Fela; Avila-Ramírez, Guillermina; Carrero, Julio César; Laclette, Juan Pedro; Flisser, Ana
2009-12-01
Neurocysticercosis in humans is caused by the tapeworm Taenia solium and generates substantial morbidity in Latin America, Africa and Asia. The life cycle of T. solium includes pigs as intermediate hosts and human beings as definitive hosts. Tapeworm carriers are the main risk factor for acquiring cysticercosis in the household, thus prevention and control programs are being developed. Infected people have no symptoms and are therefore difficult to identify and treat, so vaccination against the adult tapeworm is an alternative control measure. Since the infection occurs naturally only in human beings, experimental models have been standardized. Hamsters are believed to be good models to study the infection, but they have not been properly evaluated for vaccination. Since taeniosis is acquired by ingesting pork meat containing cysticerci, oral vaccination was evaluated; and given that intestinal immunity is enhanced by adjuvants, cholera toxin was used because it is one of the most potent adjuvants, increasing epithelial permeability and enhancing entrance of the co-administered unrelated antigens. Recombinant functional T. solium calreticulin was employed for the standardization of the methodology and the evaluation of oral vaccination. Protection was associated with the type of cysticerci and the age of the hamsters used. When bigger, reddish parasites were orally introduced into hamsters as challenge, protection was around 40%, while when small, yellowish parasites were used, protection increased to 100%, suggesting that the characteristics of the cysticerci are determinant. Protection was achieved in 9-month-old hamsters, but not in 3-month-old animals.
Search for standard model Higgs boson production in association with a W boson at CDF
NASA Astrophysics Data System (ADS)
Aaltonen, T.; Adelman, J.; Akimoto, T.; Albrow, M. G.; Álvarez González, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Apresyan, A.; Arisawa, T.; Artikov, A.; Ashmanskas, W.; Attal, A.; Aurisano, A.; Azfar, F.; Azzurri, P.; Badgett, W.; Barbaro-Galtieri, A.; Barnes, V. E.; Barnett, B. A.; Bartsch, V.; Bauer, G.; Beauchemin, P.-H.; Bedeschi, F.; Bednar, P.; Beecher, D.; Behari, S.; Bellettini, G.; Bellinger, J.; Benjamin, D.; Beretvas, A.; Beringer, J.; Bhatti, A.; Binkley, M.; Bisello, D.; Bizjak, I.; Blair, R. E.; Blocker, C.; Blumenfeld, B.; Bocci, A.; Bodek, A.; Boisvert, V.; Bolla, G.; Bortoletto, D.; Boudreau, J.; Boveia, A.; Brau, B.; Bridgeman, A.; Brigliadori, L.; Bromberg, C.; Brubaker, E.; Budagov, J.; Budd, H. S.; Budd, S.; Burkett, K.; Busetto, G.; Bussey, P.; Buzatu, A.; Byrum, K. L.; Cabrera, S.; Calancha, C.; Campanelli, M.; Campbell, M.; Canelli, F.; Canepa, A.; Carlsmith, D.; Carosi, R.; Carrillo, S.; Carron, S.; Casal, B.; Casarsa, M.; Castro, A.; Catastini, P.; Cauz, D.; Cavaliere, V.; Cavalli-Sforza, M.; Cerri, A.; Cerrito, L.; Chang, S. H.; Chen, Y. C.; Chertok, M.; Chiarelli, G.; Chlachidze, G.; Chlebana, F.; Cho, K.; Chokheli, D.; Chou, J. P.; Choudalakis, G.; Chuang, S. H.; Chung, K.; Chung, W. H.; Chung, Y. S.; Ciobanu, C. I.; Ciocci, M. A.; Clark, A.; Clark, D.; Compostella, G.; Convery, M. E.; Conway, J.; Copic, K.; Cordelli, M.; Cortiana, G.; Cox, D. J.; Crescioli, F.; Cuenca Almenar, C.; Cuevas, J.; Culbertson, R.; Cully, J. C.; Datta, M.; Davies, T.; de Barbaro, P.; de Cecco, S.; Deisher, A.; de Lorenzo, G.; Dell'Orso, M.; Deluca, C.; Demortier, L.; Deng, J.; Deninno, M.; Derwent, P. F.; di Giovanni, G. P.; Dionisi, C.; di Ruzza, B.; Dittmann, J. R.; D'Onofrio, M.; Donati, S.; Dong, P.; Donini, J.; Dorigo, T.; Dube, S.; Efron, J.; Elagin, A.; Erbacher, R.; Errede, D.; Errede, S.; Eusebi, R.; Fang, H. C.; Farrington, S.; Fedorko, W. T.; Feild, R. G.; Feindt, M.; Fernandez, J. 
P.; Ferrazza, C.; Field, R.; Flanagan, G.; Forrest, R.; Franklin, M.; Freeman, J. C.; Furic, I.; Gallinaro, M.; Galyardt, J.; Garberson, F.; Garcia, J. E.; Garfinkel, A. F.; Genser, K.; Gerberich, H.; Gerdes, D.; Gessler, A.; Giagu, S.; Giakoumopoulou, V.; Giannetti, P.; Gibson, K.; Gimmell, J. L.; Ginsburg, C. M.; Giokaris, N.; Giordani, M.; Giromini, P.; Giunta, M.; Giurgiu, G.; Glagolev, V.; Glenzinski, D.; Gold, M.; Goldschmidt, N.; Golossanov, A.; Gomez, G.; Gomez-Ceballos, G.; Goncharov, M.; González, O.; Gorelov, I.; Goshaw, A. T.; Goulianos, K.; Gresele, A.; Grinstein, S.; Grosso-Pilcher, C.; Group, R. C.; Grundler, U.; Guimaraes da Costa, J.; Gunay-Unalan, Z.; Haber, C.; Hahn, K.; Hahn, S. R.; Halkiadakis, E.; Han, B.-Y.; Han, J. Y.; Handler, R.; Happacher, F.; Hara, K.; Hare, D.; Hare, M.; Harper, S.; Harr, R. F.; Harris, R. M.; Hartz, M.; Hatakeyama, K.; Hauser, J.; Hays, C.; Heck, M.; Heijboer, A.; Heinemann, B.; Heinrich, J.; Henderson, C.; Herndon, M.; Heuser, J.; Hewamanage, S.; Hidas, D.; Hill, C. S.; Hirschbuehl, D.; Hocker, A.; Hou, S.; Houlden, M.; Hsu, S.-C.; Huffman, B. T.; Hughes, R. E.; Husemann, U.; Huston, J.; Incandela, J.; Introzzi, G.; Iori, M.; Ivanov, A.; James, E.; Jayatilaka, B.; Jeon, E. J.; Jindariani, S.; Johnson, W.; Jones, M.; Joo, K. K.; Jun, S. Y.; Jung, J. E.; Junk, T. R.; Kamon, T.; Kar, D.; Karchin, P. E.; Kato, Y.; Kephart, R.; Keung, J.; Khotilovich, V.; Kilminster, B.; Kim, D. H.; Kim, H. S.; Kim, J. E.; Kim, M. J.; Kim, S. B.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kirsch, L.; Klimenko, S.; Knuteson, B.; Ko, B. R.; Koay, S. A.; Kondo, K.; Kong, D. J.; Konigsberg, J.; Korytov, A.; Kotwal, A. V.; Kreps, M.; Kroll, J.; Krumnack, N.; Kruse, M.; Krutelyov, V.; Kubo, T.; Kuhr, T.; Kulkarni, N. P.; Kurata, M.; Kusakabe, Y.; Kwang, S.; Laasanen, A. T.; Lami, S.; Lammel, S.; Lancaster, M.; Lander, R. L.; Lannon, K.; Lath, A.; Latino, G.; Lazzizzera, I.; Lecompte, T.; Lee, E.; Lee, J.; Lee, Y. J.; Lee, S. 
W.; Leone, S.; Levy, S.; Lewis, J. D.; Lin, C. S.; Linacre, J.; Lindgren, M.; Lipeles, E.; Lister, A.; Litvintsev, D. O.; Liu, C.; Liu, T.; Lockyer, N. S.; Loginov, A.; Loreti, M.; Lovas, L.; Lu, R.-S.; Lucchesi, D.; Lueck, J.; Luci, C.; Lujan, P.; Lukens, P.; Lungu, G.; Lyons, L.; Lys, J.; Lysak, R.; Lytken, E.; Mack, P.; MacQueen, D.; Madrak, R.; Maeshima, K.; Makhoul, K.; Maki, T.; Maksimovic, P.; Malde, S.; Malik, S.; Manca, G.; Manousakis-Katsikakis, A.; Margaroli, F.; Marino, C.; Marino, C. P.; Martin, A.; Martin, V.; Martínez, M.; Martínez-Ballarín, R.; Maruyama, T.; Mastrandrea, P.; Masubuchi, T.; Mattson, M. E.; Mazzanti, P.; McFarland, K. S.; McIntyre, P.; McNulty, R.; Mehta, A.; Mehtala, P.; Menzione, A.; Merkel, P.; Mesropian, C.; Miao, T.; Miladinovic, N.; Miller, R.; Mills, C.; Milnik, M.; Mitra, A.; Mitselmakher, G.; Miyake, H.; Moggi, N.; Moon, C. S.; Moore, R.; Morello, M. J.; Morlok, J.; Movilla Fernandez, P.; Mülmenstädt, J.; Mukherjee, A.; Muller, Th.; Mumford, R.; Murat, P.; Mussini, M.; Nachtman, J.; Nagai, Y.; Nagano, A.; Naganoma, J.; Nakamura, K.; Nakano, I.; Napier, A.; Necula, V.; Neu, C.; Neubauer, M. S.; Nielsen, J.; Nodulman, L.; Norman, M.; Norniella, O.; Nurse, E.; Oakes, L.; Oh, S. H.; Oh, Y. D.; Oksuzian, I.; Okusawa, T.; Orava, R.; Osterberg, K.; Pagan Griso, S.; Pagliarone, C.; Palencia, E.; Papadimitriou, V.; Papaikonomou, A.; Paramonov, A. A.; Parks, B.; Pashapour, S.; Patrick, J.; Pauletta, G.; Paulini, M.; Paus, C.; Pellett, D. E.; Penzo, A.; Phillips, T. 
J.; Piacentino, G.; Pianori, E.; Pinera, L.; Pitts, K.; Plager, C.; Pondrom, L.; Poukhov, O.; Pounder, N.; Prakoshyn, F.; Pronko, A.; Proudfoot, J.; Ptohos, F.; Pueschel, E.; Punzi, G.; Pursley, J.; Rademacker, J.; Rahaman, A.; Ramakrishnan, V.; Ranjan, N.; Redondo, I.; Reisert, B.; Rekovic, V.; Renton, P.; Rescigno, M.; Richter, S.; Rimondi, F.; Ristori, L.; Robson, A.; Rodrigo, T.; Rodriguez, T.; Rogers, E.; Rolli, S.; Roser, R.; Rossi, M.; Rossin, R.; Roy, P.; Ruiz, A.; Russ, J.; Rusu, V.; Saarikko, H.; Safonov, A.; Sakumoto, W. K.; Saltó, O.; Santi, L.; Sarkar, S.; Sartori, L.; Sato, K.; Savoy-Navarro, A.; Scheidle, T.; Schlabach, P.; Schmidt, A.; Schmidt, E. E.; Schmidt, M. A.; Schmidt, M. P.; Schmitt, M.; Schwarz, T.; Scodellaro, L.; Scott, A. L.; Scribano, A.; Scuri, F.; Sedov, A.; Seidel, S.; Seiya, Y.; Semenov, A.; Sexton-Kennedy, L.; Sfyrla, A.; Shalhout, S. Z.; Shears, T.; Shepard, P. F.; Sherman, D.; Shimojima, M.; Shochet, M.; Shon, Y.; Shreyber, I.; Sidoti, A.; Sinervo, P.; Sisakyan, A.; Slaughter, A. J.; Slaunwhite, J.; Sliwa, K.; Smith, J. R.; Snider, F. D.; Snihur, R.; Soha, A.; Somalwar, S.; Sorin, V.; Spalding, J.; Spreitzer, T.; Squillacioti, P.; Stanitzki, M.; St. Denis, R.; Stelzer, B.; Stelzer-Chilton, O.; Stentz, D.; Strologas, J.; Stuart, D.; Suh, J. S.; Sukhanov, A.; Suslov, I.; Suzuki, T.; Taffard, A.; Takashima, R.; Takeuchi, Y.; Tanaka, R.; Tecchio, M.; Teng, P. K.; Terashi, K.; Thom, J.; Thompson, A. S.; Thompson, G. A.; Thomson, E.; Tipton, P.; Tiwari, V.; Tkaczyk, S.; Toback, D.; Tokar, S.; Tollefson, K.; Tomura, T.; Tonelli, D.; Torre, S.; Torretta, D.; Totaro, P.; Tourneur, S.; Tu, Y.; Turini, N.; Ukegawa, F.; Vallecorsa, S.; van Remortel, N.; Varganov, A.; Vataga, E.; Vázquez, F.; Velev, G.; Vellidis, C.; Veszpremi, V.; Vidal, M.; Vidal, R.; Vila, I.; Vilar, R.; Vine, T.; Vogel, M.; Volobouev, I.; Volpi, G.; Würthwein, F.; Wagner, P.; Wagner, R. G.; Wagner, R. L.; Wagner-Kuhr, J.; Wagner, W.; Wakisaka, T.; Wallny, R.; Wang, S. 
M.; Warburton, A.; Waters, D.; Weinberger, M.; Wester, W. C., III; Whitehouse, B.; Whiteson, D.; Wicklund, A. B.; Wicklund, E.; Williams, G.; Williams, H. H.; Wilson, P.; Winer, B. L.; Wittich, P.; Wolbers, S.; Wolfe, C.; Wright, T.; Wu, X.; Wynne, S. M.; Yagil, A.; Yamamoto, K.; Yamaoka, J.; Yamashita, T.; Yang, U. K.; Yang, Y. C.; Yao, W. M.; Yeh, G. P.; Yoh, J.; Yorita, K.; Yoshida, T.; Yu, G. B.; Yu, I.; Yu, S. S.; Yun, J. C.; Zanello, L.; Zanetti, A.; Zaw, I.; Zhang, X.; Zheng, Y.; Zucchelli, S.
2008-08-01
We present a search for standard model Higgs boson production in association with a W boson in proton-antiproton collisions (p p̄ → W±H → ℓν b b̄) at a center-of-mass energy of 1.96 TeV. The search employs data collected with the CDF II detector, corresponding to an integrated luminosity of approximately 1 fb⁻¹. We select events consistent with a signature of a single lepton (e±/μ±), missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with a secondary-vertex tagging method and a neural-network filter technique. The observed number of events and the dijet mass distributions are consistent with the standard model background expectations, and we set 95% confidence level upper limits on the production cross section times branching ratio ranging from 3.9 to 1.3 pb for Higgs boson masses from 110 to 150 GeV/c2, respectively.
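The quoted 95% confidence level limits come from CDF's full statistical machinery. As an illustration of the underlying idea only, a toy single-bin Poisson counting limit can be computed as below; this is a simple classical construction, not the procedure actually used by the collaboration.

```python
import math

def pois_cdf(n, lam):
    # P(N <= n) for a Poisson variable with mean lam
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.95, step=0.001):
    # Classical counting-experiment limit: smallest signal mean s with
    # P(N <= n_obs | s + bkg) <= 1 - cl
    s = 0.0
    while pois_cdf(n_obs, s + bkg) > 1 - cl:
        s += step
    return s

# Zero observed events over zero background reproduces the textbook
# 95% CL limit of about 3 signal events
print(round(upper_limit(0, 0.0), 2))
```

Dividing such an event-count limit by luminosity, efficiency, and branching ratio is what turns it into a cross-section-times-branching-ratio limit of the kind quoted in the abstract.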
The neutralino sector in the U(1)-extended supersymmetric Standard Model
NASA Astrophysics Data System (ADS)
Choi, S. Y.; Haber, H. E.; Kalinowski, J.; Zerwas, P. M.
2007-08-01
Motivated by grand unified theories and string theories, we analyze the general structure of the neutralino sector in the USSM, an extension of the minimal supersymmetric Standard Model that involves a broken extra U(1) gauge symmetry. This supersymmetric U(1)-extended model includes an Abelian gauge superfield and a Higgs singlet superfield in addition to the standard gauge and Higgs superfields of the MSSM. The interactions between the MSSM fields and the new fields are in general weak and the mixing is small, so that the coupling of the two subsystems can be treated perturbatively. As a result, the mass spectrum and mixing matrix in the neutralino sector can be analyzed analytically and the structure of this 6-state system is under good theoretical control. We describe the decay modes of the new states and the impact of this extension on decays of the original MSSM neutralinos, including radiative transitions in cross-over zones. Production channels in cascade decays at the LHC and pair production at e+e- colliders are also discussed.
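The claim that weak mixing lets the 6-state system be treated perturbatively can be illustrated with a toy mass matrix: a 4x4 block and a 2x2 block coupled by a small off-diagonal entry. All mass parameters and the mixing strength below are invented for illustration and carry no physical meaning.

```python
import numpy as np

# Illustrative (not physical) mass parameters, in GeV
M4 = np.diag([100.0, 200.0, 300.0, 400.0])  # stands in for the 4x4 MSSM neutralino block
M2 = np.diag([600.0, 800.0])                # stands in for the new U(1)/singlino block
eps = 5.0                                   # weak coupling between the two subsystems

# Assemble the symmetric 6x6 mass matrix with uniform inter-block mixing
M = np.zeros((6, 6))
M[:4, :4] = M4
M[4:, 4:] = M2
M[:4, 4:] = eps
M[4:, :4] = eps

exact = np.sort(np.linalg.eigvalsh(M))
block = np.sort(np.concatenate([np.diag(M4), np.diag(M2)]))

# Second-order perturbation theory predicts shifts of order
# eps^2 / (mass splitting) ~ 25/200 GeV here, i.e. well below 1 GeV
print(np.max(np.abs(exact - block)))
```

The exact eigenvalues differ from the unmixed block masses only at the eps²/Δm level, which is the sense in which the spectrum of such a weakly coupled system stays under perturbative control.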
Search for the Standard Model Higgs Boson in associated production with w boson at the Tevatron
Chun, Xu
2009-11-01
A search for the Standard Model Higgs boson in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV at the Tevatron is presented in this dissertation. The process of interest is the associated production of a W boson and a Higgs boson, with the W boson decaying leptonically and the Higgs boson decaying into a pair of bottom quarks. The dataset used in the analysis was accumulated by the D0 detector from April 2002 to April 2008 and corresponds to an integrated luminosity of 2.7 fb^{-1}. Events are reconstructed and selected following the criteria of an isolated lepton, missing transverse energy, and two jets. The D0 Neural Network b-jet identification algorithm is further used to discriminate b jets from light jets. A multivariate analysis combining Matrix Element and Neural Network methods is explored to improve the Higgs boson signal significance. No evidence of the Higgs boson is observed in this analysis. Consequently, an observed (expected) limit on the ratio of σ(pp̄ → WH) × Br(H → bb̄) to the Standard Model prediction is set at 6.7 (6.4) at 95% C.L. for a Higgs boson mass of 115 GeV.