Sample records for standard model fields

  1. Partially composite particle physics with and without supersymmetry

    NASA Astrophysics Data System (ADS)

    Kramer, Thomas A.

    Theories in which the Standard Model fields are partially composite provide elegant and phenomenologically viable solutions to the Hierarchy Problem. In this thesis we study such models from two different perspectives. We first derive an effective field theory, based on two weakly coupled sectors, describing the interactions of the Standard Model fields with their lightest composite partners. Technically, via the AdS/CFT correspondence, our model is dual to a highly deconstructed theory with a single warped extra dimension. This two-sector theory provides a simplified approach to the phenomenology of this important class of theories. We then use this effective field theoretic approach to study models with weak-scale accidental supersymmetry. In particular, we investigate the possibility that the Standard Model Higgs field is a member of a composite supersymmetric sector interacting weakly with the known Standard Model fields.

  2. Supersymmetric preons and the standard model

    NASA Astrophysics Data System (ADS)

    Raitio, Risto

    2018-06-01

    The experimental fact that Standard Model superpartners have not been observed compels one to consider an alternative implementation of supersymmetry. The basic supermultiplet proposed here consists of a photon and a charged spin-1/2 preon field, together with their superpartners. These fields are shown to yield the Standard Model fermions, Higgs fields, and gauge symmetries. Supersymmetry is defined for unbound preons only. Quantum group SLq(2) representations are introduced to topologically classify scalars, preons, quarks, and leptons.

  3. Dimensional reduction of the Standard Model coupled to a new singlet scalar field

    NASA Astrophysics Data System (ADS)

    Brauner, Tomáš; Tenkanen, Tuomas V. I.; Tranberg, Anders; Vuorinen, Aleksi; Weir, David J.

    2017-03-01

    We derive an effective dimensionally reduced theory for the Standard Model augmented by a real singlet scalar. We treat the singlet as a superheavy field and integrate it out, leaving an effective theory involving only the Higgs and SU(2)_L × U(1)_Y gauge fields, identical to the one studied previously for the Standard Model. This opens up the possibility of efficiently computing the order and strength of the electroweak phase transition, numerically and nonperturbatively, in this extension of the Standard Model. Understanding the phase diagram is crucial for models of electroweak baryogenesis and for studying the production of gravitational waves at thermal phase transitions.
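
    A generic way to see what is being integrated out: in a real-singlet extension the tree-level scalar potential takes the form below (notation illustrative; the paper's conventions may differ):

```latex
V(\phi,\sigma) = -\mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2
  + \tfrac{1}{2} m_\sigma^2\,\sigma^2 + \tfrac{1}{3} b_3\,\sigma^3 + \tfrac{1}{4} b_4\,\sigma^4
  + \tfrac{1}{2} a_1\,\phi^\dagger\phi\,\sigma + \tfrac{1}{2} a_2\,\phi^\dagger\phi\,\sigma^2
```

    When m_σ is much larger than the other scales, σ can be eliminated via its equation of motion, leaving only shifted Higgs-sector couplings, which is why the resulting dimensionally reduced theory has the same form as the pure Standard Model one.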

  4. Functional Competency Development Model for Academic Personnel Based on International Professional Qualification Standards in Computing Field

    ERIC Educational Resources Information Center

    Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon

    2016-01-01

    This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…

  5. Inflation in the mixed Higgs-R2 model

    NASA Astrophysics Data System (ADS)

    He, Minxi; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2018-05-01

    We analyze a two-field inflationary model consisting of the Ricci scalar squared (R2) term and the standard Higgs field non-minimally coupled to gravity, in addition to the Einstein R term. A detailed analysis of the power spectrum of this model with mass hierarchy is presented, and we find that one can describe this model as an effective single-field model in the slow-roll regime with a modified sound speed. The scalar spectral index predicted by this model coincides with those given by R2 inflation and Higgs inflation, implying that there is a close relation between this model and R2 inflation already in the original (Jordan) frame. For a typical value of the self-coupling of the standard Higgs field at the high energy scale of inflation, the role of the Higgs field in the relevant parameter space is to modify the scalaron mass, so that the original mass parameter in R2 inflation can deviate from its standard value when the non-minimal coupling between the Ricci scalar and the Higgs field is large enough.
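
    Schematically, the Jordan-frame action in this class of models combines three pieces (a standard way of writing mixed Higgs-R² inflation; coefficients and sign conventions here are illustrative, not taken from the paper):

```latex
S = \int d^4x\,\sqrt{-g}\left[\frac{M_P^2}{2}R + \frac{1}{12 M^2}R^2
    + \frac{\xi}{2}h^2 R - \frac{1}{2}(\partial h)^2 - \frac{\lambda}{4}h^4\right]
```

    The R term is the Einstein piece, the R² term carries the scalaron with mass parameter M, and ξ is the non-minimal Higgs coupling; integrating out the heavy direction yields the effective single-field description with modified sound speed mentioned in the abstract.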

  6. Holomorphy without supersymmetry in the Standard Model Effective Field Theory

    DOE PAGES

    Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.

    2014-12-12

    The anomalous dimensions of dimension-six operators in the Standard Model Effective Field Theory (SMEFT) respect holomorphy to a large extent. Holomorphy conditions are reminiscent of supersymmetry, even though the SMEFT is not a supersymmetric theory.

  7. Existence of standard models of conic fibrations over non-algebraically-closed fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avilov, A A

    2014-12-31

    We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.

  8. Darkflation-One scalar to rule them all?

    NASA Astrophysics Data System (ADS)

    Lalak, Zygmunt; Nakonieczny, Łukasz

    2017-03-01

    The problem of explaining both inflationary and dark matter physics in the framework of a minimal extension of the Standard Model was investigated. To this end, the Standard Model completed by a real scalar singlet playing the role of the dark matter candidate has been considered. We assumed both the dark matter field and the Higgs doublet to be nonminimally coupled to gravity. Using quantum field theory in curved spacetime, we derived an effective action for the inflationary period and analyzed its consequences. In this approach, after integrating out both the dark matter and Standard Model sectors, we obtained an effective action expressed purely in terms of the gravitational field. We paid special attention to determining, by explicit calculation, the form of the coefficients controlling the higher-order-in-curvature gravitational terms. Their connection to the Standard Model coupling constants has been discussed.

  9. About non standard Lagrangians in cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimitrijevic, Dragoljub D.; Milosevic, Milan

    A review of non-standard Lagrangians present in modern cosmological models is given. A well-known example of a non-standard Lagrangian is the Dirac-Born-Infeld (DBI) type Lagrangian for a tachyon field. Another type of non-standard Lagrangian under consideration contains a scalar field describing the open p-adic string tachyon and is called the p-adic string theory Lagrangian. We investigate homogeneous cases of both DBI and p-adic fields and obtain Lagrangians of the standard type that have the same equations of motion as the aforementioned non-standard ones.

  10. Electroweak baryogenesis and the standard model effective field theory

    NASA Astrophysics Data System (ADS)

    de Vries, Jordy; Postma, Marieke; van de Vis, Jorinde; White, Graham

    2018-01-01

    We investigate electroweak baryogenesis within the framework of the Standard Model Effective Field Theory. The Standard Model Lagrangian is supplemented by dimension-six operators that facilitate a strong first-order electroweak phase transition and provide sufficient CP violation. Two explicit scenarios are studied that are related via the classical equations of motion and are therefore identical at leading order in the effective field theory expansion. We demonstrate that formally higher-order dimension-eight corrections lead to large modifications of the matter-antimatter asymmetry. The effective field theory expansion breaks down in the modified Higgs sector due to the requirement of a first-order phase transition. We investigate the source of the breakdown in detail and show how it is transferred to the CP-violating sector. We briefly discuss possible modifications of the effective field theory framework.
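
    The textbook example of such a dimension-six modification (not necessarily the exact operator set used in the paper) augments the Higgs potential as:

```latex
V(h) = -\frac{\mu^2}{2}h^2 + \frac{\lambda}{4}h^4 + \frac{1}{8\Lambda^2}h^6
```

    For suitable Λ the h⁶ term stabilizes the potential even for negative λ, producing the barrier between minima that makes the electroweak phase transition strongly first order; the low Λ this requires is also the origin of the breakdown of the effective field theory expansion discussed in the abstract.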

  11. Extended spin symmetry and the standard model

    NASA Astrophysics Data System (ADS)

    Besprosvany, J.; Romero, R.

    2010-12-01

    We review unification ideas and explain the spin-extended model in this context. Its consideration is also motivated by the standard-model puzzles. With the aim of constructing a common description of discrete degrees of freedom, such as spin and gauge quantum numbers, the model departs from q-bits and generalized Hilbert spaces. Physical requirements reduce the space to one that is represented by matrices. The classification of the representations is performed through Clifford algebras, with their generators associated with Lorentz and scalar symmetries. We study a reduced space with up to two spinor elements within a matrix direct product. At a given dimension, the demand that Lorentz symmetry be maintained determines the scalar symmetries, which connect to vector- and chiral-gauge-interacting fields; we review the standard-model information in each dimension. We obtain fermions and bosons, with matter fields in the fundamental representation, radiation fields in the adjoint, and scalar particles with the Higgs quantum numbers. We relate the fields' representation in such spaces to the quantum-field-theory one, and to the Lagrangian. The model provides a coupling-constant definition.

  12. The scientifically substantiated art of teaching: A study in the development of standards in the new academic field of neuroeducation (mind, brain, and education science)

    NASA Astrophysics Data System (ADS)

    Tokuhama-Espinosa, Tracey Noel

    Concepts from neuroeducation, commonly referred to in the popular press as "brain-based learning," have been applied indiscriminately and inconsistently to classroom teaching practices for many years. While standards exist in neurology, psychology, and pedagogy, there are no agreed-upon standards in their intersection, neuroeducation, and a formal bridge linking the fields is missing. This study used grounded theory development to determine the parameters of the emerging neuroeducational field based on a meta-analysis of the literature over the past 30 years, which included over 2,200 documents. This research results in a new model for neuroeducation. The design of the new model was followed by a Delphi survey of 20 international experts from six different countries that further refined the model's contents over several months of reflection. Finally, the revised model was compared to existing information sources, including the popular press, peer-reviewed journals, academic publications, teacher training textbooks, and the Internet, to determine to what extent standards in neuroeducation are met in the current literature. This study determined that standards in the emerging field, now labeled Mind, Brain, and Education: The Science of Teaching and Learning after the Delphi rounds, are the union of standards in the parent fields of neuroscience, psychology, and education. Additionally, the Delphi expert panel agreed upon the goals of the new discipline, its history, the thought leaders, and a model for judging quality information. The study culminated in a new model of the academic discipline of Mind, Brain, and Education science, which explains the tenets, principles, and instructional guidelines supported by the meta-analysis of the literature and the Delphi responses.

  13. European standardization effort: interworking the goal

    NASA Astrophysics Data System (ADS)

    Mattheus, Rudy A.

    1993-09-01

    Within the European Standardization Committee (CEN), the technical committee responsible for standardization activities in medical informatics (CEN TC 251) has agreed upon the directions to follow in this field. They are described in the Directory of the European Standardization Requirements for Healthcare Informatics and Programme for the Development of Standards, adopted on 02-28-1991 by CEN/TC 251 and approved by CEN/BT. Top-down objectives describe the common framework and items like terminology and security, while more bottom-up oriented items cover fields like medical imaging and multimedia. The draft standard is described, including the general framework model and object-oriented model, the interworking aspects, the relation to ISO standards, and the DICOM proposal. This paper also focuses on the boundaries in the standardization work, which also influence the standardization process.

  14. LATERAL OFFSET OF THE CORONAL MASS EJECTIONS FROM THE X-FLARE OF 2006 DECEMBER 13 AND ITS TWO PRECURSOR ERUPTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterling, Alphonse C.; Moore, Ronald L.; Harra, Louise K., E-mail: alphonse.sterling@nasa.gov, E-mail: ron.moore@nasa.gov, E-mail: lkh@mssl.ucl.ac.uk

    2011-12-10

    Two GOES sub-C-class precursor eruptions occurred within ~10 hr prior to, and from the same active region as, the 2006 December 13 X4.3-class flare. Each eruption generated a coronal mass ejection (CME) with center laterally far offset (≳45°) from the co-produced bright flare. Explaining such CME-to-flare lateral offsets in terms of the standard model for solar eruptions has been controversial. Using Hinode/X-Ray Telescope (XRT) and EUV Imaging Spectrometer (EIS) data, and Solar and Heliospheric Observatory (SOHO)/Large Angle and Spectrometric Coronagraph (LASCO) and Michelson Doppler Imager (MDI) data, we find or infer the following. (1) The first precursor was a 'magnetic-arch-blowout' event, where an initial standard-model eruption of the active region's core field blew out a lobe on one side of the active region's field. (2) The second precursor began similarly, but the core-field eruption stalled in the side-lobe field, with the side-lobe field erupting ~1 hr later to make the CME either by finally being blown out or by destabilizing and undergoing a standard-model eruption. (3) The third eruption, the X-flare event, blew out side lobes on both sides of the active region and clearly displayed characteristics of the standard model. (4) The two precursors were offset due in part to the CME originating from a side-lobe coronal arcade that was offset from the active region's core. The main eruption (and to some extent probably the precursor eruptions) was offset primarily because it pushed against the field of the large sunspot as it escaped outward. (5) All three CMEs were plausibly produced by a suitable version of the standard model.

  15. A standard telemental health evaluation model: the time is now.

    PubMed

    Kramer, Greg M; Shore, Jay H; Mishkind, Matt C; Friedl, Karl E; Poropatich, Ronald K; Gahm, Gregory A

    2012-05-01

    The telehealth field has advanced historic promises to improve access, cost, and quality of care. However, the extent to which it is delivering on its promises is unclear, as the scientific evidence needed to justify success is still emerging. Many have identified the need to advance the scientific knowledge base to better quantify success. One method for advancing that knowledge base is a standard telemental health evaluation model. Telemental health is defined here as the provision of mental health services using live, interactive video-teleconferencing technology. Evaluation in the telemental health field largely consists of descriptive and small pilot studies, is often defined by the individual goals of the specific programs, and is typically focused on only one outcome. The field should adopt new evaluation methods that consider the co-adaptive interaction between users (patients and providers), healthcare costs and savings, and the rapid evolution in communication technologies. Acceptance of a standard evaluation model will improve perceptions of telemental health as an established field, promote development of a sounder empirical base, promote interagency collaboration, and provide a framework for more multidisciplinary research that integrates measuring the impact of the technology and the overall healthcare aspect. We suggest that consideration of a standard model is timely given the field's current stage of scientific progress. We broadly recommend some elements that such a standard evaluation model might include for telemental health and suggest a way forward for adopting it.

  16. The role of the global magnetic field and thermal conduction on the structure of the accretion disks of all models

    NASA Astrophysics Data System (ADS)

    Farahinezhad, M.; Khesali, A. R.

    2018-05-01

    In this paper, the effects of a global magnetic field and thermal conduction on the vertical structure of accretion disks have been investigated. Four types of disks were examined: the gas-pressure-dominated standard disk, the radiation-pressure-dominated standard disk, the ADAF disk, and the slim disk. Moreover, the general shape of the magnetic field, including toroidal and poloidal components, is considered. The magnetohydrodynamic equations were solved in spherical coordinates using self-similar assumptions in the radial direction. Following previous authors, the polar velocity vθ is non-zero and Trφ was considered the dominant component of the stress tensor. The results show that the disk becomes thicker compared to the non-magnetic case. It has also been shown that the presence of thermal conduction in the ADAF model makes the disk thicker, and that the disk is expanded in the standard model.

  17. Improved model of hydrated calcium ion for molecular dynamics simulations using classical biomolecular force fields.

    PubMed

    Yoo, Jejoong; Wilson, James; Aksimentiev, Aleksei

    2016-10-01

    Calcium ions (Ca2+) play key roles in various fundamental biological processes such as cell signaling and brain function. Molecular dynamics (MD) simulations have been used to study such interactions; however, the accuracy of the Ca2+ models provided by the standard MD force fields has not been rigorously tested. Here, we assess the performance of the Ca2+ models from the most popular classical force fields, AMBER and CHARMM, by computing the osmotic pressure of model compounds and the free energy of DNA-DNA interactions. In the simulations performed using the two standard models, Ca2+ ions are seen to form artificial clusters with chloride, acetate, and phosphate species; the osmotic pressure of CaAc2 and CaCl2 solutions is a small fraction of the experimental values for both force fields. Using the standard parameterization of Ca2+ ions in simulations of Ca2+-mediated DNA-DNA interactions leads to qualitatively wrong outcomes: both AMBER and CHARMM simulations suggest strong inter-DNA attraction whereas, in experiment, DNA molecules repel one another. The artificial attraction of Ca2+ to DNA phosphate is strong enough to affect the direction of the electric-field-driven translocation of DNA through a solid-state nanopore. To address these shortcomings of the standard Ca2+ model, we introduce a custom model of a hydrated Ca2+ ion and show that using our model brings the results of the above MD simulations into quantitative agreement with experiment. Our improved model of Ca2+ can be readily applied to MD simulations of various biomolecular systems, including nucleic acids, proteins, and lipid bilayer membranes. © 2016 Wiley Periodicals, Inc. Biopolymers 105: 752-763, 2016.
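
    The osmotic-pressure benchmark mentioned above is usually judged against a simple analytic reference. A minimal sketch, assuming only the ideal van 't Hoff law (the paper's MD values and experimental targets are not reproduced here; the numbers below are purely illustrative):

```python
# Ideal (van 't Hoff) osmotic pressure, the reference point against which
# force-field ion models are often benchmarked in osmotic-pressure tests.

R = 8.314462618  # J/(mol K), gas constant


def vant_hoff_pressure(molarity, ions_per_formula, temperature=298.15):
    """Ideal osmotic pressure in bar for a fully dissociated electrolyte.

    molarity          -- solute concentration in mol/L
    ions_per_formula  -- e.g. 3 for CaCl2 (one Ca2+ and two Cl-)
    """
    c = molarity * 1000.0                       # mol/L -> mol/m^3
    pressure_pa = ions_per_formula * c * R * temperature
    return pressure_pa / 1e5                    # Pa -> bar


# A 0.5 M CaCl2 solution dissociates into 3 ions per formula unit:
print(round(vant_hoff_pressure(0.5, 3), 2))    # -> 37.18
```

    Dividing an MD-computed osmotic pressure by this ideal value gives a dimensionless measure of how much artificial ion pairing a candidate Ca2+ model produces.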

  18. CFD Modeling of Flow, Temperature, and Concentration Fields in a Pilot-Scale Rotary Hearth Furnace

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Su, Fu-Yong; Wen, Zhi; Li, Zhi; Yong, Hai-Quan; Feng, Xiao-Hong

    2014-01-01

    A three-dimensional mathematical model for simulation of the flow, temperature, and concentration fields in a pilot-scale rotary hearth furnace (RHF) has been developed using the commercial computational fluid dynamics software FLUENT. The layer of composite pellets on the hearth is assumed to be a porous media layer, with a CO source and energy sink calculated by an independent mathematical model. User-defined functions are developed and linked to FLUENT to treat the reduction process of the layer of composite pellets. The standard k-ε turbulence model in combination with standard wall functions is used for modeling of gas flow. Turbulence-chemistry interaction is taken into account through the eddy-dissipation model. The discrete ordinates model is used for modeling of radiative heat transfer. A comparison is made between the predictions of the present model and the data from a test of the pilot-scale RHF, and reasonable agreement is found. Finally, the flow, temperature, and CO concentration fields in the furnace are investigated with the model.
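
    The turbulence closure named above can be made concrete with its defining relation. A minimal sketch of the standard k-ε eddy-viscosity formula (the model constant C_μ = 0.09 is the standard published value; nothing here is specific to the RHF simulation):

```python
# Standard k-epsilon closure: the RANS equations are closed with an eddy
# viscosity computed from the turbulent kinetic energy k and its
# dissipation rate eps, nu_t = C_mu * k^2 / eps.

C_MU = 0.09  # standard k-epsilon model constant


def eddy_viscosity(k, eps):
    """Kinematic eddy viscosity nu_t in m^2/s.

    k   -- turbulent kinetic energy (m^2/s^2)
    eps -- dissipation rate of k (m^2/s^3)
    """
    return C_MU * k * k / eps


# Example: k = 1.5 m^2/s^2, eps = 10 m^2/s^3
print(round(eddy_viscosity(1.5, 10.0), 5))  # -> 0.02025
```

    In a full solver, k and ε are themselves evolved by transport equations; near walls the standard wall functions mentioned in the abstract replace direct resolution of the boundary layer.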

  19. Gravity fields of the solar system

    NASA Technical Reports Server (NTRS)

    Zendell, A.; Brown, R. D.; Vincent, S.

    1975-01-01

    The most frequently used formulations of the gravitational field are discussed and a standard set of models for the gravity fields of the earth, moon, sun, and other massive bodies in the solar system are defined. The formulas are presented in standard forms, some with instructions for conversion. A point-source or inverse-square model, which represents the external potential of a spherically symmetrical mass distribution by a mathematical point mass without physical dimensions, is considered. An oblate spheroid model is presented, accompanied by an introduction to zonal harmonics. This spheroid model is generalized and forms the basis for a number of the spherical harmonic models which were developed for the earth and moon. The triaxial ellipsoid model is also presented. These models and their application to space missions are discussed.
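
    The first two rungs of the model hierarchy described above (the point-source model, then the oblate spheroid via the leading zonal harmonic) can be sketched numerically. The Earth constants below are standard reference values; the code is illustrative rather than mission-grade:

```python
import math

# Point-source (inverse-square) gravity plus the leading zonal-harmonic
# (J2, oblateness) term of the gravitational potential.

MU_EARTH = 3.986004418e14   # m^3/s^2, GM of Earth
R_EARTH = 6378137.0         # m, equatorial radius
J2_EARTH = 1.08263e-3       # dimensionless J2 zonal coefficient


def point_mass_accel(r):
    """Inverse-square gravitational acceleration magnitude (m/s^2)."""
    return MU_EARTH / r**2


def potential_with_j2(r, latitude_rad):
    """Gravitational potential including the J2 zonal term (m^2/s^2)."""
    s = math.sin(latitude_rad)
    zonal = 1.0 - J2_EARTH * (R_EARTH / r) ** 2 * (3.0 * s * s - 1.0) / 2.0
    return -MU_EARTH / r * zonal


# Surface gravity from the point-mass model alone:
print(round(point_mass_accel(R_EARTH), 3))  # -> 9.798
```

    Higher-fidelity spherical harmonic models extend the J2 term to a full series of zonal, sectoral, and tesseral coefficients, as described for the Earth and Moon models above.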

  20. Domain walls in the extensions of the Standard Model

    NASA Astrophysics Data System (ADS)

    Krajewski, Tomasz; Lalak, Zygmunt; Lewicki, Marek; Olszewski, Paweł

    2018-05-01

    Our main interest is the evolution of domain walls of the Higgs field in the early Universe. The aim of this paper is to understand how the dynamics of Higgs domain walls could be influenced by yet unknown interactions from beyond the Standard Model. We assume that the Standard Model is valid up to a certain high energy scale Λ and use the framework of effective field theory to describe physics below that scale. Performing numerical simulations with different values of the scale Λ, we are able to extend our previous analysis [1]. Our recent numerical simulations show that the evolution of Higgs domain walls is rather insensitive to interactions beyond the Standard Model as long as the masses of new particles are greater than 10^12 GeV. For lower values of Λ the RG-improved effective potential is strongly modified at field strengths crucial to the evolution of domain walls. However, we find that even for low values of Λ, Higgs domain walls decay shortly after their formation for generic initial conditions. On the other hand, in simulations with specifically chosen initial conditions Higgs domain walls can live longer and enter the scaling regime. We also determine the energy spectrum of gravitational waves produced by decaying domain walls of the Higgs field. For generic initial field configurations the amplitude of the signal is too small to be observed in planned detectors.

  1. Standard Model as a Double Field Theory.

    PubMed

    Choi, Kang-Sin; Park, Jeong-Hyuck

    2015-10-23

    We show that, without introducing any extra physical degrees of freedom, the standard model can be readily reformulated as a double field theory. Consequently, the standard model can couple to an arbitrary stringy gravitational background in an O(4,4) T-duality covariant manner and manifests two independent local Lorentz symmetries, Spin(1,3) × Spin(3,1). While the diagonal gauge fixing of the twofold spin groups leads to the conventional formulation on the flat Minkowskian background, the enhanced symmetry makes the standard model more rigid, and also more stringy, than it appeared. The CP-violating θ term may no longer be allowed by the symmetry, and hence the strong CP problem can be solved. There are now stronger constraints imposed on possible higher-order corrections. We speculate that the quarks and the leptons may belong to the two different spin classes.

  2. RECOLA2: REcursive Computation of One-Loop Amplitudes 2

    NASA Astrophysics Data System (ADS)

    Denner, Ansgar; Lang, Jean-Nicolas; Uccirati, Sandro

    2018-03-01

    We present the Fortran95 program RECOLA2 for the perturbative computation of next-to-leading-order transition amplitudes in the Standard Model of particle physics and extended Higgs sectors. New theories are implemented via model files in the 't Hooft-Feynman gauge, both in the conventional formulation of quantum field theory and in the background-field method. The present version includes model files for the Two-Higgs-Doublet Model and the Higgs-Singlet Extension of the Standard Model. We support standard renormalization schemes for the Standard Model as well as many commonly used renormalization schemes in extended Higgs sectors. Within these models the computation of next-to-leading-order polarized amplitudes and squared amplitudes, optionally summed over spin and colour, is fully automated for any process. RECOLA2 also allows the computation of the colour- and spin-correlated leading-order squared amplitudes needed in the dipole subtraction formalism. RECOLA2 is publicly available for download at http://recola.hepforge.org.

  3. Standard model effective field theory: Integrating out neutralinos and charginos in the MSSM

    NASA Astrophysics Data System (ADS)

    Han, Huayong; Huo, Ran; Jiang, Minyuan; Shu, Jing

    2018-05-01

    We apply the covariant derivative expansion method to integrate out the neutralinos and charginos in the minimal supersymmetric Standard Model. The results are presented as a set of purely bosonic dimension-six operators in the Standard Model effective field theory. Nontrivial chirality dependence in the fermionic covariant derivative expansion is discussed carefully. The results are checked by computing the h γ γ effective coupling and the electroweak oblique parameters, both within the Standard Model effective field theory using our effective operators and by direct loop calculation. In global fits using the proposed lepton-collider constraint projections, special phenomenological emphasis is placed on the gaugino mass unification scenario (M2 ≃ 2 M1) and the anomaly mediation scenario (M1 ≃ 3.3 M2). These results show that precision measurements at future lepton colliders will play a very useful complementary role in probing the electroweakino sector, in particular filling the gap left by traditional colliders in the soft-lepton-plus-missing-ET channel, where the neutralino as the lightest supersymmetric particle is nearly degenerate with the next-to-lightest chargino/neutralino.

  4. Leading-order classical Lagrangians for the nonminimal standard-model extension

    NASA Astrophysics Data System (ADS)

    Reis, J. A. A. S.; Schreck, M.

    2018-03-01

    In this paper, we derive the general leading-order classical Lagrangian covering all fermion operators of the nonminimal standard-model extension (SME). Such a Lagrangian is considered to be the point-particle analog of the effective field theory description of Lorentz violation that is provided by the SME. At leading order in Lorentz violation, the Lagrangian obtained satisfies the set of five nonlinear equations that govern the map from the field theory to the classical description. This result can be of use for phenomenological studies of classical bodies in gravitational fields.

  5. Developing the Precision Magnetic Field for the E989 Muon g−2 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Matthias W.

    The experimental value of (g−2)_μ historically has been, and contemporarily remains, an important probe of the Standard Model and proposed extensions. Previous measurements of (g−2)_μ exhibit a persistent statistical tension with calculations using the Standard Model, implying that the theory may be incomplete and constraining possible extensions. The Fermilab Muon g−2 experiment, E989, endeavors to increase the precision over previous experiments by a factor of four and probe more deeply into the tension with the Standard Model. The (g−2)_μ experimental implementation measures two spin precession frequencies defined by the magnetic field: proton precession and muon precession. The value of (g−2)_μ is derived from a relationship between the two frequencies. The precision of the magnetic field measurements and the overall magnetic field uniformity achieved over the muon storage volume are thus two undeniably important aspects of the experiment in minimizing uncertainty. This thesis details the methods employed to achieve the magnetic field goals and the results of that effort.
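
    The "relationship between the two frequencies" is commonly written as follows (this is the standard expression from the g−2 literature, given here as context rather than as a result of the thesis):

```latex
a_\mu = \frac{R}{\lambda - R}, \qquad
R \equiv \frac{\omega_a}{\omega_p}, \qquad
\lambda \equiv \frac{\mu_\mu}{\mu_p}
```

    Here ω_a is the muon anomalous spin precession frequency, ω_p the proton (NMR) precession frequency that measures the storage-ring field, and λ the muon-to-proton magnetic-moment ratio taken from external measurements; any error in the field measurement therefore propagates directly into a_μ.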

  6. New Angles on Standard Force Fields: Toward a General Approach for Treating Atomic-Level Anisotropy

    DOE PAGES

    Van Vleet, Mary J.; Misquitta, Alston J.; Schmidt, J. R.

    2017-12-21

    Nearly all standard force fields employ the "sum-of-spheres" approximation, which models intermolecular interactions purely in terms of interatomic distances. Nonetheless, atoms in molecules can have significantly nonspherical shapes, leading to interatomic interaction energies with strong orientation dependencies. Neglecting this "atomic-level anisotropy" can lead to significant errors in predicting interaction energies. Herein, we propose a simple, transferable, and computationally efficient model (MASTIFF) whereby atomic-level orientation dependence can be incorporated into ab initio intermolecular force fields. MASTIFF includes anisotropic exchange-repulsion, charge penetration, and dispersion effects, in conjunction with a standard treatment of anisotropic long-range (multipolar) electrostatics. To validate our approach, we benchmark MASTIFF against various sum-of-spheres models over a large library of intermolecular interactions between small organic molecules. MASTIFF achieves quantitative accuracy, with respect to both high-level electronic structure theory and experiment, thus showing promise as a basis for "next-generation" force field development.

  8. Dynamic mapping of EDDL device descriptions to OPC UA

    NASA Astrophysics Data System (ADS)

    Atta Nsiah, Kofi; Schappacher, Manuel; Sikora, Axel

    2017-07-01

    OPC UA (Open Platform Communications Unified Architecture) is already a well-known concept used widely in the automation industry. In the area of factory automation, OPC UA models the underlying field devices such as sensors and actuators in an OPC UA server to allow connecting OPC UA clients to access device-specific information via a standardized information model. One of the requirements of the OPC UA server to represent field device data using its information model is to have advanced knowledge about the properties of the field devices in the form of device descriptions. The international standard IEC 61804 specifies EDDL (Electronic Device Description Language) as a generic language for describing the properties of field devices. In this paper, the authors describe a possibility to dynamically map and integrate field device descriptions based on EDDL into OPCUA.
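    The mapping step described above can be sketched in miniature. The following Python sketch is purely illustrative: the function name `map_edd_to_opcua`, the dict fields, and the node attributes are invented for this example and are not taken from IEC 61804 or the OPC UA specification; it merely shows the shape of a device-description-to-information-model transformation.

```python
# Purely illustrative: map a toy, pre-parsed EDDL-like device description onto
# a simplified OPC UA-style node list. All names and fields here are invented
# for this sketch; they are not taken from IEC 61804 or the OPC UA spec.

def map_edd_to_opcua(edd):
    """Create one Object node for the device and one Variable node per EDDL variable."""
    nodes = [{"NodeClass": "Object", "BrowseName": edd["device"]}]
    for var in edd["variables"]:
        nodes.append({
            "NodeClass": "Variable",
            "BrowseName": var["name"],
            "DataType": var["type"],
            "ParentNode": edd["device"],
        })
    return nodes

edd = {
    "device": "TemperatureSensor",
    "variables": [
        {"name": "ProcessValue", "type": "Float"},
        {"name": "Units", "type": "String"},
    ],
}

nodes = map_edd_to_opcua(edd)  # one Object node plus two Variable nodes
```

    A dynamic mapper, as in the paper, would build such nodes at runtime from the parsed EDDL rather than from a hard-coded dict.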

  9. Phenomenology of the N = 3 Lee-Wick Standard Model

    NASA Astrophysics Data System (ADS)

    TerBeek, Russell Henry

    With the discovery of the Higgs Boson in 2012, particle physics has decidedly moved beyond the Standard Model into a new epoch. Though the Standard Model particle content is now completely accounted for, there remain many theoretical issues about the structure of the theory in need of resolution. Among these is the hierarchy problem: since the renormalized Higgs mass receives quadratic corrections from a higher cutoff scale, what keeps the Higgs boson light? Many possible solutions to this problem have been advanced, such as supersymmetry, Randall-Sundrum models, or sub-millimeter corrections to gravity. One such solution has been advanced by the Lee-Wick Standard Model. In this theory, higher-derivative operators are added to the Lagrangian for each Standard Model field, which result in propagators that possess two physical poles and fall off more rapidly in the ultraviolet regime. It can be shown by an auxiliary field transformation that the higher-derivative theory is identical to positing a second, manifestly renormalizable theory in which new fields with opposite-sign kinetic and mass terms are found. These so-called Lee-Wick fields have opposite-sign propagators, and famously cancel off the quadratic divergences that plague the renormalized Higgs mass. The states in the Hilbert space corresponding to Lee-Wick particles have negative norm, and implications for causality and unitarity are examined. This dissertation explores a variant of the theory called the N = 3 Lee-Wick Standard Model. The Lagrangian of this theory features a yet-higher derivative operator, which produces a propagator with three physical poles and possesses even better high-energy behavior than the minimal Lee-Wick theory. An analogous auxiliary field transformation takes this higher-derivative theory into a renormalizable theory with states of alternating positive, negative, and positive norm. 
The phenomenology of this theory is examined in detail, with particular emphasis on the collider signatures of Lee-Wick particles, electroweak precision constraints on the masses that the new particles can take on, and scenarios in early-universe cosmology in which Lee-Wick particles can play a significant role.
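    The mechanism described above can be made concrete in the minimal (two-pole) scalar case; the N = 3 variant adds a yet-higher derivative term and a third pole. Schematically (a hedged, conventions-light sketch), the higher-derivative propagator splits by partial fractions into an ordinary pole plus an opposite-sign Lee-Wick pole:

```latex
\frac{1}{p^{2} - p^{4}/M^{2}} \;=\; \frac{1}{p^{2}} \;-\; \frac{1}{p^{2} - M^{2}}
```

    The relative minus sign of the second term is the opposite-sign Lee-Wick propagator responsible for cancelling the quadratic divergences in the Higgs self-energy.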

  10. Balancing anisotropic curvature with gauge fields in a class of shear-free cosmological models

    NASA Astrophysics Data System (ADS)

    Thorsrud, Mikjel

    2018-05-01

    We present a complete list of general relativistic shear-free solutions in a class of anisotropic, spatially homogeneous and orthogonal cosmological models containing a collection of n independent p-form gauge fields, where p ∈ {0, 1, 2, 3}, in addition to standard ΛCDM matter fields modelled as perfect fluids. Here a (collection of) gauge field(s) balances anisotropic spatial curvature on the right-hand side of the shear propagation equation. The result is a class of solutions dynamically equivalent to standard FLRW cosmologies, with an effective curvature constant Keff that depends both on spatial curvature and the energy density of the gauge field(s). In the case of a single gauge field (n = 1) we show that the only spacetimes that admit such solutions are the LRS Bianchi type III, Bianchi type VI0 and Kantowski–Sachs metrics, which are dynamically equivalent to open (Keff < 0), flat (Keff = 0) and closed (Keff > 0) FLRW models, respectively. With a collection of gauge fields (n > 1) also Bianchi type II admits a shear-free solution (Keff > 0). We identify the LRS Bianchi type III solution to be the unique shear-free solution with a gauge field Hamiltonian bounded from below in the entire class of models.

  11. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find that the image encoding and receptive fields predicted by the models differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
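    The superposition assumption that separates the two models can be illustrated in one line of arithmetic. The sketch below is not the paper's generative model; it merely contrasts the linear (additive) combination assumed by sparse coding and ICA with a simple occlusive combination, modelled here as a pointwise maximum, for two overlapping one-dimensional "components".

```python
import numpy as np

# Not the paper's model: a minimal contrast between the linear (additive)
# superposition assumed by sparse coding/ICA and an occlusive combination,
# modelled here as a pointwise maximum, for two overlapping 1-D "components".

a = np.array([0.0, 1.0, 1.0, 0.0])      # component 1
b = np.array([0.0, 0.0, 1.0, 1.0])      # component 2, overlapping component 1

linear = a + b                # intensities add where components overlap
occlusive = np.maximum(a, b)  # the front component hides the one behind it
```

    In the overlap region the linear model doubles the intensity, while the occlusive model keeps it constant, which is why the two models infer different optimal receptive fields.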

  12. No Lee-Wick fields out of gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodigast, Andreas; Schuster, Theodor

    2009-06-15

    We investigate the gravitational one-loop divergences of the standard model in large extra dimensions, with gravitons propagating in the (4+δ)-dimensional bulk and gauge fields as well as scalar and fermionic multiplets confined to a three-brane. To determine the divergences we establish a cutoff regularization which allows us to extract gauge-invariant counterterms. In contrast to the claim of a recent paper [F. Wu and M. Zhong, Phys. Rev. D 78, 085010 (2008).], we show that the fermionic and scalar higher derivative counterterms do not coincide with the higher derivative terms in the Lee-Wick standard model. We argue that even if the exact Lee-Wick higher derivative terms were found, as in the case of the pure gauge sector, this would not allow one to conclude the existence of the massive ghost fields corresponding to these higher derivative terms in the Lee-Wick standard model.

  13. A New Non-gaussian Turbulent Wind Field Generator to Estimate Design-Loads of Wind-Turbines

    NASA Astrophysics Data System (ADS)

    Schaffarczyk, A. P.; Gontier, H.; Kleinhans, D.; Friedrich, R.

    Climate change and finite fossil-fuel resources make it urgent to shift electricity generation largely to renewable energies. A major role will be played by wind energy supplied by wind turbines with rated power up to 10 MW. For their design and development, wind field models have to be used. The standard models are based on empirical spectra, for example those of von Karman or Kaimal. Investigation of measured data makes clear that gusts are underrepresented in such models. Based on fundamental results on the nature of turbulence derived by Friedrich [1] directly from the Navier-Stokes equation, we used the concept of Continuous Time Random Walks to construct three-dimensional wind fields obeying non-Gaussian statistics. These wind fields were used to estimate the critical fatigue loads required within the certification process. Calculations were carried out with an implementation of a beam model (FLEX5) for two types of state-of-the-art wind turbines. The authors considered the edgewise and flapwise blade-root bending moments as well as the tilt moment at the tower top due to the standard wind field models and our new non-Gaussian wind field model. Clear differences in the loads were found.
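    As a hedged, self-contained illustration (not the authors' wind-field generator): subordinating Gaussian increments to random waiting times, the key idea behind Continuous Time Random Walks, already produces the heavy-tailed, gust-rich statistics that standard Gaussian spectra miss.

```python
import numpy as np

# Hedged toy illustration, not the authors' generator: Continuous Time Random
# Walk-like subordination of Gaussian increments to random waiting times
# fattens the tails of the increment distribution, i.e. produces more extreme
# "gusts" than a plain Gaussian wind model.
rng = np.random.default_rng(0)
n = 200_000

gaussian = rng.normal(0.0, 1.0, n)         # standard Gaussian increments

tau = rng.exponential(1.0, n)              # random "intrinsic time" steps
ctrw_like = rng.normal(0.0, np.sqrt(tau))  # increments stretched by waiting time

def excess_kurtosis(x):
    """Zero for a Gaussian; positive for heavy-tailed (gust-rich) statistics."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0
```

    The subordinated increments show strongly positive excess kurtosis, i.e. far more extreme events than a Gaussian model of the same variance, which is exactly the property relevant for fatigue-load estimation.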

  14. Duality linking standard and tachyon scalar field cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avelino, P. P.; Bazeia, D.; Losano, L.

    2010-09-15

    In this work we investigate the duality linking standard and tachyon scalar field homogeneous and isotropic cosmologies in N+1 dimensions. We determine the transformation between standard and tachyon scalar fields and between their associated potentials, corresponding to the same background evolution. We show that, in general, the duality is broken at a perturbative level, when deviations from a homogeneous and isotropic background are taken into account. However, we find that for slow-rolling fields the duality is still preserved at a linear level. We illustrate our results with specific examples of cosmological relevance, where the correspondence between scalar and tachyon scalar field models can be calculated explicitly.

  15. Development of Learning Models Based on Problem Solving and Meaningful Learning Standards by Expert Validity for Animal Development Course

    NASA Astrophysics Data System (ADS)

    Lufri, L.; Fitri, R.; Yogica, R.

    2018-04-01

    The purpose of this study is to produce a learning model based on problem-solving and meaningful-learning standards, validated by expert assessment, for the course Animal Development. This is development research that produces a product in the form of a learning model, consisting of two sub-products: the syntax of the learning model and student worksheets. All of these products were standardized through expert validation. The research data are the validity levels of all sub-products, obtained using questionnaires filled in by validators from various fields of expertise (field of study, learning strategy, Bahasa). Data were analysed using descriptive statistics. The results show that the problem-solving and meaningful-learning model has been produced. The sub-products declared appropriate by the experts include the syntax of the learning model and the student worksheet.

  16. Inflation in the standard cosmological model

    NASA Astrophysics Data System (ADS)

    Uzan, Jean-Philippe

    2015-12-01

    The inflationary paradigm is now part of the standard cosmological model as a description of its primordial phase. While its original motivation was to solve the standard problems of the hot big bang model, it was soon understood that it offers a natural theory for the origin of the large-scale structure of the universe. Most models rely on a slow-rolling scalar field and enjoy very generic predictions. Besides, all the matter of the universe is produced by the decay of the inflaton field at the end of inflation during a phase of reheating. These predictions can be (and are) tested from their imprint on the large-scale structure and in particular the cosmic microwave background. Inflation stands as a window in physics where both general relativity and quantum field theory are at work and which can be observationally studied. It connects cosmology with high-energy physics. Today most models are constructed within extensions of the standard model, such as supersymmetry or string theory. Inflation also disrupts our vision of the universe, in particular with the ideas of chaotic inflation and eternal inflation that tend to promote the image of a very inhomogeneous universe with fractal structure on a large scale. This idea is also at the heart of further speculations, such as the multiverse. This introduction summarizes the connections between inflation and the hot big bang model and details the basics of its dynamics and predictions.

  17. Growth rate in the dynamical dark energy models.

    PubMed

    Avsajanishvili, Olga; Arkhipova, Natalia A; Samushia, Lado; Kahniashvili, Tina

    Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter α that describes the steepness of the scalar field potential.
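    For reference (standard in the literature, up to normalization conventions), the Ratra-Peebles self-interaction potential is the inverse power law

```latex
V(\phi) \;=\; \frac{V_{0}}{\phi^{\alpha}}, \qquad \alpha \geq 0,
```

    where α is the steepness parameter constrained in this work; α → 0 recovers the time-independent cosmological constant.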

  18. Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads

    NASA Astrophysics Data System (ADS)

    Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank

    2017-09-01

    In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling, the general strategy of characterizing the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads, which are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. The main result is that full-wave simulation results can be reproduced in this way. Furthermore, it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insights regarding the coupling between the HPEM field excitation and the nonlinearly loaded loop antenna.
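    A minimal sketch of the nonlinear step such a SPICE analysis performs at each time point, under assumed (purely illustrative) parameter values: a Thevenin equivalent of the linear antenna part, loaded by a Schottky diode described by the Shockley equation, solved by Newton iteration.

```python
import math

# Illustrative only (assumed parameter values, not the paper's circuit): at
# each time step the nonlinear part of the equivalent circuit reduces to a
# Thevenin source (Voc, Rth) loaded by a Schottky diode obeying the Shockley
# law i(v) = Is*(exp(v/(n*Vt)) - 1). SPICE-type simulators solve the resulting
# nonlinear node equation by Newton iteration, as sketched here.

Is, n_ideal, Vt = 1e-8, 1.05, 0.02585   # assumed Schottky-like diode parameters
Voc, Rth = 1.0, 50.0                    # assumed Thevenin equivalent values

def residual(v):
    # Kirchhoff current law at the diode node: source current minus diode current
    return (Voc - v) / Rth - Is * (math.exp(v / (n_ideal * Vt)) - 1.0)

def dresidual(v):
    return -1.0 / Rth - Is / (n_ideal * Vt) * math.exp(v / (n_ideal * Vt))

v = 0.3                                  # initial guess for the node voltage
for _ in range(100):
    step = residual(v) / dresidual(v)
    v -= step
    if abs(step) < 1e-12:                # converged
        break
```

    A transient analysis repeats this solve at every time step of the HPEM excitation waveform.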

  19. Neutrino in standard model and beyond

    NASA Astrophysics Data System (ADS)

    Bilenky, S. M.

    2015-07-01

    After the discovery of the Higgs boson at CERN, the Standard Model acquired the status of the theory of the elementary particles in the electroweak range (up to about 300 GeV). What general conclusions can be inferred from the Standard Model? It appears that the Standard Model teaches us that, in the framework of such general principles as local gauge symmetry, unification of weak and electromagnetic interactions and Brout-Englert-Higgs spontaneous breaking of the electroweak symmetry, nature chooses the simplest possibilities. Two-component left-handed massless neutrino fields play a crucial role in the determination of the charged current structure of the Standard Model. The absence of right-handed neutrino fields in the Standard Model is the simplest, most economical possibility. In such a scenario the Majorana mass term is the only possibility for neutrinos to be massive and mixed. Such a mass term is generated by the lepton-number violating Weinberg effective Lagrangian. In this approach the three Majorana neutrino masses are suppressed with respect to the masses of the other fundamental fermions by the ratio of the electroweak scale and the scale of lepton-number violating physics. The discovery of neutrinoless double β-decay and the absence of transitions of flavor neutrinos into sterile states would be evidence in favor of the minimal scenario we advocate here.
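    For concreteness, the dimension-five Weinberg operator referred to above can be written schematically (conventions and O(1) coefficients vary) as

```latex
\mathcal{L}_{5} \;\sim\; \frac{(L H)(L H)}{\Lambda} + \mathrm{h.c.}
\qquad\Longrightarrow\qquad
m_{\nu} \;\sim\; \frac{v^{2}}{\Lambda},
```

    where v is the electroweak vacuum expectation value and Λ the scale of lepton-number violating physics, exhibiting the suppression of the Majorana neutrino masses described in the abstract.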

  20. Organizing Community-Based Data Standards: Lessons from Developing a Successful Open Standard in Systems Biology

    NASA Astrophysics Data System (ADS)

    Hucka, M.

    2015-09-01

    In common with many fields, including astronomy, a vast number of software tools for computational modeling and simulation are available today in systems biology. This wealth of resources is a boon to researchers, but it also presents interoperability problems. Despite working with different software tools, researchers want to disseminate their work widely as well as reuse and extend the models of other researchers. This situation led in the year 2000 to an effort to create a tool-independent, machine-readable file format for representing models: SBML, the Systems Biology Markup Language. SBML has since become the de facto standard for its purpose. Its success and general approach has inspired and influenced other community-oriented standardization efforts in systems biology. Open standards are essential for the progress of science in all fields, but it is often difficult for academic researchers to organize successful community-based standards. I draw on personal experiences from the development of SBML and summarize some of the lessons learned, in the hope that this may be useful to other groups seeking to develop open standards in a community-oriented fashion.

  1. Very special relativity as relativity of dark matter: the Elko connection

    NASA Astrophysics Data System (ADS)

    Ahluwalia, D. V.; Horvath, S. P.

    2010-11-01

    In the very special relativity (VSR) proposal by Cohen and Glashow, it was pointed out that invariance under HOM(2) is both necessary and sufficient to explain the null result of the Michelson-Morley experiment. It is the quantum field theoretic demand of locality, or the requirement of P, T, CP, or CT invariance, that makes invariance under the Lorentz group a necessity. Originally it was conjectured that VSR operates at the Planck scale; we propose that the natural arena for VSR is at energies similar to the standard model, but in the dark sector. To this end we provide an ab initio spinor representation invariant under the SIM(2) avatar of VSR and construct a mass dimension one fermionic quantum field of spin one half. This field turns out to be a very close sibling of Elko and it exhibits the same striking property of intrinsic darkness with respect to the standard model fields. In the new construct, the tension between Elko and Lorentz symmetries is fully resolved. We thus entertain the possibility that the symmetries underlying the standard model matter and gauge fields are those of Lorentz, while the event space underlying the dark matter and the dark gauge fields supports the algebraic structure underlying VSR.

  2. 2D- and 3D-quantitative structure-activity relationship studies for a series of phenazine N,N'-dioxide as antitumour agents.

    PubMed

    Cunha, Jonathan Da; Lavaggi, María Laura; Abasolo, María Inés; Cerecetto, Hugo; González, Mercedes

    2011-12-01

    Hypoxic regions of tumours are associated with increased resistance to radiation and chemotherapy. Nevertheless, hypoxia has been used as a tool for specific activation of some antitumour prodrugs, named bioreductive agents. Phenazine dioxides are an example of such bioreductive prodrugs. Our 2D-quantitative structure-activity relationship studies established that the electronic and lipophilic descriptors of phenazine dioxides are related to the survival fraction in oxia or in hypoxia. Additionally, statistically significant models, derived by partial least squares, were obtained between survival fraction in oxia and comparative molecular field analysis standard model (r² = 0.755, q² = 0.505 and F = 26.70) or comparative molecular similarity indices analysis-combined steric and electrostatic fields (r² = 0.757, q² = 0.527 and F = 14.93), and between survival fraction in hypoxia and comparative molecular field analysis standard model (r² = 0.736, q² = 0.521 and F = 18.63) or comparative molecular similarity indices analysis-hydrogen bond acceptor field (r² = 0.858, q² = 0.737 and F = 27.19). Categorical classification was used for the biological parameter of selective cytotoxicity, also yielding good models, derived by soft independent modelling of class analogy, with both comparative molecular field analysis standard model (96% overall classification accuracy) and comparative molecular similarity indices analysis-steric field (92% overall classification accuracy). The 2D- and 3D-quantitative structure-activity relationship models provided important insights into the chemical and structural basis involved in the molecular recognition process of these phenazines as bioreductive agents and should be useful for the design of new structurally related analogues with improved potency. © 2011 John Wiley & Sons A/S.
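    For readers unfamiliar with the statistics quoted above, the following hedged numerical sketch (toy descriptors, not the paper's CoMFA/CoMSIA fields) shows how r², the fitted coefficient of determination, and q², its leave-one-out cross-validated counterpart, are computed.

```python
import numpy as np

# Hedged numerical sketch (toy descriptors, not the paper's CoMFA/CoMSIA
# fields): r² measures the fit quality of a regression, while q² is its
# leave-one-out cross-validated counterpart, the pair of statistics quoted in
# the abstract above.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))                 # 20 "compounds", 3 descriptors
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.normal(size=20)     # noisy "activity" response

def fit_predict(X_train, y_train, X_test):
    coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return X_test @ coef

ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1.0 - ((y - fit_predict(X, y, X)) ** 2).sum() / ss_tot

# q²: each observation predicted by a model fitted without it (leave-one-out).
loo = np.array([
    fit_predict(np.delete(X, i, axis=0), np.delete(y, i), X[i])
    for i in range(len(y))
])
q2 = 1.0 - ((y - loo) ** 2).sum() / ss_tot
```

    Because q² penalizes overfitting, it is typically below r², which is why QSAR studies such as this one report both.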

  3. Thermodynamic Model Formulations for Inhomogeneous Solids with Application to Non-isothermal Phase Field Modelling

    NASA Astrophysics Data System (ADS)

    Gladkov, Svyatoslav; Kochmann, Julian; Reese, Stefanie; Hütter, Markus; Svendsen, Bob

    2016-04-01

    The purpose of the current work is the comparison of thermodynamic model formulations for chemically and structurally inhomogeneous solids at finite deformation based on "standard" non-equilibrium thermodynamics [SNET: e. g. S. de Groot and P. Mazur, Non-equilibrium Thermodynamics, North Holland, 1962] and the general equation for non-equilibrium reversible-irreversible coupling (GENERIC) [H. C. Öttinger, Beyond Equilibrium Thermodynamics, Wiley Interscience, 2005]. In the process, non-isothermal generalizations of standard isothermal conservative [e. g. J. W. Cahn and J. E. Hilliard, Free energy of a non-uniform system. I. Interfacial energy. J. Chem. Phys. 28 (1958), 258-267] and non-conservative [e. g. S. M. Allen and J. W. Cahn, A macroscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metall. 27 (1979), 1085-1095; A. G. Khachaturyan, Theory of Structural Transformations in Solids, Wiley, New York, 1983] diffuse interface or "phase-field" models [e. g. P. C. Hohenberg and B. I. Halperin, Theory of dynamic critical phenomena, Rev. Modern Phys. 49 (1977), 435-479; N. Provatas and K. Elder, Phase Field Methods in Material Science and Engineering, Wiley-VCH, 2010.] for solids are obtained. The current treatment is consistent with, and includes, previous works [e. g. O. Penrose and P. C. Fife, Thermodynamically consistent models of phase-field type for the kinetics of phase transitions, Phys. D 43 (1990), 44-62; O. Penrose and P. C. Fife, On the relation between the standard phase-field model and a "thermodynamically consistent" phase-field model. Phys. D 69 (1993), 107-113] on non-isothermal systems as a special case. In the context of no-flux boundary conditions, the SNET- and GENERIC-based approaches are shown to be completely consistent with each other and result in equivalent temperature evolution relations.

  4. Quarks, Symmetries and Strings - a Symposium in Honor of Bunji Sakita's 60th Birthday

    NASA Astrophysics Data System (ADS)

    Kaku, M.; Jevicki, A.; Kikkawa, K.

    1991-04-01

    The Table of Contents for the full book PDF is as follows: * Preface * Evening Banquet Speech * I. Quarks and Phenomenology * From the SU(6) Model to Uniqueness in the Standard Model * A Model for Higgs Mechanism in the Standard Model * Quark Mass Generation in QCD * Neutrino Masses in the Standard Model * Solar Neutrino Puzzle, Horizontal Symmetry of Electroweak Interactions and Fermion Mass Hierarchies * State of Chiral Symmetry Breaking at High Temperatures * Approximate |ΔI| = 1/2 Rule from a Perspective of Light-Cone Frame Physics * Positronium (and Some Other Systems) in a Strong Magnetic Field * Bosonic Technicolor and the Flavor Problem * II. Strings * Supersymmetry in String Theory * Collective Field Theory and Schwinger-Dyson Equations in Matrix Models * Non-Perturbative String Theory * The Structure of Non-Perturbative Quantum Gravity in One and Two Dimensions * Noncritical Virasoro Algebra of d < 1 Matrix Model and Quantized String Field * Chaos in Matrix Models ? * On the Non-Commutative Symmetry of Quantum Gravity in Two Dimensions * Matrix Model Formulation of String Field Theory in One Dimension * Geometry of the N = 2 String Theory * Modular Invariance from Gauge Invariance in the Non-Polynomial String Field Theory * Stringy Symmetry and Off-Shell Ward Identities * q-Virasoro Algebra and q-Strings * Self-Tuning Fields and Resonant Correlations in 2d-Gravity * III. Field Theory Methods * Linear Momentum and Angular Momentum in Quaternionic Quantum Mechanics * Some Comments on Real Clifford Algebras * On the Quantum Group p-adics Connection * Gravitational Instantons Revisited * A Generalized BBGKY Hierarchy from the Classical Path-Integral * A Quantum Generated Symmetry: Group-Level Duality in Conformal and Topological Field Theory * Gauge Symmetries in Extended Objects * Hidden BRST Symmetry and Collective Coordinates * Towards Stochastically Quantizing Topological Actions * IV. 
Statistical Methods * A Brief Summary of the s-Channel Theory of Superconductivity * Neural Networks and Models for the Brain * Relativistic One-Body Equations for Planar Particles with Arbitrary Spin * Chiral Property of Quarks and Hadron Spectrum in Lattice QCD * Scalar Lattice QCD * Semi-Superconductivity of a Charged Anyon Gas * Two-Fermion Theory of Strongly Correlated Electrons and Charge-Spin Separation * Statistical Mechanics and Error-Correcting Codes * Quantum Statistics

  5. Modeling the Zeeman effect in high altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    NASA Astrophysics Data System (ADS)

    Larsson, R.; Milz, M.; Rayer, P.; Saunders, R.; Bell, W.; Booton, A.; Buehler, S. A.; Eriksson, P.; John, V.

    2015-10-01

    We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. For the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. For the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain the upper atmospheric temperatures.

  6. Modeling the Zeeman effect in high-altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    NASA Astrophysics Data System (ADS)

    Larsson, Richard; Milz, Mathias; Rayer, Peter; Saunders, Roger; Bell, William; Booton, Anna; Buehler, Stefan A.; Eriksson, Patrick; John, Viju O.

    2016-03-01

    We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. Concerning the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. Regarding the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain the upper atmospheric temperatures.
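    The comparison statistics quoted in both versions of this abstract reduce to a bias (average difference) and a standard deviation of the differences over a set of profiles; a minimal sketch with made-up numbers chosen to mimic the channel-22 figures:

```python
import numpy as np

# Minimal sketch with made-up numbers (chosen to mimic the channel-22 figures
# above): the model comparison reduces to a bias (average difference) and a
# standard deviation of the differences over a set of profiles.
rng = np.random.default_rng(2)
n_profiles = 1000

reference = 250.0 + rng.normal(0.0, 5.0, n_profiles)        # "line-by-line" Tb, K
fast = reference + 0.5 + rng.normal(0.0, 0.24, n_profiles)  # fast model: 0.5 K bias

diff = fast - reference
bias = diff.mean()           # cf. the quoted 0.5 K average difference
spread = diff.std(ddof=1)    # cf. the quoted 0.24 K standard deviation
```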

  7. Prewhitening of Colored Noise Fields for Detection of Threshold Sources

    DTIC Science & Technology

    1993-11-07

    determines the noise covariance matrix, prewhitening techniques allow detection of threshold sources. Among the methods considered is the multiple signal classification (MUSIC) algorithm, including its application in a colored background noise. [Subject terms: AR model, colored noise field, mixed spectra model, MUSIC, noise field, prewhitening, SNR, standardized test.]

  8. Field Markup Language: biological field representation in XML.

    PubMed

    Chang, David; Lovell, Nigel H; Dokos, Socrates

    2007-01-01

    With an ever increasing number of biological models available on the internet, a standardized modeling framework is required to allow information to be accessed or visualized. Based on the Physiome Modeling Framework, the Field Markup Language (FML) is being developed to describe and exchange field information for biological models. In this paper, we describe the basic features of FML, its supporting application framework and its ability to incorporate CellML models to construct tissue-scale biological models. As a typical application example, we present a spatially-heterogeneous cardiac pacemaker model which utilizes both FML and CellML to describe and solve the underlying equations of electrical activation and propagation.

  9. Baryogenesis in false vacuum

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Yamada, Masatoshi

    2017-09-01

The null result in the LHC may indicate that the standard model is not drastically modified up to very high scales, such as the GUT/string scale. With this in mind, we suggest a novel leptogenesis scenario realized in the false vacuum of the Higgs field. If the Higgs field develops a large vacuum expectation value in the early universe, a lepton-number-violating process is enhanced, which we use for baryogenesis. To demonstrate the scenario, several models are discussed. For example, we show that the observed baryon asymmetry is successfully generated in the standard model with higher-dimensional operators.

  10. Spacetime Curvature and Higgs Stability after Inflation.

    PubMed

    Herranen, M; Markkanen, T; Nurmi, S; Rajantie, A

    2015-12-11

    We investigate the dynamics of the Higgs field at the end of inflation in the minimal scenario consisting of an inflaton field coupled to the standard model only through the nonminimal gravitational coupling ξ of the Higgs field. Such a coupling is required by renormalization of the standard model in curved space, and in the current scenario also by vacuum stability during high-scale inflation. We find that for ξ≳1, rapidly changing spacetime curvature at the end of inflation leads to significant production of Higgs particles, potentially triggering a transition to a negative-energy Planck scale vacuum state and causing an immediate collapse of the Universe.

  11. The behavior of the Higgs field in the new inflationary universe

    NASA Technical Reports Server (NTRS)

    Guth, Alan H.; Pi, So-Young

    1986-01-01

Answers are provided to questions about the standard model of the new inflationary universe (NIU) which have raised concerns about the model's validity. A toy problem consisting of a single particle moving in one dimension under the influence of an upside-down harmonic-oscillator potential is studied, showing that the quantum mechanical wave function at large times is accurately described by classical physics. An exactly soluble toy model for the behavior of the Higgs field in the NIU is then described, which should provide a reasonable approximation to its actual behavior. The dynamics of the toy model are described, and calculational results are reviewed which, the authors claim, provide strong evidence that the basic features of the standard picture are correct.

  12. Quantum Entanglement of Matter and Geometry in Large Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Craig J.

    2014-12-04

Standard quantum mechanics and gravity are used to estimate the mass and size of idealized gravitating systems where position states of matter and geometry become indeterminate. It is proposed that well-known inconsistencies of standard quantum field theory with general relativity on macroscopic scales can be reconciled by nonstandard, nonlocal entanglement of field states with quantum states of geometry. Wave functions of particle world lines are used to estimate scales of geometrical entanglement and emergent locality. Simple models of entanglement predict coherent fluctuations in position of massive bodies, of Planck scale origin, measurable on a laboratory scale, and may account for the fact that the information density of long lived position states in Standard Model fields, which is determined by the strong interactions, is the same as that determined holographically by the cosmological constant.

  13. Constraining the top-Higgs sector of the standard model effective field theory

    NASA Astrophysics Data System (ADS)

    Cirigliano, V.; Dekens, W.; de Vries, J.; Mereghetti, E.

    2016-08-01

    Working in the framework of the Standard Model effective field theory, we study chirality-flipping couplings of the top quark to Higgs and gauge bosons. We discuss in detail the renormalization-group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy C P -conserving and C P -violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both C P -even and C P -odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model effective field theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing minimal flavor violation does not significantly affect the bounds on the top couplings.

  14. Exploring the Standard Model of Particles

    ERIC Educational Resources Information Center

    Johansson, K. E.; Watkins, P. M.

    2013-01-01

With the recent discovery of a new particle at the CERN Large Hadron Collider (LHC), the Higgs boson could be about to be discovered. This paper provides a brief summary of the standard model of particle physics and the importance of the Higgs boson and field in that model for non-specialists. The role of Feynman diagrams in making predictions for…

  15. Teacher Leader Model Standards: Implications for Preparation, Policy, and Practice

    ERIC Educational Resources Information Center

    Berg, Jill Harrison; Carver, Cynthia L.; Mangin, Melinda M.

    2014-01-01

    Teacher leadership is increasingly recognized as a resource for instructional improvement. Consequently, teacher leader initiatives have expanded rapidly despite limited knowledge about how to prepare and support teacher leaders. In this context, the "Teacher Leader Model Standards" represent an important development in the field. In…

  16. Temperature dependence of standard model CP violation.

    PubMed

    Brauner, Tomáš; Taanila, Olli; Tranberg, Anders; Vuorinen, Aleksi

    2012-01-27

    We analyze the temperature dependence of CP violation effects in the standard model by determining the effective action of its bosonic fields, obtained after integrating out the fermions from the theory and performing a covariant gradient expansion. We find nonvanishing CP violating terms starting at the sixth order of the expansion, albeit only in the C-odd-P-even sector, with coefficients that depend on quark masses, Cabibbo-Kobayashi-Maskawa matrix elements, temperature and the magnitude of the Higgs field. The CP violating effects are observed to decrease rapidly with temperature, which has important implications for the generation of a matter-antimatter asymmetry in the early Universe. Our results suggest that the cold electroweak baryogenesis scenario may be viable within the standard model, provided the electroweak transition temperature is at most of order 1 GeV.

  17. Unparticle dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, D.-C.; Stojkovic, Dejan; Dutta, Sourish

    2009-09-15

We examine a dark energy model where a scalar unparticle degree of freedom plays the role of quintessence. In particular, we study a model where the unparticle degree of freedom has a standard kinetic term and a simple mass potential, the evolution is slowly rolling and the field value is of the order of the unparticle energy scale (λ_u). We study how the evolution of w depends on the parameters B (a function of the unparticle scaling dimension d_u), the initial value of the field φ_i (or equivalently, λ_u) and the present matter density Ω_m0. We use observational data from type Ia supernovae, baryon acoustic oscillations and the cosmic microwave background to constrain the model parameters and find that these models are not ruled out by the observational data. From a theoretical point of view, the unparticle dark energy model is very attractive, since unparticles (being bound states of fundamental fermions) are protected from radiative corrections. Further, the coupling of unparticles to the standard model fields can be arbitrarily suppressed by raising the fundamental energy scale M_F, making the unparticle dark energy model free of most of the problems that plague conventional scalar field quintessence models.

  18. Equivalent source modeling of the core magnetic field using magsat data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.

    1983-01-01

Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal-area distributions at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds to approximately a seventeenth degree and order expansion (323 parameters). Fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
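The key point that fixed dipole positions make the fit linear can be illustrated with a toy potential-field example (positions, geometry, and units here are hypothetical, not the Magsat setup): with positions held fixed, the observed potential is linear in the dipole moments, so the fit reduces to a single least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(2)
# Five dipoles at fixed interior positions; observations on a unit sphere.
src = np.array([[0.3, 0.0, 0.0], [-0.3, 0.0, 0.0], [0.0, 0.3, 0.0],
                [0.0, -0.3, 0.0], [0.0, 0.0, 0.3]])
obs = rng.normal(size=(50, 3))
obs /= np.linalg.norm(obs, axis=1, keepdims=True)

def design(obs, src):
    """With positions fixed, potential V = m . d / |d|^3 is linear in moments m."""
    G = np.zeros((len(obs), 3 * len(src)))
    for j, s in enumerate(src):
        d = obs - s                                        # separation vectors
        G[:, 3 * j:3 * j + 3] = d / np.linalg.norm(d, axis=1)[:, None] ** 3
    return G

m_true = rng.normal(size=3 * len(src))
G = design(obs, src)
V = G @ m_true + 1e-6 * rng.normal(size=len(obs))          # synthetic noisy data
m_fit, *_ = np.linalg.lstsq(G, V, rcond=None)              # one linear solve, no iteration
```

Letting the positions vary would make the forward map nonlinear in those coordinates, which is exactly the source of the convergence problems the abstract avoids.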

  19. Reusable Models of Pedagogical Concepts--A Framework for Pedagogical and Content Design.

    ERIC Educational Resources Information Center

    Pawlowski, Jan M.

    Standardization initiatives in the field of learning technologies have produced standards for the interoperability of learning environments and learning management systems. Learning resources based on these standards can be reused, recombined, and adapted to the user. However, these standards follow a content-oriented approach; the process of…

  20. Extension of the general thermal field equation for nanosized emitters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyritsakis, A., E-mail: akyritsos1@gmail.com; Xanthakis, J. P.

    2016-01-28

During the previous decade, Jensen et al. developed a general analytical model that successfully describes electron emission from metals in both the field and thermionic regimes, as well as in the transition region. In that development, the standard image-corrected triangular potential barrier was used. This barrier model is valid only for planar surfaces and therefore cannot be used in general for modern nanometric emitters. In a recent publication, the authors showed that the standard Fowler-Nordheim theory can be generalized for highly curved emitters if a quadratic term is included in the potential model. In this paper, we extend this generalization to high temperatures, covering both the thermal and intermediate regimes. This is achieved by applying the general method developed by Jensen to the quadratic barrier model of our previous publication. We obtain results that are in good agreement with fully numerical calculations for radii R > 4 nm, while our calculated current density differs by a factor of up to 27 from the one predicted by Jensen's standard General-Thermal-Field (GTF) equation. Our extended GTF equation has applications to modern sharp electron sources, beam simulation models, and vacuum breakdown theory.
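For orientation, the elementary (planar, zero-temperature, no image correction) Fowler-Nordheim current density that these generalized models extend can be sketched as follows; the constants are the standard first and second FN constants, and the steep exponential field dependence is the main qualitative point.

```python
import math

# Standard first and second Fowler-Nordheim constants.
A_FN = 1.541434e-6   # A eV V^-2
B_FN = 6.830890      # eV^-3/2 V nm^-1

def j_fn(F_vnm, phi_ev):
    """Elementary FN current density (A/nm^2).

    F_vnm: local surface field in V/nm; phi_ev: work function in eV.
    J = (A F^2 / phi) * exp(-B phi^(3/2) / F)
    """
    return (A_FN * F_vnm ** 2 / phi_ev) * math.exp(-B_FN * phi_ev ** 1.5 / F_vnm)
```

Emission rises extremely steeply with the applied field at fixed work function; the curvature and temperature corrections discussed in the abstract modify the barrier and hence the exponent.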

  1. Meeting report from the first meetings of the Computational Modeling in Biology Network (COMBINE)

    PubMed Central

    Le Novère, Nicolas; Hucka, Michael; Anwar, Nadia; Bader, Gary D; Demir, Emek; Moodie, Stuart; Sorokin, Anatoly

    2011-01-01

The Computational Modeling in Biology Network (COMBINE) is an initiative to coordinate the development of the various community standards and formats in computational systems biology and related fields. This report summarizes the activities pursued at the first annual COMBINE meeting held in Edinburgh on October 6-9 2010 and the first HARMONY hackathon, held in New York on April 18-22 2011. The first of those meetings hosted 81 attendees. Discussions covered both the official COMBINE standards (BioPAX, SBGN and SBML) and emerging efforts and interoperability between different formats. The second meeting, oriented towards software developers, welcomed 59 participants and witnessed many technical discussions, development of improved standards support in community software systems and conversion between the standards. Both meetings were resounding successes and showed that the field is now mature enough to develop representation formats and related standards in a coordinated manner. PMID:22180826

  2. Meeting report from the first meetings of the Computational Modeling in Biology Network (COMBINE).

    PubMed

    Le Novère, Nicolas; Hucka, Michael; Anwar, Nadia; Bader, Gary D; Demir, Emek; Moodie, Stuart; Sorokin, Anatoly

    2011-11-30

The Computational Modeling in Biology Network (COMBINE) is an initiative to coordinate the development of the various community standards and formats in computational systems biology and related fields. This report summarizes the activities pursued at the first annual COMBINE meeting held in Edinburgh on October 6-9 2010 and the first HARMONY hackathon, held in New York on April 18-22 2011. The first of those meetings hosted 81 attendees. Discussions covered both the official COMBINE standards (BioPAX, SBGN and SBML) and emerging efforts and interoperability between different formats. The second meeting, oriented towards software developers, welcomed 59 participants and witnessed many technical discussions, development of improved standards support in community software systems and conversion between the standards. Both meetings were resounding successes and showed that the field is now mature enough to develop representation formats and related standards in a coordinated manner.

  3. Effects of sea-level rise on salt water intrusion near a coastal well field in southeastern Florida

    USGS Publications Warehouse

    Langevin, Christian D.; Zygnerski, Michael

    2013-01-01

    A variable-density groundwater flow and dispersive solute transport model was developed for the shallow coastal aquifer system near a municipal supply well field in southeastern Florida. The model was calibrated for a 105-year period (1900 to 2005). An analysis with the model suggests that well-field withdrawals were the dominant cause of salt water intrusion near the well field, and that historical sea-level rise, which is similar to lower-bound projections of future sea-level rise, exacerbated the extent of salt water intrusion. Average 2005 hydrologic conditions were used for 100-year sensitivity simulations aimed at quantifying the effect of projected rises in sea level on fresh coastal groundwater resources near the well field. Use of average 2005 hydrologic conditions and a constant sea level result in total dissolved solids (TDS) concentration of the well field exceeding drinking water standards after 70 years. When sea-level rise is included in the simulations, drinking water standards are exceeded 10 to 21 years earlier, depending on the specified rate of sea-level rise.

  4. Dislocation dynamics and crystal plasticity in the phase-field crystal model

    NASA Astrophysics Data System (ADS)

    Skaugen, Audun; Angheluta, Luiza; Viñals, Jorge

    2018-02-01

    A phase-field model of a crystalline material is introduced to develop the necessary theoretical framework to study plastic flow due to dislocation motion. We first obtain the elastic stress from the phase-field crystal free energy under weak distortion and show that it obeys the stress-strain relation of linear elasticity. We focus next on dislocations in a two-dimensional hexagonal lattice. They are composite topological defects in the weakly nonlinear amplitude equation expansion of the phase field, with topological charges given by the standard Burgers vector. This allows us to introduce a formal relation between the dislocation velocity and the evolution of the slowly varying amplitudes of the phase field. Standard dissipative dynamics of the phase-field crystal model is shown to determine the velocity of the dislocations. When the amplitude expansion is valid and under additional simplifications, we find that the dislocation velocity is determined by the Peach-Koehler force. As an application, we compute the defect velocity for a dislocation dipole in two setups, pure glide and pure climb, and compare it with the analytical predictions.
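The Peach-Koehler force mentioned above has a compact closed form, f = (σ·b) × ξ per unit length of dislocation line; a small sketch with hypothetical unit values:

```python
import numpy as np

def peach_koehler(sigma, b, xi):
    """Force per unit length on a dislocation: f = (sigma @ b) x xi,
    with stress tensor sigma, Burgers vector b, line direction xi."""
    return np.cross(sigma @ b, xi)

# Edge dislocation along z with Burgers vector along x, under unit shear
# stress sigma_xy (hypothetical values): the force drives glide along x.
sigma = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
b = np.array([1.0, 0.0, 0.0])
xi = np.array([0.0, 0.0, 1.0])
f = peach_koehler(sigma, b, xi)   # glide force per unit length
```

In the phase-field crystal setting, the abstract's point is that the dissipative dynamics recover this force law as the driver of dislocation velocity in the amplitude-equation limit.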

  5. 77 FR 61604 - Exposure Modeling Public Meeting; Notice of Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-10

    ..., birds, reptiles, and amphibians: Model Parameterization and Knowledge base Development. 4. Standard Operating Procedure for calculating degradation kinetics. 5. Aquatic exposure modeling using field studies...

  6. Support Vector Machines for Differential Prediction

    PubMed Central

    Kuusisto, Finn; Santos Costa, Vitor; Nassif, Houssam; Burnside, Elizabeth; Page, David; Shavlik, Jude

    2015-01-01

    Machine learning is continually being applied to a growing set of fields, including the social sciences, business, and medicine. Some fields present problems that are not easily addressed using standard machine learning approaches and, in particular, there is growing interest in differential prediction. In this type of task we are interested in producing a classifier that specifically characterizes a subgroup of interest by maximizing the difference in predictive performance for some outcome between subgroups in a population. We discuss adapting maximum margin classifiers for differential prediction. We first introduce multiple approaches that do not affect the key properties of maximum margin classifiers, but which also do not directly attempt to optimize a standard measure of differential prediction. We next propose a model that directly optimizes a standard measure in this field, the uplift measure. We evaluate our models on real data from two medical applications and show excellent results. PMID:26158123
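One simple empirical notion of uplift (among several definitions in the differential-prediction literature; the threshold, data, and function name here are hypothetical) compares response rates among the top-scored fraction of each subgroup under a single model:

```python
import numpy as np

def uplift_at(scores_a, y_a, scores_b, y_b, frac=0.3):
    """Difference in response rate between subgroups A and B,
    restricted to the top `frac` of each subgroup by model score."""
    def top_rate(scores, y):
        k = max(1, int(len(scores) * frac))
        top = np.argsort(scores)[::-1][:k]   # indices of highest scores
        return y[top].mean()
    return top_rate(scores_a, y_a) - top_rate(scores_b, y_b)

rng = np.random.default_rng(3)
# Hypothetical data: the model ranks responders well in group A only.
y_a = rng.integers(0, 2, 200)
y_b = rng.integers(0, 2, 200)
s_a = y_a + 0.1 * rng.normal(size=200)   # informative scores in A
s_b = 0.1 * rng.normal(size=200)         # uninformative scores in B
u = uplift_at(s_a, y_a, s_b, y_b)
```

A differential-prediction classifier, as described in the abstract, is trained to make a quantity like `u` large rather than to maximize overall accuracy.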

  7. Support Vector Machines for Differential Prediction.

    PubMed

    Kuusisto, Finn; Santos Costa, Vitor; Nassif, Houssam; Burnside, Elizabeth; Page, David; Shavlik, Jude

Machine learning is continually being applied to a growing set of fields, including the social sciences, business, and medicine. Some fields present problems that are not easily addressed using standard machine learning approaches and, in particular, there is growing interest in differential prediction. In this type of task we are interested in producing a classifier that specifically characterizes a subgroup of interest by maximizing the difference in predictive performance for some outcome between subgroups in a population. We discuss adapting maximum margin classifiers for differential prediction. We first introduce multiple approaches that do not affect the key properties of maximum margin classifiers, but which also do not directly attempt to optimize a standard measure of differential prediction. We next propose a model that directly optimizes a standard measure in this field, the uplift measure. We evaluate our models on real data from two medical applications and show excellent results.

  8. Project X

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmes, Steve; Alber, Russ; Asner, David

    2013-06-23

Particle physics has made enormous progress in understanding the nature of matter and forces at a fundamental level and has unlocked many mysteries of our world. The development of the Standard Model of particle physics has been a magnificent achievement of the field. Many deep and important questions have been answered and yet many mysteries remain. The discovery of neutrino oscillations, discrepancies in some precision measurements of Standard-Model processes, observation of matter-antimatter asymmetry, the evidence for the existence of dark matter and dark energy, all point to new physics beyond the Standard Model. The pivotal developments of our field, including the latest discovery of the Higgs Boson, have progressed within three interlocking frontiers of research – the Energy, Intensity and Cosmic frontiers – where discoveries and insights in one frontier powerfully advance the other frontiers as well.

  9. Initial geomagnetic field model from MAGSAT

    NASA Technical Reports Server (NTRS)

    Langel, R. A.; Estes, R. H.; Mead, G. D.; Fabiano, E. B.; Lancaster, E. R.

    1980-01-01

    Magsat data from magnetically quiet days were used to derive a thirteenth degree and order spherical harmonic geomagnetic field model, MGST(3/80). The model utilized both scalar and vector data and fit that data with standard deviations of 8, 52, 55 and 97 nT for the scalar magnitude, B sub r, B sub theta and B sub phi respectively. When compared with earlier models, the Earth's dipole moment continues to decrease at a rate of about 26 nT/year. Evaluation of earlier models with Magsat data shows that the scalar field at the Magsat epoch is best predicted by the POGO(2/72) model but that the AWC/75 and IGS/75 are better for predicting vector fields.

  10. Skew-flavored dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Prateek; Chacko, Zackaria; Fortes, Elaine C. F. S.

We explore a novel flavor structure in the interactions of dark matter with the Standard Model. We consider theories in which both the dark matter candidate, and the particles that mediate its interactions with the Standard Model fields, carry flavor quantum numbers. The interactions are skewed in flavor space, so that a dark matter particle does not directly couple to the Standard Model matter fields of the same flavor, but only to the other two flavors. This framework respects minimal flavor violation and is, therefore, naturally consistent with flavor constraints. We study the phenomenology of a benchmark model in which dark matter couples to right-handed charged leptons. In large regions of parameter space, the dark matter can emerge as a thermal relic, while remaining consistent with the constraints from direct and indirect detection. The collider signatures of this scenario include events with multiple leptons and missing energy. These events exhibit a characteristic flavor pattern that may allow this class of models to be distinguished from other theories of dark matter.

  11. Skew-flavored dark matter

    DOE PAGES

    Agrawal, Prateek; Chacko, Zackaria; Fortes, Elaine C. F. S.; ...

    2016-05-10

We explore a novel flavor structure in the interactions of dark matter with the Standard Model. We consider theories in which both the dark matter candidate, and the particles that mediate its interactions with the Standard Model fields, carry flavor quantum numbers. The interactions are skewed in flavor space, so that a dark matter particle does not directly couple to the Standard Model matter fields of the same flavor, but only to the other two flavors. This framework respects minimal flavor violation and is, therefore, naturally consistent with flavor constraints. We study the phenomenology of a benchmark model in which dark matter couples to right-handed charged leptons. In large regions of parameter space, the dark matter can emerge as a thermal relic, while remaining consistent with the constraints from direct and indirect detection. The collider signatures of this scenario include events with multiple leptons and missing energy. These events exhibit a characteristic flavor pattern that may allow this class of models to be distinguished from other theories of dark matter.

  12. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
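As a structural sketch (sizes, weights, and nonlinearities here are hypothetical, not the fitted models from the paper), a network receptive field replaces the single linear spectrotemporal filter with a few nonlinear sub-fields whose outputs are combined:

```python
import numpy as np

rng = np.random.default_rng(4)
n_freq, n_hist, n_sub = 16, 10, 3   # frequency bins, time-history bins, sub-fields

W = rng.normal(size=(n_sub, n_freq * n_hist)) * 0.1   # linear sub-receptive fields
v = rng.normal(size=n_sub)                            # output weights

def nrf_response(stim_patch):
    """stim_patch: (n_freq, n_hist) spectrogram window preceding the response."""
    h = np.tanh(W @ stim_patch.ravel())   # nonlinear hidden (sub-field) outputs
    return np.logaddexp(0.0, v @ h)       # softplus output: non-negative firing rate

rate = nrf_response(rng.normal(size=(n_freq, n_hist)))
```

Because the hidden units are nonlinear, the combined model can express gain control and conjunctive ("feature A and feature B together") selectivity that no single linear receptive field can capture, which is the abstract's central observation.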

  13. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647

  14. INFLUENCE OF DIFFERENT INCUBATOR MODELS ON MAGNETIC FIELD-INDUCED CHANGES IN NEURITE OUTGROWTH IN PC-12 CELLS

    EPA Science Inventory

    OBJECTIVE: Devise a method to standardize responses of cells to MF-exposure in different incubator environments. METHODS: We compared the cell responses to generated MF in a standard cell-culture incubator (Forma, model #3158) with cell responses to the same exposure when a mu-m...

  15. Basic Restriction and Reference Level in Anatomically-based Japanese Models for Low-Frequency Electric and Magnetic Field Exposures

    NASA Astrophysics Data System (ADS)

    Takano, Yukinori; Hirata, Akimasa; Fujiwara, Osamu

Human exposure to electric and/or magnetic fields at low frequencies may cause direct effects such as nerve stimulation and excitation. Therefore, a basic restriction is specified in terms of induced current density in the ICNIRP guidelines and in terms of in-situ electric field in the IEEE standard. An external electric or magnetic field that does not produce induced quantities exceeding the basic restriction is used as a reference level. The relationship between the basic restriction and the reference level for low-frequency electric and magnetic fields has been investigated using European anatomical models, but only to a limited extent for Japanese models, especially for electric field exposures. Moreover, that relationship has not been discussed in depth. In the present study, we calculated the induced quantities in anatomical Japanese male and female models exposed to electric and magnetic fields at the reference level. A quasi-static finite-difference time-domain (FDTD) method was applied to analyze this problem. As a result, the spatially averaged induced current density was found to be more sensitive to the averaging algorithm than the in-situ electric field. For electric and magnetic field exposure at the ICNIRP reference level, the maximum values of the induced current density for the different averaging algorithms were smaller than the basic restriction in most cases. For exposures at the reference level in the IEEE standard, the maximum electric fields in the brain were larger than the basic restriction for the brain, while they were smaller for the spinal cord and heart.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krnjaic, Gordan

In this letter, we quantify the challenge of explaining the baryon asymmetry using initial conditions in a universe that undergoes inflation. Contrary to lore, we find that such an explanation is possible if net $B-L$ number is stored in a light bosonic field with hyper-Planckian initial displacement and a delicately chosen field velocity prior to inflation. However, such a construction may require extremely tuned coupling constants to ensure that this asymmetry is viably communicated to the Standard Model after reheating; the large field displacement required to overcome inflationary dilution must not induce masses for Standard Model particles or generate dangerous washout processes. While these features are inelegant, this counterexample nonetheless shows that there is no theorem against such an explanation. We also comment on potential observables in the double β-decay spectrum and on model variations that may allow for more natural realizations.

  17. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many such models - but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley, and quantify the uncertainty associated with those estimates.
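
    The accept/reject core of a Markov chain Monte Carlo sampler like the one described can be sketched in a toy fixed-dimension form. The forward model, noise level, and all numbers below are illustrative assumptions; the study itself uses trans-dimensional MCMC over layered conductivity models, which this sketch does not attempt.

```python
import numpy as np

# Hedged toy sketch of Metropolis-Hastings sampling for a one-parameter
# "conductivity" inversion. Everything here is illustrative, not the
# paper's algorithm: a real trans-dimensional sampler also proposes
# changes to the number of layers.

rng = np.random.default_rng(0)

def forward(sigma):
    # Toy linear forward model standing in for an EM response calculation.
    return 2.0 * sigma + 1.0

true_sigma = 0.5
noise = 0.1
data = forward(true_sigma) + rng.normal(0.0, noise)  # one noisy datum

def log_likelihood(sigma):
    residual = data - forward(sigma)
    return -0.5 * (residual / noise) ** 2

# Random-walk Metropolis: propose a step, accept with prob min(1, L'/L).
sigma = 1.0
logL = log_likelihood(sigma)
samples = []
for _ in range(20000):
    proposal = sigma + rng.normal(0.0, 0.05)
    logL_prop = log_likelihood(proposal)
    if np.log(rng.uniform()) < logL_prop - logL:
        sigma, logL = proposal, logL_prop
    samples.append(sigma)

posterior = np.array(samples[5000:])  # discard burn-in
print(posterior.mean(), posterior.std())
```

    The spread of the retained samples is exactly the quantitative uncertainty estimate that a single best-fit inversion cannot provide.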

  18. Exact solution of mean-field plus an extended T = 1 nuclear pairing Hamiltonian in the seniority-zero symmetric subspace

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Dai, Lianrong; Draayer, Jerry P.

    2018-05-01

    An extended pairing Hamiltonian that describes multi-pair interactions among isospin T = 1 and angular momentum J = 0 neutron-neutron, proton-proton, and neutron-proton pairs in a spherical mean field, such as the spherical shell model, is proposed based on the standard T = 1 pairing formalism. The advantage of the model lies in the fact that numerical solutions within the seniority-zero symmetric subspace can be obtained more easily and with less computational time than those calculated from the mean-field plus standard T = 1 pairing model. Thus, large-scale calculations within the seniority-zero symmetric subspace of the model are feasible. As an example application, the average neutron-proton interaction in even-even N ∼ Z nuclei that can be suitably described in the f5pg9 shell is estimated in the present model, with a focus on the role of np-pairing correlations.

  19. Sub-percent Photometry: Faint DA White Dwarf Spectrophotometric Standards for Astrophysical Observatories

    NASA Astrophysics Data System (ADS)

    Narayan, Gautham; Axelrod, Tim; Calamida, Annalisa; Saha, Abhijit; Matheson, Thomas; Olszewski, Edward; Holberg, Jay; Holberg, Jay; Bohlin, Ralph; Stubbs, Christopher W.; Rest, Armin; Deustua, Susana; Sabbi, Elena; MacKenty, John W.; Points, Sean D.; Hubeny, Ivan

    2018-01-01

    We have established a network of faint (16.5 < V < 19) hot DA white dwarfs as spectrophotometric standards for present and future wide-field observatories. Our standards are accessible from both hemispheres and suitable for ground- and space-based observatories, covering the UV to the near-IR. The network is tied directly to the most precise astrophysical reference presently available - the CALSPEC standards - through a multi-cycle imaging program using the Wide-Field Camera 3 (WFC3) on the Hubble Space Telescope (HST). We have developed two independent analyses to forward model all the observed photometry and ground-based spectroscopy and infer a spectral energy distribution for each source, using a non-local-thermodynamic-equilibrium (NLTE) DA white dwarf atmosphere extincted by interstellar dust. The models are in excellent agreement with each other, and agree with the observations to better than 0.01 mag in all passbands and better than 0.005 mag in the optical. The high precision of these faint sources, tied directly to the most accurate flux standards presently available, makes our network ideally suited for any experiment with very stringent requirements on absolute flux calibration, such as studies of dark energy using the Large Synoptic Survey Telescope (LSST) and the Wide-Field Infrared Survey Telescope (WFIRST).
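
    The forward-modelling loop described above - compute synthetic photometry from a model SED, then choose parameters that best match the observed magnitudes - can be sketched with a deliberately simplified stand-in: a blackbody SED, top-hat passbands, and a brute-force temperature grid are assumptions for illustration, not the NLTE white-dwarf atmospheres and HST/WFC3 bandpasses of the actual analysis.

```python
import numpy as np

# Hedged sketch of forward-model photometry fitting. The SED model,
# passbands, and magnitude zero point are illustrative assumptions.

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wl_m, T):
    # Blackbody spectral radiance at wavelength wl_m (metres).
    return (2 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k * T))

def synthetic_mag(T, band):
    """Uncalibrated magnitude of a blackbody through a top-hat band."""
    wl = np.linspace(band[0], band[1], 200)
    flux = np.sum(planck(wl, T)) * (wl[1] - wl[0])  # simple integral
    return -2.5 * np.log10(flux)

bands = [(3e-7, 4e-7), (4e-7, 5e-7), (5e-7, 6e-7)]  # UV/blue/visual, metres
obs_T = 20000.0
obs = np.array([synthetic_mag(obs_T, b) for b in bands])  # "observed" mags

# Brute-force grid "fit" over effective temperature.
grid = np.arange(10000.0, 30000.0, 250.0)
chi2 = [np.sum((obs - np.array([synthetic_mag(T, b) for b in bands]))**2)
        for T in grid]
best_T = grid[int(np.argmin(chi2))]
print(best_T)
```

    In the real analysis the same loop runs over a full atmosphere-model parameter space (temperature, gravity, extinction) rather than a single temperature axis.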

  20. Blowout Jets: Hinode X-Ray Jets that Don't Fit the Standard Model

    NASA Technical Reports Server (NTRS)

    Moore, Ronald L.; Cirtain, Jonathan W.; Sterling, Alphonse C.; Falconer, David A.

    2010-01-01

    Nearly half of all H-alpha macrospicules in polar coronal holes appear to be miniature filament eruptions. This suggests that there is a large class of X-ray jets in which the jet-base magnetic arcade undergoes a blowout eruption, as in a CME, instead of remaining static as in most solar X-ray jets - the standard jets that fit the model advocated by Shibata. Along with a cartoon depicting the standard model, we present a cartoon depicting the signatures expected of blowout jets in coronal X-ray images. From Hinode/XRT movies and STEREO/EUVI snapshots in polar coronal holes, we present examples of (1) X-ray jets that fit the standard model, and (2) X-ray jets that do not fit the standard model but do have features appropriate for blowout jets. These features are (1) a flare arcade inside the jet-base arcade in addition to the small flare arcade (bright point) outside that standard jets have, (2) a filament of cool (T ~ 80,000 K) plasma that erupts from the core of the jet-base arcade, and (3) an extra jet strand that should not be made by the reconnection for standard jets but could be made by reconnection between the ambient unipolar open field and the opposite-polarity leg of the filament-carrying flux-rope core field of the erupting jet-base arcade. We therefore infer that these non-standard jets are blowout jets - jets made by miniature versions of the sheared-core-arcade eruptions that make CMEs.

  1. Temporal self-splitting of optical pulses

    NASA Astrophysics Data System (ADS)

    Ding, Chaoliang; Koivurova, Matias; Turunen, Jari; Pan, Liuzhan

    2018-05-01

    We present mathematical models for temporally and spectrally partially coherent pulse trains with Laguerre-Gaussian and Hermite-Gaussian Schell-model statistics as extensions of the standard Gaussian Schell model for pulse trains. We derive propagation formulas for both classes of pulsed fields in linearly dispersive media and in temporal optical systems. It is found that, in general, both types of fields exhibit time-domain self-splitting upon propagation. The Laguerre-Gaussian model leads to multiply peaked pulses, while the Hermite-Gaussian model leads to doubly peaked pulses, in the temporal far field (in dispersive media) or at the Fourier plane of a temporal system. In both model fields the character of the self-splitting phenomenon depends both on the degree of temporal and spectral coherence and on the power spectrum of the field.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhawan, Suhail; Goobar, Ariel; Mörtsell, Edvard

    Recent re-calibration of the Type Ia supernova (SNe Ia) magnitude-redshift relation, combined with cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) data, has provided excellent constraints on the standard cosmological model. Here, we examine particular classes of alternative cosmologies, motivated by various physical mechanisms, e.g. scalar fields, modified gravity and phase transitions, to test their consistency with observations of SNe Ia and the ratio of the angular diameter distances from the CMB and BAO. Using a model selection criterion for a relative comparison of the models (the Bayes factor), we find moderate to strong evidence that the data prefer flat ΛCDM over models invoking a thawing behaviour of the quintessence scalar field. However, some exotic models like the growing neutrino mass cosmology and vacuum metamorphosis still present acceptable evidence values. The bimetric gravity model with only the linear interaction term, as well as a simplified Galileon model, can be ruled out by the combination of SNe Ia and CMB/BAO datasets, whereas the model with linear and quadratic interaction terms has an evidence value comparable to standard ΛCDM. Thawing models are found to have significantly poorer evidence compared to flat ΛCDM cosmology under the assumption that the CMB compressed likelihood provides an adequate description for these non-standard cosmologies. We also present estimates for constraints from future data and find that geometric probes from upcoming surveys can put severe limits on non-standard cosmological models.

  3. Standard model EFT and extended scalar sectors

    DOE PAGES

    Dawson, Sally; Murphy, Christopher W.

    2017-07-31

    One of the simplest extensions of the Standard Model is the inclusion of an additional scalar multiplet, and we consider scalars in the SU(2)_L singlet, triplet, and quartet representations. Here, we examine models with heavy neutral scalars, m_H ~ 1-2 TeV, and the matching of the UV complete theories to the low energy effective field theory. We also demonstrate the agreement of the kinematic distributions obtained in the singlet models for the gluon fusion of a Higgs pair with the predictions of the effective field theory. Finally, the restrictions on the extended scalar sectors due to unitarity and precision electroweak measurements are summarized and lead to highly restricted regions of viable parameter space for the triplet and quartet models.

  5. Hydrodynamic interaction of a self-propelling particle with a wall: Comparison between an active Janus particle and a squirmer model.

    PubMed

    Shen, Zaiyi; Würger, Alois; Lintuvuori, Juho S

    2018-03-27

    Using lattice Boltzmann simulations, we study the hydrodynamics of an active spherical particle near a no-slip wall. We develop a computational model for an active Janus particle by considering different and independent mobilities on the two hemispheres, and compare its behaviour to a standard squirmer model. We show that the far-field hydrodynamics of the active Janus particle are similar to those of the standard squirmer model, but the near-field hydrodynamics differ. In order to study how the near-field effects affect the interaction between the particle and a flat wall, we compare the behaviour of a Janus swimmer and a squirmer near a no-slip surface via extensive numerical simulations. Our results show generally good agreement between the two models, but they reveal some key differences, especially at low magnitudes of the squirming parameter [Formula: see text]. Notably, the affinity of the particles to be trapped at a surface is increased for the active Janus particles when compared to standard squirmers. Finally, we find that when the particle is trapped on the surface, its velocity parallel to the surface exceeds the bulk swimming speed and scales linearly with [Formula: see text].

  6. A Standard-Driven Data Dictionary for Data Harmonization of Heterogeneous Datasets in Urban Geological Information Systems

    NASA Astrophysics Data System (ADS)

    Liu, G.; Wu, C.; Li, X.; Song, P.

    2013-12-01

    The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in the urban geological databases. Various models and vocabularies have been drafted and applied by industrial companies in urban geological data. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we propose a national-standard-driven information classification and coding method to effectively store and integrate urban geological data, and we apply data dictionary technology to achieve structured and standard data storage. The overall purpose of this work is to set up a common data platform to provide an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. Underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of the various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary manages system database files and enhances maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; the comprehensive data dictionary manages system operation and security. (3) An extension to system data management functions based on the data dictionary. The data-item constraint input function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent use of terms for fields. The model dictionary is used to automatically generate a database operation interface with standard semantic content via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System in southeast China with satisfactory results.
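
    The term-and-code dictionary idea can be sketched as a simple lookup layer that maps industrial vocabulary variants onto one canonical standard term and code. All terms and codes below are illustrative assumptions, not actual GB 9649-88 entries.

```python
# Hedged sketch of a term-and-code dictionary: heterogeneous records are
# stored with one canonical term and code per concept. The vocabulary and
# code values here are made up for illustration.

STANDARD_CODES = {
    "sandstone": "R101",
    "mudstone": "R102",
}

# Mapping table from industrial vocabulary variants to standard terms.
VARIANT_TO_STANDARD = {
    "sand stone": "sandstone",
    "sandstone": "sandstone",
    "mud stone": "mudstone",
    "argillite": "mudstone",
}

def standardize(term):
    """Return (standard_term, code); raise KeyError for unknown terms."""
    key = term.strip().lower()
    if key not in VARIANT_TO_STANDARD:
        raise KeyError(f"term not in dictionary: {term!r}")
    std = VARIANT_TO_STANDARD[key]
    return std, STANDARD_CODES[std]

print(standardize("Sand Stone"))
```

    Rejecting unknown terms at input time is what enforces the "data-item constraint input" described above.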

  7. Comparisons of a standard galaxy model with stellar observations in five fields

    NASA Technical Reports Server (NTRS)

    Bahcall, J. N.; Soneira, R. M.

    1984-01-01

    Modern data on the distribution of stellar colors and on the number of stars as a function of apparent magnitude in five directions in the Galaxy are analyzed. It is found that the standard model is consistent with all the available data. Detailed comparisons with the data for five separate fields are presented. The bright end of the spheroid luminosity function and the blue tip of the spheroid horizontal branch are analyzed. The allowed range of the disk scale heights and of fluctuations in the volume density is determined, and a lower limit is set on the disk scale length. Calculations based on the thick disk model of Gilmore and Reid (1983) are presented.

  8. One-loop topological expansion for spin glasses in the large connectivity limit

    NASA Astrophysics Data System (ADS)

    Chiara Angelini, Maria; Parisi, Giorgio; Ricci-Tersenghi, Federico

    2018-01-01

    We apply for the first time a new one-loop topological expansion around the Bethe solution to the spin-glass model with a field in the high-connectivity limit, following the methodological scheme proposed in a recent work. The results are completely equivalent to the well-known ones found by the standard field-theoretical expansion around the fully connected model (Bray and Roberts 1980, and following works). However, this method has the advantage that the starting point is the original Hamiltonian of the model, with no need to define an associated field theory or to know the initial values of the couplings, and the computations have a clear and simple physical meaning. Moreover, this new method can also be applied at zero temperature, where the Bethe model has a transition in field, contrary to the fully connected model, which is always in the spin-glass phase. Sharing the finite-connectivity properties of finite-dimensional models, the Bethe lattice is clearly a better starting point for an expansion than the fully connected model. The present work is a first step towards the generalization of this new expansion to more difficult and interesting cases, such as the zero-temperature limit, where the expansion could lead to different results from the standard one.

  9. Evaluation of the Revised Lagrangian Particle Model GRAL Against Wind-Tunnel and Field Observations in the Presence of Obstacles

    NASA Astrophysics Data System (ADS)

    Oettl, Dietmar

    2015-05-01

    A revised microscale flow field model has been implemented in the Lagrangian particle model Graz Lagrangian Model (GRAL) for computing flows around obstacles. It is based on the Reynolds-averaged Navier-Stokes equations in three dimensions and the widely used standard turbulence model. Here we focus on evaluating the model with respect to computed concentrations, using a comprehensive wind-tunnel experiment with numerous combinations of building geometries, stack positions, and locations. In addition, two field experiments carried out in Denmark and in the U.S. were used to evaluate the model. Further, two different formulations of the standard deviation of wind component fluctuations were also investigated, but no clear picture could be drawn in this respect. Overall, the model is able to capture several of the main features of pollutant dispersion around obstacles, but at least one future model improvement was identified for stack releases within the recirculation zone of buildings. Regulatory applications are the bread-and-butter of most GRAL users nowadays, requiring fast and robust modelling algorithms; thus, a few simplifications have been introduced to decrease the computational time required. Although predicted concentrations for the two field experiments were found to be in good agreement with observations, shortcomings were identified regarding the extent of computed recirculation zones for the idealized wind-tunnel building geometries, with approaching flows perpendicular to the building faces.

  10. Curvaton scenario within the minimal supersymmetric standard model and predictions for non-Gaussianity.

    PubMed

    Mazumdar, Anupam; Nadathur, Seshadri

    2012-03-16

    We provide a model in which both the inflaton and the curvaton are obtained from within the minimal supersymmetric standard model, with known gauge and Yukawa interactions. Since now both the inflaton and curvaton fields are successfully embedded within the same sector, their decay products thermalize very quickly before the electroweak scale. This results in two important features of the model: first, there will be no residual isocurvature perturbations, and second, observable non-Gaussianities can be generated with the non-Gaussianity parameter f(NL)~O(5-1000) being determined solely by the combination of weak-scale physics and the standard model Yukawa interactions.

  11. Effect of sea-level rise on salt water intrusion near a coastal well field in southeastern Florida.

    PubMed

    Langevin, Christian D; Zygnerski, Michael

    2013-01-01

    A variable-density groundwater flow and dispersive solute transport model was developed for the shallow coastal aquifer system near a municipal supply well field in southeastern Florida. The model was calibrated for a 105-year period (1900 to 2005). An analysis with the model suggests that well-field withdrawals were the dominant cause of salt water intrusion near the well field, and that historical sea-level rise, which is similar to lower-bound projections of future sea-level rise, exacerbated the extent of salt water intrusion. Average 2005 hydrologic conditions were used for 100-year sensitivity simulations aimed at quantifying the effect of projected rises in sea level on fresh coastal groundwater resources near the well field. Use of average 2005 hydrologic conditions and a constant sea level result in total dissolved solids (TDS) concentration of the well field exceeding drinking water standards after 70 years. When sea-level rise is included in the simulations, drinking water standards are exceeded 10 to 21 years earlier, depending on the specified rate of sea-level rise. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.

  12. Process gg → h0 → γγ in the Lee-Wick standard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krauss, F.; Underwood, T. E. J.; Zwicky, R.

    2008-01-01

    The process gg → h0 → γγ is studied in the Lee-Wick extension of the standard model (LWSM) proposed by Grinstein, O'Connell, and Wise. In this model, negative-norm partners for each SM field are introduced with the aim of cancelling quadratic divergences in the Higgs mass. All sectors of the model relevant to gg → h0 → γγ are diagonalized, and the results are discussed from the perspective of both the Lee-Wick and higher-derivative formalisms. Deviations from the SM rate for gg → h0 are found to be of the order of 15%-5% for Lee-Wick masses in the range 500-1000 GeV. Effects on the rate for h0 → γγ are smaller, of the order of 5%-1% for Lee-Wick masses in the same range. These comparatively small changes may well provide a means of distinguishing the LWSM from other models such as universal extra dimensions, where same-spin partners to standard model fields also appear. Corrections to determinations of Cabibbo-Kobayashi-Maskawa (CKM) elements |V_t(b,s,d)| are also considered and are shown to be positive, allowing the possibility of measuring a CKM element larger than unity, a characteristic signature of the ghostlike nature of the Lee-Wick fields.

  13. Electronic Model of a Ferroelectric Field Effect Transistor

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Ho, Fat Duen; Russell, Larry (Technical Monitor)

    2001-01-01

    A pair of electronic models has been developed for a ferroelectric field-effect transistor (FFET). These models can be used in standard electrical circuit simulation programs to simulate the main characteristics of the FFET. The models use the Schmitt trigger circuit as the basis of their design. One model uses bipolar junction transistors and one uses MOSFETs. Each model has the main characteristics of the FFET: current hysteresis at different gate voltages and decay of the drain current when the gate voltage is off. The drain current from each model has values similar to those of an actual FFET measured experimentally. The input and output resistances in the models are also similar to those of the FFET. The models are valid for all frequencies below RF levels; no attempt was made to model the high-frequency characteristics of the FFET. Each model can be used to design circuits using FFETs with standard electrical simulation packages. Such circuits can be used in designing non-volatile memory and logic circuits, and the models are compatible with all SPICE-based circuit analysis programs. The models consist only of standard electrical components, such as BJTs, MOSFETs, diodes, resistors, and capacitors. Each model is compared to experimental data measured from an actual FFET.

  14. Lorentz violation, gravitoelectromagnetic field and Bhabha scattering

    NASA Astrophysics Data System (ADS)

    Santos, A. F.; Khanna, Faqir C.

    2018-01-01

    Lorentz symmetry is a fundamental symmetry of the Standard Model (SM) and of General Relativity (GR). This symmetry holds true for all models at low energies. However, at energies near the Planck scale, it is conjectured that there may be a very small violation of Lorentz symmetry. The Standard Model Extension (SME) is a quantum field theory that includes a systematic description of Lorentz symmetry violations in all sectors of particle physics and gravity. In this paper, the SME is used to study the physical process of Bhabha scattering in the Gravitoelectromagnetism (GEM) theory. GEM is an important formalism that is valid in a suitable approximation of general relativity. A new nonminimal coupling term that violates Lorentz symmetry is used in this paper. The differential cross-section for gravitational Bhabha scattering is calculated. The Lorentz-violation contributions to this GEM scattering cross-section are small and are similar in magnitude to those for the electromagnetic field.

  15. Can the baryon asymmetry arise from initial conditions?

    DOE PAGES

    Krnjaic, Gordan

    2017-08-01

    In this letter, we quantify the challenge of explaining the baryon asymmetry using initial conditions in a universe that undergoes inflation. Contrary to lore, we find that such an explanation is possible if net $B-L$ number is stored in a light bosonic field with hyper-Planckian initial displacement and a delicately chosen field velocity prior to inflation. However, such a construction may require extremely tuned coupling constants to ensure that this asymmetry is viably communicated to the Standard Model after reheating; the large field displacement required to overcome inflationary dilution must not induce masses for Standard Model particles or generate dangerous washout processes. While these features are inelegant, this counterexample nonetheless shows that there is no theorem against such an explanation. We also comment on potential observables in the double $\beta$-decay spectrum and on model variations that may allow for more natural realizations.

  16. Standards for the Analysis and Processing of Surface-Water Data and Information Using Electronic Methods

    USGS Publications Warehouse

    Sauer, Vernon B.

    2002-01-01

    Surface-water computation methods and procedures are described in this report to provide standards from which a completely automated electronic processing system can be developed. To the greatest extent possible, the traditional U. S. Geological Survey (USGS) methodology and standards for streamflow data collection and analysis have been incorporated into these standards. Although USGS methodology and standards are the basis for this report, the report is applicable to other organizations doing similar work. The proposed electronic processing system allows field measurement data, including data stored on automatic field recording devices and data recorded by the field hydrographer (a person who collects streamflow and other surface-water data) in electronic field notebooks, to be input easily and automatically. A user of the electronic processing system easily can monitor the incoming data and verify and edit the data, if necessary. Input of the computational procedures, rating curves, shift requirements, and other special methods are interactive processes between the user and the electronic processing system, with much of this processing being automatic. Special computation procedures are provided for complex stations such as velocity-index, slope, control structures, and unsteady-flow models, such as the Branch-Network Dynamic Flow Model (BRANCH). Navigation paths are designed to lead the user through the computational steps for each type of gaging station (stage-only, stage-discharge, velocity-index, slope, rate-of-change in stage, reservoir, tide, structure, and hydraulic model stations). The proposed electronic processing system emphasizes the use of interactive graphics to provide good visual tools for unit values editing, rating curve and shift analysis, hydrograph comparisons, data-estimation procedures, data review, and other needs. 
Documentation, review, finalization, and publication of records are provided for with the electronic processing system, as well as archiving, quality assurance, and quality control.
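
    The rating-and-shift computation at the heart of these standards can be sketched as follows. This is a hedged toy example: the rating points, the log-log interpolation, and the function names are illustrative assumptions, not USGS values or the report's actual algorithms.

```python
import numpy as np

# Hedged sketch of a stage-discharge rating with a shift adjustment:
# discharge is read from the rating curve at (stage + shift), with
# log-log interpolation between rating points. All numbers are
# illustrative, not from any actual USGS rating.

rating_stage = np.array([1.0, 2.0, 4.0, 8.0])        # stage, feet
rating_q     = np.array([10.0, 40.0, 160.0, 640.0])  # discharge, cfs

def discharge(stage, shift=0.0):
    """Interpolate discharge in log-log space at the shifted stage."""
    s = stage + shift
    return float(np.exp(np.interp(np.log(s),
                                  np.log(rating_stage),
                                  np.log(rating_q))))

print(discharge(2.0))       # on a rating point
print(discharge(2.0, 0.1))  # a small positive shift raises discharge
```

    In practice the shift itself varies with time and stage; an automated system applies the shift analysis interactively, as the report describes.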

  17. The mechanical properties of high speed GTAW weld and factors of nonlinear multiple regression model under external transverse magnetic field

    NASA Astrophysics Data System (ADS)

    Lu, Lin; Chang, Yunlong; Li, Yingmin; He, Youyou

    2013-05-01

    A transverse magnetic field was introduced into the arc plasma in the process of welding stainless steel tubes by high-speed tungsten inert gas (TIG) welding without filler wire. The influence of the external magnetic field on welding quality was investigated. Nine sets of parameters were designed by means of an orthogonal experiment. The tensile strength of the welded joint and the form factor of the weld were taken as the main measures of welding quality. A binary quadratic nonlinear regression equation was established with magnetic induction and Ar gas flow rate as the independent variables. The residual standard deviation was calculated to assess the accuracy of the regression model. The results showed that the regression model was correct and effective in predicting the tensile strength and aspect ratio of the weld. Two 3D regression models were then constructed, and the effect of magnetic induction on welding quality was studied.
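
    A binary (two-variable) quadratic regression with a residual standard deviation of the kind described can be sketched as follows. The synthetic data and coefficients are illustrative assumptions, not the paper's nine orthogonal trials.

```python
import numpy as np

# Hedged sketch: fit z ~ b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y by
# least squares, then compute the residual standard deviation, as the
# abstract describes for tensile strength vs. magnetic induction and Ar
# flow rate. Data below are synthetic, for illustration only.

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 30)   # magnetic induction (arbitrary units)
y = rng.uniform(5, 20, 30)   # Ar gas flow rate (arbitrary units)
z = 3 + 0.5*x + 0.2*y - 0.03*x**2 + 0.01*x*y + rng.normal(0, 0.1, 30)

# Design matrix for the full quadratic surface in (x, y).
A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x*y])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

residuals = z - A @ coef
# Residual standard deviation with n - p degrees of freedom.
res_std = np.sqrt(residuals @ residuals / (len(z) - A.shape[1]))
print(res_std)
```

    A small residual standard deviation relative to the spread of z is what justifies calling the fitted regression surface "correct and effective".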

  18. Modeling and Simulation Network Data Standards

    DTIC Science & Technology

    2011-09-30

    Excerpt from the COMBATXXI Movement Logger Data Output Dictionary: fields include Geocentric Coordinates (GCC) heading, Geodetic Coordinates (GDC) heading, and Universal Transverse Mercator (UTM) heading. Glossary: FKSM, Fort Knox Supplemental Material; FM, field manual; GCC, geocentric coordinates; GDC, geodetic coordinates; GIG, global information grid.

  19. Comparison of Field Methods and Models to Estimate Mean Crown Diameter

    Treesearch

    William A. Bechtold; Manfred E. Mielke; Stanley J. Zarnoch

    2002-01-01

    The direct measurement of crown diameters with logger's tapes adds significantly to the cost of extensive forest inventories. We undertook a study of 100 trees to compare this measurement method to four alternatives-two field instruments, ocular estimates, and regression models. Using the taping method as the standard of comparison, accuracy of the tested...

  20. Chameleon field dynamics during inflation

    NASA Astrophysics Data System (ADS)

    Saba, Nasim; Farhoudi, Mehrdad

    By studying the chameleon model during inflation, we investigate whether it can be a successful inflationary model, employing the typical potential commonly used in the literature. In the context of the slow-roll approximations, we obtain the e-folding number for the model to verify its ability to resolve the problems of standard big bang cosmology. Meanwhile, we apply constraints on the form of the chosen potential and also on the equation of state parameter coupled to the scalar field. The results of the present analysis show that there is not much chance of having chameleonic inflation. Hence, we suggest that if, through some mechanism, the chameleon model can be reduced to the standard inflationary model, then it may cover the whole history of the universe from inflation up to late times.

  1. Is the Conformational Ensemble of Alzheimer’s Aβ10-40 Peptide Force Field Dependent?

    PubMed Central

    Siwy, Christopher M.

    2017-01-01

    By applying REMD simulations we have performed a comparative analysis of the conformational ensembles of the amino-truncated Aβ10-40 peptide produced with five force fields, which combine four protein parameterizations (CHARMM36, CHARMM22*, CHARMM22/cmap, and OPLS-AA) and two water models (standard and modified TIP3P). Aβ10-40 conformations were analyzed by computing secondary structure, backbone fluctuations, tertiary interactions, and radius of gyration. We have also calculated Aβ10-40 3JHNHα-coupling and RDC constants and compared them with their experimental counterparts obtained for the full-length Aβ1-40 peptide. Our study led us to several conclusions. First, all force fields predict that Aβ adopts an unfolded structure dominated by turn and random coil conformations. Second, the specific TIP3P water model does not dramatically affect secondary or tertiary Aβ10-40 structure, although the standard TIP3P model favors slightly more compact states. Third, although the secondary structures observed in CHARMM36 and CHARMM22/cmap simulations are qualitatively similar, their tertiary interactions show little consistency. Fourth, two force fields, OPLS-AA and CHARMM22*, have unique features setting them apart from CHARMM36 or CHARMM22/cmap. OPLS-AA reveals moderate β-structure propensity coupled with extensive but weak long-range tertiary interactions leading to collapsed Aβ conformations. CHARMM22* exhibits moderate helix propensity and generates multiple exceptionally stable long- and short-range interactions. Our investigation suggests that among all force fields CHARMM22* differs the most from CHARMM36. Fifth, the analysis of 3JHNHα-coupling and RDC constants based on the CHARMM36 force field with the standard TIP3P model led us to an unexpected finding: in silico Aβ10-40 and experimental Aβ1-40 constants are generally in better agreement than these quantities computed and measured for identical peptides, such as Aβ1-40 or Aβ1-42. This observation suggests that the differences in the conformational ensembles of Aβ10-40 and Aβ1-40 are small and the former can be used as a proxy for the full-length peptide. Based on this argument, we concluded that the CHARMM36 force field with the standard TIP3P model produces the most accurate representation of the Aβ10-40 conformational ensemble. PMID:28085875

  2. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1)

    NASA Astrophysics Data System (ADS)

    Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier

    2018-02-01

    This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe a methodology to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field whose spatial variability is in better agreement with the observations than that of the standard model. Although the large-scale model biases are not corrected, for selected model parameters the downscaling can induce a better overall performance than the standard version on both the high-resolution grid and the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.
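
    The conservation constraint described above can be illustrated with a minimal mean-conserving downscaling step. This is a hedged sketch only: the iLOVECLIM scheme is far more elaborate and conserves energy and moisture through its native physics, not through this simple anomaly adjustment.

    ```python
    # Illustrative only: distribute a coarse-cell value over high-resolution
    # subcells so that the subcell mean equals the coarse value, i.e. the
    # quantity (energy, moisture) is conserved under the downscaling.

    def downscale_conservative(coarse_value, anomalies):
        """Add a high-res anomaly pattern (e.g. from topography) to a coarse
        value, after removing the pattern's mean to preserve the cell total."""
        mean_anom = sum(anomalies) / len(anomalies)
        return [coarse_value + a - mean_anom for a in anomalies]

    coarse_t = 271.5                    # coarse-grid temperature, K
    anoms = [2.0, -1.0, 0.5, -0.5]      # hypothetical high-res pattern
    fine = downscale_conservative(coarse_t, anoms)
    assert abs(sum(fine) / len(fine) - coarse_t) < 1e-9  # mean is conserved
    ```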

  3. Evaluation of the APEX model to simulate runoff quality from agricultural fields in the southern region of the US

    USDA-ARS?s Scientific Manuscript database

    The phosphorus (P) Index (PI) is the risk assessment tool approved in the NRCS 590 standard used to target critical source areas and practices to reduce P losses. A revision of the 590 standard, suggested using the Agricultural Policy/Environmental eXtender (APEX) model to assess the risk of nitroge...

  4. Bounce inflation cosmology with Standard Model Higgs boson

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Youping; Huang, Fa Peng; Zhang, Xinmin

    It is of great interest to connect cosmology in the early universe to the Standard Model of particle physics. In this paper, we construct a bounce inflation model with the Standard Model Higgs boson, where the one-loop correction is taken into account in the effective potential of the Higgs field. In this model, a Galileon term is introduced to eliminate the ghost mode when the bounce happens. Moreover, because the fermion loop correction can make part of the Higgs potential negative, one naturally obtains a large equation-of-state (EoS) parameter in the contracting phase, which can eliminate the anisotropy problem. After the bounce, the model drives the universe into the standard Higgs inflation phase, which can generate a nearly scale-invariant power spectrum.

  5. Endoscope field of view measurement.

    PubMed

    Wang, Quanzeng; Khanicheh, Azadeh; Leiner, Dennis; Shafer, David; Zobel, Jurgen

    2017-03-01

    The current International Organization for Standardization (ISO) standard (ISO 8600-3: 1997 including Amendment 1: 2003) for determining endoscope field of view (FOV) does not accurately characterize some novel endoscopic technologies such as endoscopes with a close focus distance and capsule endoscopes. We evaluated the endoscope FOV measurement method (the FOV WS method) in the current ISO 8600-3 standard and proposed a new method (the FOV EP method). We compared the two methods by measuring the FOV of 18 models of endoscopes (one device for each model) from seven key international manufacturers. We also estimated the device-to-device variation of two models of colonoscopes by measuring several hundred devices. Our results showed that the FOV EP method was more accurate than the FOV WS method and could be used for all endoscopes. We also found that the labelled FOV values of many commercial endoscopes are significantly overstated. Our study can help endoscope users understand endoscope FOV and identify a proper method for FOV measurement. This paper can be used as a reference to revise the current endoscope FOV measurement standard.
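
    The basic geometry behind any target-based FOV measurement can be sketched as follows. This is an illustration with hypothetical numbers, not the ISO 8600-3 procedure or the paper's FOV EP method.

    ```python
    import math

    # Full cone angle subtended by a flat circular target viewed head-on
    # from a given working distance: FOV = 2 * atan(radius / distance).

    def fov_angle_deg(target_radius_mm, distance_mm):
        """Full field-of-view angle, in degrees, for a centered circular target."""
        return 2.0 * math.degrees(math.atan(target_radius_mm / distance_mm))

    # A 10 mm radius target at a 10 mm working distance subtends ~90 degrees;
    # close-focus endoscopes make the result very sensitive to the distance.
    angle = fov_angle_deg(10.0, 10.0)
    assert abs(angle - 90.0) < 1e-6
    ```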

  6. Endoscope field of view measurement

    PubMed Central

    Wang, Quanzeng; Khanicheh, Azadeh; Leiner, Dennis; Shafer, David; Zobel, Jurgen

    2017-01-01

    The current International Organization for Standardization (ISO) standard (ISO 8600-3: 1997 including Amendment 1: 2003) for determining endoscope field of view (FOV) does not accurately characterize some novel endoscopic technologies such as endoscopes with a close focus distance and capsule endoscopes. We evaluated the endoscope FOV measurement method (the FOV WS method) in the current ISO 8600-3 standard and proposed a new method (the FOV EP method). We compared the two methods by measuring the FOV of 18 models of endoscopes (one device for each model) from seven key international manufacturers. We also estimated the device-to-device variation of two models of colonoscopes by measuring several hundred devices. Our results showed that the FOV EP method was more accurate than the FOV WS method and could be used for all endoscopes. We also found that the labelled FOV values of many commercial endoscopes are significantly overstated. Our study can help endoscope users understand endoscope FOV and identify a proper method for FOV measurement. This paper can be used as a reference to revise the current endoscope FOV measurement standard. PMID:28663840

  7. Deformed Calogero-Sutherland model and fractional quantum Hall effect

    NASA Astrophysics Data System (ADS)

    Atai, Farrokh; Langmann, Edwin

    2017-01-01

    The deformed Calogero-Sutherland (CS) model is a quantum integrable system with arbitrary numbers of two types of particles and reducing to the standard CS model in special cases. We show that a known collective field description of the CS model, which is based on conformal field theory (CFT), is actually a collective field description of the deformed CS model. This provides a natural application of the deformed CS model in Wen's effective field theory of the fractional quantum Hall effect (FQHE), with the two kinds of particles corresponding to electrons and quasi-hole excitations. In particular, we use known mathematical results about super-Jack polynomials to obtain simple explicit formulas for the orthonormal CFT basis proposed by van Elburg and Schoutens in the context of the FQHE.

  8. Dark energy coupling with electromagnetism as seen from future low-medium redshift probes

    NASA Astrophysics Data System (ADS)

    Calabrese, E.; Martinelli, M.; Pandolfi, S.; Cardone, V. F.; Martins, C. J. A. P.; Spiro, S.; Vielzeuf, P. E.

    2014-04-01

    Beyond the standard cosmological model the late-time accelerated expansion of the Universe can be reproduced by the introduction of an additional dynamical scalar field. In this case, the field is expected to be naturally coupled to the rest of the theory's fields, unless a (still unknown) symmetry suppresses this coupling. Therefore, this would possibly lead to some observational consequences, such as space-time variations of nature's fundamental constants. In this paper we investigate the coupling between a dynamical dark energy model and the electromagnetic field, and the corresponding evolution of the fine structure constant (α) with respect to the standard local value α0. In particular, we derive joint constraints on two dynamical dark energy model parametrizations (the Chevallier-Polarski-Linder and early dark energy model) and on the coupling with electromagnetism ζ, forecasting future low-medium redshift observations. We combine supernovae and weak lensing measurements from the Euclid experiment with high-resolution spectroscopy measurements of fundamental couplings and the redshift drift from the European Extremely Large Telescope, highlighting the contribution of each probe. Moreover, we also consider the case where the field driving the α evolution is not the one responsible for cosmic acceleration and investigate how future observations can constrain this scenario.
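
    In this class of coupled dark energy models, the redshift evolution of the fine structure constant is commonly parametrized as below; this is the standard form found in the literature on varying-α quintessence, and the paper's exact conventions may differ:

    ```latex
    \frac{\Delta\alpha}{\alpha}(z) \;=\; \zeta \int_{0}^{z} \sqrt{3\,\Omega_{\phi}(z')\bigl(1+w_{\phi}(z')\bigr)}\;\frac{dz'}{1+z'}
    ```

    Here ζ is the coupling to electromagnetism, and Ω_φ and w_φ are the dark energy density parameter and equation of state.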

  9. Automated next-to-leading order predictions for new physics at the LHC: The case of colored scalar pair production

    DOE PAGES

    Degrande, Céline; Fuks, Benjamin; Hirschi, Valentin; ...

    2015-05-05

    We present for the first time the full automation of collider predictions matched with parton showers at next-to-leading-order accuracy in QCD within nontrivial extensions of the standard model. The sole inputs required from the user are the model Lagrangian and the process of interest. As an application, we explore scenarios beyond the standard model where new colored scalar particles can be pair produced in hadron collisions. Using simplified models to describe the new field interactions with the standard model, we present precision predictions for the LHC within the MadGraph5_aMC@NLO framework.

  10. Characterization of commercial magnetorheological fluids at high shear rate: influence of the gap

    NASA Astrophysics Data System (ADS)

    Golinelli, Nicola; Spaggiari, Andrea

    2018-07-01

    This paper reports experimental tests on the behaviour of a commercial MR fluid at high shear rates and the effect of the gap. Three gaps were considered at multiple magnetic fields and shear rates. From an extended set of almost two hundred experimental flow curves, a set of parameters for the apparent viscosity is retrieved using the Ostwald-de Waele model for non-Newtonian fluids. The parameter correlation can be simplified with the following considerations: the consistency of the model depends only on the magnetic field, the flow index depends on the fluid type, and the gap shows an important effect only at zero or very low magnetic fields. This leads to a simple and useful model, especially in the design phase of an MR-based product. In the off state, with no applied field, it is possible to use a standard viscous model. In the active state, with a high magnetic field, a strong non-Newtonian nature becomes prevalent over the viscous one even at very high shear rates; the magnetic field dominates the apparent viscosity change, while the gap does not play any relevant role in the system behaviour. This simple assumption allows the designer to dimension the gap considering only the non-active state, as in standard viscous systems, and to take into account only the magnetic effect in the active state, where the gap does not change the proposed fluid model.
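
    The Ostwald-de Waele (power-law) model named above relates shear stress to shear rate as τ = K·γ̇ⁿ, so the apparent viscosity is η = K·γ̇ⁿ⁻¹. A minimal sketch, with purely illustrative parameter values (not fitted to the paper's data):

    ```python
    # Power-law (Ostwald-de Waele) fluid: apparent viscosity at shear rate
    # gamma_dot, with consistency K and flow index n. For MR fluids the
    # paper finds K depends mainly on the magnetic field and n on the fluid.

    def apparent_viscosity(gamma_dot, K, n):
        """eta = K * gamma_dot**(n - 1); n < 1 means shear thinning."""
        return K * gamma_dot ** (n - 1.0)

    # Shear-thinning example (n < 1): viscosity drops as shear rate grows.
    eta_low = apparent_viscosity(10.0, K=5.0, n=0.4)     # low shear rate
    eta_high = apparent_viscosity(1000.0, K=5.0, n=0.4)  # high shear rate
    assert eta_high < eta_low
    ```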

  11. Modelling of plug and play interface for energy router based on IEC61850

    NASA Astrophysics Data System (ADS)

    Shi, Y. F.; Yang, F.; Gan, L.; He, H. L.

    2017-11-01

    Under the background of "Internet Plus", energy routers, as energy internet infrastructure equipment, will be widely developed. The IEC 61850 standard is the only universal standard in the field of power system automation, realizing the standardization of the engineering operation of intelligent substations. To address the lack of an internationally unified communication standard for energy routers, this paper proposes applying IEC 61850 to the plug-and-play interface and establishes the plug-and-play interface information model and information transfer services. This paper provides a research approach for establishing energy router communication standards and promotes the development of energy routers.
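
    A hypothetical sketch of what such a plug-and-play interface information model might look like, loosely following the IEC 61850 logical-device / logical-node / data-object hierarchy. The structure and the toy read service below are illustrative assumptions, not taken from the paper or the standard text.

    ```python
    # Hypothetical energy-router port modeled in an IEC 61850-like hierarchy.
    # Node names echo common IEC 61850 logical nodes (MMXU: measurements,
    # CSWI: switch control), but the layout here is a simplification.

    interface_model = {
        "logical_device": "EnergyRouterPort1",
        "logical_nodes": {
            "MMXU1": {                               # measurement node
                "TotW": {"mag": 12.5, "unit": "kW"}, # total active power
                "Hz": {"mag": 50.0, "unit": "Hz"},   # frequency
            },
            "CSWI1": {                               # switch-control node
                "Pos": {"stVal": "on"},              # port connected state
            },
        },
    }

    def read_measurement(model, node, data_object):
        """Minimal 'information transfer service': read one magnitude."""
        return model["logical_nodes"][node][data_object]["mag"]

    assert read_measurement(interface_model, "MMXU1", "TotW") == 12.5
    ```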

  12. Compactification on phase space

    NASA Astrophysics Data System (ADS)

    Lovelady, Benjamin; Wheeler, James

    2016-03-01

    A major challenge for string theory is to understand the dimensional reduction required for comparison with the standard model. We propose reducing the dimension of the compactification by interpreting some of the extra dimensions as the energy-momentum portion of a phase space. Such models naturally arise as generalized quotients of the conformal group called biconformal spaces. By combining the standard Kaluza-Klein approach with such a conformal gauge theory, we may start from the conformal group of an n-dimensional Euclidean space to form a 2n-dimensional quotient manifold with symplectic structure. A pair of involutions leads naturally to two n-dimensional Lorentzian manifolds. For n = 5, this leaves only two extra dimensions, with a countable family of possible compactifications and an SO(5) Yang-Mills field on the fibers. Starting with n = 6 leads to 4-dimensional compactification of the phase space. In the latter case, if the two dimensions each from spacetime and momentum space are compactified onto spheres, then there is an SU(2)×SU(2) (left-right symmetric electroweak) field between phase and configuration space and an SO(6) field on the fibers. Such a theory, with minor additional symmetry breaking, could contain all parts of the standard model.

  13. Spin-charge-family theory is offering next step in understanding elementary particles and fields and correspondingly universe

    NASA Astrophysics Data System (ADS)

    Mankoč Borštnik, Norma Susana

    2017-05-01

    More than 40 years ago the standard model made a successful new step in understanding properties of fermion and boson fields. Now the next step is needed, which would explain what the standard model and the cosmological models just assume: a. The origin of quantum numbers of massless one family members. b. The origin of families. c. The origin of the vector gauge fields. d. The origin of the Higgses and Yukawa couplings. e. The origin of the dark matter. f. The origin of the matter-antimatter asymmetry. g. The origin of the dark energy. h. And several other open problems. The spin-charge-family theory, a kind of the Kaluza-Klein theories in (d = (2n - 1) + 1)-space-time, with d = (13 + 1) and the two kinds of the spin connection fields, which are the gauge fields of the two kinds of the Clifford algebra objects anti-commuting with one another, may provide this much needed next step. The talk presents: i. A short presentation of this theory. ii. The review over the achievements of this theory so far, with some not published yet achievements included. iii. Predictions for future experiments.

  14. SU-E-T-17: A Mathematical Model for PinPoint Chamber Correction in Measuring Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T; Zhang, Y; Li, X

    2014-06-01

    Purpose: For small field dosimetry, such as measuring the cone output factor for stereotactic radiosurgery, ion chambers often result in underestimation of the dose, due to both the volume averaging effect and the lack of electron equilibrium. The purpose of this work is to develop a mathematical model, specifically for the PinPoint chamber, to calculate the correction factors corresponding to different types of small fields, including single cone-based circular fields and non-standard composite fields. Methods: A PTW 0.015cc PinPoint chamber was used in the study. Its response in a given field was modeled as the total contribution of many small beamlets, each with a different response factor depending on the relative strength, radial distance to the chamber axis, and the beam angle. To obtain these factors, 12 cone-shaped circular fields (5mm, 7.5mm, 10mm, 12.5mm, 15mm, 20mm, 25mm, 30mm, 35mm, 40mm, 50mm, 60mm) were irradiated and measured with the PinPoint chamber. For each field size, hundreds of readings were recorded for every 2mm chamber shift in the horizontal plane. These readings were then compared with the theoretical doses obtained with Monte Carlo calculation. A penalized-least-square optimization algorithm was developed to find the beamlet response factors. After the parameter fitting, the established mathematical model was validated with the same MC code for other non-circular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber reading matched the Monte Carlo calculation for all the tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurement of small fields. The current model is applicable only when the beam axis is perpendicular to the chamber axis. It can be applied to non-standard composite fields. Further validation with other types of detectors is being conducted.
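
    The volume averaging effect mentioned in the purpose section can be illustrated with a toy calculation. This is not the abstract's beamlet model; it assumes a hypothetical Gaussian lateral dose profile and a chamber that averages dose over its sensitive length.

    ```python
    import math

    # Toy volume-averaging correction: k_vol = D(axis) / mean dose sampled
    # over the chamber's sensitive length [-a, a], for a Gaussian profile
    # normalized to 1 on the axis. Numerical averaging on a fine grid.

    def volume_averaging_correction(sigma_mm, half_length_mm, n=2001):
        a = half_length_mm
        xs = [-a + 2.0 * a * i / (n - 1) for i in range(n)]
        doses = [math.exp(-0.5 * (x / sigma_mm) ** 2) for x in xs]
        return 1.0 / (sum(doses) / n)

    # The needed correction grows as the field narrows relative to the
    # chamber, which is why small cones underestimate without correction.
    assert volume_averaging_correction(2.0, 1.0) > volume_averaging_correction(8.0, 1.0)
    ```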

  15. Detailed numerical investigation of the Bohm limit in cosmic ray diffusion theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussein, M.; Shalchi, A., E-mail: m_hussein@physics.umanitoba.ca, E-mail: andreasm4@yahoo.com

    2014-04-10

    A standard model in cosmic ray diffusion theory is the so-called Bohm limit, in which the particle mean free path is assumed to be equal to the Larmor radius. This type of diffusion is often employed to model the propagation and acceleration of energetic particles. However, recent analytical and numerical work has shown that standard Bohm diffusion is not realistic. In the present paper, we perform test-particle simulations to explore particle diffusion in the strong turbulence limit, in which the wave field is much stronger than the mean magnetic field. We show that there is indeed a lower limit of the particle mean free path along the mean field. In this limit, the mean free path is directly proportional to the unperturbed Larmor radius, as in the traditional Bohm limit, but it is reduced by the factor δB/B₀, where B₀ is the mean field and δB the turbulent field. Although we focus on parallel diffusion, we also explore diffusion across the mean field in the strong turbulence limit.
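
    One compact reading of the result stated in this abstract (a rephrasing, not a quoted equation), with R_L the unperturbed Larmor radius:

    ```latex
    \lambda_{\parallel} \;=\; \frac{B_{0}}{\delta B}\, R_{L} \qquad (\delta B \gg B_{0})
    ```

    This reduces to the traditional Bohm limit, λ∥ = R_L, when the turbulent field equals the mean field.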

  16. The chaotic regime of D-term inflation

    NASA Astrophysics Data System (ADS)

    Buchmüller, W.; Domcke, V.; Schmitz, K.

    2014-11-01

    We consider D-term inflation for small couplings of the inflaton to matter fields. Standard hybrid inflation then ends at a critical value of the inflaton field that exceeds the Planck mass. During the subsequent waterfall transition the inflaton continues its slow-roll motion, whereas the waterfall field rapidly grows by quantum fluctuations. Beyond the decoherence time, the waterfall field becomes classical and approaches a time-dependent minimum, which is determined by the value of the inflaton field and the self-interaction of the waterfall field. During the final stage of inflation, the effective inflaton potential is essentially quadratic, which leads to the standard predictions of chaotic inflation. The model illustrates how the decay of a false vacuum of GUT-scale energy density can end in a period of 'chaotic inflation'.

  17. The formation of cosmic structure in a texture-seeded cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III

    1992-01-01

    The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.

  18. Conceptual Modeling in the Time of the Revolution: Part II

    NASA Astrophysics Data System (ADS)

    Mylopoulos, John

    Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role both in Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML), and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on-going research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.

  19. Supernova brightening from chameleon-photon mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, C.

    2008-02-15

    Measurements of standard candles and measurements of standard rulers give an inconsistent picture of the history of the universe. This discrepancy can be explained if photon number is not conserved, as computations of the luminosity distance must then be modified. I show that photon number is not conserved when photons mix with chameleons in the presence of a magnetic field. The strong magnetic fields in a supernova mean that the probability of a photon converting into a chameleon in the interior of the supernova is high; this results in a large flux of chameleons at the surface of the supernova. Chameleons and photons also mix as a result of the intergalactic magnetic field. These two effects combined cause the image of the supernova to be brightened, resulting in a model which fits both observations of standard candles and observations of standard rulers.

  20. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming when dose accuracy is essential across a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model achieves 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
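
    A sketch of the single-target single-hit response curve named above, written with the three parameters the abstract mentions (background b, saturation s, slope k): OD(D) = b + s·(1 − e^(−k·D)). The parameter values below are illustrative, not the paper's fits.

    ```python
    import math

    # Single-target single-hit film response and its inversion, used to
    # map a measured optical density back to dose. Illustrative parameters.

    def optical_density(dose_cgy, b=0.2, s=2.5, k=0.015):
        """OD as a saturating exponential in dose (cGy)."""
        return b + s * (1.0 - math.exp(-k * dose_cgy))

    def dose_from_od(od, b=0.2, s=2.5, k=0.015):
        """Invert the response curve to recover dose from a measured OD."""
        return -math.log(1.0 - (od - b) / s) / k

    d = 64.0                         # cGy, inside the 16-128 cGy range used
    assert abs(dose_from_od(optical_density(d)) - d) < 1e-9  # round trip
    ```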

  1. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.

  2. Representing Hydrologic Models as HydroShare Resources to Facilitate Model Sharing and Collaboration

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Goodall, J. L.; Mbewe, P.

    2013-12-01

    The CUAHSI HydroShare project is a collaborative effort that aims to provide software for sharing data and models within the hydrologic science community. One of the early focuses of this work has been establishing metadata standards for describing models and model-related data as HydroShare resources. By leveraging this metadata definition, a prototype extension has been developed to create model resources that can be shared within the community using the HydroShare system. The extension uses a general model metadata definition to create resource objects, and was designed so that model-specific parsing routines can extract and populate metadata fields from model input and output files. The long term goal is to establish a library of supported models where, for each model, the system has the ability to extract key metadata fields automatically, thereby establishing standardized model metadata that will serve as the foundation for model sharing and collaboration within HydroShare. The Soil and Water Assessment Tool (SWAT) is used to demonstrate this concept through a case study application.
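
    The pattern described above (a generic resource record plus model-specific parsers that populate it) can be sketched as follows. The field names and the toy SWAT parsing stub are hypothetical illustrations, not the actual HydroShare schema; NBYR is the SWAT control-file variable for the number of simulated years.

    ```python
    # Hypothetical model-resource record with a generic metadata definition,
    # filled in by a model-specific parser (here a toy SWAT file.cio reader).

    def make_model_resource(title, model_program, input_files, output_files):
        """Wrap a model instance in a generic, shareable metadata record."""
        return {
            "title": title,
            "model_program": model_program,   # e.g. "SWAT"
            "inputs": list(input_files),
            "outputs": list(output_files),
            "extracted": {},                  # filled by model-specific parsers
        }

    def parse_swat_cio(resource, cio_text):
        """Toy parser: pull the simulated-year count from file.cio text."""
        for line in cio_text.splitlines():
            if "NBYR" in line:
                resource["extracted"]["simulation_years"] = int(line.split()[0])
        return resource

    res = make_model_resource("Example SWAT run", "SWAT",
                              ["file.cio"], ["output.rch"])
    res = parse_swat_cio(res, "5    | NBYR : number of years simulated")
    assert res["extracted"]["simulation_years"] == 5
    ```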

  3. Sustainability in Biobanking: Model of Biobank Graz.

    PubMed

    Sargsyan, Karine; Macheiner, Tanja; Story, Petra; Strahlhofer-Augsten, Manuela; Plattner, Katharina; Riegler, Skaiste; Granitz, Gabriele; Bayer, Michaela; Huppertz, Berthold

    2015-12-01

    Research infrastructures remain key to state-of-the-art and successful research. In the last few decades, biobanks have become increasingly important in this field through the standardization of biospecimen processing, sample storage, and data management. Research infrastructures for cohort studies and other sample collection activities are currently experiencing a lack of long-term funding. In this article, the Biobank Graz discusses these aspects of sustainability, including the definition of sustainability, the necessity of a business plan, and a cost calculation model in the field of biobanking. The economic state, critical success factors, and important operational issues are reviewed and described by the authors, using the example of the Biobank Graz. Sustainability in the field of biobanking is a globally important necessity, from policy making down to security and documentation at each operational level.

  4. Evaluation of Troxler model 3411 nuclear gage.

    DOT National Transportation Integrated Search

    1978-01-01

    The performance of the Troxler Electronics Laboratory Model 3411 nuclear gage was evaluated through laboratory tests on the Department's density and moisture standards and field tests on various soils, base courses, and bituminous concrete overlays t...

  5. Issues related to the Fermion mass problem

    NASA Astrophysics Data System (ADS)

    Murakowski, Janusz Adam

    1998-09-01

    This thesis is divided into three parts, each illustrating a different aspect of the fermion mass issue in elementary particle physics. In the first part, the possibility of chiral symmetry breaking in the presence of uniform magnetic and electric fields is investigated. The system is studied nonperturbatively with the use of basis functions compatible with the external field configuration, the parabolic cylinder functions. It is found that chiral symmetry, broken by a uniform magnetic field, is restored by an electric field. The obtained result is nonperturbative in nature: even the tiniest deviation of the electric field from zero restores chiral symmetry. In the second part, heavy quarkonium systems are investigated. To study these systems, a phenomenological nonrelativistic model is built. Approximate solutions to this model are found with the use of a specially designed Padé approximation and by direct numerical integration of the Schrödinger equation. The results are compared with experimental measurements of the respective meson masses. Good agreement between theoretical calculations and experimental results is found. Advantages and shortcomings of the new approximation method are analysed. In the third part, an extension of the standard model of elementary particles is studied. The extension, called the aspon model, was originally introduced to cure the so-called strong CP problem. In addition to fulfilling its original purpose, the aspon model modifies the couplings of the standard model quarks to the Z boson. As a result, the decay rates of the Z boson to quarks are altered. By using the recent precise measurements of the decay rates Z → bb̄ and Z → cc̄, new constraints on the aspon model parameters are found.

  6. Non-minimal gravitational reheating during kination

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Markkanen, Tommi

    2018-06-01

    A new mechanism is presented which can reheat the Universe in non-oscillatory models of inflation, where the inflation period is followed by a period dominated by the kinetic density for the inflaton field (kination). The mechanism considers an auxiliary field non-minimally coupled to gravity. The auxiliary field is a spectator during inflation, rendered heavy by the non-minimal coupling to gravity. During kination however, the non-minimal coupling generates a tachyonic mass, which displaces the field, until its bare mass becomes important, leading to coherent oscillations. Then, the field decays into the radiation bath of the hot big bang. The model is generic and predictive, in that the resulting reheating temperature is a function only of the model parameters (masses and couplings) and not of initial conditions. It is shown that reheating can be very efficient also when considering only the Standard Model.

  7. Cartan gravity, matter fields, and the gauge principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westman, Hans F., E-mail: hwestman74@gmail.com; Zlosnik, Tom G., E-mail: t.zlosnik@imperial.ac.uk

Gravity is commonly thought of as one of the four force fields in nature. However, in standard formulations its mathematical structure is rather different from the Yang–Mills fields of particle physics that govern the electromagnetic, weak, and strong interactions. This paper explores this dissonance with particular focus on how gravity couples to matter from the perspective of the Cartan-geometric formulation of gravity. There the gravitational field is represented by a pair of variables: (1) a ‘contact vector’ V^A, which is geometrically visualized as the contact point between the spacetime manifold and a model spacetime being ‘rolled’ on top of it, and (2) a gauge connection A_μ^{AB}, here taken to be valued in the Lie algebra of SO(2,3) or SO(1,4), which mathematically determines how much the model spacetime is rotated when rolled. By insisting on two principles, the gauge principle and polynomial simplicity, we shall show how one can reformulate matter field actions in a way that is harmonious with Cartan’s geometric construction. This yields a formulation of all matter fields in terms of first-order partial differential equations. We show in detail how the standard second-order formulation can be recovered. In particular, the Hodge dual, which characterizes the structure of bosonic field equations, pops up automatically. Furthermore, the energy–momentum and spin-density three-forms are naturally combined into a single object here denoted the spin-energy–momentum three-form. Finally, we highlight a peculiarity in the mathematical structure of our first-order formulation of Yang–Mills fields. This suggests a way to unify a U(1) gauge field with gravity into an SO(1,5)-valued gauge field using a natural generalization of Cartan geometry in which the larger symmetry group is spontaneously broken down to SO(1,3)×U(1). The coupling of this unified theory to matter fields and possible extensions to non-Abelian gauge fields are left as open questions. -- Highlights: • Develops Cartan gravity to include matter fields. • Coupling to gravity is done using the standard gauge prescription. • Matter actions are manifestly polynomial in all field variables. • Standard equations recovered on-shell for scalar, spinor and Yang–Mills fields. • Unification of a U(1) field with gravity based on the orthogonal group SO(1,5).

  8. Dosimetry for Small and Nonstandard Fields

    NASA Astrophysics Data System (ADS)

    Junell, Stephanie L.

The proposed small and non-standard field dosimetry protocol from the joint International Atomic Energy Agency (IAEA) and American Association of Physicists in Medicine working group introduces new reference field conditions for ionization chamber based reference dosimetry. Absorbed dose beam quality conversion factors (kQ factors) corresponding to this formalism were determined for three different models of ionization chambers: a Farmer-type ionization chamber, a thimble ionization chamber, and a small volume ionization chamber. Beam quality correction factor measurements were made in a specially developed cylindrical polymethyl methacrylate (PMMA) phantom and a water phantom using thermoluminescent dosimeters (TLDs) and alanine dosimeters to determine dose to water. The TLD system for absorbed dose to water determination in high energy photon and electron beams was fully characterized as part of this dissertation. The behavior of the beam quality correction factor was observed as it transfers the calibration coefficient from the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL) 60Co reference beam to the small field calibration conditions of the small field formalism. TLD-determined beam quality correction factors for the calibration conditions investigated ranged from 0.97 to 1.30 and had associated standard deviations from 1% to 3%. The alanine-determined beam quality correction factors ranged from 0.996 to 1.293. Volume averaging effects were observed with the Farmer-type ionization chamber in the small static field conditions.
The proposed protocol's new composite-field reference condition demonstrated its potential to reduce or remove ionization chamber volume dependencies, but the measured beam quality correction factors were not equal to the standard CoP's kQ, indicating a change in beam quality in the protocol's composite-field reference condition relative to the standard broad-beam reference conditions. The TLD- and alanine-determined beam quality correction factors in the composite-field reference conditions were approximately 3% greater than, and differed by more than one standard deviation from, the published TG-51 kQ values for all three chambers.

  9. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE PAGES

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.; ...

    2015-10-30

We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems, including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean field that couples all the degrees of freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant as in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step toward understanding transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the predictions from the asymptotic methods.
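The structure described above, N standard-like twist maps whose perturbation amplitude and phase are read off a mean field rather than held fixed, can be sketched in a few lines. The closure chosen for the mean field and the coupling constant `kappa` below are illustrative assumptions, not the paper's exact equations:

```python
import cmath
import math
import random

def self_consistent_step(xs, ys, kappa):
    """One iteration of a toy self-consistent standard-like map.

    Unlike the ordinary standard map, where the kick K*sin(2*pi*x) has a
    fixed amplitude and phase, here both are dynamical: they are taken
    from the instantaneous mean field of all N degrees of freedom.
    """
    # Mean field: amplitude and phase of the collective mode.
    mean_field = sum(cmath.exp(2j * math.pi * x) for x in xs) / len(xs)
    amp, phase = abs(mean_field), cmath.phase(mean_field)
    new_xs, new_ys = [], []
    for x, y in zip(xs, ys):
        y_new = y + kappa * amp * math.sin(2 * math.pi * x + phase)
        x_new = (x + y_new) % 1.0  # area-preserving twist map, periodic in x
        new_xs.append(x_new)
        new_ys.append(y_new)
    return new_xs, new_ys

random.seed(0)
N = 200
xs = [random.random() for _ in range(N)]
ys = [0.1 * random.random() for _ in range(N)]
for _ in range(50):
    xs, ys = self_consistent_step(xs, ys, kappa=0.5)
```

Setting `kappa = 0` decouples the maps and recovers N independent twist maps, which is the "trivial (uncoupled)" limit from which the sequential periodic orbits are continued.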

  10. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.

We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems, including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean field that couples all the degrees of freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant as in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step toward understanding transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the predictions from the asymptotic methods.

  11. Dichotomy of X-Ray Jets in Solar Coronal Holes

    NASA Astrophysics Data System (ADS)

    Robe, D. M.; Moore, R. L.; Falconer, D. A.

    2012-12-01

    It has been found that there are two different types of X-ray jets observed in the Sun's polar coronal holes: standard jets and blowout jets. A proposed model of this dichotomy is that a standard jet is produced by a burst of reconnection of the ambient magnetic field with the opposite-polarity leg of the base arcade. In contrast, it appears that a blowout jet is produced when the interior of the arcade has so much pent-up free magnetic energy in the form of shear and twist in the interior field that the external reconnection unleashes the interior field to erupt open. In this project, X-ray movies of the polar coronal holes taken by Hinode were searched for X-ray jets. Co-temporal movies taken by the Solar Dynamics Observatory in 304 Å emission from He II, showing solar plasma at temperatures around 80,000 K, were examined for whether the identified blowout jets carry much more He II plasma than the identified standard jets. It was found that though some jets identified as standard from the X-ray movies could be seen in the He II 304 Å movies, the blowout jets carried much more 80,000 K plasma than did most standard jets. This finding supports the proposed model for the morphology and development of the two types of jets.

  12. Proof of factorization using background field method of QCD

    NASA Astrophysics Data System (ADS)

    Nayak, Gouranga C.

    2010-02-01

The factorization theorem plays a central role at high-energy colliders in the study of standard model and beyond-standard-model physics. The proof of the factorization theorem was given by Collins, Soper and Sterman to all orders in perturbation theory using a diagrammatic approach. One might wonder whether the proof of the factorization theorem can be obtained through symmetry considerations at the Lagrangian level. In this paper we provide such a proof.

  13. Proof of factorization using background field method of QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nayak, Gouranga C.

The factorization theorem plays a central role at high-energy colliders in the study of standard model and beyond-standard-model physics. The proof of the factorization theorem was given by Collins, Soper and Sterman to all orders in perturbation theory using a diagrammatic approach. One might wonder whether the proof of the factorization theorem can be obtained through symmetry considerations at the Lagrangian level. In this paper we provide such a proof.

  14. Fermionic dark matter and neutrino masses in a B - L model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sánchez-Vega, B. L.; Schmitz, E. R.

    2015-09-01

In this work we present a common framework for neutrino masses and dark matter. Specifically, we work with a local B - L extension of the standard model which has three right-handed neutrinos, n_Ri, and some extra scalars, Φ, φ_i, besides the standard model fields. The n_Ri's have nonstandard B - L quantum numbers and thus couple to different scalars. This model has the attractive property that an almost automatic Z₂ symmetry acting only on a fermionic field, n_R3, is present. Taking advantage of this Z₂ symmetry, we study both the neutrino mass generation via a natural seesaw mechanism at low energy and the possibility of n_R3 being a dark matter candidate. For the latter purpose, we study its relic abundance and its compatibility with current direct detection experiments.

  15. Simulating Turbulent Wind Fields for Offshore Turbines in Hurricane-Prone Regions (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Damiani, R.; Musial, W.

Extreme wind load cases are among the most important external conditions in the design of offshore wind turbines in hurricane-prone regions. Furthermore, in these areas, the increase in load with storm return period is higher than in extra-tropical regions. However, current standards provide limited information on the appropriate models for simulating wind loads from hurricanes. This study investigates turbulent wind models for load analysis of offshore wind turbines subjected to hurricane conditions. The extreme wind models suggested in IEC 61400-3 and API/ABS (widely used standards in the oil and gas industry) are investigated. The present study further examines the wind turbine response subjected to hurricane wind loads. The three-dimensional wind simulator TurbSim is modified to include the API wind model. Wind fields simulated using the IEC and API wind models are used for an offshore wind turbine model established in FAST to calculate turbine loads and response.
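The IEC extreme wind comparison above rests on a simple profile: the steady extreme wind model (EWM) scales a reference wind speed by a power law in height. A minimal sketch of that profile follows; the 1.4 factor and 0.11 exponent are the commonly quoted IEC 61400-1 EWM values and should be treated as illustrative here, since the study concerns exactly the hurricane regime where such assumptions are questioned:

```python
def iec_extreme_wind_speed(z, v_ref, z_hub, exponent=0.11):
    """Steady extreme wind speed at height z, in the spirit of the IEC
    extreme wind model: a power-law profile scaled from the reference
    wind speed v_ref at hub height z_hub."""
    return 1.4 * v_ref * (z / z_hub) ** exponent

# Example: a 50 m/s reference wind speed with a 90 m hub height.
v50_hub = iec_extreme_wind_speed(90.0, 50.0, 90.0)  # 50-yr extreme at hub height
```

A wind simulator such as TurbSim then superimposes turbulence on a mean profile of this kind; the API hurricane model differs chiefly in its spectrum and coherence assumptions.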

  16. cStress: Towards a Gold Standard for Continuous Stress Assessment in the Mobile Environment

    PubMed Central

    Hovsepian, Karen; al’Absi, Mustafa; Ertin, Emre; Kamarck, Thomas; Nakajima, Motohiro; Kumar, Santosh

    2015-01-01

Recent advances in mobile health have produced several new models for inferring stress from wearable sensors. But the lack of a gold standard is a major hurdle in making clinical use of continuous stress measurements derived from wearable sensors. In this paper, we present a stress model (called cStress) that has been carefully developed with attention to every step of computational modeling including data collection, screening, cleaning, filtering, feature computation, normalization, and model training. More importantly, cStress was trained using data collected from a rigorous lab study with 21 participants and validated on two independently collected data sets — in a lab study on 26 participants and in a week-long field study with 20 participants. In testing, the model obtains a recall of 89% and a false positive rate of 5% on lab data. On field data, the model is able to predict each instantaneous self-report with an accuracy of 72%. PMID:26543926
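The two lab-data figures quoted above, recall and false positive rate, are standard binary-classification metrics. A generic sketch of how they are computed (not the cStress pipeline itself) is:

```python
def recall_and_fpr(y_true, y_pred):
    """Recall = TP/(TP+FN); false positive rate = FP/(FP+TN),
    for binary labels where 1 means 'stressed'."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

# Toy labels: 3 stressed and 5 non-stressed windows.
r, f = recall_and_fpr([1, 1, 1, 0, 0, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0, 0, 0])
# r = 2/3, f = 1/5
```

Reporting both numbers matters for a continuous stress monitor: high recall alone is easy to achieve by over-predicting stress, which the 5% false positive rate rules out.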

  17. Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.

    PubMed

    Shao, Lijing

    2014-03-21

The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. This constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits of LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They all improve on existing limits by significant factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.

  18. Non-standard models and the sociology of cosmology

    NASA Astrophysics Data System (ADS)

    López-Corredoira, Martín

    2014-05-01

    I review some theoretical ideas in cosmology different from the standard "Big Bang": the quasi-steady state model, the plasma cosmology model, non-cosmological redshifts, alternatives to non-baryonic dark matter and/or dark energy, and others. Cosmologists do not usually work within the framework of alternative cosmologies because they feel that these are not at present as competitive as the standard model. Certainly, they are not so developed, and they are not so developed because cosmologists do not work on them. It is a vicious circle. The fact that most cosmologists do not pay them any attention and only dedicate their research time to the standard model is to a great extent due to a sociological phenomenon (the "snowball effect" or "groupthink"). We might well wonder whether cosmology, our knowledge of the Universe as a whole, is a science like other fields of physics or a predominant ideology.

  19. A process-based standard for the Solar Energetic Particle Event Environment

    NASA Astrophysics Data System (ADS)

    Gabriel, Stephen

For 10 years or more, there has been a lack of consensus on what the ISO standard model for the Solar Energetic Particle Event (SEPE) environment should be. Despite many technical discussions between the world experts in this field, it has been impossible to agree on which of the several models available should be selected as the standard. Most of these discussions at the ISO WG4 meetings and conferences have centred on the differences in modelling approach between the MSU model and the several remaining models from elsewhere worldwide (mainly the USA and Europe). The topic is considered timely given the inclusion of a session on reference data sets at the Space Weather Workshop in Boulder in April 2014. The original idea of a 'process-based' standard was conceived by Dr Kent Tobiska as a way of getting round the problems associated with the presence of different models, which in themselves could not only have quite distinct modelling approaches but could also be based on different data sets. In essence, a process-based standard approach overcomes these issues by allowing there to be more than one model rather than a single standard model; however, any such model has to be completely transparent, in that the data set and the modelling techniques used have to be not only clearly and unambiguously defined but also subject to peer review. If the model meets all of these requirements then it should be acceptable as a standard model. So how does this process-based approach resolve the differences between the existing modelling approaches for the SEPE environment and remove the impasse? In a sense, it does not remove all of the differences but only some of them; most importantly, however, it allows something which has so far been impossible without ambiguities and disagreement: a comparison of the results of the various models.
To date, one of the problems (if not the major one) in comparing the results of the various SEPE statistical models has been caused by two things: (1) the data set and (2) the definition of an event. Because unravelling the dependencies of the outputs of different statistical models on these two parameters is extremely difficult if not impossible, comparison of the results from the different models is currently also extremely difficult and can lead to controversies, especially over which model is the correct one. Hence, when it comes to using these models for engineering purposes to calculate, for example, the radiation dose for a particular mission, the user, who is in all likelihood not an expert in this field, could be given two (or even more) very different environments and find it impossible to know how to select one (or even how to compare them). What is proposed, then, is a process-based standard which, in common with nearly all of the current models, is composed of three elements: a standard data set, a standard event definition, and a resulting standard event list. The standard event list is the output of this standard and can then be used with any of the existing (or indeed future) models that are based on events. This standard event list is completely traceable and transparent and represents a reference event list for the whole community. When coupled with a statistical model, the results, when compared, will depend only on the statistical model and not on the data set or event definition.
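The core of the process-based standard is that the event definition must be stated explicitly enough to turn a data set into a reproducible event list. A toy sketch of that step follows; the threshold-crossing definition and its parameters are hypothetical, chosen only to show how an unambiguous definition yields a traceable event list:

```python
def extract_events(flux, threshold, min_gap):
    """Build an event list from a flux time series under an explicit
    event definition: an event starts when the flux rises to or above
    `threshold` and ends once the flux has stayed below it for
    `min_gap` consecutive samples. Returns (start, end) index pairs."""
    events = []
    start = None
    below = 0
    for i, f in enumerate(flux):
        if f >= threshold:
            if start is None:
                start = i        # event onset
            below = 0
        elif start is not None:
            below += 1
            if below >= min_gap: # quiet long enough: close the event
                events.append((start, i - below))
                start, below = None, 0
    if start is not None:        # event still open at end of record
        events.append((start, len(flux) - 1))
    return events

# Hypothetical flux record with two enhancements.
flux = [1, 1, 12, 15, 9, 1, 1, 1, 20, 22, 1, 1]
events = extract_events(flux, threshold=10, min_gap=2)
```

Two groups using the same data set and the same definition obtain byte-identical event lists, so any remaining disagreement between models can be attributed to the statistics alone, which is exactly the comparison the proposal aims to enable.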

  20. Structural aspects of Lorentz-violating quantum field theory

    NASA Astrophysics Data System (ADS)

    Cambiaso, M.; Lehnert, R.; Potting, R.

    2018-01-01

In the last couple of decades the Standard Model Extension has emerged as a fruitful framework for analyzing the empirical and theoretical extent of the validity of cornerstones of modern particle physics, namely Special Relativity and the discrete symmetries C, P and T (or some combinations of these). The Standard Model Extension allows one to contrast high-precision experimental tests with posited alterations representing minute Lorentz and/or CPT violations. To date no violation of these symmetry principles has been observed in experiments, mostly prompted by the Standard Model Extension. From the latter, bounds on the extent of departures from Lorentz and CPT symmetries can be obtained with ever increasing accuracy. These analyses have mostly focused on tree-level processes. In this presentation I comment on structural aspects of perturbative Lorentz-violating quantum field theory. I show that insight coming from radiative corrections demands a careful reassessment of perturbation theory. Specifically, I argue that both the standard renormalization procedure and the Lehmann–Symanzik–Zimmermann reduction formalism need to be adapted, given that the asymptotic single-particle states can receive quantum corrections from Lorentz-violating operators that are not present in the original Lagrangian.

  1. Contraction of electroweak model and neutrino

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gromov, N. A., E-mail: gromov@dm.komisc.ru

An electroweak model whose lepton sector corresponds to the contracted gauge group SU(2; j) × U(1), j → 0, whereas the boson and quark sectors are the standard ones, is suggested. The field space of the model is fibered under contraction in such a way that the neutrino fields are in the fiber and all other fields are in the base. Properties of the fibered field space are understood in the context of semi-Riemannian geometry. This model describes in a natural manner why neutrinos so rarely interact with matter, as well as why the neutrino cross section increases with energy. The dimensionful parameter of the model is interpreted as the neutrino energy. The dimensionless contraction parameter j at low energy is connected with the Fermi constant of weak interactions and is approximated as j² ≈ 10⁻⁵.

  2. Supersymmetric extensions of K field theories

    NASA Astrophysics Data System (ADS)

    Adam, C.; Queiruga, J. M.; Sanchez-Guillen, J.; Wereszczynski, A.

    2012-02-01

We review the recently developed supersymmetric extensions of field theories with non-standard kinetic terms (so-called K field theories) in two and three dimensions. Further, we study the issue of topological defect formation in these supersymmetric theories. Specifically, we find supersymmetric K field theories which support topological kinks in 1+1 dimensions as well as supersymmetric extensions of the baby Skyrme model for arbitrary nonnegative potentials in 2+1 dimensions.

  3. Bringing Standardized Processes in Atom-Probe Tomography: I Establishing Standardized Terminology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ian M; Danoix, F; Forbes, Richard

    2011-01-01

Defining standardized methods requires careful consideration of the entire field and its applications. The International Field Emission Society (IFES) has elected a Standards Committee, whose task is to determine the steps needed to establish atom-probe tomography as an accepted metrology technique. Specific tasks include developing protocols or standards for: terminology and nomenclature; metrology and instrumentation, including specifications for reference materials; test methodologies; modeling and simulations; and science-based health, safety, and environmental practices. The Committee is currently working on defining terminology related to atom-probe tomography, with the goal of including terms in a document published by the International Organization for Standardization (ISO). Many terms also used in other disciplines have already been defined and will be discussed for adoption in the context of atom-probe tomography.

  4. Assessing the Added Value of Dynamical Downscaling Using the Standardized Precipitation Index

    EPA Science Inventory

    In this study, the Standardized Precipitation Index (SPI) is used to ascertain the added value of dynamical downscaling over the contiguous United States. WRF is used as a regional climate model (RCM) to dynamically downscale reanalysis fields to compare values of SPI over drough...
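The Standardized Precipitation Index used in the study above expresses a precipitation total as a standardized anomaly relative to its climatology. Operationally the SPI fits a gamma distribution and maps its CDF through the inverse standard normal; the sketch below substitutes a plain normal fit as a stdlib-only stand-in, so treat it as a toy of the idea rather than the index as computed in the study:

```python
from statistics import mean, stdev

def spi_normal_approx(precip_history, current_total):
    """Simplified SPI: standardize an accumulation-period precipitation
    total against its historical mean and standard deviation. Negative
    values indicate drier-than-normal conditions (drought)."""
    mu, sigma = mean(precip_history), stdev(precip_history)
    return (current_total - mu) / sigma

# Hypothetical climatology of 3-month precipitation totals (mm).
history = [80, 95, 110, 70, 100, 90, 105, 85, 115, 75]
z = spi_normal_approx(history, 60.0)  # well below the mean: negative SPI
```

Comparing SPI fields computed from a regional climate model against those from the driving reanalysis, as in the study, isolates whether downscaling adds drought-relevant precipitation information.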

  5. New vector-like fermions and flavor physics

    DOE PAGES

    Ishiwata, Koji; Ligeti, Zoltan; Wise, Mark B.

    2015-10-06

We study renormalizable extensions of the standard model that contain vector-like fermions in a (single) complex representation of the standard model gauge group. There are 11 models in which the vector-like fermions Yukawa couple to the standard model fermions via the Higgs field. These models do not introduce additional fine-tunings. They can lead to, and are constrained by, a number of different flavor-changing processes involving leptons and quarks, as well as direct searches. An interesting feature of the models with strongly interacting vector-like fermions is that constraints from neutral meson mixings (apart from CP violation in K⁰–K̄⁰ mixing) are not sensitive to higher scales than other flavor-changing neutral-current processes. We identify order 1/(4πM)² (where M is the vector-like fermion mass) one-loop contributions to the coefficients of the four-quark operators for meson mixing that are not suppressed by standard model quark masses and/or mixing angles.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

The FY 2013 annual report focuses on the following areas: vehicle modeling and simulation, component and systems evaluations, laboratory and field evaluations, codes and standards, industry projects, and vehicle systems optimization.

  7. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. 
This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
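The preprocessing stage described above, high-pass filtering of the sound level with a given time constant followed by half-wave rectification, can be sketched for a single channel. In the paper this is applied per frequency channel of the spectrogram with frequency-dependent time constants before a linear-nonlinear model; the single-channel form and the specific first-order filter here are simplifying assumptions:

```python
import math

def ic_adaptation(level_series, tau, dt=1.0):
    """Midbrain-style adaptation to mean sound level: subtract a
    leaky running estimate of the mean (time constant tau), then
    half-wave rectify. Sustained levels are adapted away; onsets and
    increments pass through."""
    alpha = math.exp(-dt / tau)  # leaky-integrator coefficient
    mean_est = level_series[0]
    out = []
    for x in level_series:
        mean_est = alpha * mean_est + (1 - alpha) * x  # track mean level
        out.append(max(x - mean_est, 0.0))             # half-wave rectify
    return out

# A step up in mean level: the response is large at onset, then adapts away.
resp = ic_adaptation([0.0] * 5 + [1.0] * 20, tau=5.0)
```

Because the stage is fixed once `tau` is chosen per channel, it adds no free parameters to the downstream LN model fit, matching the parsimony claim in the abstract.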

  8. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…

  9. Heliport noise model : methodology - draft report

    DOT National Transportation Integrated Search

    1988-04-30

    The Heliport Noise Model (HNM) is the United States standard for predicting civil helicopter noise exposure in the vicinity of heliports and airports. HNM Version 1 is the culmination of several years of work in helicopter noise research, field measu...

  10. Visual Field Defects and Retinal Ganglion Cell Losses in Human Glaucoma Patients

    PubMed Central

    Harwerth, Ronald S.; Quigley, Harry A.

    2007-01-01

Objective The depth of visual field defects is correlated with retinal ganglion cell densities in experimental glaucoma. The aim of this study was to determine whether a similar structure-function relationship holds for human glaucoma. Methods The study was based on retinal ganglion cell densities and visual thresholds of patients with documented glaucoma (Kerrigan-Baumrind et al.). The data were analyzed by a model that predicted ganglion cell densities from standard clinical perimetry; the predictions were then compared to histologic cell counts. Results The model, without free parameters, produced an accurate and relatively precise quantification of the ganglion cell densities associated with visual field defects. For 437 sets of data, the unity correlation for predicted vs. measured cell densities had a coefficient of determination of 0.39. The mean absolute deviation of the predicted vs. measured values was 2.59 dB; the mean and SD of the distribution of residual errors of prediction were -0.26 ± 3.22 dB. Conclusions Visual field defects by standard clinical perimetry are proportional to the neural losses caused by glaucoma. Clinical Relevance The evidence for quantitative structure-function relationships provides a scientific basis for interpreting glaucomatous neuropathy from visual thresholds and supports the application of standard perimetry to establish the stage of the disease. PMID:16769839

  11. Intelligence Reach for Expertise (IREx)

    NASA Astrophysics Data System (ADS)

    Hadley, Christina; Schoening, James R.; Schreiber, Yonatan

    2015-05-01

    IREx is a search engine for next-generation analysts to find collaborators. U.S. Army Field Manual 2.0 (Intelligence) calls for collaboration within and outside the area of operations, but finding the best collaborator for a given task can be challenging. IREx will be demonstrated as part of Actionable Intelligence Technology Enabled Capability Demonstration (AI-TECD) at the E15 field exercises at Ft. Dix in July 2015. It includes a Task Model for describing a task and its prerequisite competencies, plus a User Model (i.e., a user profile) for individuals to assert their capabilities and other relevant data. These models are built on a canonical suite of ontologies, which enables robust queries and keeps the models logically consistent. IREx also supports learning validation, where a learner who has completed a course module can search and find a suitable task to practice and demonstrate that their new knowledge can be used in the real world for its intended purpose. The IREx models are in the initial phase of a process to develop them as an IEEE standard. This initiative is currently an approved IEEE Study Group, after which follows a standards working group, then a balloting group, and if all goes well, an IEEE standard.

  12. Characterization of YBa2Cu3O7, including critical current density Jc, by trapped magnetic field

    NASA Technical Reports Server (NTRS)

    Chen, In-Gann; Liu, Jianxiong; Weinstein, Roy; Lau, Kwong

    1992-01-01

    Spatial distributions of persistent magnetic field trapped by sintered and melt-textured ceramic-type high-temperature superconductor (HTS) samples have been studied. The trapped field can be reproduced by a model of the current consisting of two components: (1) a surface current Js and (2) a uniform volume current Jv. This Js + Jv model gives a satisfactory account of the spatial distribution of the magnetic field trapped by different types of HTS samples. The magnetic moment can be calculated, based on the Js + Jv model, and the result agrees well with that measured by standard vibrating sample magnetometer (VSM). As a consequence, Jc predicted by VSM methods agrees with Jc predicted from the Js + Jv model. The field mapping method described is also useful to reveal the granular structure of large HTS samples and regions of weak links.
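
    As a rough illustration of how a Js + Jv decomposition yields a magnetic moment, consider an idealized cylindrical sample of radius R and height h: a sheet current Js (A/m) circulating on the lateral surface acts as one loop of current I = Js·h enclosing area πR², while a uniform volume current density Jv (A/m²) can be summed over nested loops. The closed forms and geometry below are our simplification for illustration, not the authors' fitting procedure.

```python
import math

def moment_surface(Js, R, h):
    """Moment of a lateral sheet current Js (A/m) on a cylinder:
    total loop current I = Js*h enclosing area pi*R^2."""
    return Js * h * math.pi * R**2

def moment_volume(Jv, R, h, n=10000):
    """Moment of a uniform volume current density Jv (A/m^2):
    sum nested loops dm = pi*r^2 * (Jv*h*dr); the analytic
    result is pi*Jv*h*R^3/3."""
    dr = R / n
    return sum(math.pi * ((i + 0.5) * dr)**2 * Jv * h * dr
               for i in range(n))
```

    The total predicted moment is then moment_surface + moment_volume, which is the quantity one would compare against a vibrating sample magnetometer reading.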

  13. On the importance of body posture and skin modelling with respect to in situ electric field strengths in magnetic field exposure scenarios

    NASA Astrophysics Data System (ADS)

    Schmid, Gernot; Hirtl, Rene

    2016-06-01

    The reference levels and maximum permissible exposure values for magnetic fields that are currently used have been derived from basic restrictions under the assumption of upright standing body models in a standard posture, i.e. with arms laterally down and without contact with metallic objects. Moreover, if anatomical modelling of the body was used at all, the skin was represented as a single homogeneous tissue layer. In the present paper we addressed the possible impacts of posture and skin modelling in scenarios of exposure to a 50 Hz uniform magnetic field on the in situ electric field strength in peripheral tissues, which must be limited in order to avoid peripheral nerve stimulation. We considered different body postures including situations where body parts form large induction loops (e.g. clasped hands) with skin-to-skin and skin-to-metal contact spots and compared the results obtained with a homogeneous single-layer skin model to results obtained with a more realistic two-layer skin representation consisting of a low-conductivity stratum corneum layer on top of a combined layer for the cellular epidermis and dermis. Our results clearly indicated that postures with loops formed of body parts may lead to substantially higher maximum values of induced in situ electric field strengths than in the case of standard postures due to a highly concentrated current density and in situ electric field strength in the skin-to-skin and skin-to-metal contact regions. With a homogeneous single-layer skin, as is used for even the most recent anatomical body models in exposure assessment, the in situ electric field strength may exceed the basic restrictions in such situations, even when the reference levels and maximum permissible exposure values are not exceeded. 
However, when using the more realistic two-layer skin model the obtained in situ electric field strengths were substantially lower and no violations of the basic restrictions occurred, which can be explained by the current-limiting effect of the low-conductivity stratum corneum layer.

  14. Computing decay rates for new physics theories with FEYNRULES and MADGRAPH 5_AMC@NLO

    NASA Astrophysics Data System (ADS)

    Alwall, Johan; Duhr, Claude; Fuks, Benjamin; Mattelaer, Olivier; Öztürk, Deniz Gizem; Shen, Chia-Hsien

    2015-12-01

    We present new features of the FEYNRULES and MADGRAPH 5_AMC@NLO programs for the automatic computation of decay widths that consistently include channels of arbitrary final-state multiplicity. The implementations are generic enough so that they can be used in the framework of any quantum field theory, possibly including higher-dimensional operators. We extend at the same time the conventions of the Universal FEYNRULES Output (or UFO) format to include decay tables and information on the total widths. We finally provide a set of representative examples of the usage of the new functions of the different codes in the framework of the Standard Model, the Higgs Effective Field Theory, the Strongly Interacting Light Higgs model and the Minimal Supersymmetric Standard Model and compare the results to available literature and programs for validation purposes.
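
    For orientation, the kind of quantity such tools automate is, in the simplest case, a tree-level two-body partial width. A hand-coded sketch of the textbook result Γ(H→f f̄) = N_c G_F m_H m_f²/(4√2 π)·(1 − 4m_f²/m_H²)^{3/2} is below; it omits QCD corrections and running masses, so it will differ from full program output.

```python
import math

GF = 1.1663787e-5  # Fermi constant in GeV^-2
NC = 3             # color factor for quarks (1 for leptons)

def width_h_to_ff(mh, mf, nc=NC):
    """Tree-level H -> f fbar partial width in GeV.
    No QCD/EW corrections, pole mass used throughout."""
    beta2 = 1.0 - 4.0 * mf**2 / mh**2
    if beta2 <= 0:
        return 0.0  # channel kinematically closed
    return nc * GF * mh * mf**2 / (4.0 * math.sqrt(2.0) * math.pi) * beta2**1.5
```

    For mh = 125 GeV and the b-quark pole mass this gives a few MeV, the right order of magnitude for the dominant Higgs decay channel at tree level.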

  15. MSSM-inspired multifield inflation

    NASA Astrophysics Data System (ADS)

    Dubinin, M. N.; Petrova, E. Yu.; Pozdeeva, E. O.; Sumin, M. V.; Vernov, S. Yu.

    2017-12-01

    Although only a single Standard Model-like Higgs boson has so far been discovered at the LHC with a high degree of statistical significance, extended Higgs sectors with multiple scalar fields, which are not excluded by combined fits of the data, are theoretically preferable for internally consistent, realistic models of particle physics. We analyze the inflationary scenarios which could be induced by the two-Higgs-doublet potential of the Minimal Supersymmetric Standard Model (MSSM), where five scalar fields have non-minimal couplings to gravity. Observables following from such MSSM-inspired multifield inflation are calculated and a number of consistent inflationary scenarios are constructed. Cosmological evolution with different initial conditions for the multifield system leads to consequences fully compatible with observational data on the spectral index and the tensor-to-scalar ratio. It is demonstrated that the strong coupling approximation is precise enough to describe such inflationary scenarios.

  16. Search for the standard model Higgs boson in $$l\

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dikai

    2013-01-01

    Humans have always attempted to understand the mysteries of Nature, and physicists have established theories to describe the observed phenomena. The most recent such theory is a gauge quantum field theory framework, called the Standard Model (SM), which describes elementary matter particles and the force-carrier particles that mediate the fundamental interactions in a unified way. The Standard Model contains the internal symmetries of the unitary product group SU(3)_C × SU(2)_L × U(1)_Y and describes the electromagnetic, weak and strong interactions; the model also describes how quarks interact with each other through all three of these interactions, how leptons interact with each other through the electromagnetic and weak forces, and how force carriers mediate the fundamental interactions.

  17. Quantum Gravity and Cosmology: an intimate interplay

    NASA Astrophysics Data System (ADS)

    Sakellariadou, Mairi

    2017-08-01

    I will briefly discuss three cosmological models built upon three distinct quantum gravity proposals. I will first highlight the cosmological rôle of a vector field in the framework of a string/brane cosmological model. I will then present the resolution of the big bang singularity and the occurrence of an early era of accelerated expansion of a geometric origin, in the framework of group field theory condensate cosmology. I will then summarise results from an extended gravitational model based on non-commutative spectral geometry, a model that offers a purely geometric explanation for the standard model of particle physics.

  18. Curvature perturbation and waterfall dynamics in hybrid inflation

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Firouzjahi, Hassan; Sasaki, Misao

    2011-10-01

    We investigate the parameter space of the hybrid inflation model, with special attention paid to the dynamics of the waterfall field and the curvature perturbations induced by its quantum fluctuations. Depending on the inflaton field value at the time of the phase transition and on the sharpness of the phase transition, inflation can have multiple extended stages. We find that for models with a mild phase transition the curvature perturbation induced by the waterfall field is too large to satisfy the COBE normalization. We investigate the region of model parameter space where the curvature perturbations from the waterfall quantum fluctuations interpolate between the results of standard hybrid inflation and the results obtained here.

  19. Pressure calculation in hybrid particle-field simulations

    NASA Astrophysics Data System (ADS)

    Milano, Giuseppe; Kawakatsu, Toshihiro

    2010-12-01

    In the framework of a recently developed scheme for hybrid particle-field simulation techniques, in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we develop a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in the SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressure with that obtained from standard molecular dynamics simulations based on pair potentials.
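
    For a plain pair-potential MD system, the statistical-mechanical pressure referred to above reduces to the familiar virial expression P = (N k_B T + (1/3)Σ r_ij·f_ij)/V. A minimal sketch of that generic estimator (not the SCF functional derivation of the paper) is:

```python
def virial_pressure(n_particles, kT, volume, pair_terms):
    """Instantaneous pressure from the virial theorem, in reduced units.
    pair_terms: iterable of r_ij . f_ij dot products over all pairs;
    an empty iterable recovers the ideal-gas law P = N*kT/V."""
    return (n_particles * kT + sum(pair_terms) / 3.0) / volume
```

    The hybrid particle-field case replaces the pairwise virial term with a contribution derived from the SCF free energy functional, but the bookkeeping (kinetic term plus interaction term, divided by volume) has the same shape.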

  20. Modeling Agricultural Watersheds with the Soil and Water Assessment Tool (SWAT): Calibration and Validation with a Novel Procedure for Spatially Explicit HRUs.

    PubMed

    Teshager, Awoke Dagnew; Gassman, Philip W; Secchi, Silvia; Schoof, Justin T; Misgna, Girmaye

    2016-04-01

    Applications of the Soil and Water Assessment Tool (SWAT) model typically involve delineation of a watershed into subwatersheds/subbasins that are then further subdivided into hydrologic response units (HRUs) which are homogeneous areas of aggregated soil, landuse, and slope and are the smallest modeling units used within the model. In a given standard SWAT application, multiple potential HRUs (farm fields) in a subbasin are usually aggregated into a single HRU feature. In other words, the standard version of the model combines multiple potential HRUs (farm fields) with the same landuse/landcover, soil, and slope, but located at different places of a subbasin (spatially non-unique), and considers them as one HRU. In this study, ArcGIS pre-processing procedures were developed to spatially define a one-to-one match between farm fields and HRUs (spatially unique HRUs) within a subbasin prior to SWAT simulations to facilitate input processing, input/output mapping, and further analysis at the individual farm field level. Model input data such as landuse/landcover (LULC), soil, crop rotation, and other management data were processed through these HRUs. The SWAT model was then calibrated/validated for Raccoon River watershed in Iowa for 2002-2010 and Big Creek River watershed in Illinois for 2000-2003. SWAT was able to replicate annual, monthly, and daily streamflow, as well as sediment, nitrate and mineral phosphorous within recommended accuracy in most cases. The one-to-one match between farm fields and HRUs created and used in this study is a first step in performing LULC change, climate change impact, and other analyses in a more spatially explicit manner.
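
    The distinction between standard (aggregated) and spatially unique HRUs can be sketched in a few lines: standard SWAT lumps all fields in a subbasin that share the same (landuse, soil, slope) combination into one HRU, while the authors' procedure keys each field to its own HRU. A toy illustration (field identifiers and attribute values invented):

```python
from collections import defaultdict

fields = [  # (field_id, landuse, soil, slope_class) -- toy data
    ("F1", "corn", "clarion", "0-2%"),
    ("F2", "corn", "clarion", "0-2%"),
    ("F3", "soy",  "nicollet", "2-5%"),
]

# Standard SWAT behavior: fields with identical attribute triples
# collapse into a single, spatially non-unique HRU.
standard_hrus = defaultdict(list)
for fid, lu, soil, slope in fields:
    standard_hrus[(lu, soil, slope)].append(fid)

# Spatially explicit procedure: a one-to-one field-to-HRU mapping,
# so per-field inputs (rotations, management) can be attached directly.
unique_hrus = {fid: (lu, soil, slope) for fid, lu, soil, slope in fields}

print(len(standard_hrus), len(unique_hrus))  # aggregated vs one-per-field
```

    With the toy data, two corn fields on the same soil and slope merge into one standard HRU, whereas the spatially explicit mapping keeps all three fields distinct.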

  1. Modeling Agricultural Watersheds with the Soil and Water Assessment Tool (SWAT): Calibration and Validation with a Novel Procedure for Spatially Explicit HRUs

    NASA Astrophysics Data System (ADS)

    Teshager, Awoke Dagnew; Gassman, Philip W.; Secchi, Silvia; Schoof, Justin T.; Misgna, Girmaye

    2016-04-01

    Applications of the Soil and Water Assessment Tool (SWAT) model typically involve delineation of a watershed into subwatersheds/subbasins that are then further subdivided into hydrologic response units (HRUs) which are homogeneous areas of aggregated soil, landuse, and slope and are the smallest modeling units used within the model. In a given standard SWAT application, multiple potential HRUs (farm fields) in a subbasin are usually aggregated into a single HRU feature. In other words, the standard version of the model combines multiple potential HRUs (farm fields) with the same landuse/landcover, soil, and slope, but located at different places of a subbasin (spatially non-unique), and considers them as one HRU. In this study, ArcGIS pre-processing procedures were developed to spatially define a one-to-one match between farm fields and HRUs (spatially unique HRUs) within a subbasin prior to SWAT simulations to facilitate input processing, input/output mapping, and further analysis at the individual farm field level. Model input data such as landuse/landcover (LULC), soil, crop rotation, and other management data were processed through these HRUs. The SWAT model was then calibrated/validated for Raccoon River watershed in Iowa for 2002-2010 and Big Creek River watershed in Illinois for 2000-2003. SWAT was able to replicate annual, monthly, and daily streamflow, as well as sediment, nitrate and mineral phosphorous within recommended accuracy in most cases. The one-to-one match between farm fields and HRUs created and used in this study is a first step in performing LULC change, climate change impact, and other analyses in a more spatially explicit manner.

  2. Magneto-hydrodynamical model for plasma

    NASA Astrophysics Data System (ADS)

    Liu, Ruikuan; Yang, Jiayan

    2017-10-01

    Based on Newton's second law and the Maxwell equations for the electromagnetic field, we establish a new 3-D incompressible magneto-hydrodynamics model for the motion of plasma under the standard Coulomb gauge. By using the Galerkin method, we prove the existence of a global weak solution for this new 3-D model.
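
    For reference, a standard form of the incompressible MHD system the abstract refers to (our notation, in Alfvén units; the paper's precise formulation may differ) is:

```latex
\begin{aligned}
\partial_t u + (u\cdot\nabla)u &= -\nabla p + \nu\,\Delta u + (\nabla\times B)\times B,\\
\partial_t B &= \nabla\times(u\times B) + \eta\,\Delta B,\\
\nabla\cdot u &= 0,\qquad \nabla\cdot B = 0,
\end{aligned}
```

    where u is the velocity, B the magnetic field, p the pressure, and ν, η the viscosity and magnetic diffusivity; a global weak solution is one satisfying these equations in the integrated-against-test-functions sense.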

  3. Leptogenesis from Left-Handed Neutrino Production during Axion Inflation.

    PubMed

    Adshead, Peter; Sfakianakis, Evangelos I

    2016-03-04

    We propose that the observed matter-antimatter asymmetry can be naturally produced as a by-product of axion-driven slow-roll inflation by coupling the axion to standard model neutrinos. We assume that grand unified theory scale right-handed neutrinos are responsible for the masses of the standard model neutrinos and that the Higgs field is light during inflation and develops a Hubble-scale root-mean-square value. In this setup, the rolling axion generates a helicity asymmetry in standard model neutrinos. Following inflation, this helicity asymmetry becomes equal to a net lepton number as the Higgs condensate decays and is partially reprocessed by the SU(2)_{L} sphaleron into a net baryon number.

  4. A geometric formulation of Higgs Effective Field Theory. Measuring the curvature of scalar field space

    DOE PAGES

    Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.

    2016-03-01

    A geometric formulation of Higgs Effective Field Theory (HEFT) is presented. Experimental observables are given in terms of geometric invariants of the scalar sigma model sector, such as the curvature of the scalar field manifold M. Here we show how the curvature can be measured experimentally via Higgs cross sections, W_L W_L scattering, and the S parameter. The one-loop action of HEFT is given in terms of geometric invariants of M. Moreover, the distinction between the Standard Model (SM) and HEFT is whether M is flat or curved, and the curvature is a signal of the scale of new physics.

  5. Application fields for the new Object Management Group (OMG) Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN) in the perioperative field.

    PubMed

    Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O

    2017-08-01

    Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes, using the new OMG standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN). First, we explain how CMMN, DMN and BPMN can be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to extend the system with more complex situations that might appear during an intervention. An effective modeling of the cataract intervention is possible using the combination of BPM and ACM, which makes it possible to depict complex processes with complex decisions and offers a significant advantage for modeling perioperative processes.

  6. Neutrino mass with large SU(2)_L multiplet fields

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2017-11-01

    We propose an extension of the standard model introducing large SU(2)_L multiplet fields: quartet and septet scalars and quintet Majorana fermions. These multiplets can induce the neutrino masses via interactions with the SU(2)_L doublet leptons. We then find that the neutrino masses are suppressed by a small vacuum expectation value of the quartet/septet and by an inverse power of the quintet fermion mass, relaxing the Yukawa hierarchies among the standard model fermions. We also discuss collider physics at the Large Hadron Collider, considering the production of the charged particles in these multiplets; owing to the effects of custodial symmetry violation, some specific signatures can be found. We then discuss the detectability of these signals.

  7. runDM: Running couplings of Dark Matter to the Standard Model

    NASA Astrophysics Data System (ADS)

    D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo

    2018-02-01

    runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
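
    To illustrate the kind of computation such a running-couplings code performs (generically, not runDM's actual implementation, operator basis, or API), here is a toy one-loop running of a single coupling g with β(g) = b·g³/(16π²), integrated from a high scale down to a low scale with RK4:

```python
import math

def run_coupling(g_high, mu_high, mu_low, b, steps=10000):
    """Integrate dg/dln(mu) = b*g^3/(16*pi^2) from mu_high down to
    mu_low with fixed-step RK4. Toy one-loop beta function, chosen
    for illustration; a real tool runs a coupled operator basis."""
    t0, t1 = math.log(mu_high), math.log(mu_low)
    h = (t1 - t0) / steps
    beta = lambda g: b * g**3 / (16 * math.pi**2)
    g = g_high
    for _ in range(steps):
        k1 = beta(g)
        k2 = beta(g + 0.5 * h * k1)
        k3 = beta(g + 0.5 * h * k2)
        k4 = beta(g + h * k3)
        g += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return g
```

    At one loop this ODE has the closed-form solution 1/g²(μ) = 1/g²(μ₀) − (2b/16π²)·ln(μ/μ₀), which makes a convenient cross-check on the integrator; the multi-operator case replaces the scalar β-function by an anomalous-dimension matrix.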

  8. Effective description of general extensions of the Standard Model: the complete tree-level dictionary

    NASA Astrophysics Data System (ADS)

    de Blas, J.; Criado, J. C.; Pérez-Victoria, M.; Santiago, J.

    2018-03-01

    We compute all the tree-level contributions to the Wilson coefficients of the dimension-six Standard-Model effective theory in ultraviolet completions with general scalar, spinor and vector field content and arbitrary interactions. No assumption about the renormalizability of the high-energy theory is made. This provides a complete ultraviolet/infrared dictionary at the classical level, which can be used to study the low-energy implications of any model of interest, and also to look for explicit completions consistent with low-energy data.
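
    A textbook example of the kind of tree-level matching being catalogued (our own illustration, not an entry copied from the paper's dictionary) is a heavy real scalar singlet S coupled to the Higgs bilinear. Solving the classical equation of motion for S and substituting back yields, after an integration by parts,

```latex
\mathcal{L}_{\rm UV} \supset \tfrac12(\partial S)^2 - \tfrac12 M^2 S^2 - \kappa\, S\, |H|^2
\;\;\Longrightarrow\;\;
\mathcal{L}_{\rm eff} = \frac{\kappa^2}{2M^2}\,|H|^4
 + \frac{\kappa^2}{2M^4}\,\partial_\mu\!\left(|H|^2\right)\partial^\mu\!\left(|H|^2\right)
 + \mathcal{O}(M^{-6}),
```

    i.e. a shift of the Higgs quartic plus a dimension-six derivative operator with Wilson coefficient κ²/(2M⁴). A complete dictionary tabulates such coefficients for every allowed heavy scalar, spinor and vector representation.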

  9. A lithospheric magnetic field model derived from the Swarm satellite magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Thebault, E.; Vigneron, P.

    2015-12-01

    The Swarm constellation of satellites was launched in November 2013 and has since then delivered high-quality scalar and vector magnetic field measurements. A consortium of several research institutions was selected by the European Space Agency (ESA) to provide a number of scientific products which will be made available to the scientific community. Within this framework, specific tools were tailor-made to better extract the magnetic signal emanating from the Earth's lithosphere. These tools rely on the scalar gradient measured by the lower pair of Swarm satellites and on a regional modeling scheme that is more sensitive to small spatial scales and weak signals than standard spherical harmonic modeling. In this presentation, we report on various activities related to data analysis and processing. We assess the efficiency of this dedicated chain for modeling the lithospheric magnetic field using more than one year of measurements, and finally discuss refinements that are continuously implemented in order to further improve the robustness and the spatial resolution of the lithospheric field model.

  10. ANZSoilML: An Australian - New Zealand standard for exchange of soil data

    NASA Astrophysics Data System (ADS)

    Simons, Bruce; Wilson, Peter; Ritchie, Alistair; Cox, Simon

    2013-04-01

    The Australian-New Zealand soil information exchange standard (ANZSoilML) is a GML-based standard designed to allow the discovery, query and delivery of soil and landscape data via standard Open Geospatial Consortium (OGC) Web Feature Services. ANZSoilML modifies the Australian soil exchange standard (OzSoilML), which is based on the Australian Soil Information Transfer and Evaluation System (SITES) database design and exchange protocols, to meet the New Zealand National Soils Database requirements. The most significant change was the removal of the lists of CodeList terms in OzSoilML, which were based on the field methods specified in the 'Australian Soil and Land Survey Field Handbook'. These were replaced with empty CodeLists as placeholders for external vocabularies, allowing the use of New Zealand vocabularies without violating the data model. Testing of the use of these separately governed Australian and New Zealand vocabularies has commenced. ANZSoilML attempts to accommodate the proposed International Organization for Standardization ISO/DIS 28258 standard for soil quality. For the most part, ANZSoilML is consistent with the ISO model, although major differences arise as a result of:
    • the need to specify the properties appropriate for each feature type;
    • the inclusion of soil-related 'Landscape' features;
    • allowing the mapping of soil surfaces, bodies, layers and horizons, independent of the soil profile;
    • allowing the specification of relationships between the various soil features;
    • specifying soil horizons as specialisations of soil layers;
    • removing duplication of features provided by the ISO Observations & Measurements standard.
    The International Union of Soil Sciences (IUSS) Working Group on Soil Information Standards (WG-SIS) aims to develop, promote and maintain a standard to facilitate the exchange of soils data and information.
Developing an international exchange standard that is compatible with existing and emerging national and regional standards is a considerable challenge. ANZSoilML is proposed as a profile of the more generalised SoilML model being progressed through the IUSS Working Group.

  11. Optical Measurements of Strong Radio-Frequency Fields Using Rydberg Atoms

    NASA Astrophysics Data System (ADS)

    Miller, Stephanie Anne

    There has recently been an initiative toward establishing atomic measurement standards for field quantities, including radio-frequency, millimeter-wave, and microwave electric fields. Current measurement standards are obtained using dipole antennas, which are fundamentally limited in frequency bandwidth (set by the physical size of the antenna) and accuracy (due to the metal perturbing the field during the measurement). Establishing an atomic standard rectifies these problems. My thesis work contributes to an ongoing effort towards establishing the viability of using Rydberg electromagnetically induced transparency (EIT) to perform atom-based measurements of radio-frequency (RF) fields over a wide range of frequencies and field strengths, focusing on strong-field measurements. Rydberg atoms are atoms with an electron excited to a high principal quantum number, resulting in a high sensitivity to an applied field. A model based on Floquet theory is implemented to accurately describe the observed atomic energy level shifts from which information about the field is extracted. Additionally, the effects due to the different electric field domains within the measurement volume are accurately modeled. Absolute atomic measurements of fields up to 296 V/m within a ±0.35% relative uncertainty are demonstrated. This was the strongest field measured at the time of data publication. Moreover, the uncertainty is over an order of magnitude better than that of current standards. A vacuum chamber setup that I implemented during my graduate studies is presented and its unique components are detailed. In this chamber, cold-atom samples are generated and Rydberg atoms are optically excited within the ground-state sample. The Rydberg ion detection and imaging procedure are discussed, particularly the high magnification that the system provides. By analyzing the positions of the ions, the spatial correlation g(2)(r) of Rydberg-atom distributions can be extracted.
    Aside from ion detection, EIT is implemented in the cold-atom samples. By measuring the timing of the probe photons exiting the EIT medium, the temporal correlation function g(2)(tau) can be extracted, yielding information about the relative timing of arbitrary photon pairs. An experimental goal using this setup is to examine g(2)(tau) in conjunction with g(2)(r) for Rydberg atoms. Progress and preliminary measurements of ion detection and EIT spectra are presented, including observed qualitative behaviors.
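
    The temporal correlation g(2)(tau) is typically estimated from photon arrival times by histogramming pairwise delays and normalizing by the rate expected for uncorrelated arrivals. A minimal estimator sketch follows (uniform-rate normalization that ignores edge effects; our simplification, not the thesis's analysis pipeline):

```python
def g2_histogram(times, tau_max, bin_width):
    """Histogram of pairwise photon delays up to tau_max, normalized
    so that a Poissonian (uncorrelated) stream gives g2 ~ 1.
    times: sorted arrival times in seconds."""
    times = sorted(times)
    n_bins = int(tau_max / bin_width)
    counts = [0] * n_bins
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            tau = times[j] - times[i]
            if tau >= tau_max:
                break  # later partners are even further away
            counts[int(tau / bin_width)] += 1
    # Expected pairs per bin for a uniform (Poisson) stream of the
    # same mean rate; edge effects near the record ends are ignored.
    duration = times[-1] - times[0]
    rate = len(times) / duration
    expected = rate * len(times) * bin_width
    return [c / expected for c in counts]
```

    Antibunched light would show g2 values below 1 in the smallest-delay bins; for a random (coherent) stream every bin hovers around 1.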

  12. Baryon asymmetry and gravitational waves from pseudoscalar inflation

    NASA Astrophysics Data System (ADS)

    Jiménez, Daniel; Kamada, Kohei; Schmitz, Kai; Xu, Xun-Jie

    2017-12-01

    In models of inflation driven by an axion-like pseudoscalar field, the inflaton, a, may couple to the standard model hypercharge via a Chern-Simons-type interaction, ℒ ⊃ (a/(4Λ)) F F̃. This coupling results in explosive gauge field production during inflation, especially at its last stage, which has interesting phenomenological consequences: For one thing, the primordial hypermagnetic field is maximally helical. It is thus capable of sourcing the generation of nonzero baryon number, via the standard model chiral anomaly, around the time of electroweak symmetry breaking. For another thing, the gauge field production during inflation feeds back into the primordial tensor power spectrum, leaving an imprint in the stochastic background of gravitational waves (GWs). In this paper, we focus on the correlation between these two phenomena. Working in the approximation of instant reheating, we (1) update the investigation of baryogenesis via hypermagnetic fields from pseudoscalar inflation and (2) examine the corresponding implications for the GW spectrum. We find that successful baryogenesis requires a suppression scale Λ of around Λ ~ 3 × 1017 GeV, which corresponds to a relatively weakly coupled axion. The gauge field production at the end of inflation is then typically accompanied by a peak in the GW spectrum at frequencies in the MHz range or above. The detection of such a peak is out of reach of present-day technology; but in the future, it may serve as a smoking-gun signal for baryogenesis from pseudoscalar inflation. Conversely, models that do yield an observable GW signal suffer from the overproduction of baryon number, unless the reheating temperature is lower than the electroweak scale.

  13. A Review of Roles and Responsibilities: Restructuring for Excellence in the School System.

    ERIC Educational Resources Information Center

    Society for the Advancement of Excellence in Education, Kelowna (British Columbia).

    Recommendations are presented for a new form of school governance in British Columbia that takes into account current research on effective schools. In the model described, the provincial government provides the funding, sets the core curriculum, standards, and outcomes, ensures standardized measurement and reporting, and supports field research.…

  14. Simulation Model of A Ferroelectric Field Effect Transistor

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Ho, Fat Duen; Russell, Larry W. (Technical Monitor)

    2002-01-01

    An electronic simulation model has been developed for a ferroelectric field effect transistor (FFET). This model can be used in standard electrical circuit simulation programs to simulate the main characteristics of the FFET. The model uses a previously developed algorithm that incorporates partial polarization as a basis for the design. The model reproduces the main characteristics of the FFET: the drain-current hysteresis with different gate voltages and the decay of the drain current when the gate voltage is off. The drain current takes values matching those of actual FFETs measured experimentally. The input and output resistance in the model is similar to that of the FFET. The model is valid for all frequencies below RF levels. A variety of different ferroelectric material characteristics can be modeled. The model can be used to design circuits using FFETs with standard electrical simulation packages, for example non-volatile memory circuits and logic circuits, and is compatible with all SPICE-based circuit analysis programs. The model is a drop-in library that integrates seamlessly into a SPICE simulation. A comparison is made between the model and experimental data measured from an actual FFET.

  15. The synchronous orbit magnetic field data set

    NASA Technical Reports Server (NTRS)

    Mcpherron, R. L.

    1979-01-01

    The magnetic field at synchronous orbit is the result of the superposition of fields from many sources, such as the earth, the magnetopause, the geomagnetic tail, the ring current and field-aligned currents. In addition, seasonal changes in the orientation of the earth's dipole axis cause significant changes in each of the external sources. The main reasons why the synchronous orbit magnetic field data set is a potentially valuable resource are outlined. The primary reason why synchronous magnetic field data have not been used more extensively in magnetic field modeling is the presence of absolute errors in the measured fields. Nevertheless, there exists a reasonably large collection of synchronous orbit magnetic field data, some of which can be useful in quantitative modeling of the earth's magnetic field. A brief description is given of the spacecraft, the magnetometers, the standard graphical data displays, and the digital data files.

  16. The standard model and some new directions [for scientific theory of Active Galactic Nuclei]

    NASA Technical Reports Server (NTRS)

    Blandford, R. D.; Rees, M. J.

    1992-01-01

    A 'standard' model of Active Galactic Nuclei (AGN), based upon a massive black hole surrounded by a thin accretion disk, is defined. It is argued that, although there is good evidence for the presence of black holes and orbiting gas, most of the details of this model are either inadequate or controversial. Magnetic field may be responsible for the confinement of continuum and line-emitting gas, for the dynamical evolution of accretion disks and for the formation of jets. It is further argued that gaseous fuel is supplied in molecular form and that this is responsible for thermal re-radiation, equatorial obscuration and, perhaps, the broad line gas clouds. Stars may also supply gas close to the black hole, especially in low power AGN and they may be observable in discrete orbits as probes of the gravitational field. Recent observations suggest that magnetic field, stars, dusty molecular gas and orientation effects must be essential components of a complete description of AGN. The discovery of quasars with redshifts approaching 5 is an important clue to the mechanism of galaxy formation.

  17. DBI-essence

    NASA Astrophysics Data System (ADS)

    Martin, Jérôme; Yamaguchi, Masahide

    2008-06-01

    Models where the dark energy is a scalar field with a nonstandard Dirac-Born-Infeld (DBI) kinetic term are investigated. Scaling solutions are studied and proven to be attractors. The corresponding shape of the brane tension and of the potential is also determined and found to be, as in the standard case, either exponential or power-law functions of the DBI field. In these scenarios, in contrast to the standard situation, the vacuum expectation value of the field at small redshifts can be small in comparison to the Planck mass, which could be an advantage from the model-building point of view. This situation arises when the present-day value of the Lorentz factor is large, this property being per se interesting. Serious shortcomings are also present, such as the fact that, for simple potentials, the equation of state appears to be too far from the observationally favored value of -1. Another problem is that, although simple stringy-inspired models precisely lead to the power-law shape that has been shown to possess a tracking behavior, the power index turns out to have the wrong sign. Possible solutions to these issues are discussed.

  18. The characteristics of RF modulated plasma boundary sheaths: An analysis of the standard sheath model

    NASA Astrophysics Data System (ADS)

    Naggary, Schabnam; Brinkmann, Ralf Peter

    2015-09-01

    The characteristics of radio frequency (RF) modulated plasma boundary sheaths are studied on the basis of the so-called ``standard sheath model.'' This model assumes that the applied radio frequency ωRF is larger than the plasma frequency of the ions but smaller than that of the electrons. It comprises a phase-averaged ion model, consisting of an equation of continuity (with ionization neglected) and an equation of motion (with collisional ion-neutral interaction taken into account); a phase-resolved electron model, consisting of an equation of continuity and the assumption of Boltzmann equilibrium; and Poisson's equation for the electric field. Previous investigations have studied the standard sheath model under additional approximations, most notably the assumption of a step-like electron front. This contribution presents an investigation and parameter study of the standard sheath model which avoids any further assumptions. The resulting density profiles and overall charge-voltage characteristics are compared with those of the step-model based theories. The authors gratefully acknowledge Efe Kemaneci for helpful comments and fruitful discussions.

  19. Magnetic Properties of Strongly Correlated Hubbard Model and Quantum Spin-One Ferromagnets with Arbitrary Crystal-Field Potential: Linked Cluster Series Expansion Approach

    NASA Astrophysics Data System (ADS)

    Pan, Kok-Kwei

    We have generalized the linked cluster expansion method to a broader class of many-body quantum systems, such as quantum spin systems with crystal-field potentials and the Hubbard model. The technique sums all connected diagrams to a given order of the perturbative Hamiltonian. The modified multiple-site Wick reduction theorem and the simple tau dependence of the standard basis operators have been used to facilitate the evaluation of the integrals in the perturbation expansion. Computational methods are developed to calculate all terms in the series expansion. As a first example, the perturbation series expansion of thermodynamic quantities of the single-band Hubbard model has been obtained using a linked cluster series expansion technique. We have made corrections to all previous results of several papers (up to fourth order). The behaviors of the three-dimensional simple cubic and body-centered cubic systems have been discussed from a qualitative analysis of the perturbation series up to fourth order. We have also calculated the sixth-order perturbation series of this model. As a second example, we present the magnetic properties of the spin-one Heisenberg model with arbitrary crystal-field potential using a linked cluster series expansion. The calculation of the thermodynamic properties using this method covers the whole range of temperatures, in both the magnetically ordered and disordered phases. The series for the susceptibility and magnetization have been obtained up to fourth order for this model. The method sums all perturbation terms to a given order and estimates the result using a well-developed and highly successful extrapolation method (the standard ratio method). The dependence of the critical temperature on the crystal-field potential, and the magnetization as a function of temperature and crystal-field potential, are shown. The critical behaviors at zero temperature are also shown. 
The range of the crystal-field potential for Ni(2+) compounds is roughly estimated based on this model using known experimental results.

  20. Unbinding slave spins in the Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Guerci, Daniele; Fabrizio, Michele

    2017-11-01

    We show that a generic single-orbital Anderson impurity model, lacking, for instance, any kind of particle-hole symmetry, can be exactly mapped without any constraint onto a resonant level model coupled to two Ising variables, which reduce to one if the hybridization is particle-hole symmetric. The mean-field solution of this model is found to be stable to unphysical spontaneous magnetization of the impurity, unlike the saddle-point solution in the standard slave-boson representation. Remarkably, the mean-field estimate of the Wilson ratio approaches the exact value RW=2 in the Kondo regime.

  1. Current progress in patient-specific modeling

    PubMed Central

    2010-01-01

    We present a survey of recent advancements in the emerging field of patient-specific modeling (PSM). Researchers in this field are currently simulating a wide variety of tissue and organ dynamics to address challenges in various clinical domains. The majority of this research employs three-dimensional, image-based modeling techniques. Recent PSM publications mostly represent feasibility or preliminary validation studies on modeling technologies, and these systems will require further clinical validation and usability testing before they can become a standard of care. We anticipate that with further testing and research, PSM-derived technologies will eventually become valuable, versatile clinical tools. PMID:19955236

  2. Application of overlay modeling and control with Zernike polynomials in an HVM environment

    NASA Astrophysics Data System (ADS)

    Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill

    2016-03-01

    Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models, and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via an exponentially or linearly weighted moving average in time, are then retrieved from the APC system to apply on the scanner in XY Cartesian form for subsequent lot exposure. The goal of the above method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change-point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials have the property of being orthogonal on the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real-time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high volume manufacturing (HVM) environment. The measured on-product results were compared to the POR and showed a 7% reduction in overlay variation, including a 22% reduction in term variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and resulted in a 0.1% yield improvement.
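    The collinearity advantage of an orthogonal basis can be illustrated with a small Python sketch (synthetic data, not the production APC code): fit wafer-level overlay residuals with a few low-order Zernike terms and compare the design-matrix column overlap against same-order Cartesian monomials.

```python
import numpy as np

# Synthetic wafer-level overlay fit: Zernike basis vs plain Cartesian monomials.
rng = np.random.default_rng(0)
n = 500
r = np.sqrt(rng.uniform(0, 1, n))        # uniform samples on the unit disk
t = rng.uniform(0, 2 * np.pi, n)         # (wafer coordinates normalized to radius 1)
x, y = r * np.cos(t), r * np.sin(t)

# Low-order Zernike polynomials: piston, tilt x/y, defocus, astigmatism.
Z = np.column_stack([np.ones(n), x, y, 2 * r**2 - 1, x**2 - y**2])
# Cartesian monomials of the same order.
C = np.column_stack([np.ones(n), x, y, x**2 + y**2, x**2 - y**2])

def max_offdiag_cos(A):
    """Largest cosine overlap between distinct basis columns (collinearity proxy)."""
    An = A / np.linalg.norm(A, axis=0)
    G = An.T @ An
    return np.abs(G - np.eye(G.shape[1])).max()

# Synthetic overlay-x residuals: tilt plus a defocus-like signature plus noise.
dx = 0.3 * x + 0.2 * (2 * r**2 - 1) + 0.01 * rng.standard_normal(n)
coef, *_ = np.linalg.lstsq(Z, dx, rcond=None)

print("fitted tilt-x, defocus:", coef[1], coef[3])
print("Zernike column overlap:  ", max_offdiag_cos(Z))
print("Cartesian column overlap:", max_offdiag_cos(C))
```

On disk-distributed samples the Zernike columns are nearly orthogonal, while the Cartesian set shows strong overlap (notably between the constant term and x² + y²), which is the collinearity the paper's Zernike formulation suppresses.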

  3. A new constraint on mean-field galactic dynamo theory

    NASA Astrophysics Data System (ADS)

    Chamandy, Luke; Singh, Nishant K.

    2017-07-01

    Appealing to an analytical result from mean-field theory, we show, using a generic galaxy model, that galactic dynamo action can be suppressed by small-scale magnetic fluctuations. This is caused by the magnetic analogue of the Rädler or Ω × J effect, where rotation-induced corrections to the mean-field turbulent transport result in what we interpret to be an effective reduction of the standard α effect in the presence of small-scale magnetic fields.

  4. Comparisons of the Standard Galaxy Model with observations in two fields

    NASA Technical Reports Server (NTRS)

    Bahcall, J. N.; Ratnatunga, K. U.

    1985-01-01

    The Bahcall-Soneira (1984) model for the distribution of stars in the Galaxy is compared with the observations reported by Gilmore, Reid, and Hewett (1984) in two directions in the sky, the pole and the Morton-Tritton (1982) region. It is shown that the Galaxy model is in good agreement with the observations everywhere it has been tested with modern data, including the magnitude range V = 17-18, provided that the globular cluster feature is included in the luminosity function of the field Population II stars.

  5. BF actions for the Husain-Kuchař model

    NASA Astrophysics Data System (ADS)

    Barbero G., J. Fernando; Villaseñor, Eduardo J.

    2001-04-01

    We show that the Husain-Kuchař model can be described in the framework of BF theories. This is a first step towards its quantization by standard perturbative quantum field theory techniques or the spin-foam formalism introduced in the space-time description of general relativity and other diff-invariant theories. The actions that we will consider are similar to the ones describing the BF-Yang-Mills model and some mass generating mechanisms for gauge fields. We will also discuss the role of diffeomorphisms in the new formulations that we propose.

  6. Axions, Inflation and String Theory

    NASA Astrophysics Data System (ADS)

    Mack, Katherine J.; Steinhardt, P. J.

    2009-01-01

    The QCD axion is the leading contender to rid the standard model of the strong-CP problem. If the Peccei-Quinn symmetry breaking occurs before inflation, which is likely in string theory models, axions manifest themselves cosmologically as a form of cold dark matter with a density determined by the axion's initial conditions and by the energy scale of inflation. Constraints on the dark matter density and on the amplitude of CMB isocurvature perturbations currently demand an exponential degree of fine-tuning of both axion and inflationary parameters beyond what is required for particle physics. String theory models generally produce large numbers of axion-like fields; the prospect that any of these fields exist at scales close to that of the QCD axion makes the problem drastically worse. I will discuss the challenge of accommodating string-theoretic axions in standard inflationary cosmology and show that the fine-tuning problems cannot be fully addressed by anthropic principle arguments.

  7. EEG and MEG data analysis in SPM8.

    PubMed

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is a free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.

  8. EEG and MEG Data Analysis in SPM8

    PubMed Central

    Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl

    2011-01-01

    SPM is a free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221

  9. The Radiological Physics Center's standard dataset for small field size output factors.

    PubMed

    Followill, David S; Kry, Stephen F; Qin, Lihong; Lowenstein, Jessica; Molineu, Andrea; Alvarez, Paola; Aguirre, Jose Francisco; Ibbott, Geoffrey S

    2012-08-08

    Delivery of accurate intensity-modulated radiation therapy (IMRT) or stereotactic radiotherapy depends on a multitude of steps in the treatment delivery process. These steps range from imaging of the patient to dose calculation to machine delivery of the treatment plan. Within the treatment planning system's (TPS) dose calculation algorithm, various unique small field dosimetry parameters are essential, such as multileaf collimator (MLC) modeling and field size dependence of the output. One of the largest challenges in this process is determining accurate small field size output factors. The Radiological Physics Center (RPC), as part of its mission to ensure that institutions deliver comparable and consistent radiation doses to their patients, conducts on-site dosimetry review visits to institutions. As a part of the on-site audit, the RPC measures the small field size output factors as might be used in IMRT treatments, and compares the resulting field size dependent output factors to values calculated by the institution's TPS. The RPC has gathered multiple small field size output factor datasets for X-ray energies ranging from 6 to 18 MV from Varian, Siemens and Elekta linear accelerators. These datasets were measured at 10 cm depth and ranged from 10 × 10 cm² to 2 × 2 cm². The field sizes were defined by the MLC, and for the Varian machines the secondary jaws were maintained at 10 × 10 cm². The RPC measurements were made with a micro-ion chamber whose volume was small enough to gather a full ionization reading even for the 2 × 2 cm² field size. The RPC-measured output factors are tabulated and are reproducible with standard deviations (SD) ranging from 0.1% to 1.5%, while the institutions' calculated values had a much larger spread, with SDs ranging up to 7.9% [corrected]. The absolute average percent differences were greater for the 2 × 2 cm² field size than for the other field sizes. 
The RPC's measured small field output factors provide institutions with a standard dataset against which to compare their TPS calculated values. Any discrepancies noted between the standard dataset and calculated values should be investigated with careful measurements and with attention to the specific beam model.
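    As a rough illustration of how such comparisons are formed, the following Python sketch normalizes hypothetical chamber readings to the 10 × 10 cm² reference field and reports the percent difference against made-up TPS values (all numbers are invented, not RPC data):

```python
# Illustrative only: hypothetical chamber readings and TPS output factors.
readings = {10: 1.000, 6: 0.980, 4: 0.955, 3: 0.935, 2: 0.900}  # relative chamber readings (made up)
tps      = {10: 1.000, 6: 0.984, 4: 0.960, 3: 0.930, 2: 0.870}  # TPS-calculated output factors (made up)

# Output factor: reading for an s x s cm^2 field normalized to the 10 x 10 reference.
of_meas = {s: readings[s] / readings[10] for s in readings}

for s in sorted(of_meas, reverse=True):
    diff = 100.0 * (tps[s] - of_meas[s]) / of_meas[s]
    print(f"{s:>2} x {s:<2} cm^2  OF = {of_meas[s]:.3f}  TPS vs measured: {diff:+.1f}%")
```

In this made-up example the largest discrepancy falls at the 2 × 2 cm² field, mirroring the pattern reported above.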

  10. Representation of microstructural features and magnetic anisotropy of electrical steels in an energy-based vector hysteresis model

    NASA Astrophysics Data System (ADS)

    Jacques, Kevin; Steentjes, Simon; Henrotte, François; Geuzaine, Christophe; Hameyer, Kay

    2018-04-01

    This paper demonstrates how the statistical distribution of pinning fields in a ferromagnetic material can be identified systematically from standard magnetic measurements, Epstein frame or Single Sheet Tester (SST). The correlation between the pinning field distribution and microstructural parameters of the material is then analyzed.

  11. Heavy spin-2 Dark Matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babichev, Eugeny; Marzola, Luca

    2016-09-12

    We provide further details on a recent proposal addressing the nature of the dark sectors in cosmology and demonstrate that all current observations related to Dark Matter can be explained by the presence of a heavy spin-2 particle. Massive spin-2 fields and their gravitational interactions are uniquely described by ghost-free bimetric theory, which is a minimal and natural extension of General Relativity. In this setup, the largeness of the physical Planck mass is naturally related to extremely weak couplings of the heavy spin-2 field to baryonic matter and therefore explains the absence of signals in experiments dedicated to Dark Matter searches. It also ensures the phenomenological viability of our model as we confirm by comparing it with cosmological and local tests of gravity. At the same time, the spin-2 field possesses standard gravitational interactions and it decays universally into all Standard Model fields but not into massless gravitons. Matching the measured DM abundance together with the requirement of stability constrains the spin-2 mass to be in the 1 to 100 TeV range.

  12. Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study

    PubMed Central

    Chen, Jie; Gutmark, Ephraim

    2013-01-01

    The large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under different breathing conditions: 10 l/min, 30 l/min, and 120 l/min. The computational results are compared with single and cross hot-wire measurements, and with the time-averaged flow field computed by the standard k-ω and k-ω-SST Reynolds-averaged Navier-Stokes (RANS) models and the Lattice-Boltzmann method (LBM). The LES results are also compared to the root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and the LBM. LES generally gives better prediction of the time-averaged flow field than the RANS models and the LBM. LES also provides better estimation of the RMS flow field than both the RSM and the LBM. PMID:23619907

  13. The BGS magnetic field candidate models for the 12th generation IGRF

    NASA Astrophysics Data System (ADS)

    Hamilton, Brian; Ridley, Victoria A.; Beggan, Ciarán D.; Macmillan, Susan

    2015-05-01

    We describe the candidate models submitted by the British Geological Survey for the 12th generation International Geomagnetic Reference Field. These models are extracted from a spherical harmonic `parent model' derived from vector and scalar magnetic field data from satellite and observatory sources. These data cover the period 2009.0 to 2014.7 and include measurements from the recently launched European Space Agency (ESA) Swarm satellite constellation. The parent model's internal field time dependence for degrees 1 to 13 is represented by order 6 B-splines with knots at yearly intervals. The parent model's degree 1 external field time dependence is described by periodic functions for the annual and semi-annual signals and by dependence on the 20-min Vector Magnetic Disturbance index. Signals induced by these external fields are also parameterized. Satellite data are weighted by spatial density and by two different noise estimators: (a) by standard deviation along segments of the satellite track and (b) a larger-scale noise estimator defined in terms of a measure of vector activity at the geographically closest magnetic observatories to the sample point. Forecasting of the magnetic field secular variation beyond the span of data is by advection of the main field using core surface flows.

  14. Evaluating Field Spectrometer Performance with Transmission Standards: Examples from the USGS Spectral Library and Research Databases

    NASA Astrophysics Data System (ADS)

    Hoefen, T. M.; Kokaly, R. F.; Swayze, G. A.; Livo, K. E.

    2015-12-01

    Collection of spectroscopic data has expanded with the development of field-portable spectrometers. The most commonly available spectrometers span one or several wavelength ranges: the visible (VIS) and near-infrared (NIR) region from approximately 400 to 1000 nm, and the shortwave infrared (SWIR) region from approximately 1000-2500 nm. Basic characteristics of spectrometer performance are the wavelength position and bandpass of each channel. Bandpass can vary across the wavelength coverage of an instrument, due to spectrometer design and detector materials. Spectrometer specifications can differ from one instrument to the next for a given model and between manufacturers. The USGS Spectroscopy Lab in Denver has developed a simple method to evaluate field spectrometer wavelength accuracy and bandpass values using transmission measurements of materials with intense, narrow absorption features, including Mylar* plastic, praseodymium-doped glass, and National Institute of Standards and Technology Standard Reference Material 2035. The evaluation procedure has been applied in laboratory and field settings for 19 years and used to detect deviations from cited manufacturer specifications. Tracking of USGS spectrometers with transmission standards has revealed several instances of wavelength shifts due to wear in spectrometer components. Since shifts in channel wavelengths and differences in bandpass between instruments can impact the use of field spectrometer data to calibrate and analyze imaging spectrometer data, field protocols to measure wavelength standards can limit data loss due to spectrometer degradation. In this paper, the evaluation procedure will be described and examples of observed wavelength shifts during a spectrometer field season will be presented. The impact of changing wavelength and bandpass characteristics on spectral measurements will be demonstrated and implications for spectral libraries will be discussed. 
*Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
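    The core of such an evaluation can be illustrated with a short Python sketch: locate a narrow absorption feature of a (here synthetic) transmission standard, estimate its sub-channel center wavelength by parabolic interpolation, and estimate its width from the half-depth crossings. Comparing the fitted center and width against the standard's known values reveals wavelength shifts and bandpass broadening. All numbers below are synthetic, not USGS data or the USGS procedure itself.

```python
import numpy as np

# Synthetic transmission standard: a narrow Gaussian absorption feature sampled
# on a hypothetical 5 nm channel grid in the SWIR.
wl = np.arange(1900.0, 2100.0, 5.0)       # channel wavelengths, nm
true_center, true_fwhm = 2003.0, 22.0     # "known" feature parameters of the standard
sigma = true_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
trans = 1.0 - 0.6 * np.exp(-0.5 * ((wl - true_center) / sigma) ** 2)

# Sub-channel feature center: parabola through the deepest channel and its neighbors.
i = int(np.argmin(trans))
a, b, c = trans[i - 1], trans[i], trans[i + 1]
center = wl[i] + 0.5 * (a - c) / (a - 2.0 * b + c) * (wl[1] - wl[0])

# Feature width: half-depth crossings, linearly interpolated between channels.
half = 1.0 - 0.3                          # halfway down the 0.6-deep dip
below = np.where(trans < half)[0]
lo, hi = below[0], below[-1]
left  = wl[lo - 1] + (half - trans[lo - 1]) / (trans[lo] - trans[lo - 1]) * (wl[lo] - wl[lo - 1])
right = wl[hi] + (half - trans[hi]) / (trans[hi + 1] - trans[hi]) * (wl[hi + 1] - wl[hi])
fwhm = right - left

# A shift of `center` from the standard's known value flags wavelength error;
# growth of `fwhm` beyond the feature's intrinsic width flags bandpass broadening.
print(round(center, 1), round(fwhm, 1))
```

With real measurements the recovered width is the feature convolved with the instrument bandpass, which is exactly what makes such standards useful for tracking spectrometer degradation.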

  15. LSPM J1314+1320: An Oversized Magnetic Star with Constraints on the Radio Emission Mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, James; Mullan, D. J.

    LSPM J1314+1320 (=NLTT 33370) is a binary star system consisting of two nearly identical pre-main-sequence stars of spectral type M7. The system is remarkable among ultracool dwarfs for being the most luminous radio emitter over the widest frequency range. Masses and luminosities are at first sight consistent with the system being coeval at age ∼80 Myr according to standard (nonmagnetic) evolutionary models. However, these models predict an average effective temperature of ∼2950 K, which is 180 K hotter than the empirical value. Thus, the empirical radii are oversized relative to the standard models by ≈13%. We demonstrate that magnetic stellar models can quantitatively account for the oversizing. As a check on our models, we note that the radio emission limits the surface magnetic field strengths: the limits depend on identifying the radio emission mechanism. We find that the field strengths required by our magnetic models are too strong to be consistent with gyrosynchrotron emission but are consistent with electron cyclotron maser emission.

  16. How to use the Standard Model effective field theory

    DOE PAGES

    Henning, Brian; Lu, Xiaochuan; Murayama, Hitoshi

    2016-01-06

    Here, we present a practical three-step procedure of using the Standard Model effective field theory (SM EFT) to connect ultraviolet (UV) models of new physics with weak scale precision observables. With this procedure, one can interpret precision measurements as constraints on a given UV model. We give a detailed explanation for calculating the effective action up to one-loop order in a manifestly gauge covariant fashion. This covariant derivative expansion method dramatically simplifies the process of matching a UV model with the SM EFT, and also makes available a universal formalism that is easy to use for a variety of UV models. A few general aspects of RG running effects and choosing operator bases are discussed. Finally, we provide mapping results between the bosonic sector of the SM EFT and a complete set of precision electroweak and Higgs observables to which present and near future experiments are sensitive. Many results and tools which should prove useful to those wishing to use the SM EFT are detailed in several appendices.

  17. Electric field prediction for a human body-electric machine system.

    PubMed

    Ioannides, Maria G; Papadopoulos, Peter J; Dimitropoulou, Eugenia

    2004-01-01

    A system consisting of an electric machine and a human body is studied and the resulting electric field is predicted. A 3-phase induction machine operating at full load is modeled considering its geometry, windings, and materials. A human model is also constructed approximating its geometry and the electric properties of tissues. Using the finite element technique the electric field distribution in the human body is determined for a distance of 1 and 5 m from the machine and its effects are studied. Particularly, electric field potential variations are determined at specific points inside the human body and for these points the electric field intensity is computed and compared to the limit values for exposure according to international standards.

  18. Inflation, symmetry, and B-modes

    DOE PAGES

    Hertzberg, Mark P.

    2015-04-20

    Here, we examine the role of using symmetry and effective field theory in inflationary model building. We describe the standard formulation of starting with an approximate shift symmetry for a scalar field, and then introducing corrections systematically in order to maintain control over the inflationary potential. We find that this leads to models in good agreement with recent data. On the other hand, there are attempts in the literature to deviate from this paradigm by invoking other symmetries and corrections. In particular, in a suite of recent papers, several authors have made the claim that standard Einstein gravity with a cosmological constant and a massless scalar carries conformal symmetry. They claim this conformal symmetry is hidden when the action is written in the Einstein frame, and so has not been fully appreciated in the literature. They further claim that such a theory carries another hidden symmetry; a global SO(1,1) symmetry. By deforming around the global SO(1,1) symmetry, they are able to produce a range of inflationary models with asymptotically flat potentials, whose flatness is claimed to be protected by these symmetries. These models tend to give rise to B-modes with small amplitude. Here we explain that standard Einstein gravity does not in fact possess conformal symmetry. Instead these authors are merely introducing a redundancy into the description, not an actual conformal symmetry. Furthermore, we explain that the only real (global) symmetry in these models is not at all hidden, but is completely manifest when expressed in the Einstein frame; it is in fact the shift symmetry of a scalar field. When analyzed systematically as an effective field theory, deformations do not generally produce asymptotically flat potentials and small B-modes as suggested in these recent papers. Instead, deforming around the shift symmetry systematically tends to produce models of inflation with B-modes of appreciable amplitude. Such simple models typically also produce the observed red spectral index, Gaussian fluctuations, etc. In short: simple models of inflation, organized by expanding around a shift symmetry, are in excellent agreement with recent data.

  19. Lorentz violation and gravity

    NASA Astrophysics Data System (ADS)

    Bailey, Quentin G.

    2007-08-01

    This work explores the theoretical and experimental aspects of Lorentz violation in gravity. A set of modified Einstein field equations is derived from the general Lorentz-violating Standard-Model Extension (SME). Some general theoretical implications of these results are discussed. The experimental consequences for weak-field gravitating systems are explored in the Earth-laboratory setting, the solar system, and beyond. The role of spontaneous Lorentz-symmetry breaking is discussed in the context of the pure-gravity sector of the SME. To establish the low-energy effective Einstein field equations, it is necessary to take into account the dynamics of 20 coefficients for Lorentz violation. As an example, the results are compared with bumblebee models, which are general theories of vector fields with spontaneous Lorentz violation. The field equations are evaluated in the post-Newtonian limit using a perfect fluid description of matter. The post-Newtonian metric of the SME is derived and compared with some standard test models of gravity. The possible signals for Lorentz violation due to gravity-sector coefficients are studied. Several new effects are identified that have experimental implications for current and future tests. Among the unconventional effects are a new type of spin precession for a gyroscope in orbit and a modification to the local gravitational acceleration on the Earth's surface. These and other tests are expected to yield interesting sensitivities to dimensionless gravity-sector coefficients.

  20. The general ventilation multipliers calculated by using a standard Near-Field/Far-Field model.

    PubMed

    Koivisto, Antti J; Jensen, Alexander C Ø; Koponen, Ismo K

    2018-05-01

    In conceptual exposure models, the transmission of pollutants in an imperfectly mixed room is usually described with general ventilation multipliers. This is the approach used in the Advanced REACH Tool (ART) and Stoffenmanager® exposure assessment tools. The multipliers used in these tools were reported by Cherrie (1999; http://dx.doi.org/10.1080/104732299302530 ) and Cherrie et al. (2011; http://dx.doi.org/10.1093/annhyg/mer092 ), who developed them by positing input values for a standard Near-Field/Far-Field (NF/FF) model and then calculating ratios between NF and FF concentrations. This study revisited the calculations that produce the multipliers used in ART and Stoffenmanager and found that the recalculated general ventilation multipliers were up to 2.8 times (280%) higher than the values reported by Cherrie (1999), and that the recalculated NF and FF multipliers for 1-hr exposure were up to 1.2 times (17%) smaller and for 8-hr exposure up to 1.7 times (41%) smaller than the values reported by Cherrie et al. (2011). Considering that Stoffenmanager and the ART are classified as higher-tier regulatory exposure assessment tools, the errors in general ventilation multipliers should not be ignored. We recommend revising the general ventilation multipliers. A better solution is to integrate the NF/FF model into Stoffenmanager and the ART.
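
    The two-box mass balance behind these multipliers can be sketched directly; at steady state it gives the concentration ratio C_NF/C_FF = 1 + Q/β. A minimal sketch (the source strength G, ventilation rate Q, and inter-zone airflow β below are illustrative values, not the inputs used by Cherrie et al.):

```python
def nf_ff_steady_state(G, Q, beta):
    """Steady-state NF/FF concentrations for a constant near-field source.

    G    -- emission rate (mg/min)
    Q    -- room ventilation rate (m^3/min)
    beta -- inter-zone airflow between NF and FF (m^3/min)

    Mass balance:  NF: G + beta*C_FF = beta*C_NF
                   FF: beta*C_NF    = (beta + Q)*C_FF
    """
    c_ff = G / Q                # far field = well-mixed room concentration
    c_nf = c_ff + G / beta      # near field adds a G/beta increment
    return c_nf, c_ff

c_nf, c_ff = nf_ff_steady_state(G=100.0, Q=3.0, beta=1.5)
ratio = c_nf / c_ff             # equals 1 + Q/beta = 3 for these inputs
```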

  1. Exploring extra dimensions with scalar fields

    NASA Astrophysics Data System (ADS)

    Brown, Katherine; Mathur, Harsh; Verostek, Mike

    2018-05-01

    This paper provides a pedagogical introduction to the physics of extra dimensions by examining the behavior of scalar fields in three landmark models: the ADD, Randall-Sundrum, and DGP spacetimes. Results of this analysis provide qualitative insights into the corresponding behavior of gravitational fields and elementary particles in each of these models. In these "brane world" models, the familiar four dimensional spacetime of everyday experience is called the brane and is a slice through a higher dimensional spacetime called the bulk. The particles and fields of the standard model are assumed to be confined to the brane, while gravitational fields are assumed to propagate in the bulk. For all three spacetimes, we calculate the spectrum of propagating scalar wave modes and the scalar field produced by a static point source located on the brane. For the ADD and Randall-Sundrum models, at large distances, the field looks like that of a point source in four spacetime dimensions, but at short distances, it crosses over to a form appropriate to the higher dimensional spacetime. For the DGP model, the field has the higher dimensional form at long distances rather than short. The behavior of these scalar fields, derived using only undergraduate-level mathematics, closely mirrors the results that one would obtain by performing the far more difficult task of analyzing the behavior of gravitational fields in these spacetimes.
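
    The crossover described here can be reproduced numerically for a single compactified extra dimension of radius R: the static potential is a 4D Coulomb term plus a tower of Yukawa terms from the Kaluza-Klein modes of mass n/R. A toy sketch (the overall normalization and the choice R = 1 are arbitrary):

```python
import math

def kk_potential(r, R, n_max=10000):
    """Static-source scalar potential (up to normalization) from a
    Kaluza-Klein tower, summed as Yukawa terms:
        phi(r) ~ (1/r) * (1 + 2 * sum_n exp(-n*r/R))
    Tends to 1/r for r >> R (4D) and to 2R/r**2 for r << R (5D)."""
    s = sum(math.exp(-n * r / R) for n in range(1, n_max + 1))
    return (1.0 + 2.0 * s) / r

R = 1.0
far = kk_potential(10.0 * R, R)    # ~ 1/r      : 4D behavior
near = kk_potential(0.01 * R, R)   # ~ 2R/r**2  : 5D behavior
```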

  2. Evolution of the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Einasto, J.

    2017-07-01

    In the evolution of the cosmic web, dark energy plays an important role. To understand that role, we investigate the evolution of superclusters in four cosmological models: the standard model SCDM, the conventional model LCDM, the open model OCDM, and a hyper-dark-energy model HCDM. Numerical simulations of the evolution are performed in a box of size 1024 Mpc/h. Model superclusters are compared with superclusters found for the Sloan Digital Sky Survey (SDSS). Superclusters are identified using density fields. LCDM superclusters have properties very close to those of the observed SDSS superclusters. The standard model SCDM has about 2 times more superclusters than the other models, but SCDM superclusters are smaller and have lower luminosities. Superclusters, as principal structural elements of the cosmic web, are present at all cosmological epochs.
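
    A density-field supercluster search of this kind reduces, in its simplest one-dimensional form, to thresholding a smoothed density field and collecting contiguous over-dense regions. A toy illustration (threshold and field values are invented):

```python
def find_superclusters(density, threshold):
    """Return (start, end) index ranges of contiguous over-dense regions."""
    regions, start = [], None
    for i, d in enumerate(density):
        if d >= threshold and start is None:
            start = i                       # entering an over-dense region
        elif d < threshold and start is not None:
            regions.append((start, i))      # leaving it
            start = None
    if start is not None:                   # region runs to the edge
        regions.append((start, len(density)))
    return regions

field = [0.1, 1.2, 1.5, 0.3, 0.2, 2.0, 2.1, 1.9, 0.4]
clumps = find_superclusters(field, threshold=1.0)
```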

  3. Baryon non-invariant couplings in Higgs effective field theory

    NASA Astrophysics Data System (ADS)

    Merlo, Luca; Saa, Sara; Sacristán-Barbero, Mario

    2017-03-01

    The basis of leading operators which are not invariant under baryon number is constructed within the Higgs effective field theory. This list contains 12 dimension six operators, which preserve the combination B-L, to be compared to only 6 operators for the standard model effective field theory. The discussion of the independent flavour contractions is presented in detail for a generic number of fermion families adopting the Hilbert series technique.

  4. DSSTOX WEBSITE LAUNCH: IMPROVING PUBLIC ACCESS TO DATABASES FOR BUILDING STRUCTURE-TOXICITY PREDICTION MODELS

    EPA Science Inventory

    DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
    Ann M. Richard
    US Environmental Protection Agency, Research Triangle Park, NC, USA

    Distributed: Decentralized set of standardized, field-delimited databases,...

  5. Simulation and mitigation of higher-order ionospheric errors in PPP

    NASA Astrophysics Data System (ADS)

    Zus, Florian; Deng, Zhiguo; Wickert, Jens

    2017-04-01

    We developed a rapid and precise algorithm to compute ionospheric phase advances in a realistic electron density field. The electron density field is derived from a plasmaspheric extension of the International Reference Ionosphere (Gulyaeva and Bilitza, 2012) and the magnetic field stems from the International Geomagnetic Reference Field. For specific station locations, elevation and azimuth angles the ionospheric phase advances are stored in a look-up table. The higher-order ionospheric residuals are computed by forming the standard linear combination of the ionospheric phase advances. In a simulation study we examine how the higher-order ionospheric residuals leak into estimated station coordinates, clocks, zenith delays and tropospheric gradients in precise point positioning. The simulation study includes a few hundred globally distributed stations and covers the time period 1990-2015. We take a close look at the estimated zenith delays and tropospheric gradients as they are considered a data source for meteorological and climate related research. We also show how the by-product of this simulation study, the look-up tables, can be used to mitigate higher-order ionospheric errors in practice. Gulyaeva, T.L., and Bilitza, D. Towards ISO Standard Earth Ionosphere and Plasmasphere Model. In: New Developments in the Standard Model, edited by R.J. Larsen, pp. 1-39, NOVA, Hauppauge, New York, 2012, available at https://www.novapublishers.com/catalog/product_info.php?products_id=35812
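
    The "standard linear combination" referred to here is the dual-frequency ionosphere-free combination, which cancels the first-order 1/f² ionospheric term and leaves exactly the higher-order residuals. A sketch using the GPS L1/L2 frequencies for illustration (the geometric range and ionospheric term below are made-up numbers):

```python
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1/L2 carrier frequencies, Hz

def ionosphere_free(L1, L2):
    """Combine two observables (in meters) so that the first-order
    ionospheric term, proportional to 1/f**2, cancels exactly."""
    a = F1**2 / (F1**2 - F2**2)
    b = -F2**2 / (F1**2 - F2**2)
    return a * L1 + b * L2

rho, I = 2.0e7, 5.0e17          # geometric range (m), ionospheric term
L1 = rho + I / F1**2            # first-order model of each observable
L2 = rho + I / F2**2
LC = ionosphere_free(L1, L2)    # recovers rho; 1/f**2 term is gone
```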

  6. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
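
    For reference, a minimal stochastic EnKF analysis step (the textbook perturbed-observation variant, not the deterministic mean-field scheme proposed in the paper) looks like:

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """One EnKF analysis step with perturbed observations.
    ensemble: (d, N) state ensemble; y: (m,) observation;
    H: (m, d) observation operator; R: (m, m) observation covariance."""
    d, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    C = X @ X.T / (N - 1)                           # sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return ensemble + K @ (Y - H @ ensemble)        # analysis ensemble

rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, size=(2, 500))           # prior ~ N(0, I)
H = np.array([[1.0, 0.0]])                          # observe component 0
R = np.array([[0.1]])
post = enkf_update(ens, np.array([2.0]), H, R, rng)
# posterior mean of the observed component is pulled toward y = 2.0
```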

  7. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  8. Toward an Educational View of Scaling: Sufficing Standard and Not a Gold Standard

    ERIC Educational Resources Information Center

    Hung, David; Lee, Shu-Shing; Wu, Longkai

    2015-01-01

    Educational innovations in Singapore have reached fruition. It is now important to consider different innovations and issues that enable innovations to scale and become widespread. This proposition paper outlines two views of scaling and its relation to education systems. We argue that a linear model used in the medical field stresses top-down…

  9. Model with a gauged lepton flavor SU(2) symmetry

    NASA Astrophysics Data System (ADS)

    Chiang, Cheng-Wei; Tsumura, Koji

    2018-05-01

    We propose a model having a gauged SU(2) symmetry associated with the second and third generations of leptons, dubbed SU(2)_μτ, of which U(1)_{Lμ−Lτ} is an Abelian subgroup. In addition to the Standard Model fields, we introduce two types of scalar fields. One exotic scalar field is an SU(2)_μτ doublet and SM singlet that develops a nonzero vacuum expectation value at a presumably multi-TeV scale to completely break the SU(2)_μτ symmetry, rendering three massive gauge bosons. At the same time, the other exotic scalar field, carrying electroweak as well as SU(2)_μτ charges, is induced to have a nonzero vacuum expectation value as well and breaks the mass degeneracy between the muon and tau. We examine how the new particles in the model contribute to the muon anomalous magnetic moment in the parameter space compliant with the Michel decays of the tau.

  10. Higgs decays to Z Z and Z γ in the standard model effective field theory: An NLO analysis

    NASA Astrophysics Data System (ADS)

    Dawson, S.; Giardino, P. P.

    2018-05-01

    We calculate the complete one-loop electroweak corrections to the inclusive H → ZZ and H → Zγ decays in the dimension-6 extension of the Standard Model Effective Field Theory (SMEFT). The corrections to H → ZZ are computed for on-shell Z bosons and are a precursor to the physical H → Z f f̄ calculation. We present compact numerical formulas for our results and demonstrate that the logarithmic contributions that result from the renormalization group evolution of the SMEFT coefficients are larger than the finite next-to-leading-order contributions to the decay widths. As a byproduct of our calculation, we obtain the first complete result for the finite corrections to G_μ in the SMEFT.

  11. Effective field theory of integrating out sfermions in the MSSM: Complete one-loop analysis

    NASA Astrophysics Data System (ADS)

    Huo, Ran

    2018-04-01

    We apply the covariant derivative expansion of the Coleman-Weinberg potential to the sfermion sector in the minimal supersymmetric standard model, matching it to the relevant dimension-6 operators in the standard model effective field theory at one-loop level. Emphasis is placed on nondegenerate large soft supersymmetry-breaking mass squares, and the most general analytical Wilson coefficients are obtained for all pure bosonic dimension-6 operators. In addition to the non-logarithmic contributions, they generally also have logarithmic contributions. Various numerical results are shown; in particular, the constraints in the large Xt branch reproducing the 125 GeV Higgs mass can be pushed to high values, allowing the future FCC-ee experiment to almost completely probe the low stop-mass region, even given the uncertainty in the Higgs mass calculation.

  12. Insertion device calculations with mathematica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, R.; Lidia, S.

    1995-02-01

    The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high-level languages like Fortran. In the present era, there are higher-level programming environments like IDL®, MATLAB®, and Mathematica® in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators, and general radiation calculations for undulators.

  13. Field testing of new-technology ambient air ozone monitors.

    PubMed

    Ollison, Will M; Crow, Walt; Spicer, Chester W

    2013-07-01

    Multibillion-dollar strategies control ambient air ozone (O3) levels in the United States, so it is essential that the measurements made to assess compliance with regulations be accurate. The predominant method employed to monitor O3 is ultraviolet (UV) photometry. Instruments employ a selective manganese dioxide or heated silver wool "scrubber" to remove O3 to provide a zero reference signal. Unfortunately, such scrubbers remove atmospheric constituents that absorb 254-nm light, causing measurement interference. Water vapor also interferes with the measurement under some circumstances. We report results of a 3-month field test of two new instruments designed to minimize interferences (2B Technologies model 211; Teledyne-API model 265E) that were operated in parallel with a conventional Thermo Scientific model 49C O3 monitor. The field test was hosted by the Houston Regional Monitoring Corporation (HRM). The model 211 photometer scrubs O3 with excess nitric oxide (NO) generated in situ by photolysis of added nitrous oxide (N2O) to provide a reference signal, eliminating the need for a conventional O3 scrubber. The model 265E analyzer directly measures O3-NO chemiluminescence from added excess NO to quantify O3 in the sample stream. Extensive quality control (QC) and collocated monitoring data are assessed to evaluate potential improvements to the accuracy of O3 compliance monitoring. Two new-technology ozone monitors were compared with a conventional monitor under field conditions. Over 3 months the conventional monitor reported more exceedances of the current standard than the new instruments, which could potentially result in an area being misjudged as "nonattainment." Instrument drift can affect O3 data accuracy, and the same degree of drift has a proportionally greater compliance effect as standard stringency is increased. 
Enhanced data quality assurance and data adjustment may be necessary to achieve the improved accuracy required to judge compliance with tighter standards.
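
    The UV photometry discussed above infers O3 from the Beer-Lambert law: the number density follows from the ratio of reference and sample intensities. A round-trip sketch (the 254-nm cross-section is the commonly quoted value; the cell length and intensities are illustrative):

```python
import math

SIGMA_254 = 1.147e-17   # O3 absorption cross-section at 254 nm, cm^2/molecule

def ozone_number_density(I, I0, path_cm):
    """Molecules per cm^3 from sample (I) and reference (I0) intensities,
    via Beer-Lambert: I = I0 * exp(-sigma * n * L)."""
    return math.log(I0 / I) / (SIGMA_254 * path_cm)

# Round trip with a hypothetical true density (~100 ppb at sea level):
I0, n_true, L = 1.0, 2.46e12, 38.0
I = I0 * math.exp(-SIGMA_254 * n_true * L)
n_est = ozone_number_density(I, I0, L)    # recovers n_true
```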

  14. Confinement of the Crab Nebula with tangled magnetic field by its supernova remnant

    NASA Astrophysics Data System (ADS)

    Tanaka, Shuta J.; Toma, Kenji; Tominaga, Nozomu

    2018-05-01

    A pulsar wind is a relativistic outflow dominated by Poynting energy at its base. Based on the standard ideal magnetohydrodynamic (MHD) model of pulsar wind nebulae (PWNe) with an ordered magnetic field, the observed slow expansion vPWN ≪ c requires the wind to be dominated by kinetic energy upstream of its termination shock, which conflicts with pulsar wind theory (the σ-problem). In this paper, we extend the standard model of PWNe by phenomenologically taking into account conversion of the ordered to turbulent magnetic field and dissipation of the turbulent magnetic field. Disordering of the magnetic structure is inferred from recent three-dimensional relativistic ideal MHD simulations, while magnetic dissipation is a non-ideal MHD effect requiring a finite resistivity. We apply this model to the Crab Nebula and find that the conversion effect is important for the flow deceleration, while the dissipation effect is not. Even for a Poynting-dominated pulsar wind, we obtain the Crab Nebula's vPWN by adopting a finite conversion time-scale of ~0.3 yr. Magnetic dissipation primarily affects the synchrotron radiation properties. Any value of the pulsar wind magnetization σw is allowed within the present model of the PWN dynamics alone, and even a small termination shock radius of ≪0.1 pc reproduces the observed dynamical features of the Crab Nebula. In order to establish a high-σw model of PWNe, it is important to extend the present model by taking into account the broadband spectrum and its spatial profiles.

  15. User Modeling in Adaptive Hypermedia Educational Systems

    ERIC Educational Resources Information Center

    Martins, Antonio Constantino; Faria, Luiz; Vaz de Carvalho, Carlos; Carrapatoso, Eurico

    2008-01-01

    This document is a survey in the research area of User Modeling (UM) for the specific field of Adaptive Learning. The aims of this document are: To define what it is a User Model; To present existing and well known User Models; To analyze the existent standards related with UM; To compare existing systems. In the scientific area of User Modeling…

  16. Developing Statistical Models to Assess Transplant Outcomes Using National Registries: The Process in the United States.

    PubMed

    Snyder, Jon J; Salkowski, Nicholas; Kim, S Joseph; Zaun, David; Xiong, Hui; Israni, Ajay K; Kasiske, Bertram L

    2016-02-01

    Created by the US National Organ Transplant Act in 1984, the Scientific Registry of Transplant Recipients (SRTR) is obligated to publicly report data on transplant program and organ procurement organization performance in the United States. These reports include risk-adjusted assessments of graft and patient survival, and programs performing worse or better than expected are identified. The SRTR currently maintains 43 risk adjustment models for assessing posttransplant patient and graft survival and, in collaboration with the SRTR Technical Advisory Committee, has developed and implemented a new systematic process for model evaluation and revision. Patient cohorts for the risk adjustment models are identified, and single-organ and multiorgan transplants are defined, then each risk adjustment model is developed following a prespecified set of steps. Model performance is assessed, the model is refit to a more recent cohort before each evaluation cycle, and then it is applied to the evaluation cohort. The field of solid organ transplantation is unique in the breadth of the standardized data that are collected. These data allow for quality assessment across all transplant providers in the United States. A standardized process of risk model development using data from national registries may enhance the field.

  17. Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields

    DOE PAGES

    Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...

    2016-06-23

    Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
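
    The two functional forms compared here differ by a polynomial prefactor: the overlap of two Slater-type densities exp(−Br) multiplies the Born-Mayer exponential by (1 + Br + (Br)²/3). A sketch in arbitrary units (parameter values are invented; this is the generic Slater-overlap form, not the paper's fitted potentials):

```python
import math

def born_mayer(r, A, B):
    """Conventional short-range repulsion: a bare exponential."""
    return A * math.exp(-B * r)

def slater_overlap(r, A, B):
    """Slater-like density overlap: polynomial prefactor softens the
    repulsion relative to pure Born-Mayer at intermediate separations."""
    x = B * r
    return A * (1.0 + x + x * x / 3.0) * math.exp(-x)

# At any r the two differ exactly by the polynomial prefactor:
ratio = slater_overlap(2.0, 1.0, 2.0) / born_mayer(2.0, 1.0, 2.0)
```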

  18. The Janus Cosmological Model (JCM) : An answer to the missing cosmological antimatter

    NASA Astrophysics Data System (ADS)

    D'Agostini, Gilles; Petit, Jean-Pierre

    2017-01-01

    The absence of cosmological antimatter remains unexplained. Sakharov's 1967 twin-universe model suggests an answer: the excess of matter and antiquarks produced in our universe is balanced by an equivalent excess of antimatter and quarks in the twin universe. The JCM provides a geometrical framework, with a single manifold and two metrics that are solutions of two coupled field equations, to describe two populations of particles, one with positive energy-mass and the other with negative energy-mass: the 'twin matter'. From a quantum point of view, it is a copy of the standard matter but with negative mass and energy. The matter-antimatter duality holds in both sectors. The standard and twin matters do not interact except through the gravitational coupling expressed in the field equations. The twin matter is unobservable with apparatus made of ordinary matter. The field equations show that matter and twin matter repel each other. Twin matter surrounding galaxies explains their confinement (dark matter role) and, in the dust universe era, mainly drives the process of expansion of the positive sector, responsible for the observed acceleration (dark energy role).

  19. Automated workflows for data curation and standardization of chemical structures for QSAR modeling

    EPA Science Inventory

    Large collections of chemical structures and associated experimental data are publicly available, and can be used to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experime...

  20. Interest rates in quantum finance: the Wilson expansion and Hamiltonian.

    PubMed

    Baaquie, Belal E

    2009-10-01

    Interest rate instruments form a major component of the capital markets. The Libor market model (LMM) is the finance industry's standard interest rate model for both Libor and Euribor, which are the most important interest rates. The quantum finance formulation of the Libor market model is given in this paper and leads to a key generalization: all the Libors, for different future times, are imperfectly correlated. A key difference between a forward interest rate model and the LMM lies in the fact that the LMM is calibrated directly from the observed market interest rates. The short-distance Wilson expansion [Phys. Rev. 179, 1499 (1969)] of a Gaussian quantum field is shown to provide the generalization of Ito calculus; in particular, the Wilson expansion of the Gaussian quantum field A(t,x) driving the Libors yields a derivation of the Libor drift term that incorporates imperfect correlations of the different Libors. The logarithm of Libor phi(t,x) is defined and provides an efficient and compact representation of the quantum field theory of the Libor market model. The Lagrangian and Feynman path integrals of the Libor market model of interest rates are obtained, as well as a derivation of its Hamiltonian. The Hamiltonian formulation of the martingale condition provides an exact solution for the nonlinear drift of the Libor market model. The quantum finance formulation of the LMM is shown to reduce to the industry-standard Brace-Gatarek-Musiela-Jamshidian model when the forward interest rates are taken to be exactly correlated.
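
    The martingale condition that fixes the Libor drift can be illustrated in a one-Libor toy model (ordinary lognormal dynamics, not the quantum-field formulation of the paper): under its own forward measure a Libor is driftless, so its Monte Carlo mean stays at the initial value. All parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
L0, sigma, T, n_paths = 0.03, 0.2, 1.0, 200_000

# Driftless lognormal evolution: L_T = L0 * exp(-sigma^2 T/2 + sigma sqrt(T) Z)
Z = rng.standard_normal(n_paths)
LT = L0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * Z)

# The martingale condition means E[L_T] = L0 up to Monte Carlo error;
# any mis-specified drift would shift this mean away from L0.
mc_mean = LT.mean()
```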

  1. Stable solutions of inflation driven by vector fields

    NASA Astrophysics Data System (ADS)

    Emami, Razieh; Mukohyama, Shinji; Namba, Ryo; Zhang, Ying-li

    2017-03-01

    Many models of inflation driven by vector fields alone have been known to be plagued by pathological behaviors, namely ghost and/or gradient instabilities. In this work, we seek a new class of vector-driven inflationary models that evade all of the mentioned instabilities. We build our analysis on the Generalized Proca Theory with an extension to three vector fields to realize isotropic expansion. We obtain the conditions required for quasi-de Sitter solutions to be an attractor, analogous to the standard slow-roll ones, and the conditions for their stability at the level of linearized perturbations. Identifying the remedy to the existing unstable models, we provide a simple example and explicitly show its stability. This significantly broadens our knowledge of vector inflationary scenarios, reviving potential phenomenological interest in this class of models.

  2. Wide-angle vision for road views

    NASA Astrophysics Data System (ADS)

    Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.

    2013-03-01

    The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, Samuel F.; Romero-Gomez, Pedro D. J.; Richmond, Marshall C.

    Standards provide recommendations for the best practices in the installation of current meters for measuring fluid flow in closed conduits. These include PTC-18 and IEC-41. Both of these standards refer to the requirements of ISO Standard 3354 for cases where the velocity distribution is assumed to be regular and the flow steady. Due to the nature of the short converging intakes of Kaplan hydroturbines, these assumptions may be invalid if current meters are intended to be used to characterize turbine flows. In this study, we examine a combination of measurement guidelines from both ISO standards by means of virtual current meters (VCM) set up over a simulated hydroturbine flow field. To this purpose, a computational fluid dynamics (CFD) model was developed to model the velocity field of a short converging intake of the Ice Harbor Dam on the Snake River, in the State of Washington. The detailed geometry and resulting wake of the submersible traveling screen (STS) at the first gate slot was of particular interest in the development of the CFD model using a detached eddy simulation (DES) turbulence solution. An array of virtual point velocity measurements was extracted from the resulting velocity field to simulate VCM at two virtual measurement (VM) locations at different distances downstream of the STS. The discharge through each bay was calculated from the VM using the graphical integration solution to the velocity-area method. This method of representing practical velocimetry techniques in a numerical flow field has been successfully used in a range of marine and conventional hydropower applications. A sensitivity analysis was performed to observe the effect of the VCM array resolution on the discharge error. The downstream VM section required 11–33% fewer VCM in the array than the upstream VM location to achieve a given discharge error.
    In general, more instruments were required to quantify the discharge at high levels of accuracy when the STS was introduced because of the increased spatial variability of the flow velocity.
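
    The velocity-area method that turns the VCM array into a discharge estimate amounts to integrating the axial velocity over the measurement section. A minimal rectangle-rule stand-in for the graphical integration used in the study (grid, section dimensions, and velocities are hypothetical):

```python
import numpy as np

def discharge(v, width, height):
    """Velocity-area discharge estimate, rectangle rule.
    v: (ny, nx) axial velocities (m/s) on a uniform grid spanning a
    rectangular section of the given width and height (m)."""
    ny, nx = v.shape
    cell_area = (width / nx) * (height / ny)   # area represented per meter
    return v.sum() * cell_area                 # sum of v_i * A_i

# Uniform 2 m/s profile on a 10x10 array over a 5 m x 4 m section:
v = np.full((10, 10), 2.0)
Q = discharge(v, width=5.0, height=4.0)        # 2 m/s * 20 m^2 = 40 m^3/s
```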

  4. Human exposure standards in the frequency range 1 Hz To 100 kHz: the case for adoption of the IEEE standard.

    PubMed

    Patrick Reilly, J

    2014-10-01

    Differences between IEEE C95 Standards (C95.6-2002 and C95.1-2005) in the low-frequency (1 Hz-100 kHz) and the ICNIRP-2010 guidelines appear across the frequency spectrum. Factors accounting for lack of convergence include: differences between the IEEE standards and the ICNIRP guidelines with respect to biological induction models, stated objectives, data trail from experimentally derived thresholds through physical and biological principles, selection and justification of safety/reduction factors, use of probability models, compliance standards for the limbs as distinct from the whole body, defined population categories, strategies for central nervous system protection below 20 Hz, and correspondence of environmental electric field limits with contact currents. This paper discusses these factors and makes the case for adoption of the limits in the IEEE standards.

  5. The Unsteady Temperature Field in a Turbine Blade Cooling Channel

    DTIC Science & Technology

    2003-03-01

    The Unsteady Temperature Field in a Turbine Blade Cooling Channel. T. Arts, Von Karman Institute for Fluid Dynamics, 72, chaussée de Waterloo. [...] Wall coordinates (y+ and T+) are used for this purpose: y+ = y uτ/ν and T+ = (Twall − T) ρ Cp uτ/qwall (1). [...] poor performance of the Baldwin-Lomax model and, to some extent, of the standard k-ε model (Fig. 5: profiles of T+ versus y+).

  6. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    DOE PAGES

    Moon, Jae; Manuel, Lance; Churchfield, Matthew; ...

    2017-12-28

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.
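
    The MMLR step amounts to a single least-squares solve shared by all response parameters (each column of the coefficient matrix serves one response). A synthetic-data sketch (predictors, coefficients, and noise level are invented, not the wake parameters of the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# 200 samples, intercept plus two predictors, two response parameters:
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
B_true = np.array([[1.0, -0.5],
                   [2.0,  0.3],
                   [0.0,  1.5]])                  # (3 coefs) x (2 responses)
Y = X @ B_true + 0.01 * rng.normal(size=(200, 2))

# One lstsq call fits both responses jointly (MMLR):
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)     # (3, 2) coefficient matrix
```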

  7. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moon, Jae; Manuel, Lance; Churchfield, Matthew

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.

  8. Explicit treatment for Dirichlet, Neumann and Cauchy boundary conditions in POD-based reduction of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2018-05-01

    In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields with sufficient accuracy for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and by POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and the overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
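    The effect of separating boundary conditions from the POD subspace can be illustrated with a toy 1D example. The sketch below is not the authors' code and uses a simpler lift-function variant of the same idea: snapshots honoring fixed Dirichlet heads are reduced either directly (standard POD) or after subtracting a lift profile that satisfies the boundary values, so that reconstructions from the split variant meet the Dirichlet conditions exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ns = 101, 30
x = np.linspace(0.0, 1.0, nx)
hL, hR = 5.0, 2.0                        # fixed Dirichlet heads at both ends

# Hypothetical snapshots honoring the boundary heads, with interior
# variability (a stand-in for transient groundwater model output)
lift = hL + (hR - hL) * x                # linear profile satisfying the BCs
S = np.column_stack([lift + a * np.sin(np.pi * x) + b * np.sin(2.0 * np.pi * x)
                     for a, b in rng.normal(0.0, 1.0, (ns, 2))])

def pod_basis(A, r):
    """Leading r left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

r = 2
# Standard POD: project raw snapshots; boundary heads are only approximated
Phi = pod_basis(S, r)
h_std = Phi @ (Phi.T @ S[:, 0])

# Split variant: remove a lift satisfying the BCs, reduce only the
# homogeneous remainder, then add the lift back
Phi0 = pod_basis(S - lift[:, None], r)
h_split = lift + Phi0 @ (Phi0.T @ (S[:, 0] - lift))

print(abs(h_std[0] - hL), abs(h_split[0] - hL))   # boundary errors at x = 0
```

    Because the homogeneous snapshots vanish at the boundaries, every basis vector of the split variant does too, and the reconstructed heads hit the Dirichlet values to machine precision.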

  9. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the increasing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of the low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned for the LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite laser ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new-generation sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions.
As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and their consistent stochastic modeling within the adjustment process.
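    The role of round-off in standard-precision processing can be illustrated independently of the simulator: accumulating many small increments in single precision loses accuracy that compensated (Kahan) summation largely recovers. This is a generic numerical sketch, not the enhanced-precision implementation used in the study.

```python
import numpy as np

def naive_sum32(values):
    """Plain sequential accumulation in single precision."""
    acc = np.float32(0.0)
    for v in values:
        acc = np.float32(acc + np.float32(v))
    return float(acc)

def kahan_sum32(values):
    """Compensated (Kahan) summation in single precision."""
    acc = np.float32(0.0)
    comp = np.float32(0.0)               # carries the lost low-order bits
    for v in values:
        y = np.float32(np.float32(v) - comp)
        t = np.float32(acc + y)
        comp = np.float32(np.float32(t - acc) - y)
        acc = t
    return float(acc)

rng = np.random.default_rng(0)
increments = rng.uniform(0.0, 1.0e-3, 100_000)   # many tiny increments
reference = float(np.sum(increments))            # double-precision reference

err_naive = abs(naive_sum32(increments) - reference)
err_kahan = abs(kahan_sum32(increments) - reference)
print(err_naive, err_kahan)          # compensated error is far smaller
```

    The same principle, scaled up, motivates the enhanced-precision simulator versions: when the observable accuracy approaches the nm level, accumulation round-off can dominate the error budget unless it is explicitly controlled.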

  10. Modelling daily water temperature from air temperature for the Missouri River.

    PubMed

    Zhu, Senlin; Nyarko, Emmanuel Karlo; Hadzima-Nyarko, Marijana

    2018-01-01

    The bio-chemical and physical characteristics of a river are directly affected by water temperature, which thereby affects the overall health of aquatic ecosystems. Accurately estimating water temperature is a complex problem. Modelling of river water temperature is usually based on a suitable mathematical model and field measurements of various atmospheric factors. In this article, the air-water temperature relationship of the Missouri River is investigated by developing three different machine learning models (Artificial Neural Network (ANN), Gaussian Process Regression (GPR), and Bootstrap Aggregated Decision Trees (BA-DT)). Standard models (linear regression, non-linear regression, and stochastic models) are also developed and compared to the machine learning models. Among the three standard models, the stochastic model clearly outperforms the linear and non-linear regression models. All three machine learning models have comparable results and outperform the stochastic model, with GPR having slightly better results for stations No. 2 and 3, while BA-DT has slightly better results for station No. 1. The machine learning models are very effective tools which can be used for the prediction of daily river temperature.
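    The standard linear air-water model that the machine learning models are benchmarked against is simple enough to sketch directly. The data here are synthetic stand-ins, not the Missouri River observations, and the coefficients are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic daily data (illustrative only, not Missouri River observations):
# water temperature responding roughly linearly to air temperature [°C]
t_air = rng.uniform(-5.0, 30.0, 365)
t_water = 2.0 + 0.75 * t_air + rng.normal(0.0, 1.0, 365)

# Standard linear air-water model: T_w = a + b * T_a
b, a = np.polyfit(t_air, t_water, 1)

pred = a + b * t_air
rmse = float(np.sqrt(np.mean((pred - t_water) ** 2)))
print(f"T_w = {a:.2f} + {b:.2f} * T_a,  RMSE = {rmse:.2f} °C")
```

    The stochastic and machine learning models in the study improve on this baseline mainly by capturing seasonal structure and nonlinearity that a single linear fit cannot.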

  11. On the exotic Higgs decays in effective field theory.

    PubMed

    Bélusca-Maïto, Hermès; Falkowski, Adam

    2016-01-01

    We discuss exotic Higgs decays in an effective field theory where the Standard Model is extended by dimension-6 operators. We review and update the status of two-body lepton- and quark-flavor-violating decays involving the Higgs boson. We also comment on the possibility of observing three-body flavor-violating Higgs decays in this context.

  12. Do Differing Types of Field Experiences Make a Difference in Teacher Candidates' Perceived Level of Competence?

    ERIC Educational Resources Information Center

    Caprano, Mary Margaret; Caprano, Robert M.; Helfeldt, Jack

    2010-01-01

    Little research has been conducted to directly compare the effectiveness of different models of field-based learning experiences and little has been reported on the use of the Interstate New Teacher Assessment and Support Consortium (INTASC) standards in establishing a formative assessment for teacher candidates (TCs). The current study used the…

  13. Fate of inflation and the natural reduction of vacuum energy

    NASA Astrophysics Data System (ADS)

    Nakamichi, Akika; Morikawa, Masahiro

    2014-04-01

    In the standard cosmology, an artificial fine-tuning of the potential is inevitable for a vanishing cosmological constant, even though a slow-rolling uniform scalar field easily causes cosmic inflation. We focus on the general fact that any potential with a negative region can temporarily halt the cosmic expansion at the end of inflation, where the field tends to diverge. This violent evolution naturally causes particle production and a strong instability of the uniform configuration of the field. The decay of this uniform scalar field would leave a vanishing cosmological constant as well as locally collapsed objects. The universe then continues to evolve into the standard Friedmann model. We study the details of the instability, based on linear analysis, and the subsequent fate of the scalar field, based on non-linear numerical analysis. The collapsed scalar field would easily exceed the Kaup limiting mass and form primordial black holes, which may play an important role in galaxy formation in later stages of cosmic expansion. We describe this scenario systematically by identifying the scalar field as a Bose-Einstein condensate (BEC) of bosons and the inflation as the process of their phase transition.

  14. Nonlinear interactions between black holes and Proca fields

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Witek, Helvi; Cardoso, Vitor

    2015-12-01

    Physics beyond the standard model is an important candidate for dark matter, and an interesting testing ground for strong-field gravity: the equivalence principle ‘forces’ all forms of matter to fall in the same way, and it is therefore natural to look for imprints of these fields in regions with strong gravitational fields, such as compact stars or black holes (BHs). Here we study general relativity minimally coupled to a massive vector field, and how BHs in this theory lose ‘hair’. Our results indicate that BHs can sustain Proca field condensates for extremely long time-scales.

  15. I. Aspects of the Dark Matter Problem. II. Fermion Balls

    NASA Astrophysics Data System (ADS)

    Tetradis, Nikolaos Athanassiou

    The first part of this thesis deals with the dark matter problem. A simple non-supersymmetric extension of the standard model is presented, which provides dark matter candidates not excluded by the existing dark matter searches. The simplest candidate is the neutral component of a zero hypercharge triplet, with vector gauge interactions. The upper bound on its mass is a few TeV. We also discuss possible modifications of the standard freeze-out scenario, induced by the presence of a phase transition. More specifically, if the critical temperature of the electroweak phase transition is sufficiently small, it can change the final abundances of heavy dark matter particles, by keeping them massless for a long time. Recent experimental bounds on the Higgs mass from LEP imply that this is not the case in the minimal standard model. In the second part we discuss non-trivial configurations involving fermions which obtain their mass through Yukawa interactions with a scalar field. Under certain conditions, the vacuum expectation value of the scalar field is shifted from the minimum of the effective potential, in regions of high fermion density. This may result in the formation of fermion bound states. We study two such cases: (a) Using the non-linear SU(3)L times SU(3)R chiral Lagrangian coupled to a field theory of nuclear forces, we show that a bound state of baryons with a well defined surface may conceivably form in the presence of kaon condensation. This state is of similar density to ordinary nuclei, but has net strangeness equal to about two thirds the baryon number. We discuss the properties of lumps of strange baryon matter with baryon number between ~20 and ~10^57, where gravitational effects become important. (b) The Higgs field near a very heavy top quark or any other heavy fermion is expected to be significantly deformed.
By computing explicit solutions of the classical equations of motion for a spherically symmetric configuration without gauge fields, we show that in the standard model this cannot happen without violating either vacuum stability or perturbation theory at energies very close to the top quark mass.

  16. US Army Research Laboratory (ARL) Standard for Characterization of Electric-Field Sensors, 10 Hz to 10 kHz

    DTIC Science & Technology

    2016-11-01

    The US Army Research Laboratory (ARL) has designed and built a one-of-a-kind electric-field sensor...direction. (Right) Computer-aided design model of the ARL electric-field cage with insulated mounting

  17. Indices of climate change based on patterns from CMIP5 models, and the range of projections

    NASA Astrophysics Data System (ADS)

    Watterson, I. G.

    2018-05-01

    Changes in temperature, precipitation, and other variables simulated by 40 current climate models for the 21st century are approximated as the product of the global mean warming and a spatial pattern of scaled changes. These fields of standardized change contain consistent features of simulated change, such as larger warming over land and increased high-latitude precipitation. However, they also differ across the ensemble, with standard deviations exceeding 0.2 for temperature over most continents, and 6% per degree for tropical precipitation. These variations are found to correlate, often strongly, with indices based on those of modes of interannual variability. Annular mode indices correlate, across the 40 models, with regional pressure changes and seasonal rainfall changes, particularly in South America and Europe. Equatorial ocean warming rates link to widespread anomalies, similarly to ENSO. A Pacific-Indian Dipole (PID) index representing the gradient in warming across the maritime continent is correlated with Australian rainfall, with a coefficient r of -0.8. The component of equatorial warming orthogonal to this index, denoted EQN, has strong links to temperature and rainfall in Africa and the Americas. It is proposed that these indices and their associated patterns might be termed "modes of climate change". This is supported by an analysis of empirical orthogonal functions for the ensemble of standardized fields. Can such indices be used to help constrain projections? The relative similarity of the PID and EQN values of change, from models that more skilfully simulate the present-day tropical pressure fields, provides a basis for this.
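    The empirical-orthogonal-function analysis mentioned above amounts to a singular value decomposition of the ensemble anomaly matrix. A minimal sketch, with a synthetic 40-member ensemble built from two assumed spatial patterns in place of the CMIP5 standardized-change fields:

```python
import numpy as np

rng = np.random.default_rng(3)
n_models, n_grid = 40, 500
g = np.linspace(0.0, 1.0, n_grid)

# Hypothetical ensemble: each row is one model's standardized-change field,
# built from two shared spatial patterns plus model-specific noise
pat1 = np.sin(2.0 * np.pi * g)
pat2 = np.cos(4.0 * np.pi * g)
A = (rng.normal(0.0, 1.0, (n_models, 1)) * pat1
     + rng.normal(0.0, 0.5, (n_models, 1)) * pat2
     + rng.normal(0.0, 0.1, (n_models, n_grid)))

# EOF analysis across the ensemble: SVD of the anomaly matrix
anoms = A - A.mean(axis=0)
U, s, Vt = np.linalg.svd(anoms, full_matrices=False)
explained = s**2 / np.sum(s**2)

eofs = Vt[:2]                 # leading spatial patterns across the ensemble
indices = U[:, :2] * s[:2]    # per-model index values (principal components)
print(explained[:3].round(3))
```

    The per-model index values play the role of the PID- and EQN-type indices: each model's change field is summarized by a few numbers multiplying shared spatial patterns.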

  18. Comparison of electric field strength and spatial distribution of electroconvulsive therapy and magnetic seizure therapy in a realistic human head model

    PubMed Central

    Lee, Won Hee; Lisanby, Sarah H.; Laine, Andrew F.; Peterchev, Angel V.

    2017-01-01

    Background This study examines the strength and spatial distribution of the electric field induced in the brain by electroconvulsive therapy (ECT) and magnetic seizure therapy (MST). Methods The electric field induced by standard (bilateral, right unilateral, and bifrontal) and experimental (focal electrically administered seizure therapy and frontomedial) ECT electrode configurations as well as a circular MST coil configuration was simulated in an anatomically realistic finite element model of the human head. Maps of the electric field strength relative to an estimated neural activation threshold were used to evaluate the stimulation strength and focality in specific brain regions of interest for these ECT and MST paradigms and various stimulus current amplitudes. Results The standard ECT configurations and current amplitude of 800–900 mA produced the strongest overall stimulation with median of 1.8–2.9 times neural activation threshold and more than 94% of the brain volume stimulated at suprathreshold level. All standard ECT electrode placements exposed the hippocampi to suprathreshold electric field, although there were differences across modalities with bilateral and right unilateral producing respectively the strongest and weakest hippocampal stimulation. MST stimulation is up to 9 times weaker compared to conventional ECT, resulting in direct activation of only 21% of the brain. Reducing the stimulus current amplitude can make ECT as focal as MST. Conclusions The relative differences in electric field strength may be a contributing factor for the cognitive sparing observed with right unilateral compared to bilateral ECT, and MST compared to right unilateral ECT. These simulations could help understand the mechanisms of seizure therapies and develop interventions with superior risk/benefit ratio. PMID:27318858

  19. Empirical investigation of a field theory formula and Black's formula for the price of an interest-rate caplet

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Liang, Cui

    2007-01-01

    The industry standard for pricing an interest-rate caplet is Black's formula. Another distinct price of the same caplet can be derived using a quantum field theory model of the forward interest rates. An empirical study is carried out to compare the two caplet pricing formulae. Historical volatility and correlation of forward interest rates are used to generate the field theory caplet price; another approach is to fit a parametric formula for the effective volatility using market caplet price. The study shows that the field theory model generates the price of a caplet and cap fairly accurately. Black's formula for a caplet is compared with field theory pricing formula. It is seen that the field theory formula for caplet price has many advantages over Black's formula.
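    Black's caplet formula referenced above is standard and compact enough to state in code. The sketch below is the textbook Black (Black-76) formula; the market parameter values are illustrative and not taken from the study.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_caplet(F, K, sigma, T, delta, discount):
    """Black's formula for a caplet on the accrual period [T, T + delta].

    F        forward rate for the accrual period
    K        cap (strike) rate
    sigma    Black implied volatility of the forward rate
    T        time to the rate fixing [years]
    delta    accrual period length [years]
    discount discount factor to the payment date T + delta
    """
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return delta * discount * (F * norm_cdf(d1) - K * norm_cdf(d2))

# Illustrative at-the-money caplet (unit notional): forward 5%, strike 5%,
# 20% vol, fixing in 1 year, quarterly accrual, flat 5% continuous discounting
price = black_caplet(F=0.05, K=0.05, sigma=0.20, T=1.0,
                     delta=0.25, discount=exp(-0.05 * 1.25))
print(f"caplet price: {price * 1e4:.2f} bp of notional")
```

    The field theory price differs from this by replacing the single lognormal forward rate with a correlated field of forward rates, which is where the empirical comparison in the study comes in.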

  20. Twisted-Light-Ion Interaction: The Role of Longitudinal Fields

    NASA Astrophysics Data System (ADS)

    Quinteiro, G. F.; Schmidt-Kaler, Ferdinand; Schmiegelow, Christian T.

    2017-12-01

    The propagation of light beams is well described using the paraxial approximation, where field components along the propagation direction are usually neglected. For strongly inhomogeneous or shaped light fields, however, this approximation may fail, leading to intriguing variations of the light-matter interaction. This is the case for twisted light having opposite orbital and spin angular momenta. We compare experimental data for the excitation of a quadrupole transition in a single trapped 40Ca+ ion from Schmiegelow et al. [Nat. Commun. 7, 12998 (2016), 10.1038/ncomms12998] with a complete model in which the longitudinal components of the electric field are taken into account. Our model matches the experimental data and excludes, by 11 standard deviations, the approximation of a completely transverse field. This demonstrates the relevance of all field components for the interaction of twisted light with matter.

  1. A positivity preserving and conservative variational scheme for phase-field modeling of two-phase flows

    NASA Astrophysics Data System (ADS)

    Joshi, Vaibhav; Jaiman, Rajeev K.

    2018-05-01

    We present a positivity preserving variational scheme for the phase-field modeling of incompressible two-phase flows with high density ratio. The variational finite element technique relies on the Allen-Cahn phase-field equation for capturing the phase interface on a fixed Eulerian mesh with mass conservative and energy-stable discretization. The mass conservation is achieved by enforcing a Lagrange multiplier which has both temporal and spatial dependence on the underlying solution of the phase-field equation. To make the scheme energy-stable in a variational sense, we discretize the spatial part of the Lagrange multiplier in the phase-field equation by the mid-point approximation. The proposed variational technique is designed to reduce the spurious and unphysical oscillations in the solution while maintaining the second-order accuracy of both spatial and temporal discretizations. We integrate the Allen-Cahn phase-field equation with the incompressible Navier-Stokes equations for modeling a broad range of two-phase flow and fluid-fluid interface problems. The coupling of the implicit discretizations corresponding to the phase-field and the incompressible flow equations is achieved via nonlinear partitioned iterative procedure. Comparison of results between the standard linear stabilized finite element method and the present variational formulation shows a remarkable reduction of oscillations in the solution while retaining the boundedness of the phase-indicator field. We perform a standalone test to verify the accuracy and stability of the Allen-Cahn two-phase solver. We examine the convergence and accuracy properties of the coupled phase-field solver through the standard benchmarks of the Laplace-Young law and a sloshing tank problem. Two- and three-dimensional dam break problems are simulated to assess the capability of the phase-field solver for complex air-water interfaces involving topological changes on unstructured meshes. 
Finally, we demonstrate the phase-field solver for a practical offshore engineering application of wave-structure interaction.
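    A minimal 1D sketch of a mass-conserving Allen-Cahn step conveys the role of the Lagrange multiplier described above. The spatially constant multiplier used here is a deliberate simplification: the paper's scheme uses a multiplier with both temporal and spatial dependence and a mid-point discretization within an energy-stable variational finite element setting, none of which is reproduced here.

```python
import numpy as np

# Periodic 1D grid and model parameters (illustrative values)
nx, L = 256, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
eps, dt = 0.01, 1.0e-5

# Initial "bubble": phi = +1 inside, -1 outside, tanh interface of width ~eps
phi = np.tanh((0.1 - np.abs(x - 0.5)) / (np.sqrt(2.0) * eps))
mass0 = phi.sum() * dx

for _ in range(2000):
    lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
    fprime = phi**3 - phi               # derivative of the double-well potential
    rhs = eps**2 * lap - fprime
    lam = -rhs.mean()                   # multiplier cancelling the net change
    phi = phi + dt * (rhs + lam)

print(abs(phi.sum() * dx - mass0))      # mass drift stays at machine precision
print(phi.min(), phi.max())             # phi stays bounded near [-1, 1]
```

    Without the multiplier, plain Allen-Cahn dynamics shrink the bubble and lose mass; subtracting the mean of the right-hand side makes each explicit step conserve the integral of phi exactly, which is the property the variational scheme enforces in a discretely consistent way.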

  2. Evaluation method for in situ electric field in standardized human brain for different transcranial magnetic stimulation coils

    NASA Astrophysics Data System (ADS)

    Iwahashi, Masahiro; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa

    2017-03-01

    This study proposes a method to evaluate the electric field induced in the brain by transcranial magnetic stimulation (TMS) to realize focal stimulation in the target area considering the inter-subject difference of the brain anatomy. The TMS is a non-invasive technique used for treatment/diagnosis, and it works by inducing an electric field in a specific area of the brain via a coil-induced magnetic field. Recent studies that report on the electric field distribution in the brain induced by TMS coils have been limited to simplified human brain models or a small number of detailed human brain models. Until now, no method has been developed that appropriately evaluates the coil performance for a group of subjects. In this study, we first compare the magnetic field and the magnetic vector potential distributions to determine if they can be used as predictors of the TMS focality derived from the electric field distribution. Next, the hotspots of the electric field on the brain surface of ten subjects using six coils are compared. Further, decisive physical factors affecting the focality of the induced electric field by different coils are discussed by registering the computed electric field in a standard brain space for the first time, so as to evaluate coil characteristics for a large population of subjects. The computational results suggest that the induced electric field in the target area cannot be generalized without considering the morphological variability of the human brain. Moreover, there was no remarkable difference between the various coils, although focality could be improved to a certain extent by modifying the coil design (e.g., coil radius). Finally, the focality estimated by the electric field was more correlated with the magnetic vector potential than the magnetic field in a homogeneous sphere.

  3. Evaluation method for in situ electric field in standardized human brain for different transcranial magnetic stimulation coils.

    PubMed

    Iwahashi, Masahiro; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa

    2017-03-21

    This study proposes a method to evaluate the electric field induced in the brain by transcranial magnetic stimulation (TMS) to realize focal stimulation in the target area considering the inter-subject difference of the brain anatomy. The TMS is a non-invasive technique used for treatment/diagnosis, and it works by inducing an electric field in a specific area of the brain via a coil-induced magnetic field. Recent studies that report on the electric field distribution in the brain induced by TMS coils have been limited to simplified human brain models or a small number of detailed human brain models. Until now, no method has been developed that appropriately evaluates the coil performance for a group of subjects. In this study, we first compare the magnetic field and the magnetic vector potential distributions to determine if they can be used as predictors of the TMS focality derived from the electric field distribution. Next, the hotspots of the electric field on the brain surface of ten subjects using six coils are compared. Further, decisive physical factors affecting the focality of the induced electric field by different coils are discussed by registering the computed electric field in a standard brain space for the first time, so as to evaluate coil characteristics for a large population of subjects. The computational results suggest that the induced electric field in the target area cannot be generalized without considering the morphological variability of the human brain. Moreover, there was no remarkable difference between the various coils, although focality could be improved to a certain extent by modifying the coil design (e.g., coil radius). Finally, the focality estimated by the electric field was more correlated with the magnetic vector potential than the magnetic field in a homogeneous sphere.

  4. CityGML - Interoperable semantic 3D city models

    NASA Astrophysics Data System (ADS)

    Gröger, Gerhard; Plümer, Lutz

    2012-07-01

    CityGML is the international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. It defines the three-dimensional geometry, topology, semantics and appearance of the most relevant topographic objects in urban or regional contexts. These definitions are provided in different, well-defined Levels-of-Detail (multiresolution model). The focus of CityGML is on the semantic aspects of 3D city models, their structures, taxonomies and aggregations, allowing users to employ virtual 3D city models for advanced analysis and visualization tasks in a variety of application domains such as urban planning, indoor/outdoor pedestrian navigation, environmental simulations, cultural heritage, or facility management. This is in contrast to purely geometrical/graphical models such as KML, VRML, or X3D, which do not provide sufficient semantics. CityGML is based on the Geography Markup Language (GML), which provides a standardized geometry model. Due to this model and its well-defined semantics and structures, CityGML facilitates interoperable data exchange in the context of geo web services and spatial data infrastructures. Since its standardization in 2008, CityGML has come into worldwide use: tools from notable companies in the geospatial field provide CityGML interfaces. Many applications and projects use this standard. CityGML is also having a strong impact on science: numerous approaches use CityGML, particularly its semantics, for disaster management, emergency responses, or energy-related applications as well as for visualizations, or they contribute to CityGML, improving its consistency and validity, or use CityGML, particularly its different Levels-of-Detail, as a source or target for generalizations. This paper gives an overview of CityGML, its underlying concepts, its Levels-of-Detail, how to extend it, its applications, its likely future development, and the role it plays in scientific research.
Furthermore, its relationship to other standards from the fields of computer graphics and computer-aided architectural design, and to the prospective INSPIRE model, is discussed, as well as the impact CityGML has had and is having on the software industry, on applications of 3D city models, and on science generally.

  5. Operators up to dimension seven in standard model effective field theory extended with sterile neutrinos

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Ma, Xiao-Dong

    2017-07-01

    We revisit the effective field theory of the standard model that is extended with sterile neutrinos, N . We examine the basis of complete and independent effective operators involving N up to mass dimension seven (dim-7). By employing equations of motion, integration by parts, and Fierz and group identities, we construct relations among operators that were considered independent in the previous literature, and we find 7 redundant operators at dim-6, as well as 16 redundant operators and two new operators at dim-7. The correct numbers of operators involving N are, without counting Hermitian conjugates, 16 (L ∩B )+1 (L ∩B )+2 (L ∩ B) at dim-6 and 47 (L ∩B )+5 (L ∩ B) at dim-7. Here L /B (L/B) stands for lepton/baryon number conservation (violation). We verify our counting by the Hilbert series approach for nf generations of the standard model fermions and sterile neutrinos. When operators involving different flavors of fermions are counted separately and their Hermitian conjugates are included, we find there are 29 (1614) and 80 (4206) operators involving sterile neutrinos at dim-6 and dim-7, respectively, for nf=1 (3).

  6. 20180318 - Automated workflows for data curation and standardization of chemical structures for QSAR modeling (ACS Spring)

    EPA Science Inventory

    Large collections of chemical structures and associated experimental data are publicly available, and can be used to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experime...

  7. Modelling rollover behaviour of excavator-based forest machines

    Treesearch

    M.W. Veal; S.E. Taylor; Robert B. Rummer

    2003-01-01

    This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...

  8. Galaxy formation

    PubMed Central

    Peebles, P. J. E.

    1998-01-01

    It is argued that within the standard Big Bang cosmological model the bulk of the mass of the luminous parts of the large galaxies likely had been assembled by redshift z ∼ 10. Galaxy assembly this early would be difficult to fit in the widely discussed adiabatic cold dark matter model for structure formation, but it could agree with an isocurvature version in which the cold dark matter is the remnant of a massive scalar field frozen (or squeezed) from quantum fluctuations during inflation. The squeezed field fluctuations would be Gaussian with zero mean, and the distribution of the field mass therefore would be the square of a random Gaussian process. This offers a possibly interesting new direction for the numerical exploration of models for cosmic structure formation. PMID:9419326

  9. Cosine problem in EPRL/FK spinfoam model

    NASA Astrophysics Data System (ADS)

    Vojinović, Marko

    2014-01-01

    We calculate the classical limit effective action of the EPRL/FK spinfoam model of quantum gravity coupled to matter fields. By employing the standard QFT background field method adapted to the spinfoam setting, we find that the model has many different classical effective actions. Most notably, these include the ordinary Einstein-Hilbert action coupled to matter, but also an action which describes antigravity. All those multiple classical limits appear as a consequence of the fact that the EPRL/FK vertex amplitude has cosine-like large spin asymptotics. We discuss some possible ways to eliminate the unwanted classical limits.

  10. Chaotic hybrid inflation with a gauged B-L

    NASA Astrophysics Data System (ADS)

    Carpenter, Linda M.; Raby, Stuart

    2014-11-01

    In this paper we present a novel formulation of chaotic hybrid inflation in supergravity. The model includes a waterfall field which spontaneously breaks a gauged U(1)B-L at a GUT scale. This allows for the possibility of future model building which includes the standard formulation of baryogenesis via leptogenesis, with the waterfall field decaying into right-handed neutrinos. In this short paper we have not considered supersymmetry breaking, dark matter, or the gravitino and moduli problems. Our focus is on showing the compatibility of the present model with Planck, WMAP and BICEP2 data.

  11. Microcomputer software for calculating an elk habitat effectiveness index on Blue Mountain winter ranges.

    Treesearch

    Mark Hitchcock; Alan Ager

    1992-01-01

    National Forests in the Pacific Northwest Region have incorporated elk habitat standards into Forest plans to ensure that elk habitat objectives are met on multiple use land allocations. Many Forests have employed versions of the habitat effectiveness index (HEI) as a standard method to evaluate habitat. Field application of the HEI model unfortunately is a formidable...

  12. A Model of Direct Gauge Mediation of Supersymmetry Breaking

    NASA Astrophysics Data System (ADS)

    Murayama, Hitoshi

    1997-07-01

    We present the first phenomenologically viable model of gauge mediation of supersymmetry breaking without a messenger sector or gauge singlet fields. The standard model gauge groups couple directly to the sector which breaks supersymmetry dynamically. Despite the direct coupling, the model can preserve perturbative gauge unification thanks to the inverted hierarchy mechanism. There is no dangerous negative contribution to the squark and slepton masses-squared from the two-loop renormalization group equations, and the potentially nonuniversal supergravity contribution to these masses can be sufficiently suppressed. The model is completely chiral, and one does not need to forbid mass terms for the messenger fields by hand. The cosmology of the model is briefly discussed.

  13. Higher Surface Ozone Concentrations Over the Chesapeake Bay than Over the Adjacent Land: Observations and Models from the DISCOVER-AQ and CBODAQ Campaigns

    NASA Technical Reports Server (NTRS)

    Goldberg, Daniel L.; Loughner, Christopher P.; Tzortziou, Maria; Stehr, Jeffrey W.; Pickering, Kenneth E.; Marufu, Lackson T.; Dickerson, Russell R.

    2013-01-01

    Air quality models, such as the Community Multiscale Air Quality (CMAQ) model, indicate decidedly higher ozone near the surface of large interior water bodies, such as the Great Lakes and Chesapeake Bay. In order to test the validity of the model output, we performed surface measurements of ozone (O3) and total reactive nitrogen (NOy) on the 26-m Delaware II NOAA Small Research Vessel experimental (SRVx), deployed in the Chesapeake Bay for 10 daytime cruises in July 2011 as part of NASA's GEO-CAPE CBODAQ oceanographic field campaign in conjunction with NASA's DISCOVER-AQ air quality field campaign. During this 10-day period, the EPA O3 regulatory standard of 75 ppbv averaged over an 8-h period was exceeded four times over water, while ground stations in the area exceeded the standard at most twice. This suggests that on days when the Baltimore/Washington region is in compliance with the EPA standard, air quality over the Chesapeake Bay might exceed the EPA standard. Ozone observations over the bay during the afternoon were consistently 10-20% higher than at the closest upwind ground sites during the 10-day campaign; this pattern persisted during good and poor air quality days. A lower boundary layer, reduced cloud cover, slower dry deposition rates, and other lesser mechanisms contribute to the local maximum of ozone over the Chesapeake Bay. Observations from this campaign were compared to a CMAQ simulation at 1.33 km resolution. The model is able to predict the regional maximum of ozone over the Chesapeake Bay accurately, but NOy concentrations are significantly overestimated. Explanations for the overestimation of NOy in the model simulations are also explored.

  14. Predicting Rib Fracture Risk With Whole-Body Finite Element Models: Development and Preliminary Evaluation of a Probabilistic Analytical Framework

    PubMed Central

    Forman, Jason L.; Kent, Richard W.; Mroz, Krystoffer; Pipkorn, Bengt; Bostrom, Ola; Segui-Gomez, Maria

    2012-01-01

    This study sought to develop a strain-based probabilistic method to predict rib fracture risk with whole-body finite element (FE) models, and to describe a method to combine the results with collision exposure information to predict injury risk and potential intervention effectiveness in the field. An age-adjusted ultimate strain distribution was used to estimate local rib fracture probabilities within an FE model. These local probabilities were combined to predict injury risk and severity within the whole ribcage. The ultimate strain distribution was developed from a literature dataset of 133 tests. Frontal collision simulations were performed with the THUMS (Total HUman Model for Safety) model with four levels of delta-V and two restraints: a standard 3-point belt and a progressive 3.5–7 kN force-limited, pretensioned (FL+PT) belt. The results of three simulations (29 km/h standard, 48 km/h standard, and 48 km/h FL+PT) were compared to matched cadaver sled tests. The numbers of fractures predicted for the comparison cases were consistent with those observed experimentally. Combining these results with field exposure information (ΔV, NASS-CDS 1992–2002) suggests an 8.9% probability of incurring AIS3+ rib fractures for a 60-year-old restrained by a standard belt in a tow-away frontal collision with this restraint, vehicle, and occupant configuration, compared to 4.6% for the FL+PT belt. This is the first study to describe a probabilistic framework to predict rib fracture risk based on strains observed in human-body FE models. Using this analytical framework, future efforts may incorporate additional subject or collision factors for multi-variable probabilistic injury prediction. PMID:23169122
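    The step of combining local fracture probabilities into a whole-ribcage risk can be sketched as follows. This is a minimal illustration only: it assumes rib sites fail independently, and the per-site probabilities below are hypothetical values, not output of the THUMS-based framework described in the abstract.

    ```python
    import numpy as np

    def whole_ribcage_risk(local_probs, min_fractures=3):
        """Probability of at least `min_fractures` fractures, assuming each
        rib site fails independently with its own local probability."""
        local_probs = np.asarray(local_probs, dtype=float)
        n = local_probs.size
        # Distribution of the fracture count (a Poisson-binomial
        # distribution), built by dynamic programming over the sites.
        dist = np.zeros(n + 1)
        dist[0] = 1.0
        for p in local_probs:
            dist[1:] = dist[1:] * (1 - p) + dist[:-1] * p
            dist[0] *= 1 - p
        return dist[min_fractures:].sum()

    # Hypothetical per-site strain-based probabilities from an FE model:
    p_sites = [0.05, 0.10, 0.20, 0.15, 0.08, 0.30]
    risk = whole_ribcage_risk(p_sites, min_fractures=3)
    ```

    With the exact fracture-count distribution in hand, any injury-severity threshold (e.g. an AIS3+ criterion expressed as a minimum number of fractured ribs) reduces to a tail sum.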

  15. Higher surface ozone concentrations over the Chesapeake Bay than over the adjacent land: Observations and models from the DISCOVER-AQ and CBODAQ campaigns

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel L.; Loughner, Christopher P.; Tzortziou, Maria; Stehr, Jeffrey W.; Pickering, Kenneth E.; Marufu, Lackson T.; Dickerson, Russell R.

    2014-02-01

    Air quality models, such as the Community Multiscale Air Quality (CMAQ) model, indicate decidedly higher ozone near the surface of large interior water bodies, such as the Great Lakes and Chesapeake Bay. In order to test the validity of the model output, we performed surface measurements of ozone (O3) and total reactive nitrogen (NOy) on the 26-m Delaware II NOAA Small Research Vessel experimental (SRVx), deployed in the Chesapeake Bay for 10 daytime cruises in July 2011 as part of NASA's GEO-CAPE CBODAQ oceanographic field campaign in conjunction with NASA's DISCOVER-AQ air quality field campaign. During this 10-day period, the EPA O3 regulatory standard of 75 ppbv averaged over an 8-h period was exceeded four times over water, while ground stations in the area exceeded the standard at most twice. This suggests that on days when the Baltimore/Washington region is in compliance with the EPA standard, air quality over the Chesapeake Bay might exceed the EPA standard. Ozone observations over the bay during the afternoon were consistently 10-20% higher than at the closest upwind ground sites during the 10-day campaign; this pattern persisted during good and poor air quality days. A lower boundary layer, reduced cloud cover, slower dry deposition rates, and other lesser mechanisms contribute to the local maximum of ozone over the Chesapeake Bay. Observations from this campaign were compared to a CMAQ simulation at 1.33 km resolution. The model is able to predict the regional maximum of ozone over the Chesapeake Bay accurately, but NOy concentrations are significantly overestimated. Explanations for the overestimation of NOy in the model simulations are also explored.

  16. A priori and a posteriori investigations for developing large eddy simulations of multi-species turbulent mixing under high-pressure conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099

    2015-03-15

    A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models, as the flow fields obtained from accurate LESs should contain structures of morphology statistically similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scale structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second invariant of the rate-of-deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate-of-deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study.
The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study, in which different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard ones.
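    The a priori test described above checks whether a filtered nonlinear function of a variable equals the function of the filtered variable. A one-dimensional sketch is given below; the field, the filter width, and the stand-in nonlinear function are all hypothetical, chosen only to show that filtering and a nonlinear map do not commute, which is what creates the subfilter terms requiring closure.

    ```python
    import numpy as np

    def box_filter(field, width):
        """Top-hat (box) filter of odd width with periodic boundaries."""
        kernel = np.ones(width) / width
        n = field.size
        # Tile the field so the convolution wraps around periodically.
        return np.convolve(np.tile(field, 3), kernel, mode="same")[n:2 * n]

    # Hypothetical 1-D "DNS" field with a large scale and a small scale.
    x = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
    u = np.sin(x) + 0.3 * np.sin(17.0 * x)

    def eos_pressure(u):
        """Stand-in nonlinear thermodynamic function (not a real EOS)."""
        return u ** 2 + 0.1 * u ** 3

    # A priori test: does p(filtered u) approximate the filtered p(u)?
    width = 33
    p_of_filtered = eos_pressure(box_filter(u, width))
    filtered_p = box_filter(eos_pressure(u), width)
    subfilter_term = filtered_p - p_of_filtered   # what a model must supply
    ```

    In an actual a priori study the two fields on the right-hand side come from the FC-DNS database, and a candidate closure (e.g. a scale-similarity model) is judged by how well it reproduces `subfilter_term`.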

  17. How to deal with the high condition number of the noise covariance matrix of gravity field functionals synthesised from a satellite-only global gravity field model?

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-03-01

    The posed question arises, for instance, in regional gravity field modelling using weighted least-squares techniques, if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formulation of the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with a regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
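    The standard formulation mentioned above (Tikhonov regularisation of the noise covariance matrix followed by the usual weighted least-squares formula) can be sketched as follows. The toy system, the rank-deficient covariance, and the choice alpha = 1e-3 are hypothetical, for illustration only; real GGM covariances are far larger and the parameter choice is the hard part.

    ```python
    import numpy as np

    def wls_regularised_cov(A, y, C, alpha):
        """Weighted least-squares estimate of x in y = A x + noise, where the
        noise covariance C is ill-conditioned and is Tikhonov-regularised as
        C + alpha*I before inversion (the standard formulation)."""
        C_reg = C + alpha * np.eye(C.shape[0])
        W = np.linalg.inv(C_reg)          # weight matrix
        N = A.T @ W @ A                   # normal matrix
        return np.linalg.solve(N, A.T @ W @ y)

    # Hypothetical toy problem with a singular noise covariance matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    L = rng.standard_normal((50, 2))      # rank-deficient noise structure
    C = L @ L.T                           # singular covariance (rank 2)
    y = A @ x_true + L @ rng.standard_normal(2) * 0.01
    x_hat = wls_regularised_cov(A, y, C, alpha=1e-3)
    ```

    The inversion-free formulation preferred in the abstract avoids forming `W` at all, which matters once `C` is too ill-conditioned for `np.linalg.inv` to be meaningful without very strong regularisation.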

  18. Right-handed charged currents in the era of the Large Hadron Collider

    DOE PAGES

    Alioli, Simone; Cirigliano, Vincenzo; Dekens, Wouter Gerard; ...

    2017-05-16

    We discuss the phenomenology of right-handed charged currents in the framework of the Standard Model Effective Field Theory, in which they arise due to a single gauge-invariant dimension-six operator. We study the manifestations of the nine complex couplings of the W to right-handed quarks in collider physics, flavor physics, and low-energy precision measurements. We first obtain constraints on the couplings under the assumption that the right-handed operator is the dominant correction to the Standard Model at observable energies. We subsequently study the impact of degeneracies with other Beyond-the-Standard-Model effective interactions and identify observables, both at colliders and in low-energy experiments, that would uniquely point to right-handed charged currents.

  19. Higher order QCD predictions for associated Higgs production with anomalous couplings to gauge bosons

    NASA Astrophysics Data System (ADS)

    Mimasu, Ken; Sanz, Verónica; Williams, Ciaran

    2016-08-01

    We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks, one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which Electroweak symmetry breaking is manifestly linear and the resulting operators arise through a six-dimensional effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher order pieces. Our results are implemented in the NLO Monte Carlo program MCFM, and interfaced to shower Monte Carlos through the Powheg box framework.

  20. Searching for θ13 at Daya Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giedt, Joel; Napolitano, James

    2015-06-08

    An experiment has been carried out by the Daya Bay Collaboration to measure the neutrino mixing angle θ13. In addition, the grant has supported research into lattice field theory beyond the standard model.

  1. Evaluation of apparent viscosity of Para rubber latex by diffuse reflection near-infrared spectroscopy.

    PubMed

    Sirisomboon, Panmanas; Chowbankrang, Rawiphan; Williams, Phil

    2012-05-01

    Near-infrared spectroscopy in diffuse reflection mode was used to evaluate the apparent viscosity of Para rubber field latex and concentrated latex over the wavelength range of 1100 to 2500 nm, using partial least squares regression (PLSR). The model with ten principal components (PCs) developed using the raw spectra accurately predicted the apparent viscosity, with a correlation coefficient (r), standard error of prediction (SEP), and bias of 0.974, 8.6 cP, and -0.4 cP, respectively. The ratio of the standard deviation to the SEP (RPD) and the ratio of the range to the SEP (RER) for the prediction were 4.4 and 16.7, respectively. Therefore, the model can be used for measurement of the apparent viscosity of field latex and concentrated latex in quality assurance and process control in the factory.
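    The validation statistics quoted above (r, bias, SEP, RPD, RER) can be computed from predicted versus reference values as follows. The viscosity numbers are hypothetical, and note that the exact bias-correction convention for SEP varies slightly between authors in the NIR literature.

    ```python
    import numpy as np

    def prediction_statistics(reference, predicted):
        """Validation statistics common in NIR calibration work: correlation
        coefficient r, bias, SEP (bias-corrected standard error of
        prediction), RPD (SD of reference / SEP) and RER (range / SEP)."""
        reference = np.asarray(reference, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        residuals = predicted - reference
        n = residuals.size
        bias = residuals.mean()
        sep = np.sqrt(((residuals - bias) ** 2).sum() / (n - 1))
        r = np.corrcoef(reference, predicted)[0, 1]
        rpd = reference.std(ddof=1) / sep
        rer = (reference.max() - reference.min()) / sep
        return {"r": r, "bias": bias, "SEP": sep, "RPD": rpd, "RER": rer}

    # Hypothetical viscosity data (cP): lab reference vs NIR predictions.
    ref = np.array([20.0, 35.0, 50.0, 65.0, 80.0, 95.0])
    pred = np.array([22.0, 33.0, 52.0, 63.0, 81.0, 96.0])
    stats = prediction_statistics(ref, pred)
    ```

    An RPD above roughly 3 and a large RER are the usual rules of thumb for a calibration fit for quality-control use, which is why the abstract reports 4.4 and 16.7.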

  2. Grand unified brane world scenario

    NASA Astrophysics Data System (ADS)

    Arai, Masato; Blaschke, Filip; Eto, Minoru; Sakai, Norisuke

    2017-12-01

    We present a field theoretical model unifying grand unified theory (GUT) and the brane world scenario. As a concrete example, we consider SU(5) GUT in 4+1 dimensions, where our 3+1 dimensional spacetime spontaneously arises on five domain walls. A field-dependent gauge kinetic term is used to localize massless non-Abelian gauge fields on the domain walls and to assure the charge universality of matter fields. We find the domain walls with the symmetry breaking SU(5) → SU(3)×SU(2)×U(1) as a global minimum, and all the undesirable moduli are stabilized with the mass scale of M_GUT. Profiles of massless standard model particles are determined as a consequence of wall dynamics. Proton decay can be exponentially suppressed.

  3. Modeling of Dipole and Quadrupole Fringe-Field Effects for the Advanced Photon Source Upgrade Lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.; Lindberg, R.

    2017-06-01

    The proposed upgrade of the Advanced Photon Source (APS) to a multibend-achromat lattice requires shorter and much stronger quadrupole magnets than are present in the existing ring. This results in longitudinal gradient profiles that differ significantly from a hard-edge model. Additionally, the lattice assumes the use of five-segment longitudinal gradient dipoles. Under these circumstances, the effects of fringe fields and detailed field distributions are of interest. We evaluated the effect of soft-edge fringe fields on the linear optics and chromaticity, finding that compensation for these effects is readily accomplished. In addition, we evaluated the reliability of standard methods of simulating hard-edge nonlinear fringe effects in quadrupoles.

  4. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST), along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. These numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  5. Measurement and Modeling of Personal Exposure to the Electric and Magnetic Fields in the Vicinity of High Voltage Power Lines.

    PubMed

    Tourab, Wafa; Babouri, Abdesselam

    2016-06-01

    This work presents an experimental and modeling study of the electromagnetic environment in the vicinity of a high voltage substation located in eastern Algeria (Annaba city), an area with a very high population density. The effects of electromagnetic fields emanating from coupled multi-line high voltage (MLHV) power systems on the health of workers and people living in proximity to substations have been analyzed. Experimental measurements for the proposed multi-line power system were conducted in free space under the high voltage lines. Field intensities were measured using a referenced and calibrated electromagnetic field meter PMM8053B at heights of 0 m, 1 m, 1.5 m and 1.8 m, which correspond to sensitive parts and major organs (head, heart, pelvis and feet) of the human body. The measurement results were validated by numerical simulation using the finite element method, and these results are compared with the limit values of the international standards. We aim to set our own national standards for exposure to electromagnetic fields, in order to build a regional database that will be at the disposal of the partners concerned, to ensure the safety of people and especially of workers inside high voltage electrical substations.

  6. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST), along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. These numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  7. Paraboloid magnetospheric magnetic field model and the status of the model as an ISO standard

    NASA Astrophysics Data System (ADS)

    Alexeev, I.

    A reliable representation of the magnetic field is crucial in the framework of radiation belt modelling, especially for disturbed conditions. The empirical model developed by Tsyganenko (T96) is constructed by minimizing the rms deviation from a large magnetospheric database. The applicability of the T96 model is limited mainly to quiet conditions in the solar wind along the Earth's orbit. But contrary to the internal planetary field, the external magnetospheric magnetic field sources are much more time-dependent. This is the reason why the paraboloid magnetospheric model is constructed using a more accurate and physically consistent approach, in which each source of the magnetic field has its own relaxation timescale and a driving function based on an individual best-fit combination of solar wind and IMF parameters. Such an approach is based on a priori information about the structure of the global magnetospheric current systems. Each current system is included as a separate block (module) in the magnetospheric model. As shown by spacecraft magnetometer data, there are three current systems which are the main contributors to the external magnetospheric magnetic field: the magnetopause currents, the ring current and the tail current sheet. The paraboloid model is based on an analytical solution of the Laplace equation for each of these large-scale current systems in the magnetosphere with a

  8. A unified phase-field theory for the mechanics of damage and quasi-brittle failure

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying

    2017-06-01

    Although the phase-field method is one of the most promising candidates for the modeling of localized failure in solids, so far it has been applied only to brittle fracture, with very few exceptions. In this work, a unified phase-field theory for the mechanics of damage and quasi-brittle failure is proposed within the framework of thermodynamics. Specifically, the crack phase-field and its gradient are introduced to regularize the sharp crack topology in a purely geometric context. The energy dissipation functional due to crack evolution and the stored energy functional of the bulk are characterized by a crack geometric function of polynomial type and an energetic degradation function of rational type, respectively. Standard arguments of thermodynamics then yield the macroscopic balance equation coupled with an extra evolution law of gradient type for the crack phase-field, governed by the aforesaid constitutive functions. The classical phase-field models for brittle fracture are recovered as particular examples. More importantly, the constitutive functions optimal for quasi-brittle failure are determined such that the proposed phase-field theory converges to a cohesive zone model for a vanishing length scale. Those general softening laws frequently adopted for quasi-brittle failure, e.g., the linear, exponential, hyperbolic and Cornelissen et al. (1986) ones, can be reproduced or fit with high precision. Except for the internal length scale, all the other model parameters can be determined from standard material properties (i.e., Young's modulus, failure strength, fracture energy and the target softening law). Some representative numerical examples are presented for validation. It is found that both the internal length scale and the mesh size have little influence on the overall global responses, so long as the former can be well resolved by a sufficiently fine mesh.
In particular, for the benchmark tests of concrete, the numerical results for the load versus displacement curve and the crack paths both agree well with the experimental data, showing the validity of the proposed phase-field theory for the modeling of damage and quasi-brittle failure in solids.
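    The two constitutive functions named above, a polynomial crack geometric function and a rational energetic degradation function, can be sketched numerically as follows. The parameter values (xi, p, a1, a2, a3) below are generic placeholders, not the calibrated values for any particular softening law.

    ```python
    import numpy as np

    def geometric_crack_function(d, xi=2.0):
        """Polynomial crack geometric function alpha(d) = xi*d + (1-xi)*d**2,
        with alpha(0) = 0 and alpha(1) = 1 (xi is a placeholder value here)."""
        return xi * d + (1.0 - xi) * d ** 2

    def degradation_function(d, p=2.0, a1=4.0, a2=0.5, a3=0.0):
        """Rational energetic degradation function of the generic form
        g(d) = (1-d)**p / ((1-d)**p + a1*d + a1*a2*d**2 + a1*a2*a3*d**3),
        so that g(0) = 1 (intact) and g(1) = 0 (fully broken)."""
        q = a1 * d + a1 * a2 * d ** 2 + a1 * a2 * a3 * d ** 3
        return (1.0 - d) ** p / ((1.0 - d) ** p + q)

    # Sample the degradation function over the admissible damage range.
    d = np.linspace(0.0, 1.0, 101)
    g = degradation_function(d)
    ```

    Different target softening laws (linear, exponential, Cornelissen et al.) correspond to different choices of these coefficients; the structure of the functions is what makes the cohesive-zone limit possible.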

  9. Electron transfer from a carbon nanotube into vacuum under high electric fields

    NASA Astrophysics Data System (ADS)

    Filip, L. D.; Smith, R. C.; Carey, J. D.; Silva, S. R. P.

    2009-05-01

    The transfer of an electron from a carbon nanotube (CNT) tip into vacuum under a high electric field is considered beyond the usual one-dimensional semi-classical approach. A model of the potential energy outside the CNT cap is proposed in order to show the importance of the intrinsic CNT parameters such as radius, length and vacuum barrier height. This model also takes into account set-up parameters such as the shape of the anode and the anode-to-cathode distance, which are generically portable to any modelling study of electron emission from a tip emitter. Results obtained within our model compare well to experimental data. Moreover, in contrast to the usual one-dimensional Wentzel-Kramers-Brillouin description, our model retains the ability to explain non-standard features of the process of electron field emission from CNTs that arise as a result of the quantum behaviour of electrons on the surface of the CNT.

  10. Difficulties in applying numerical simulations to an evaluation of occupational hazards caused by electromagnetic fields

    PubMed Central

    Zradziński, Patryk

    2015-01-01

    Due to the various physical mechanisms of interaction between a worker's body and the electromagnetic field at various frequencies, the principles of numerical simulations have been discussed for three areas of worker exposure: to low frequency magnetic fields, to low and intermediate frequency electric fields, and to radiofrequency electromagnetic fields. This paper presents the identified difficulties in applying numerical simulations to evaluate physical estimators of direct and indirect effects of exposure to electromagnetic fields at various frequencies. The exposure of workers operating a plastic sealer has been taken as an example scenario of electromagnetic field exposure at the workplace for a discussion of those difficulties. The following difficulties in reliable numerical simulations of workers' exposure to the electromagnetic field have been considered: workers' body models (posture, dimensions, shape and grounding conditions), working environment models (objects most influencing electromagnetic field distribution) and an analysis of parameters for which exposure limitations are specified in international guidelines and standards. PMID:26323781

  11. Magnetic space-based field measurements

    NASA Technical Reports Server (NTRS)

    Langel, R. A.

    1981-01-01

    Because the near-Earth magnetic field is a complex combination of fields from outside the Earth, fields from its core, and fields from its crust, measurements from space prove to be the only practical way to obtain timely global surveys. Due to the difficulty of making accurate vector measurements, early satellites such as Sputnik and Vanguard measured only the field magnitude. The attitude accuracy was 20 arc sec. Both the Earth's core field and the fields arising from its crust were mapped from satellite data. The standard model of the core consists of a scalar potential represented by a spherical harmonic series. Models of the crustal field are relatively new; mathematical representation is achieved in localized areas by arrays of dipoles appropriately located in the Earth's crust. Measurements of the Earth's field are used in navigation, to map charged particles in the magnetosphere, to study fluid properties in the Earth's core, to infer the conductivity of the upper mantle, and to delineate regional-scale geological features.

  12. Axion induced oscillating electric dipole moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, Christopher T.

    In this study, the axion electromagnetic anomaly induces an oscillating electric dipole for any magnetic dipole. This is a low energy theorem which is a consequence of the space-time dependent cosmic background field of the axion. The electron will acquire an oscillating electric dipole of frequency m_a and strength ~10^-32 e-cm, within four orders of magnitude of the present standard model DC limit, and two orders of magnitude above that of the nucleon, assuming standard axion model and dark matter parameters. This may suggest sensitive new experimental venues for the axion dark matter search.

  13. Lattice field theory applications in high energy physics

    NASA Astrophysics Data System (ADS)

    Gottlieb, Steven

    2016-10-01

    Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.

  14. A geometric description of Maxwell field in a Kerr spacetime

    NASA Astrophysics Data System (ADS)

    Jezierski, Jacek; Smołka, Tomasz

    2016-06-01

    We consider the Maxwell field in the exterior of a Kerr black hole. For this system, we propose a geometric construction of a generalized Klein-Gordon equation, the Fackerell-Ipser equation. Our model is based on the conformal Yano-Killing tensor (CYK tensor). We present non-standard properties of CYK tensors in the Kerr spacetime which are useful in electrodynamics.

  15. Bivariate random-effects meta-analysis models for diagnostic test accuracy studies using arcsine-based transformations.

    PubMed

    Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph

    2018-05-11

    Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
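
    As a minimal sketch of the two variance-stabilizing transformations named above (using their textbook definitions, not the authors' full bivariate mixed-model machinery):

```python
import math

def arcsine_sqrt(x, n):
    """Arcsine square-root transform of an observed proportion x/n."""
    return math.asin(math.sqrt(x / n))

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transform of x events out of n; unlike the
    plain arcsine transform, it remains well behaved at x = 0 and x = n."""
    return 0.5 * (math.asin(math.sqrt(x / (n + 1)))
                  + math.asin(math.sqrt((x + 1) / (n + 1))))

# Hypothetical study: 45 true positives among 50 diseased subjects
print(arcsine_sqrt(45, 50))    # asin(sqrt(0.9)) ~ 1.249 rad
print(freeman_tukey(45, 50))
```

In a meta-analysis, each study's transformed sensitivity and specificity (with their stabilized variances) would then enter the bivariate linear mixed model in place of the raw or logit-transformed proportions.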

  16. Cosmology in massive gravity with effective composite metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heisenberg, Lavinia; Refregier, Alexandre, E-mail: lavinia.heisenberg@eth-its.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch

    This paper is dedicated to scrutinizing the cosmology in massive gravity. A matter field of the dark sector is coupled to an effective composite metric while a standard matter field couples to the dynamical metric in the usual way. For this purpose, we study the dynamical system of cosmological solutions by using phase analysis, which provides an overview of the class of cosmological solutions in this setup. This also permits us to study the critical points of the cosmological equations together with their stability. We show the presence of stable attractor de Sitter critical points relevant to the late-time cosmic acceleration. Furthermore, we study the tensor, vector and scalar perturbations in the presence of standard matter fields and obtain the conditions for the absence of ghost and gradient instabilities. Hence, massive gravity in the presence of the effective composite metric can accommodate interesting dark energy phenomenology, that can be observationally distinguished from the standard model according to the expansion history and cosmic growth.

  17. Flattening the inflaton potential beyond minimal gravity

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Min

    2018-01-01

    We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.

  18. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    ERIC Educational Resources Information Center

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multi-level structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results than standard models when the data set has a structure going as far down as individuals,…

  19. General Model of Photon-Pair Detection with an Image Sensor

    NASA Astrophysics Data System (ADS)

    Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.

    2018-05-01

    We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
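
    A toy numerical illustration of the intensity-correlation idea (this is not the paper's analytic model; the pixel labels, rates and frame count are invented for the example): photon pairs light two pixels together, so the frame-averaged intensity covariance is positive only for the correlated pixel pair.

```python
import random

def intensity_covariance(n_frames=20000, seed=7):
    """Toy model of photon-pair detection on a sensor: pixels A and B receive
    correlated pair counts on top of uncorrelated background, while pixels C
    and D receive background only. Returns the intensity covariances
    <I_a I_b> - <I_a><I_b> and <I_c I_d> - <I_c><I_d>."""
    rng = random.Random(seed)
    a = b = c = d = ab = cd = 0.0
    for _ in range(n_frames):
        pair = rng.random() < 0.3            # a pair event hits A and B together
        ia = 1.0 if (pair or rng.random() < 0.2) else 0.0
        ib = 1.0 if (pair or rng.random() < 0.2) else 0.0
        ic = 1.0 if rng.random() < 0.2 else 0.0   # background-only pixels
        id_ = 1.0 if rng.random() < 0.2 else 0.0
        a += ia; b += ib; c += ic; d += id_
        ab += ia * ib; cd += ic * id_
    n = float(n_frames)
    return ab / n - (a / n) * (b / n), cd / n - (c / n) * (d / n)

cov_pair, cov_bg = intensity_covariance()
print(cov_pair > cov_bg)   # True: only the correlated pair shows excess covariance
```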

  20. Custodial vector model

    NASA Astrophysics Data System (ADS)

    Becciolini, Diego; Franzosi, Diogo Buarque; Foadi, Roshan; Frandsen, Mads T.; Hapola, Tuomas; Sannino, Francesco

    2015-07-01

    We analyze the Large Hadron Collider (LHC) phenomenology of heavy vector resonances with an SU(2)_L × SU(2)_R spectral global symmetry. This symmetry partially protects the electroweak S parameter from large contributions of the vector resonances. The resulting custodial vector model spectrum and interactions with the standard model fields lead to distinct signatures at the LHC in the diboson, dilepton, and associated Higgs channels.

  1. Validation of anthropometry and foot-to-foot bioelectrical resistance against a three-component model to assess total body fat in children: the IDEFICS study.

    PubMed

    Bammann, K; Huybrechts, I; Vicente-Rodriguez, G; Easton, C; De Vriendt, T; Marild, S; Mesana, M I; Peeters, M W; Reilly, J J; Sioen, I; Tubic, B; Wawro, N; Wells, J C; Westerterp, K; Pitsiladis, Y; Moreno, L A

    2013-04-01

    To compare different field methods for estimating body fat mass with a reference value derived by a three-component (3C) model in pre-school and school children across Europe. Multicentre validation study. Seventy-eight preschool/school children aged 4-10 years from four different European countries. A standard measurement protocol was carried out in all children by trained field workers. A 3C model was used as the reference method. The field methods included height and weight measurement, circumferences measured at four sites, skinfolds measured at two to six sites, and foot-to-foot bioelectrical resistance (BIA) via TANITA scales. With the exception of height and neck circumference, all single measurements were able to explain at least 74% of the fat-mass variance in the sample. In combination, circumference models were superior to skinfold models and height-weight models. The best predictions were given by trunk models (combining skinfold and circumference measurements) that explained 91% of the observed fat-mass variance. The optimal data-driven model for our sample includes hip circumference, triceps skinfold and total body mass minus resistance index, and explains 94% of the fat-mass variance with 2.44 kg fat-mass limits of agreement. In all investigated models, prediction errors were associated with fat mass, although to a lesser degree in the investigated skinfold models, arm models and the data-driven models. When studying total body fat in childhood populations, anthropometric measurements will give biased estimations as compared to gold standard measurements. Nevertheless, our study shows that when combining circumference and skinfold measurements, estimations of fat mass can be obtained with a limit of agreement of 1.91 kg in normal-weight children and of 2.94 kg in overweight or obese children.
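
    The limits of agreement quoted above are the usual Bland-Altman 95% limits (mean difference ± 1.96 SD). A minimal sketch with invented fat-mass values in kg, not IDEFICS data:

```python
import statistics

def limits_of_agreement(field_vals, reference_vals):
    """Bland-Altman 95% limits of agreement between a field method and a
    reference method: mean difference +/- 1.96 * SD of the differences."""
    diffs = [f - r for f, r in zip(field_vals, reference_vals)]
    bias = statistics.mean(diffs)          # systematic offset of the field method
    sd = statistics.stdev(diffs)           # spread of the disagreement
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical example: anthropometric estimate vs 3C-model reference (kg)
field = [5.1, 6.3, 4.8, 7.2, 5.9, 6.6]
ref = [5.0, 6.0, 5.2, 7.0, 5.5, 6.9]
lo, hi = limits_of_agreement(field, ref)
print(round(lo, 2), round(hi, 2))   # -> -0.59 0.69
```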

  2. Model-independent determination of the triple Higgs coupling at e + e – colliders

    DOE PAGES

    Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon; ...

    2018-03-20

    Here, the observation of Higgs pair production at high-energy colliders can give evidence for the presence of a triple Higgs coupling. However, the actual determination of the value of this coupling is more difficult. In the context of general models for new physics, double Higgs production processes can receive contributions from many possible beyond-Standard-Model effects. This dependence must be understood if one is to make a definite statement about the deviation of the Higgs field potential from the Standard Model. In this paper, we study the extraction of the triple Higgs coupling from the process e+e- → Zhh. We show that, by combining the measurement of this process with other measurements available at a 500 GeV e+e- collider, it is possible to quote model-independent limits on the effective field theory parameter c6 that parametrizes modifications of the Higgs potential. We present precise error estimates based on the anticipated International Linear Collider physics program, studied with full simulation. Our analysis also gives new insight into the model-independent extraction of the Higgs boson coupling constants and total width from e+e- data.

  3. Model-independent determination of the triple Higgs coupling at e + e – colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon

    Here, the observation of Higgs pair production at high-energy colliders can give evidence for the presence of a triple Higgs coupling. However, the actual determination of the value of this coupling is more difficult. In the context of general models for new physics, double Higgs production processes can receive contributions from many possible beyond-Standard-Model effects. This dependence must be understood if one is to make a definite statement about the deviation of the Higgs field potential from the Standard Model. In this paper, we study the extraction of the triple Higgs coupling from the process e+e- → Zhh. We show that, by combining the measurement of this process with other measurements available at a 500 GeV e+e- collider, it is possible to quote model-independent limits on the effective field theory parameter c6 that parametrizes modifications of the Higgs potential. We present precise error estimates based on the anticipated International Linear Collider physics program, studied with full simulation. Our analysis also gives new insight into the model-independent extraction of the Higgs boson coupling constants and total width from e+e- data.

  4. Model-independent determination of the triple Higgs coupling at e+e- colliders

    NASA Astrophysics Data System (ADS)

    Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon; Peskin, Michael E.; Tian, Junping

    2018-03-01

    The observation of Higgs pair production at high-energy colliders can give evidence for the presence of a triple Higgs coupling. However, the actual determination of the value of this coupling is more difficult. In the context of general models for new physics, double Higgs production processes can receive contributions from many possible beyond-Standard-Model effects. This dependence must be understood if one is to make a definite statement about the deviation of the Higgs field potential from the Standard Model. In this paper, we study the extraction of the triple Higgs coupling from the process e+e- → Zhh. We show that, by combining the measurement of this process with other measurements available at a 500 GeV e+e- collider, it is possible to quote model-independent limits on the effective field theory parameter c6 that parametrizes modifications of the Higgs potential. We present precise error estimates based on the anticipated International Linear Collider physics program, studied with full simulation. Our analysis also gives new insight into the model-independent extraction of the Higgs boson coupling constants and total width from e+e- data.

  5. The Standard Model and Higgs physics

    NASA Astrophysics Data System (ADS)

    Torassa, Ezio

    2018-05-01

    The Standard Model is a consistent and computable theory that successfully describes the elementary particle interactions. The strong, electromagnetic and weak interactions have been included in the theory exploiting the relation between group symmetries and group generators, in order to smartly introduce the force carriers. The group properties lead to constraints between boson masses and couplings. All the measurements performed at the LEP, Tevatron, LHC and other accelerators proved the consistency of the Standard Model. A key element of the theory is the Higgs field, which together with the spontaneous symmetry breaking, gives mass to the vector bosons and to the fermions. Unlike the case of the vector bosons, the theory does not provide a prediction for the Higgs boson mass. The LEP experiments, while providing very precise measurements of the Standard Model theory, searched for evidence of the Higgs boson until the year 2000. The discovery of the top quark in 1995 by the Tevatron experiments and of the Higgs boson in 2012 by the LHC experiments were considered as the completion of the fundamental particle list of the Standard Model theory. Nevertheless, neutrino oscillations, dark matter and the baryon asymmetry of the Universe are evidence that we need a new, extended model. In the Standard Model there are also some unattractive theoretical aspects, like the divergent loop corrections to the Higgs boson mass and the very small Yukawa couplings needed to describe the neutrino masses. For all these reasons, the hunt for discrepancies between the Standard Model and data is still going on, with the aim to finally describe the new extended theory.

  6. Extra dimensions hypothesis in high energy physics

    NASA Astrophysics Data System (ADS)

    Volobuev, Igor; Boos, Eduard; Bunichev, Viacheslav; Perfilov, Maxim; Smolyakov, Mikhail

    2017-10-01

    We discuss the history of the extra dimensions hypothesis and the physics and phenomenology of models with large extra dimensions with an emphasis on the Randall-Sundrum (RS) model with two branes. We argue that the Standard Model extension based on the RS model with two branes is phenomenologically acceptable only if the inter-brane distance is stabilized. Within such an extension of the Standard Model, we study the influence of the infinite Kaluza-Klein (KK) towers of the bulk fields on collider processes. In particular, we discuss the modification of the scalar sector of the theory, the Higgs-radion mixing due to the coupling of the Higgs boson to the radion and its KK tower, and the experimental restrictions on the mass of the radion-dominated states.

  7. The Earth's magnetosphere modeling and ISO standard

    NASA Astrophysics Data System (ADS)

    Alexeev, I.

    The empirical T96 model developed by Tsyganenko is constructed by minimizing the rms deviation from the large magnetospheric database [Fairfield et al., 1994], which contains Earth's magnetospheric magnetic field measurements accumulated over many years. The applicability of the T96 model is limited mainly to quiet conditions in the solar wind along the Earth's orbit. But contrary to the internal planetary field, the external magnetospheric magnetic field sources are much more time-dependent. A reliable representation of the magnetic field is crucial in the framework of radiation belt modelling, especially for disturbed conditions. The latest version of the Tsyganenko model has been constructed for a geomagnetic storm time interval. This version is based on a more accurate and physically consistent approach, in which each source of the magnetic field has its own relaxation timescale and a driving function based on an individual best-fit combination of the solar wind and IMF parameters. The same method has been used previously for paraboloid model construction. This method is based on a priori information about the structure of the global magnetospheric current systems. Each current system is included as a separate block (module) in the magnetospheric model. As shown by spacecraft magnetometer data, three current systems are the main contributors to the external magnetospheric magnetic field: the magnetopause currents, the ring current and the tail current sheet. The paraboloid model is based on an analytical solution of the Laplace

  8. Single Top Production at Next-to-Leading Order in the Standard Model Effective Field Theory.

    PubMed

    Zhang, Cen

    2016-04-22

    Single top production processes at hadron colliders provide information on the relation between the top quark and the electroweak sector of the standard model. We compute the next-to-leading order QCD corrections to the three main production channels: t-channel, s-channel, and tW associated production, in the standard model including operators up to dimension six. The calculation can be matched to parton shower programs and can therefore be directly used in experimental analyses. The QCD corrections are found to significantly impact the extraction of the current limits on the operators, owing both to the improved accuracy and to the better precision of the theoretical predictions. In addition, the distributions of some of the key discriminating observables are modified in a nontrivial way, which could change the interpretation of measurements in terms of UV complete models.

  9. Casimir force in a Lorentz violating theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Mariana; Turan, Ismail

    2006-08-01

    We study the effects of the minimal extension of the standard model including Lorentz violation on the Casimir force between two parallel conducting plates in the vacuum. We provide explicit solutions for the electromagnetic field using scalar field analogy, for both the cases in which the Lorentz violating terms come from the CPT-even or CPT-odd terms. We also calculate the effects of the Lorentz violating terms for a fermion field between two parallel conducting plates and analyze the modifications of the Casimir force due to the modifications of the Dirac equation. In all cases under consideration, the standard formulas for the Casimir force are modified by either multiplicative or additive correction factors, the latter case exhibiting a different dependence on the distance between the plates.

  10. Standard-compliant real-time transmission of ECGs: harmonization of ISO/IEEE 11073-PHD and SCP-ECG.

    PubMed

    Trigo, Jesús D; Chiarugi, Franco; Alesanco, Alvaro; Martínez-Espronceda, Miguel; Chronaki, Catherine E; Escayola, Javier; Martínez, Ignacio; García, José

    2009-01-01

    Ambient assisted living and integrated care in an aging society is based on the vision of the lifelong Electronic Health Record calling for HealthCare Information Systems and medical device interoperability. For medical devices this aim can be achieved by the consistent implementation of harmonized international interoperability standards. The ISO/IEEE 11073 (x73) family of standards is a reference standard for medical device interoperability. In its Personal Health Device (PHD) version several devices have been included, but an ECG device specialization is not yet available. On the other hand, the SCP-ECG standard for short-term diagnostic ECGs (EN1064) has been recently approved as an international standard ISO/IEEE 11073-91064:2009. In this paper, the relationships between a proposed x73-PHD model for an ECG device and the fields of the SCP-ECG standard are investigated. A proof-of-concept implementation of the proposed x73-PHD ECG model is also presented, identifying open issues to be addressed by standards development for the wider interoperability adoption of x73-PHD standards.

  11. Determining if an axially rotated solenoid will induce a radial EMF

    NASA Astrophysics Data System (ADS)

    MacDermott, Dustin R.

    The nature of the electromagnetic field of an axially rotated solenoid or magnet is investigated. The investigations reviewed suggest the possibility of a radially emitted electric field produced either by axially rotated magnetic field lines or by a relativistic change in the charge of the electron. For a very long solenoid, a relativistic change in charge leaves no electric field inside while leaving an electric field outside. The concept of axially rotating magnetic field lines gives the opposite prediction. Both seem to contradict the standard model of induction, which gives no change in the electric field for a rotated solenoid or magnet. An experiment by Joseph B. Tate [48], [49] conducted in 1968 seemed to have measured a change in charge outside of a rotated solenoid. Another experiment, by Barnett [3] in 1912, reported measuring no electric field inside of a rotated solenoid. Further experimentation was deemed necessary; the method chosen to attempt detection of the radial E field or EMF induced by an axially rotating B field or a change in charge was a pair of concentric capacitor plates, one inside and the other outside an axially rotated solenoid. The solenoid was rotated on a lathe for the test. A concentric capacitor around an axially rotated permanent neodymium magnet was also used as a test. These experiments proved very challenging because of the small magnitude of the predicted effect. Nevertheless, the bulk of the evidence obtained indicates that no induced E field arises when a magnetic source is rotated about its magnetic axis, thus supporting the standard field model of electromagnetic induction, and casting doubt on the alternative theories of magnetic field line rotation or relativistic charge enhancement.

  12. Geometry of the scalar sector

    DOE PAGES

    Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.

    2016-08-17

    The S-matrix of a quantum field theory is unchanged by field redefinitions, and so it only depends on geometric quantities such as the curvature of field space. Whether the Higgs multiplet transforms linearly or non-linearly under electroweak symmetry is a subtle question, since one can make a coordinate change to convert a field that transforms linearly into one that transforms non-linearly. Renormalizability of the Standard Model (SM) does not depend on the choice of scalar fields or whether the scalar fields transform linearly or non-linearly under the gauge group, but only on the geometric requirement that the scalar field manifold M is flat. Standard Model Effective Field Theory (SMEFT) and Higgs Effective Field Theory (HEFT) have curved M, since they parametrize deviations from the flat SM case. We show that the HEFT Lagrangian can be written in SMEFT form if and only if M has an SU(2)_L × U(1)_Y invariant fixed point. Experimental observables in HEFT depend on local geometric invariants of M such as sectional curvatures, which are of order 1/Λ², where Λ is the EFT scale. We give explicit expressions for these quantities in terms of the structure constants for a general G → H symmetry breaking pattern. The one-loop radiative correction in HEFT is determined using a covariant expansion which preserves manifest invariance of M under coordinate redefinitions. The formula for the radiative correction is simple when written in terms of the curvature of M and the gauge curvature field strengths. We also extend the CCWZ formalism to non-compact groups, and generalize the HEFT curvature computation to the case of multiple singlet scalar fields.

  13. Standardized Full-Field Electroretinography in the Green Monkey (Chlorocebus sabaeus)

    PubMed Central

    Bouskila, Joseph; Javadi, Pasha; Palmour, Roberta M.; Bouchard, Jean-François; Ptito, Maurice

    2014-01-01

    Abstract Full-field electroretinography is an objective measure of retinal function, serving as an important diagnostic clinical tool in ophthalmology for evaluating the integrity of the retina. Given the similarity between the anatomy and physiology of the human and Green Monkey eyes, this species has increasingly become a favorable non-human primate model for assessing ocular defects in humans. To test this model, we obtained full-field electroretinographic recordings (ERG) and normal values for standard responses required by the International Society for Clinical Electrophysiology of Vision (ISCEV). Photopic and scotopic ERG recordings were obtained by full-field stimulation over a range of 6 log units of intensity in dark-adapted or light-adapted eyes of adult Green Monkeys (Chlorocebus sabaeus). Intensity, duration, and interval of light stimuli were varied separately. Reproducible values of amplitude and latency were obtained for the a- and b-waves, under well-controlled adaptation and stimulus conditions; the i-wave was also easily identifiable and separated from the a-b-wave complex in the photopic ERG. The recordings obtained in the healthy Green Monkey matched very well with those in humans and other non-human primate species (Macaca mulatta and Macaca fascicularis). These results validate the Green Monkey as an excellent non-human primate model, with potential to serve for testing retinal function following various manipulations such as visual deprivation or drug evaluation. PMID:25360686

  14. Random Interchange of Magnetic Connectivity

    NASA Astrophysics Data System (ADS)

    Matthaeus, W. H.; Ruffolo, D. J.; Servidio, S.; Wan, M.; Rappazzo, A. F.

    2015-12-01

    Magnetic connectivity, the connection between two points along a magnetic field line, has a stochastic character associated with field lines random walking in space due to magnetic fluctuations, but connectivity can also change in time due to dynamical activity [1]. For fluctuations transverse to a strong mean field, this connectivity change can be caused by stochastic interchange due to component reconnection. The process may be understood approximately by formulating a diffusion-like Fokker-Planck coefficient [2] that is asymptotically related to the standard field line random walk. Quantitative estimates are provided for transverse magnetic field models and anisotropic models such as reduced magnetohydrodynamics. In heliospheric applications, these estimates may be useful for understanding mixing between open and closed field line regions near coronal hole boundaries, and large latitude excursions of connectivity associated with turbulence. [1] A. F. Rappazzo, W. H. Matthaeus, D. Ruffolo, S. Servidio & M. Velli, ApJL, 758, L14 (2012) [2] D. Ruffolo & W. Matthaeus, ApJ, 806, 233 (2015)
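
    The diffusive limit of the field line random walk invoked above can be illustrated with a toy Monte Carlo (a generic sketch, not the authors' model; all parameter values are invented): transverse displacements accumulate so that the variance grows linearly with distance along the mean field, <x²> = 2 D z.

```python
import random

def field_line_random_walk(n_lines=5000, n_steps=200, dz=1.0, d_coeff=0.5, seed=1):
    """Monte Carlo sketch of diffusive field line random walk transverse to a
    strong mean field: each step dz along the field displaces x by a Gaussian
    kick with <dx^2> = 2 D dz, so the ensemble variance obeys <x^2> = 2 D z.
    Returns the diffusion coefficient recovered from the final variance."""
    sigma = (2.0 * d_coeff * dz) ** 0.5
    rng = random.Random(seed)
    xs = [0.0] * n_lines
    for _ in range(n_steps):
        for i in range(n_lines):
            xs[i] += rng.gauss(0.0, sigma)
    z = n_steps * dz
    var = sum(x * x for x in xs) / n_lines
    return var / (2.0 * z)

print(field_line_random_walk())   # statistically close to the input D = 0.5
```

A Fokker-Planck description replaces this ensemble of discrete walkers with a diffusion equation for the probability density of transverse displacement, with the same coefficient D.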

  15. Unified TeV scale picture of baryogenesis and dark matter.

    PubMed

    Babu, K S; Mohapatra, R N; Nasri, Salah

    2007-04-20

    We present a simple extension of the minimal supersymmetric standard model which provides a unified picture of cosmological baryon asymmetry and dark matter. Our model introduces a gauge singlet field N and a color triplet field X which couple to the right-handed quark fields. The out-of-equilibrium decay of the Majorana fermion N mediated by the exchange of the scalar field X generates adequate baryon asymmetry for M_N ~ 100 GeV and M_X ~ TeV. The scalar partner of N (denoted N1) is naturally the lightest SUSY particle, as it has no gauge interactions, and plays the role of dark matter. The model is experimentally testable in (i) neutron-antineutron oscillations with a transition time estimated to be around 10^10 sec, (ii) discovery of colored particles X at the LHC with mass of order TeV, and (iii) direct dark matter detection with a predicted cross section in the observable range.

  16. 3D-quantitative structure-activity relationship studies on benzothiadiazepine hydroxamates as inhibitors of tumor necrosis factor-alpha converting enzyme.

    PubMed

    Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram

    2008-04-01

    A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity was used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, database, and docked conformer-based alignments. Removal of two outliers from the initial training set of molecules improved the predictivity of the models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r^2 (q^2) = 0.510, non-cross-validated r^2 = 0.972, standard error of estimate (s) = 0.098, and F = 215.44, and the optimal comparative molecular similarity indices model with cross-validated r^2 (q^2) = 0.556, non-cross-validated r^2 = 0.946, standard error of estimate (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds, with predictive r^2 values of 0.460 and 0.535, respectively. The contour maps obtained from the 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of the comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
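
    The cross-validated r² (q²) quoted above is computed by leave-one-out prediction: q² = 1 - PRESS / SS_total. A minimal sketch with a single invented descriptor and ordinary least squares (the actual CoMFA/CoMSIA models use thousands of field descriptors with PLS regression):

```python
def loo_q2(xs, ys):
    """Leave-one-out cross-validated r^2 (q^2) for simple linear regression:
    each point is predicted from a model fit on the remaining points, and
    q^2 = 1 - PRESS / total sum of squares about the mean."""
    n = len(xs)
    press = 0.0
    for i in range(n):
        tx = [x for j, x in enumerate(xs) if j != i]   # training descriptors
        ty = [y for j, y in enumerate(ys) if j != i]   # training activities
        mx, my = sum(tx) / (n - 1), sum(ty) / (n - 1)
        slope = (sum((x - mx) * (y - my) for x, y in zip(tx, ty))
                 / sum((x - mx) ** 2 for x in tx))
        pred = my + slope * (xs[i] - mx)               # predict the held-out point
        press += (ys[i] - pred) ** 2
    mean_y = sum(ys) / n
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - press / ss_tot

# Hypothetical activities vs a single descriptor
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1, 5.9]
print(round(loo_q2(x, y), 3))   # high q^2 for nearly linear data
```

Because each prediction excludes the point being predicted, q² is always below the non-cross-validated r² and is the standard internal check against overfitting in QSAR work.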

  17. Center-to-limb observations and modeling of the Ca I 4227 Å line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supriya, H. D.; Smitha, H. N.; Nagendra, K. N.

    2014-09-20

    The observed center-to-limb variation (CLV) of the scattering polarization in different lines of the Second Solar Spectrum can be used to constrain the height variation of various atmospheric parameters, in particular the magnetic fields, via the Hanle effect. Here we attempt to model the nonmagnetic CLV observations of the Q/I profiles of the Ca I 4227 Å line recorded with the Zurich Imaging Polarimeter-3 at IRSOL. For modeling, we use polarized radiative transfer with partial frequency redistribution with a number of realistic one-dimensional (1D) model atmospheres. We find that all the standard Fontenla-Avrett-Loeser (FAL) model atmospheres which we used fail to simultaneously fit the observed (I, Q/I) at all the limb distances (μ). However, an attempt is made to find a single model which can provide a fit to at least the CLV of the observed Q/I instead of a simultaneous fit to the (I, Q/I) at all μ. To this end we construct a new 1D model by combining two of the standard models after modifying their temperature structures in the appropriate height ranges. This new combined model closely reproduces the observed Q/I at all μ but fails to reproduce the observed rest intensity at different μ. Hence we find that no single 1D model atmosphere succeeds in providing a good representation of the real Sun. This failure of 1D models does not, however, cause an impediment to the magnetic field diagnostic potential of the Ca I 4227 Å line. To demonstrate this we deduce the field strength at various μ positions without invoking the use of radiative transfer.

  18. Combined measurement and modeling of the hydrological impact of hydraulic redistribution using CLM4.5 at eight AmeriFlux sites

    USDA-ARS?s Scientific Manuscript database

    Effects of hydraulic redistribution (HR) on hydrological, biogeochemical, and ecological processes have been demonstrated in the field, but the current generation of standard earth system models does not include a representation of HR. Though recent studies have examined the effect of incorporating ...

  19. Closing the Gap. SREB Program Blends Academic Standards, Vocational Courses.

    ERIC Educational Resources Information Center

    Bottoms, Gene

    1992-01-01

    Southern Regional Education Board's State Vocational Education Consortium developed a model for integrating vocational and academic education that includes at least three credits each in math and science; four English courses; and four credits in a vocational major and two in related fields. Eight sites implementing the model have narrowed gap…

  20. Connecting to the Community: A Model for Caregiver-Teacher Conference Instruction

    ERIC Educational Resources Information Center

    Maher, Michael J.; Reiman, Alan J.

    2009-01-01

    Professionals throughout the field of education agree on the importance of teacher-caregiver communication. Yet teacher education programs still do very little to prepare future teachers for these interactions. This exploratory study investigated the use of a standardized caregiver model, with 12 teacher education students involved in a simulated…

  1. Asymmetric kinetic equilibria: Generalization of the BAS model for rotating magnetic profile and non-zero electric field

    NASA Astrophysics Data System (ADS)

    Dorville, Nicolas; Belmont, Gérard; Aunai, Nicolas; Dargent, Jérémy; Rezeau, Laurence

    2015-09-01

    Finding kinetic equilibria for collisionless tangential current layers is a key issue, both for their theoretical modeling and for our understanding of the processes that disturb them, such as tearing or Kelvin-Helmholtz instabilities. The famous Harris equilibrium [E. Harris, Il Nuovo Cimento Ser. 10 23, 115-121 (1962)] assumes drifting Maxwellian distributions for ions and electrons, with constant temperatures and flow velocities; these assumptions lead to symmetric layers surrounded by vacuum. This very particular kind of layer is not suited to the general case: asymmetric boundaries between two media with different plasmas and different magnetic fields. The standard method for constructing more general kinetic equilibria consists in using Jeans theorem, which says that any function depending only on the Hamiltonian constants of motion is a solution to the steady Vlasov equation [P. J. Channell, Phys. Fluids (1958-1988) 19, 1541 (1976); M. Roth et al., Space Sci. Rev. 76, 251-317 (1996); and F. Mottez, Phys. Plasmas 10, 1541-1545 (2003)]. The inverse implication is, however, not true: when using the motion invariants as variables instead of the velocity components, the general stationary particle distributions keep depending explicitly on the position, in addition to the implicit dependence introduced by these invariants. The standard approach therefore strongly restricts the class of solutions to the problem and probably does not select the most physically reasonable ones. The BAS (Belmont-Aunai-Smets) model [G. Belmont et al., Phys. Plasmas 19, 022108 (2012)] used for the first time the concept of particle accessibility to find new solutions: considering the case of a coplanar-antiparallel magnetic field configuration without electric field, asymmetric solutions could be found while the standard method can only lead to symmetric ones. These solutions were validated in a hybrid simulation [N. Aunai et al., Phys. Plasmas (1994-present) 20, 110702 (2013)], and more recently in a fully kinetic simulation as well [J. Dargent and N. Aunai, Phys. Plasmas (submitted)]. Nevertheless, in most asymmetric layers like the terrestrial magnetopause, one would indeed expect a magnetic field rotation from one direction to another without going through zero [J. Berchem and C. T. Russell, J. Geophys. Res. 87, 8139-8148 (1982)], and a non-zero normal electric field. In this paper, we propose the corresponding generalization: in the model presented, the profiles can be freely imposed for the magnetic field rotation (although restricted to a 180° rotation hitherto) and for the normal electric field. As was done previously, the equilibrium is tested with a hybrid simulation.

  2. The effect of an offset polar cap dipolar magnetic field on the modeling of the Vela pulsar’s γ-ray light curves

    PubMed Central

    Barnard, M.; Venter, C.; Harding, A. K.

    2018-01-01

    We performed geometric pulsar light curve modeling using static, retarded vacuum, and offset polar cap (PC) dipole B-fields (the latter is characterized by a parameter ε), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole B-field mimics deviations from the static dipole (which corresponds to ε = 0). In addition to constant-emissivity geometric models, we also considered a slot gap (SG) E-field associated with the offset-PC dipole B-field and found that its inclusion leads to qualitatively different light curves. Solving the particle transport equation shows that the particle energy only becomes large enough to yield significant curvature radiation at large altitudes above the stellar surface, given this relatively low E-field. Therefore, particles do not always attain the radiation-reaction limit. Our overall optimal light curve fit is for the retarded vacuum dipole field and OG model, at an inclination angle α = 78° (+1°/−1°) and observer angle ζ = 69° (+2°/−1°). For this B-field, the TPC model is statistically disfavored compared to the OG model. For the static dipole field, neither model is significantly preferred. We found that smaller values of ε are favored for the offset-PC dipole field when assuming constant emissivity, and larger ε values favored for variable emissivity, but not significantly so. When multiplying the SG E-field by a factor of 100, we found improved light curve fits, with α and ζ being closer to best fits from independent studies, as well as curvature radiation reaction at lower altitudes. PMID:29681648

  3. The Effect of an Offset Polar Cap Dipolar Magnetic Field on the Modeling of the Vela Pulsar's Gamma-Ray Light Curves

    NASA Technical Reports Server (NTRS)

    Barnard, M.; Venter, C.; Harding, A. K.

    2016-01-01

    We performed geometric pulsar light curve modeling using static, retarded vacuum, and offset polar cap (PC) dipole B-fields (the latter is characterized by a parameter epsilon), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole B-field mimics deviations from the static dipole (which corresponds to epsilon equals 0). In addition to constant-emissivity geometric models, we also considered a slot gap (SG) E-field associated with the offset-PC dipole B-field and found that its inclusion leads to qualitatively different light curves. Solving the particle transport equation shows that the particle energy only becomes large enough to yield significant curvature radiation at large altitudes above the stellar surface, given this relatively low E-field. Therefore, particles do not always attain the radiation-reaction limit. Our overall optimal light curve fit is for the retarded vacuum dipole field and OG model, at an inclination angle alpha equals 78 plus or minus 1 degree and observer angle zeta equals 69 plus 2 degrees or minus 1 degree. For this B-field, the TPC model is statistically disfavored compared to the OG model. For the static dipole field, neither model is significantly preferred. We found that smaller values of epsilon are favored for the offset-PC dipole field when assuming constant emissivity, and larger epsilon values favored for variable emissivity, but not significantly so. When multiplying the SG E-field by a factor of 100, we found improved light curve fits, with alpha and zeta being closer to best fits from independent studies, as well as curvature radiation reaction at lower altitudes.

  4. Merging for Particle-Mesh Complex Particle Kinetic Modeling of the Multiple Plasma Beams

    NASA Technical Reports Server (NTRS)

    Lipatov, Alexander S.

    2011-01-01

    We suggest a merging procedure for the Particle-Mesh Complex Particle Kinetic (PMCPK) method in the case of inter-penetrating flows (multiple plasma beams). We examine the standard particle-in-cell (PIC) and the PMCPK methods in the case of particle acceleration by shock surfing for a wide range of the control numerical parameters. The plasma dynamics is described by a hybrid (particle-ion-fluid-electron) model. Note that a mesh may be needed when the electromagnetic field must be computed self-consistently; our calculations instead use specified, time-independent electromagnetic fields for the shock, rather than self-consistently generated fields. While the particle-mesh method is a well-verified approach, the CPK method seems to be a good approach for multiscale modeling that includes multiple regions with various particle/fluid plasma behavior. However, the CPK method still needs verification for studying basic plasma phenomena: particle heating and acceleration by collisionless shocks, magnetic field reconnection, beam dynamics, etc.

  5. Transcranial Magnetic Stimulation: An Automated Procedure to Obtain Coil-specific Models for Field Calculations.

    PubMed

    Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel

    2015-01-01

    Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. Our aim was to develop an approach to reconstruct the magnetic vector potential from automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well with that calculated using a model reconstructed from X-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Quantizing the electromagnetic field near two-sided semitransparent mirrors

    NASA Astrophysics Data System (ADS)

    Furtak-Wells, Nicholas; Clark, Lewis A.; Purdy, Robert; Beige, Almut

    2018-04-01

    This paper models light scattering through flat surfaces with finite transmission, reflection, and absorption rates, with wave packets approaching the mirror from both sides. While using the same notion of photons as in free space, our model also accounts for the presence of mirror images and the possible exchange of energy between the electromagnetic field and the mirror surface. To test our model, we derive the spontaneous decay rate and the level shift of an atom in front of a semitransparent mirror as a function of its transmission and reflection rates. When considering limiting cases and using standard approximations, our approach reproduces well-known results but it also paves the way for the modeling of more complex scenarios.

  7. Time-varying q-deformed dark energy interacts with dark matter

    NASA Astrophysics Data System (ADS)

    Dil, Emre; Kolay, Erdinç

    We propose a new model for studying the dark constituents of the universe by regarding the dark energy as a q-deformed scalar field interacting with the dark matter, in the framework of standard general relativity. Here we assume that the number of particles in each mode of the q-deformed scalar field varies in time through particle creation and annihilation. We first describe the q-deformed scalar field dark energy quantum-field theoretically, then construct the action and the dynamical structure of these interacting dark sectors, in order to study the dynamics of the model. We perform a phase space analysis of the model to confirm and interpret our proposal by searching for stable attractor solutions implying the late-time accelerating phase of the universe. We then obtain the result that when the interaction and the equation-of-state parameter of the dark matter evolve from their present-day values to a particular value, the dark energy turns out to be a q-deformed scalar field.

  8. Probing the magnetic field structure in Sgr A* on Black Hole Horizon Scales with Polarized Radiative Transfer Simulations

    NASA Astrophysics Data System (ADS)

    Gold, Roman; McKinney, Jonathan; Johnson, Michael; Doeleman, Sheperd; Event Horizon Telescope Collaboration

    2016-03-01

    Accreting black holes (BHs) are at the core of relativistic astrophysics as messengers of the strong-field regime of General Relativity and prime targets of several observational campaigns, including imaging the black hole shadow in Sgr A* and M87 with the Event Horizon Telescope. I will present results from general-relativistic, polarized radiative transfer models for the inner accretion flow in Sgr A*. The models use time-dependent, global GRMHD simulations of hot accretion flows, including standard-and-normal-evolution (SANE) and magnetically arrested disk (MAD) configurations. I present comparisons of these synthetic data sets to the most recent observations with the Event Horizon Telescope and show how the data distinguish the models and probe the magnetic field structure.

  9. Non-destructive evaluation of chlorophyll content in quinoa and amaranth leaves by simple and multiple regression analysis of RGB image components.

    PubMed

    Riccardi, M; Mele, G; Pulvento, C; Lavini, A; d'Andria, R; Jacobsen, S-E

    2014-06-01

    Leaf chlorophyll content provides valuable information about the physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination, but this measurement is expensive, laborious, and time consuming. Over the years, rapid and non-destructive alternative methods have been explored. The aim of this work was to evaluate the applicability of a fast and non-invasive field method for estimating chlorophyll content in quinoa and amaranth leaves, based on analysis of the RGB components of digital images acquired with a standard SLR camera. Digital images of leaves from different genotypes of quinoa and amaranth were acquired directly in the field. Mean values of each RGB component were evaluated via image analysis software and correlated with leaf chlorophyll content determined by the standard laboratory procedure. Single and multiple regression models using the RGB color components as independent variables were tested and validated. The performance of the proposed method was compared to that of the widely used non-destructive SPAD method. The sensitivity of the best regression models for different genotypes of quinoa and amaranth was also checked. Color data acquisition of the leaves in the field with a digital camera was quicker, more effective, and cheaper than SPAD. The proposed RGB models provided better correlation (highest R²) and prediction (lowest RMSEP) of the true foliar chlorophyll content, and had less noise across the whole range of chlorophyll studied, compared with SPAD and other leaf-image-processing-based models when applied to quinoa and amaranth.
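
    As a hedged sketch of the modeling step described above (the function names and the synthetic calibration data are illustrative, not the study's), a multiple linear regression of chlorophyll content on the mean R, G, and B components can be fitted by ordinary least squares:

```python
import numpy as np

def fit_rgb_model(rgb_means, chlorophyll):
    """Fit chlorophyll ~ b0 + b1*R + b2*G + b3*B by least squares.

    rgb_means: (n_leaves, 3) array of per-leaf mean R, G, B values;
    chlorophyll: (n_leaves,) reference values from wet-chemistry analysis.
    """
    X = np.column_stack([np.ones(len(rgb_means)), rgb_means])
    coef, *_ = np.linalg.lstsq(X, chlorophyll, rcond=None)
    return coef

def predict_chlorophyll(coef, rgb_means):
    """Apply the fitted coefficients to new per-leaf RGB means."""
    X = np.column_stack([np.ones(len(rgb_means)), rgb_means])
    return X @ coef
```

    In the study, such models were validated against the wet-chemistry reference values and benchmarked against SPAD readings via R² and RMSEP.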

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cornejo, Juan Carlos

    The Standard Model has been the most successful theory describing the fundamental interactions of particles. As of the writing of this dissertation, the Standard Model has not been shown to make a false prediction. However, the limitations of the Standard Model have long been suspected from its lack of a description of gravity or dark matter. Its largest challenge to date has been the observation of neutrino oscillations, with the implication that neutrinos are not massless, contrary to the Standard Model requirement. The growing consensus is that the Standard Model is simply a lower-energy effective field theory, and that new physics lies at much higher energies. The Qweak experiment is testing the electroweak theory of the Standard Model by making a precise determination of the weak charge of the proton (Q_W^p). Any signs of "new physics" will appear as a deviation from the Standard Model prediction. The weak charge is determined via a precise measurement of the parity-violating asymmetry in elastic scattering of a longitudinally polarized electron beam off an unpolarized proton target. The experiment required that the electron beam polarization be measured to an absolute uncertainty of 1%. At this level the electron beam polarization was projected to contribute the single largest experimental uncertainty to the parity-violating asymmetry measurement. This dissertation details the use of Compton scattering to determine the electron beam polarization via the detection of the scattered photon. I conclude the dissertation with an independent analysis of the blinded Qweak data.

  11. Dual learning processes underlying human decision-making in reversal learning tasks: functional significance and evidence from the model fit to human behavior

    PubMed Central

    Bai, Yu; Katahira, Kentaro; Ohira, Hideki

    2014-01-01

    Humans are capable of correcting their actions based on actions performed in the past, and this ability enables them to adapt to a changing environment. The computational field of reinforcement learning (RL) has provided a powerful explanation for understanding such processes. Recently, the dual learning system, modeled as a hybrid model that incorporates value updating based on reward-prediction error and learning-rate modulation based on a surprise signal, has gained attention as a model for explaining various neural signals. However, the functional significance of the hybrid model has not been established. In the present study, we used computer simulations to address the functional significance of the hybrid model in a probabilistic reversal learning task. The hybrid model was found to perform better than the standard RL model across a wide range of parameter settings. These results suggest that the hybrid model is more robust against the mistuning of parameters than the standard RL model when decision-makers continue to learn stimulus-reward contingencies, which can change abruptly. The parameter-fitting results also indicated that the hybrid model fit better than the standard RL model for more than 50% of the participants, which suggests that the hybrid model has more explanatory power for the behavioral data than the standard RL model. PMID:25161635
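
    The hybrid model described above couples a standard reward-prediction-error value update with a learning rate modulated by a surprise (associability) signal. A minimal sketch of this idea, with illustrative parameter names and values rather than the paper's fitted ones:

```python
def hybrid_update(value, reward, assoc, eta=0.3, kappa=1.0):
    """One trial of a surprise-modulated ("hybrid") RL update.

    value:  current value estimate of the chosen stimulus
    assoc:  associability (surprise-driven learning-rate factor), in [0, 1]
    eta:    how quickly associability tracks recent surprise
    kappa:  overall learning-rate scale
    """
    delta = reward - value                 # reward-prediction error
    value = value + kappa * assoc * delta  # value update, gated by associability
    assoc = assoc + eta * (abs(delta) - assoc)  # surprise raises the learning rate
    return value, assoc
```

    Against a stable reward the value estimate converges while the associability decays; after a reversal, large prediction errors transiently raise the learning rate again, which is the kind of behavior that can make such a model robust to abrupt changes in stimulus-reward contingencies.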

  12. Simulating the electrohydrodynamics of a viscous droplet

    NASA Astrophysics Data System (ADS)

    Theillard, Maxime; Saintillan, David

    2016-11-01

    We present a novel numerical approach for the simulation of a viscous drop placed in an electric field in two and three spatial dimensions. Our method is constructed as a stable projection method on Quad/Octree grids. Using a modified pressure correction, we alleviate the standard time-step restriction incurred by capillary forces. In weak electric fields, our results match remarkably well with the predictions of the Taylor-Melcher leaky dielectric model. In strong electric fields, the so-called Quincke rotation is correctly reproduced.

  13. Gamma-Ray Bursts and Fast Transients. Multi-wavelength Observations and Multi-messenger Signals

    NASA Astrophysics Data System (ADS)

    Willingale, R.; Mészáros, P.

    2017-07-01

    The current status of observations and theoretical models of gamma-ray bursts and some other related transients, including ultra-long bursts and tidal disruption events, is reviewed. We consider the impact of multi-wavelength data on the formulation and development of theoretical models for the prompt and afterglow emission including the standard fireball model utilizing internal shocks and external shocks, photospheric emission, the role of the magnetic field and hadronic processes. In addition, we discuss some of the prospects for non-photonic multi-messenger detection and for future instrumentation, and comment on some of the outstanding issues in the field.

  14. New Physics Beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Cai, Haiying

    In this thesis we discuss several extensions of the Standard Model, with an emphasis on the hierarchy problem. The hierarchy problem related to the Higgs boson mass is a strong indication of new physics beyond the Standard Model. In the literature, several mechanisms, e.g., supersymmetry (SUSY), the little Higgs, and extra dimensions, have been proposed to explain why the Higgs mass can be stabilized at the electroweak scale. In the Standard Model, the largest quadratically divergent contribution to the Higgs mass-squared comes from the top quark loop. We consider a few novel possibilities for how this contribution is cancelled. In the standard SUSY scenario, the quadratic divergence from the fermion loops is cancelled by the scalar superpartners, and the SUSY breaking scale determines the masses of the scalars. We propose a new SUSY model in which the superpartner of the top quark is spin-1 rather than spin-0. In little Higgs theories, the Higgs field is realized as a pseudo-Goldstone boson in a nonlinear sigma model, and the smallness of its mass is protected by the global symmetry. As a variation, we put the little Higgs into an extra-dimensional model where the quadratically divergent top loop contribution to the Higgs mass is cancelled by an uncolored heavy "top quirk" charged under a different SU(3) gauge group. Finally, we consider a supersymmetric warped extra-dimensional model where the superpartners have continuum mass spectra. We use the holographic boundary action to study how a mass gap can arise to separate the zero modes from the continuum modes. Such extensions of the Standard Model have novel signatures at the Large Hadron Collider.

  15. Constraining possible variations of the fine structure constant in strong gravitational fields with the Kα iron line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bambi, Cosimo, E-mail: bambi@fudan.edu.cn

    2014-03-01

    In extensions of general relativity and in theories aiming at unifying gravity with the forces of the Standard Model, the value of the "fundamental constants" is often determined by the vacuum expectation value of new fields, which may thus change in different backgrounds. Variations of fundamental constants with respect to the values measured today in laboratories on Earth are expected to be more evident on cosmological timescales and/or in strong gravitational fields. In this paper, I show that the analysis of the Kα iron line observed in the X-ray spectrum of black holes can potentially be used to probe the fine structure constant α in gravitational potentials relative to Earth of Δφ ≈ 0.1. At present, systematic effects not fully under control prevent robust and stringent bounds on possible variations of the value of α from being obtained with this technique, but the fact that current data can be fitted with models based on standard physics already rules out variations of the fine structure constant larger than some percent.

  16. Magnetic monopoles in field theory and cosmology.

    PubMed

    Rajantie, Arttu

    2012-12-28

    The existence of magnetic monopoles is predicted by many theories of particle physics beyond the standard model. However, in spite of extensive searches, there is no experimental or observational sign of them. I review the role of magnetic monopoles in quantum field theory and discuss their implications for particle physics and cosmology. I also highlight their differences and similarities with monopoles found in frustrated magnetic systems.

  17. Combined measurement and modeling of the hydrological impact of hydraulic redistribution using CLM4.5 at eight AmeriFlux sites

    Treesearch

    Congsheng Fu; Guiling Wang; Michael L. Goulden; Russell L. Scott; Kenneth Bible; Zoe G. Cardon

    2016-01-01

    Effects of hydraulic redistribution (HR) on hydrological, biogeochemical, and ecological processes have been demonstrated in the field, but the current generation of standard earth system models does not include a representation of HR. Though recent studies have examined the effect of incorporating HR into land surface models, few (if any) have done cross-site...

  18. Closed inflationary universe in patch cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campo, Sergio del; Herrera, Ramon; Saavedra, Joel

    2009-09-15

    In this paper, we study closed inflationary universe models on the Gauss-Bonnet brane. We determine and characterize the existence of a universe with Ω > 1, with an appropriate period of inflation. We find that this model is less restrictive than the standard approach, where a scalar field is considered. We use recent astronomical observations to constrain the parameters appearing in the model.

  19. Particles, Feynman Diagrams and All That

    ERIC Educational Resources Information Center

    Daniel, Michael

    2006-01-01

    Quantum fields are introduced in order to give students an accurate qualitative understanding of the origin of Feynman diagrams as representations of particle interactions. Elementary diagrams are combined to produce diagrams representing the main features of the Standard Model.

  20. Origins of inert Higgs doublets

    DOE PAGES

    Kephart, Thomas W.; Yuan, Tzu -Chiang

    2016-03-24

    Here, we consider beyond-the-standard-model embeddings of inert Higgs doublet fields. We argue that inert Higgs doublets can arise naturally in grand unified theories, where the necessary associated Z_2 symmetry can occur automatically. Several examples are discussed.

  1. New constraints on dark matter effective theories from standard model loops.

    PubMed

    Crivellin, Andreas; D'Eramo, Francesco; Procura, Massimiliano

    2014-05-16

    We consider an effective field theory for a gauge singlet Dirac dark matter particle interacting with the standard model fields via effective operators suppressed by the scale Λ ≳ 1 TeV. We perform a systematic analysis of the leading loop contributions to spin-independent Dirac dark matter-nucleon scattering using renormalization group evolution between Λ and the low-energy scale probed by direct detection experiments. We find that electroweak interactions induce operator mixings such that operators that are naively velocity suppressed and spin dependent can actually contribute to spin-independent scattering. This allows us to put novel constraints on Wilson coefficients that were so far poorly bounded by direct detection. Constraints from current searches are already significantly stronger than LHC bounds, and will improve in the near future. Interestingly, the loop contribution we find is isospin violating even if the underlying theory is isospin conserving.

  2. Postinflationary Higgs relaxation and the origin of matter-antimatter asymmetry.

    PubMed

    Kusenko, Alexander; Pearce, Lauren; Yang, Louis

    2015-02-13

    The recent measurement of the Higgs boson mass implies a relatively slow rise of the standard model Higgs potential at large scales, and a possible second minimum at even larger scales. Consequently, the Higgs field may develop a large vacuum expectation value during inflation. The relaxation of the Higgs field from its large postinflationary value to the minimum of the effective potential represents an important stage in the evolution of the Universe. During this epoch, the time-dependent Higgs condensate can create an effective chemical potential for the lepton number, leading to a generation of the lepton asymmetry in the presence of some large right-handed Majorana neutrino masses. The electroweak sphalerons redistribute this asymmetry between leptons and baryons. This Higgs relaxation leptogenesis can explain the observed matter-antimatter asymmetry of the Universe even if the standard model is valid up to the scale of inflation, and any new physics is suppressed by that high scale.

  3. Gluon-fusion Higgs production in the Standard Model Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Deutschmann, Nicolas; Duhr, Claude; Maltoni, Fabio; Vryonidou, Eleni

    2017-12-01

    We provide the complete set of predictions needed to achieve NLO accuracy in the Standard Model Effective Field Theory at dimension six for Higgs production in gluon fusion. In particular, we compute for the first time the contribution of the chromomagnetic operator Q̄_L Φ σ q_R G at NLO in QCD, which entails two-loop virtual and one-loop real contributions, as well as renormalisation and mixing with the Yukawa operator Φ†Φ Q̄_L Φ q_R and the gluon-fusion operator Φ†Φ GG. Focusing on the top-quark-Higgs couplings, we consider the phenomenological impact of the NLO corrections in constraining the three relevant operators by implementing the results into the MadGraph5_aMC@NLO framework. This allows us to compute total cross sections as well as to perform event generation at NLO that can be directly employed in experimental analyses.

  4. Thin Interface Asymptotics for an Energy/Entropy Approach to Phase-Field Models with Unequal Conductivities

    NASA Technical Reports Server (NTRS)

    McFadden, G. B.; Wheeler, A. A.; Anderson, D. M.

    1999-01-01

    Karma and Rappel recently developed a new sharp-interface asymptotic analysis of the phase-field equations that is especially appropriate for modeling dendritic growth at low undercoolings. Their approach relieves a stringent restriction on the interface thickness that applies in the conventional asymptotic analysis, and has the added advantage that interfacial kinetic effects can also be eliminated. However, their analysis focussed on the case of equal thermal conductivities in the solid and liquid phases; when applied to a standard phase-field model with unequal conductivities, anomalous terms arise in the limiting forms of the boundary conditions for the interfacial temperature that are not present in conventional sharp-interface solidification models, as discussed further by Almgren. In this paper we apply their asymptotic methodology to a generalized phase-field model which is derived using a thermodynamically consistent approach that is based on independent entropy and internal energy gradient functionals that include double wells in both the entropy and internal energy densities. The additional degrees of freedom associated with the generalized phase-field equations can be chosen to eliminate the anomalous terms that arise for unequal conductivities.

  5. TMFF-A Two-Bead Multipole Force Field for Coarse-Grained Molecular Dynamics Simulation of Protein.

    PubMed

    Li, Min; Liu, Fengjiao; Zhang, John Z H

    2016-12-13

    Coarse-grained (CG) models are desirable for studying large and complex biological systems. In this paper, we propose a new two-bead multipole force field (TMFF) in which electric multipoles up to the quadrupole are included in the CG force field. The inclusion of electric multipoles enables a more realistic description of the anisotropic electrostatic interactions in the protein system and thus provides an improvement over standard isotropic two-bead CG models. To test the accuracy of the new CG force field, extensive molecular dynamics simulations were carried out for a series of benchmark protein systems. These simulations showed that the TMFF model can realistically reproduce the structural and dynamical properties of proteins, as demonstrated by the close agreement of the CG results with those from the corresponding all-atom simulations in terms of root-mean-square deviations (RMSDs) and root-mean-square fluctuations (RMSFs) of the protein backbones. The current two-bead model is highly coarse-grained and is 50-fold more efficient than the all-atom method in MD simulations of proteins in explicit water.
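
    The RMSD and RMSF metrics used above to compare coarse-grained and all-atom trajectories can be computed as follows (a generic sketch assuming pre-aligned coordinate arrays, not the TMFF implementation itself):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two aligned (N, 3) coordinate sets."""
    return np.sqrt(np.mean(np.sum((coords_a - coords_b) ** 2, axis=1)))

def rmsf(trajectory):
    """Per-particle root-mean-square fluctuation over a (T, N, 3) trajectory,
    measured about each particle's mean position."""
    mean_pos = trajectory.mean(axis=0)
    return np.sqrt(np.mean(np.sum((trajectory - mean_pos) ** 2, axis=2), axis=0))
```

    In practice each frame is first superposed onto a reference structure to remove overall translation and rotation before either metric is evaluated.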

  6. Modeling of a Metal-Ferroelectric-Semiconductor Field-Effect Transistor NAND Gate

    NASA Technical Reports Server (NTRS)

    Phillips, Thomas A.; MacLeod, Todd C.; Ho, Fat Duen

    2005-01-01

    Considerable research has been performed by several organizations in the use of the Metal-Ferroelectric-Semiconductor Field-Effect Transistors (MFSFET) in memory circuits. However, research has been limited in expanding the use of the MFSFET to other electronic circuits. This research project investigates the modeling of a NAND gate constructed from MFSFETs. The NAND gate is one of the fundamental building blocks of digital electronic circuits. The first step in forming a NAND gate is to develop an inverter circuit. The inverter circuit was modeled similarly to a standard CMOS inverter. An n-channel MFSFET with positive polarization was used for the n-channel transistor, and an n-channel MFSFET with negative polarization was used for the p-channel transistor. The MFSFETs were simulated by using a previously developed current model which utilized a partitioned ferroelectric layer. The inverter voltage transfer curve was obtained over a standard input of zero to five volts. Then a 2-input NAND gate was modeled similarly to the inverter circuit. Voltage transfer curves were obtained for the NAND gate for various configurations of input voltages. The resulting data show that it is feasible to construct a NAND gate with MFSFET transistors.

  7. The extratropical 40-day oscillation in the UCLA general circulation model. Part 1: Atmospheric angular momentum

    NASA Technical Reports Server (NTRS)

    Marcus, S. L.; Ghil, M.; Dickey, J. O.

    1994-01-01

    Variations in atmospheric angular momentum (AAM) are examined in a three-year simulation of the large-scale atmosphere with perpetual January forcing. The simulation is performed with a version of the University of California at Los Angeles (UCLA) general circulation model that contains no tropical Madden-Julian Oscillation (MJO). In addition, the results of three shorter experiments with no topography are analyzed. The three-year standard topography run contains no significant intraseasonal AAM periodicity in the tropics, consistent with the lack of the MJO, but produces a robust, 42-day AAM oscillation in the Northern Hemisphere (NH) extratropics. The model tropics undergoes a barotropic, zonally symmetric oscillation, driven by an exchange of mass with the NH extratropics. No intraseasonal periodicity is found in the average tropical latent heating field, indicating that the model oscillation is dynamically rather than thermodynamically driven. The no-mountain runs fail to produce an intraseasonal AAM oscillation, consistent with a topographic origin for the NH extratropical oscillation in the standard model. The spatial patterns of the oscillation in the 500-mb height field, and the relationship of the extratropical oscillation to intraseasonal variations in the tropics, will be discussed in Part 2 of this study.

  8. Enhanced propagation modeling of directional aviation noise: A hybrid parabolic equation-fast field program method

    NASA Astrophysics Data System (ADS)

    Rosenbaum, Joyce E.

    2011-12-01

    Commercial air traffic is anticipated to increase rapidly in the coming years. The impact of aviation noise on communities surrounding airports is, therefore, a growing concern. Accurate prediction of noise can help to mitigate the impact on communities and foster smoother integration of aerospace engineering advances. The problem of accurate sound level prediction requires careful inclusion of all mechanisms that affect propagation, in addition to correct source characterization. Terrain, ground type, meteorological effects, and source directivity can have a substantial influence on the noise level. Because they are difficult to model, these effects are often included only by rough approximation. This dissertation presents a model designed for sound propagation over uneven terrain, with mixed ground type and realistic meteorological conditions. The model is a hybrid of two numerical techniques: the parabolic equation (PE) and fast field program (FFP) methods, which allow for physics-based inclusion of propagation effects and ensure the low frequency content, a factor in community impact, is predicted accurately. Extension of the hybrid model to a pseudo-three-dimensional representation allows it to produce aviation noise contour maps in the standard form. In order for the model to correctly characterize aviation noise sources, a method of representing arbitrary source directivity patterns was developed for the unique form of the parabolic equation starting field. With this advancement, the model can represent broadband, directional moving sound sources, traveling along user-specified paths. This work was prepared for possible use in the research version of the sound propagation module in the Federal Aviation Administration's new standard predictive tool.

  9. Can Polar Fields Explain Missing Open Flux?

    NASA Astrophysics Data System (ADS)

    Linker, J.; Downs, C.; Caplan, R. M.; Riley, P.; Mikic, Z.; Lionello, R.

    2017-12-01

    The "open" magnetic field is the portion of the Sun's magnetic field that extends out into the heliosphere and becomes the interplanetary magnetic field (IMF). Both the IMF and the Sun's magnetic field in the photosphere have been measured for many years. In the standard paradigm of coronal structure, the open magnetic field originates primarily in coronal holes. The regions that are magnetically closed trap the coronal plasma and give rise to the streamer belt. This basic picture is qualitatively reproduced by models of coronal structure using photospheric magnetic fields as input. If this paradigm is correct, there are two primary observational constraints on the models: (1) The open field regions in the model should approximately correspond to coronal holes observed in emission, and (2) the magnitude of the open magnetic flux in the model should match that inferred from in situ spacecraft measurements. Linker et al. (2017, ApJ, submitted) investigated the July 2010 time period for a range of observatory maps and both PFSS and MHD models. We found that all of the model/map combinations underestimated the interplanetary magnetic flux, unless the modeled open field regions were larger than observed coronal holes. An estimate of the open magnetic flux made entirely from solar observations (combining detected coronal hole boundaries with observatory synoptic magnetic maps) also underestimated the interplanetary magnetic flux. The magnetic field near the Sun's poles is poorly observed and may not be well represented in observatory maps. In this paper, we explore whether an underestimate of the polar magnetic flux during this time period could account for the overall underestimate of open magnetic flux. Research supported by NASA, AFOSR, and NSF.

  10. Isocurvature constraints on portal couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kainulainen, Kimmo; Nurmi, Sami; Vaskonen, Ville

    2016-06-01

    We consider portal models which are ultraweakly coupled with the Standard Model, and confront them with observational constraints on dark matter abundance and isocurvature perturbations. We assume the hidden sector to contain a real singlet scalar s and a sterile neutrino ψ coupled to s via a pseudoscalar Yukawa term. During inflation, a primordial condensate consisting of the singlet scalar s is generated, and its contribution to the isocurvature perturbations is imprinted onto the dark matter abundance. We compute the total dark matter abundance including the contributions from condensate decay and nonthermal production from the Standard Model sector. We then use the Planck limit on isocurvature perturbations to derive a novel constraint connecting the dark matter mass and the singlet self-coupling with the scale of inflation: m_DM/GeV ≲ 0.2 λ_s^{3/8} (H_*/10^{11} GeV)^{-3/2}. This constraint is relevant in most portal models ultraweakly coupled with the Standard Model and containing light singlet scalar fields.
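    The quoted bound is a simple power law, so its numerical content is easy to reproduce. A small sketch (the function name is ours, and the input values are purely illustrative):

```python
def dm_mass_bound_gev(lambda_s, h_star_gev):
    # Upper bound m_DM/GeV <~ 0.2 * lambda_s^(3/8) * (H_*/10^11 GeV)^(-3/2)
    # from the Planck isocurvature limit, as quoted in the abstract.
    return 0.2 * lambda_s ** (3.0 / 8.0) * (h_star_gev / 1e11) ** (-1.5)
```

    For λ_s = 1 and H_* = 10^11 GeV the bound is 0.2 GeV; lowering the inflationary scale relaxes the bound, since it scales as H_*^{-3/2}.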

  11. Statistical approach to Higgs boson couplings in the standard model effective field theory

    NASA Astrophysics Data System (ADS)

    Murphy, Christopher W.

    2018-01-01

    We perform a parameter fit in the standard model effective field theory (SMEFT) with an emphasis on using regularized linear regression to tackle the issue of the large number of parameters in the SMEFT. In regularized linear regression, a positive definite function of the parameters of interest is added to the usual cost function. A cross-validation is performed to try to determine the optimal value of the regularization parameter to use, but it selects the standard model (SM) as the best model to explain the measurements. Nevertheless, as a proof of principle of this technique, we apply it to fitting Higgs boson signal strengths in the SMEFT, including the latest Run-2 results. Results are presented in terms of the eigensystem of the covariance matrix of the least squares estimators as it has a degree of model-independence to it. We find several results in this initial work: the SMEFT predicts the total width of the Higgs boson to be consistent with the SM prediction; the ATLAS and CMS experiments at the LHC are currently sensitive to non-resonant double Higgs boson production. Constraints are derived on the viable parameter space for electroweak baryogenesis in the SMEFT, reinforcing the notion that a first order phase transition requires fairly low-scale beyond the SM physics. Finally, we study which future experimental measurements would give the most improvement on the global constraints on the Higgs sector of the SMEFT.
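    Regularized linear regression with cross-validated selection of the regularization parameter, as described above, can be sketched in a few lines. This is a generic ridge-penalty illustration in plain NumPy, not the author's fit; the data and function names are assumptions:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Regularized least squares: minimize ||y - X c||^2 + lam * ||c||^2.
    # Closed-form solution of the normal equations with a ridge term.
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

def cv_score(X, y, lam, k=5, seed=0):
    # k-fold cross-validation: mean squared prediction error on held-out folds,
    # used to compare candidate values of the regularization parameter lam.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    err = 0.0
    for f in folds:
        train = np.setdiff1d(idx, f)
        c = ridge_fit(X[train], y[train], lam)
        err += np.mean((y[f] - X[f] @ c) ** 2)
    return err / k
```

    Scanning `cv_score` over a grid of `lam` values and keeping the minimizer is the selection step the abstract refers to; large `lam` shrinks all coefficients toward zero, which in the SMEFT context corresponds to selecting the SM.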

  12. Melanins and melanogenesis: methods, standards, protocols.

    PubMed

    d'Ischia, Marco; Wakamatsu, Kazumasa; Napolitano, Alessandra; Briganti, Stefania; Garcia-Borron, José-Carlos; Kovacs, Daniela; Meredith, Paul; Pezzella, Alessandro; Picardo, Mauro; Sarna, Tadeusz; Simon, John D; Ito, Shosuke

    2013-09-01

    Despite considerable advances in the past decade, melanin research still suffers from the lack of universally accepted and shared nomenclature, methodologies, and structural models. This paper stems from the joint efforts of chemists, biochemists, physicists, biologists, and physicians with recognized and consolidated expertise in the field of melanins and melanogenesis, who critically reviewed and experimentally revisited methods, standards, and protocols to provide for the first time a consensus set of recommended procedures to be adopted and shared by researchers involved in pigment cell research. The aim of the paper was to define an unprecedented frame of reference built on cutting-edge knowledge and state-of-the-art methodology, to enable reliable comparison of results among laboratories and new progress in the field based on standardized methods and shared information. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Hybrid numerical method for solution of the radiative transfer equation in one, two, or three dimensions.

    PubMed

    Reinersman, Phillip N; Carder, Kendall L

    2004-05-01

    A hybrid method is presented by which Monte Carlo (MC) techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments. The optical environments are first divided into contiguous subregions, or elements. MC techniques are employed to determine the optical response function of each type of element. The elements are combined, and relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. One-dimensional results compare well with a standard radiative transfer model. The light field beneath and adjacent to a long barge is modeled in two dimensions and displayed. Ramifications for underwater video imaging are discussed. The hybrid model is currently capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.

  14. The spatial structure of a nonlinear receptive field.

    PubMed

    Schwartz, Gregory W; Okawa, Haruhisa; Dunn, Felice A; Morgan, Josh L; Kerschensteiner, Daniel; Wong, Rachel O; Rieke, Fred

    2012-11-01

    Understanding a sensory system implies the ability to predict responses to a variety of inputs from a common model. In the retina, this includes predicting how the integration of signals across visual space shapes the outputs of retinal ganglion cells. Existing models of this process generalize poorly to predict responses to new stimuli. This failure arises in part from properties of the ganglion cell response that are not well captured by standard receptive-field mapping techniques: nonlinear spatial integration and fine-scale heterogeneities in spatial sampling. Here we characterize a ganglion cell's spatial receptive field using a mechanistic model based on measurements of the physiological properties and connectivity of only the primary excitatory circuitry of the retina. The resulting simplified circuit model successfully predicts ganglion-cell responses to a variety of spatial patterns and thus provides a direct correspondence between circuit connectivity and retinal output.

  15. Large scale structure from the Higgs fields of the supersymmetric standard model

    NASA Astrophysics Data System (ADS)

    Bastero-Gil, M.; di Clemente, V.; King, S. F.

    2003-05-01

    We propose an alternative implementation of the curvaton mechanism for generating the curvature perturbations which does not rely on a late decaying scalar decoupled from inflation dynamics. In our mechanism the supersymmetric Higgs scalars are coupled to the inflaton in a hybrid inflation model, and this allows the conversion of the isocurvature perturbations of the Higgs fields to the observed curvature perturbations responsible for large scale structure to take place during reheating. We discuss an explicit model which realizes this mechanism in which the μ term in the Higgs superpotential is generated after inflation by the vacuum expectation value of a singlet field. The main prediction of the model is that the spectral index should deviate significantly from unity, |n-1| ∼ 0.1. We also expect relic isocurvature perturbations in neutralinos and baryons, but no significant departures from Gaussianity and no observable effects of gravity waves in the CMB spectrum.

  16. Vectorlike fermions and Higgs effective field theory revisited

    DOE PAGES

    Chen, Chien-Yi; Dawson, S.; Furlan, Elisabetta

    2017-07-10

    Heavy vectorlike quarks (VLQs) appear in many models of beyond the Standard Model physics. Direct experimental searches require these new quarks to be heavy, ≳ 800-1000 GeV. Here, we perform a global fit of the parameters of simple VLQ models in minimal representations of SU(2)_L to precision data and Higgs rates. One interesting connection between anomalous Zbb̄ interactions and Higgs physics in VLQ models is discussed. Finally, we present our analysis in an effective field theory (EFT) framework and show that the parameters of VLQ models are already highly constrained. Exact and approximate analytical formulas for the S and T parameters in the VLQ models we consider are available in the Supplemental Material as Mathematica files.

  17. Evaluation of simulated ocean carbon in the CMIP5 earth system models

    NASA Astrophysics Data System (ADS)

    Orr, James; Brockmann, Patrick; Seferian, Roland; Servonnat, Jérôme; Bopp, Laurent

    2013-04-01

    We maintain a centralized model output archive containing output from the previous generation of Earth System Models (ESMs), 7 models used in the IPCC AR4 assessment. Output is in a common format located on a centralized server and is publicly available through a web interface. Through the same interface, LSCE/IPSL has also made available output from the Coupled Model Intercomparison Project (CMIP5), the foundation for the ongoing IPCC AR5 assessment. The latter includes ocean biogeochemical fields from more than 13 ESMs. Modeling partners across 3 EU projects refer to the combined AR4-AR5 archive and comparison as OCMIP5, building on previous phases of OCMIP (Ocean Carbon Cycle Intercomparison Project) and making a clear link to IPCC AR5 (CMIP5). While now focusing on assessing the latest generation of results (AR5, CMIP5), this effort is also able to put them in context (AR4). For model comparison and evaluation, we have also stored computed derived variables (e.g., those needed to assess ocean acidification) and key fields regridded to a common 1°x1° grid, thus complementing the standard CMIP5 archive. The combined AR4-AR5 output (OCMIP5) has been used to compute standard quantitative metrics, both global and regional, and those have been synthesized with summary diagrams. In addition, for key biogeochemical fields we have deconvolved spatiotemporal components of the mean square error in order to constrain which models go wrong where. Here we will detail results from these evaluations which have exploited gridded climatological data. The archive, interface, and centralized evaluation provide a solid technical foundation, upon which collaboration and communication are being broadened in the ocean biogeochemical modeling community. Ultimately we aim to encourage wider use of the OCMIP5 archive.
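    The decomposition of the mean square error into interpretable components, mentioned above, can be illustrated with a standard exact identity separating bias, amplitude, and correlation (phase) contributions. This is a generic illustration, not necessarily the exact partition used by the authors:

```python
import numpy as np

def mse_decomposition(model, obs):
    # Exact identity for 1-D fields:
    # MSE = bias^2 + (sigma_m - sigma_o)^2 + 2*sigma_m*sigma_o*(1 - r),
    # separating mean bias, amplitude error, and pattern (correlation) error.
    bias = model.mean() - obs.mean()
    sm, so = model.std(), obs.std()
    r = np.corrcoef(model, obs)[0, 1]
    return bias ** 2, (sm - so) ** 2, 2.0 * sm * so * (1.0 - r)
```

    The three terms sum exactly to the mean square error, so comparing them across models and regions indicates whether a model "goes wrong" through its mean state, its variability amplitude, or its spatial pattern.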

  18. On transient rheology and glacial isostasy

    NASA Technical Reports Server (NTRS)

    Yuen, David A.; Sabadini, Roberto C. A.; Gasperini, Paolo; Boschi, Enzo

    1986-01-01

    The effect of transient creep on the inference of long-term mantle viscosity is investigated using theoretical predictions from self-gravitating, layered earth models with Maxwell, Burgers' body, and standard linear solid rheologies. The interaction between transient and steady-state rheologies is studied. The responses of the standard linear solid and Burgers' body models to transient creep in the entire mantle, and of the Burgers' body and Maxwell models to creep in the lower mantle are described. The models' responses are examined in terms of the surface displacement, free air gravity anomaly, wander of the rotation pole, and the secular variation of the degree 2 zonal coefficient of the earth's gravitational potential field. The data reveal that transient creep cannot operate throughout the entire mantle.

  19. Mapping the function of neuronal ion channels in model and experiment

    PubMed Central

    Podlaski, William F; Seeholzer, Alexander; Groschner, Lukas N; Miesenböck, Gero; Ranjan, Rajnish; Vogels, Tim P

    2017-01-01

    Ion channel models are the building blocks of computational neuron models. Their biological fidelity is therefore crucial for the interpretation of simulations. However, the number of published models, and the lack of standardization, make the comparison of ion channel models with one another and with experimental data difficult. Here, we present a framework for the automated large-scale classification of ion channel models. Using annotated metadata and responses to a set of voltage-clamp protocols, we assigned 2378 models of voltage- and calcium-gated ion channels coded in NEURON to 211 clusters. The IonChannelGenealogy (ICGenealogy) web interface provides an interactive resource for the categorization of new and existing models and experimental recordings. It enables quantitative comparisons of simulated and/or measured ion channel kinetics, and facilitates field-wide standardization of experimentally-constrained modeling. DOI: http://dx.doi.org/10.7554/eLife.22152.001 PMID:28267430

  20. Meeting report from the fourth meeting of the Computational Modeling in Biology Network (COMBINE)

    PubMed Central

    Waltemath, Dagmar; Bergmann, Frank T.; Chaouiya, Claudine; Czauderna, Tobias; Gleeson, Padraig; Goble, Carole; Golebiewski, Martin; Hucka, Michael; Juty, Nick; Krebs, Olga; Le Novère, Nicolas; Mi, Huaiyu; Moraru, Ion I.; Myers, Chris J.; Nickerson, David; Olivier, Brett G.; Rodriguez, Nicolas; Schreiber, Falk; Smith, Lucian; Zhang, Fengkai; Bonnet, Eric

    2014-01-01

    The Computational Modeling in Biology Network (COMBINE) is an initiative to coordinate the development of community standards and formats in computational systems biology and related fields. This report summarizes the topics and activities of the fourth edition of the annual COMBINE meeting, held in Paris during September 16-20 2013, and attended by a total of 96 people. This edition pioneered a first day devoted to modeling approaches in biology, which attracted a broad audience of scientists thanks to a panel of renowned speakers. During subsequent days, discussions were held on many subjects including the introduction of new features in the various COMBINE standards, new software tools that use the standards, and outreach efforts. Significant emphasis went into work on extensions of the SBML format, and also into community-building. This year’s edition once again demonstrated that the COMBINE community is thriving, and still manages to help coordinate activities between different standards in computational systems biology.

  1. The International Gravity Field Service (IGFS): Present Day Activities And Future Plans

    NASA Astrophysics Data System (ADS)

    Barzaghi, R.; Vergos, G. S.

    2016-12-01

    IGFS is a unified "umbrella" IAG service that coordinates the servicing of the geodetic and geophysical community with gravity field related data, software and information. The combined data of the IGFS entities will include global geopotential models, terrestrial, airborne, satellite and marine gravity observations, Earth tide data, GPS/levelling data, digital models of terrain and bathymetry, as well as ocean gravity field and geoid from satellite altimetry. The IGFS structure is based on the Gravity Services, the "operating arms" of IGFS. These Services related to IGFS are: BGI (Bureau Gravimetrique International), Toulouse, France; ISG (International Service for the Geoid), Politecnico di Milano, Milano, Italy; IGETS (International Geodynamics and Earth Tides Service), EOST, Strasbourg, France; ICGEM (International Center for Global Earth Models), GFZ, Potsdam, Germany; IDEMS (International Digital Elevation Model Service), ESRI, Redlands, CA, USA. The Central Bureau, hosted at the Aristotle University of Thessaloniki, is in charge of all the interactions among the services and the other IAG bodies, particularly GGOS. In this respect, connections with the GGOS Bureaus of Products and Standards and of Networks and Observations have been recently strengthened in order to align the Gravity Services to the GGOS standards. IGFS is also strongly involved in the most relevant projects related to the gravity field, such as the establishment of the new Global Absolute Gravity Reference System and of the International Height Reference System. These projects, along with the organization of Geoid Schools devoted to methods for gravity and geoid estimation, will play a central role in the IGFS future actions in the framework of GGOS.

  2. The synoptic maps of Br from HMI observations

    NASA Astrophysics Data System (ADS)

    Hayashi, Keiji; Hoeksema, J. Todd; Liu, Sun; Yang, Xudong; Centeno, Rebecca; Leka, K. D.; Barnes, Graham

    2012-03-01

    Vector magnetic field measurements can, in principle, give the "true" radial component of the magnetic field. We prepare four types of synoptic maps of the radial photospheric magnetic field: from the vector magnetic field data disambiguated by means of the minimum-energy method developed at NWRA/CoRA, from the vector data determined under the potential-field acute-angle assumption, from the vector data determined under the radial acute-angle assumption, and from the standard line-of-sight magnetogram. Two models of the global corona, MHD and PFSS, are applied to the different types of maps. Although the three-dimensional structures of the global coronal magnetic field obtained with the different maps are similar and overall agree well with the AIA full-disk images, noticeable differences among the model outputs are found, especially in the high-latitude regions. We will show details of these test maps and discuss the issues in determining the radial component of the photospheric magnetic field near the poles and limb.

  3. Robust geographically weighted regression of modeling the Air Polluter Standard Index (APSI)

    NASA Astrophysics Data System (ADS)

    Warsito, Budi; Yasin, Hasbi; Ispriyanti, Dwi; Hoyyi, Abdul

    2018-05-01

    The Geographically Weighted Regression (GWR) model has been widely applied in many practical fields for exploring the spatial heterogeneity of a regression model. However, the method is inherently not robust to outliers. Outliers commonly exist in data sets and may lead to distorted estimates of the underlying regression model. One solution for handling outliers is to use robust estimation, yielding the Robust Geographically Weighted Regression (RGWR) model. This research aims to aid the government in the policy-making process related to air pollution mitigation by developing a standard index model for air pollution (Air Polluter Standard Index, APSI) based on the RGWR approach. In this research, we also consider seven variables that are directly related to the air pollution level: the traffic velocity, the population density, the business center aspect, the air humidity, the wind velocity, the air temperature, and the area size of the urban forest. The best model is selected by the smallest AIC value. There are significant differences between global regression and RGWR in this case, but basic GWR using the Gaussian kernel is the best model for the APSI because it has the smallest AIC.
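    The core of basic (non-robust) GWR with a Gaussian kernel, the best-performing model in the comparison above, is one weighted least-squares fit per location. A minimal sketch in plain NumPy (variable shapes and the fixed bandwidth are our assumptions; the robustness step of RGWR is omitted):

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    # Geographically weighted regression: at each observation location i,
    # fit a weighted least-squares model with Gaussian kernel weights
    # w_ij = exp(-(d_ij / bandwidth)^2 / 2), where d_ij is the distance
    # between locations i and j. Returns one coefficient row per location.
    n = len(y)
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        XtW = X.T * w                      # weight each observation's row
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)
    return betas
```

    Spatial heterogeneity shows up as variation of the coefficient rows across locations; a robust variant would down-weight observations with large residuals inside each local fit.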

  4. Dynamically avoiding fine-tuning the cosmological constant: the ``Relaxed Universe''

    NASA Astrophysics Data System (ADS)

    Bauer, Florian; Solà, Joan; Štefancić, Hrvoje

    2010-12-01

    We demonstrate that there exists a large class of F(R,G) action functionals of the scalar curvature and of the Gauss-Bonnet invariant which are able to relax dynamically a large cosmological constant (CC), whatever its starting value in the early universe. Hence, it is possible to understand, without fine-tuning, the very small current value Λ0 ~ H0² of the CC as compared to its theoretically expected large value in quantum field theory and string theory. In our framework, this relaxation appears as a pure gravitational effect, where no ad hoc scalar fields are needed. The action involves a positive power of a characteristic mass parameter, M, whose value can be, interestingly enough, of the order of a typical particle physics mass of the Standard Model of the strong and electroweak interactions or extensions thereof, including the neutrino mass. The model universe emerging from this scenario (the "Relaxed Universe") falls within the class of the so-called ΛXCDM models of the cosmic evolution. Therefore, there is a "cosmon" entity X (represented by an effective object, not a field), which in this case is generated by the effective functional F(R,G) and is responsible for the dynamical adjustment of the cosmological constant. This model universe successfully mimics the essential past epochs of the standard (or "concordance") cosmological model (ΛCDM). Furthermore, it provides interesting clues to the coincidence problem and it may even connect naturally with primordial inflation.

  5. Modeling of contact tracing in social networks

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev S.; Huerta, Ramón

    2003-07-01

    Spreading of certain infections in complex networks is effectively suppressed by using intelligent strategies for epidemic control. One such standard epidemiological strategy consists in tracing contacts of infected individuals. In this paper, we use a recently introduced generalization of the standard susceptible-infectious-removed stochastic model for epidemics in sparse random networks which incorporates an additional (traced) state. We describe a deterministic mean-field description which yields quantitative agreement with stochastic simulations on random graphs. We also discuss the role of contact tracing in epidemics control in small-world and scale-free networks. Effectiveness of contact tracing grows as the rewiring probability is reduced.
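    The deterministic mean-field description mentioned above can be illustrated with a minimal susceptible-infectious-traced-removed system integrated by forward Euler. Parameter names and values are ours, purely for illustration; the paper's mean-field equations additionally carry the degree structure of the sparse random network, which this homogeneous sketch omits:

```python
def simulate(beta=0.6, gamma=0.2, kappa=0.3,
             s0=0.99, i0=0.01, dt=0.01, steps=20000):
    # s, i, t, r are population fractions. Infectious individuals transmit
    # at rate beta, recover at rate gamma, and are traced (isolated, so no
    # longer transmitting) at rate kappa; traced individuals recover at gamma.
    s, i, t, r = s0, i0, 0.0, 1.0 - s0 - i0
    for _ in range(steps):
        new_inf = beta * s * i
        traced = kappa * i
        rec_i = gamma * i
        rec_t = gamma * t
        s -= dt * new_inf
        i += dt * (new_inf - traced - rec_i)
        t += dt * (traced - rec_t)
        r += dt * (rec_i + rec_t)
    return s, i, t, r
```

    Tracing lowers the effective reproduction number from beta/gamma to beta/(gamma + kappa), which is how the strategy suppresses spreading in this mean-field picture.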

  6. Precision modelling of M dwarf stars: the magnetic components of CM Draconis

    NASA Astrophysics Data System (ADS)

    MacDonald, J.; Mullan, D. J.

    2012-04-01

    The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models with appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we attempt at first to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundances (Y), heavy element abundances (Z), opacities and mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can inclusion of magnetic effects into stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify. 
    An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ ≈ B²/(4πγp_gas) (where B is the strength of the local vertical magnetic field). In the context of δ models in which B is not allowed to exceed a 'ceiling' of 10⁶ G, we find that the revised R and L can also be fitted, within the error bars, in a finite region of the f-δ plane. The permitted values of δ near the surface lead us to estimate that the vertical field strength on the surface of CM Dra A is about 500 G, in good agreement with independent observational evidence for similar low-mass stars. Recent results for another binary with parameters close to those of CM Dra suggest that metallicity differences cannot be the dominant explanation for the bloating of the two components of CM Dra.
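    The magnetic inhibition parameter δ ≈ B²/(4πγp_gas) can be inverted directly for the field strength. A one-line sketch in cgs units (the function name and the sample gas pressure are ours, for illustration only):

```python
import math

def b_from_delta(delta, p_gas, gamma=5.0 / 3.0):
    # Invert delta ~ B^2 / (4 * pi * gamma * p_gas) for the vertical
    # field strength B (Gauss, cgs units).
    return math.sqrt(4.0 * math.pi * gamma * delta * p_gas)
```

    Given a near-surface gas pressure and a permitted δ from the fit, this reproduces field strengths of the order of the ~500 G estimate quoted above.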

  7. A standard for measuring metadata quality in spectral libraries

    NASA Astrophysics Data System (ADS)

    Rasaiah, B.; Jones, S. D.; Bellman, C.

    2013-12-01

    There is an urgent need within the international remote sensing community to establish a metadata standard for field spectroscopy that ensures high quality, interoperable metadata sets that can be archived and shared efficiently within Earth observation data sharing systems. Metadata are an important component in the cataloguing and analysis of in situ spectroscopy datasets because of their central role in identifying and quantifying the quality and reliability of spectral data and the products derived from them. This paper presents approaches to measuring metadata completeness and quality in spectral libraries to determine the reliability, interoperability, and re-usability of a dataset. Explored are quality parameters that meet the unique requirements of in situ spectroscopy datasets across many campaigns. Examined are the challenges in ensuring that data creators, owners, and users maintain a high level of data integrity throughout the lifecycle of a dataset. Issues such as field measurement methods, instrument calibration, and data representativeness are investigated. The proposed metadata standard incorporates expert recommendations that include metadata protocols critical to all campaigns, and those that are restricted to campaigns for specific target measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. Approaches towards an operational and logistically viable implementation of a quality standard are discussed. This paper also proposes a way forward for adapting and enhancing current geospatial metadata standards to the unique requirements of field spectroscopy metadata quality.

  8. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    PubMed

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI, -0.03 to 0.32 D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased (generally underestimated) standard errors and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
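    The inter-eye correlation issue can be illustrated with a minimal numerical sketch (synthetic data and effect sizes are assumptions for illustration, not the paper's SAS analysis): a shared per-subject effect induces positive inter-eye correlation, so a naive two-sample standard error is misleading, while a paired within-subject analysis accounts for the correlation and is more precise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                      # number of subjects, two eyes each
subject = rng.normal(0.0, 1.0, n)            # shared per-subject effect -> inter-eye correlation
cnv_eye    = 0.15 + subject + rng.normal(0.0, 0.5, n)   # eye with CNV (assumed 0.15 D shift)
fellow_eye = 0.00 + subject + rng.normal(0.0, 0.5, n)   # unaffected fellow eye

# Naive analysis: treat the 2n eyes as independent observations
diff = cnv_eye.mean() - fellow_eye.mean()
se_naive = np.sqrt(cnv_eye.var(ddof=1) / n + fellow_eye.var(ddof=1) / n)

# Paired analysis: within-subject differencing removes the shared effect
d = cnv_eye - fellow_eye
se_paired = d.std(ddof=1) / np.sqrt(n)

print(f"difference {diff:.3f}, naive SE {se_naive:.3f}, paired SE {se_paired:.3f}")
```

Because the inter-eye correlation is positive here, the paired standard error is much smaller than the naive one, mirroring the narrower CI reported for the mixed and marginal models.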

  9. Atom transistor from the point of view of nonequilibrium dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Dunjko, V.; Olshanii, M.

    2015-12-01

    We analyze the atom field-effect transistor scheme (Stickney et al 2007 Phys. Rev. A 75 013608) using the standard tools of quantum and classical nonequilibrium dynamics. We first study the correspondence between the quantum and the mean-field descriptions of this system by computing, both ab initio and using their mean-field analogs, the deviations from the Eigenstate Thermalization Hypothesis, the quantum fluctuations, and the density of states. We find that, as far as these quantities are concerned, the mean-field model can serve as a semi-classical emulator of the quantum system. Then, using the mean-field model, we interpret the point of maximal output signal in our transistor as the onset of ergodicity: the point where the system becomes, in principle, able to attain the thermal values of the former integrals of motion, albeit without being fully thermalized yet.

  10. Fluctuating local field method probed for a description of small classical correlated lattices

    NASA Astrophysics Data System (ADS)

    Rubtsov, Alexey N.

    2018-05-01

    Thermally equilibrated finite classical lattices are considered as a minimal model of systems showing an interplay between low-energy collective fluctuations and single-site degrees of freedom. The standard local field approach, as well as the classical limit of the bosonic DMFT method, does not provide a satisfactory description of small Ising and Heisenberg lattices subjected to an external polarizing field. We show that a dramatic improvement can be achieved within a simple approach in which the local field appears as a fluctuating quantity related to the low-energy degree(s) of freedom.

  11. Multiple spectator condensates from inflation

    NASA Astrophysics Data System (ADS)

    Hardwick, Robert J.

    2018-05-01

    We investigate the development of spectator (light test) field condensates due to their quantum fluctuations in a de Sitter inflationary background, making use of the stochastic formalism to describe the system. In this context, a condensate refers to the typical field value found after a coarse-graining using the Hubble scale H, which can be essential to seed the initial conditions required by various post-inflationary processes. We study models with multiple coupled spectators and for the first time we demonstrate that new forms of stationary solution exist (distinct from the standard exponential form) when the potential is asymmetric. Furthermore, we find a critical value for the inter-field coupling as a function of the number of fields above which the formation of stationary condensates collapses to H. Considering some simple two-field example potentials, we are also able to derive a lower limit on the coupling, below which the fluctuations are effectively decoupled, and the standard stationary variance formulae for each field separately can be trusted. These results are all numerically verified by a new publicly available python class (nfield) to solve the coupled Langevin equations over a large number of fields, realisations and timescales. Further applications of this new tool are also discussed.
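    The Langevin approach referred to above can be sketched for the simplest single-field case (a minimal Euler-Maruyama integration with assumed parameters, not the nfield class itself): a light field of mass m in de Sitter, evolved in e-folds N, should relax to the standard stationary variance 3H⁴/(8π²m²).

```python
import numpy as np

# Stochastic evolution of a single light spectator field in de Sitter:
#   dphi/dN = -V'(phi)/(3 H^2) + (H / 2pi) xi(N),
# with V = m^2 phi^2 / 2, whose stationary variance is 3 H^4 / (8 pi^2 m^2).
H, m = 1.0, 0.5
dN, n_steps, n_real = 0.05, 4000, 4000       # e-fold step, steps (200 e-folds), realisations

rng = np.random.default_rng(1)
phi = np.zeros(n_real)
k = m**2 / (3.0 * H**2)                      # drift rate per e-fold
noise = (H / (2.0 * np.pi)) * np.sqrt(dN)    # stochastic kick per step
for _ in range(n_steps):
    phi += -k * phi * dN + noise * rng.normal(size=n_real)

var_num = phi.var()
var_exact = 3.0 * H**4 / (8.0 * np.pi**2 * m**2)
print(var_num, var_exact)                    # both are close to 0.15
```

The relaxation time is about 3H²/m² = 12 e-folds here, so 200 e-folds is ample for the ensemble to reach the stationary distribution.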

  12. The Effect of Seasonal and Long-Period Geopotential Variations on the GPS Orbits

    NASA Technical Reports Server (NTRS)

    Melachroinos, Stavros A.; Lemoine, Frank G.; Chinn, Douglas S.; Zelensky, Nikita P.; Nicholas, Joseph B.; Beckley, Brian D.

    2013-01-01

    We examine the impact of using seasonal and long-period time-variable gravity field (TVG) models on GPS orbit determination, through simulations from 1994 to 2012. The models of time-variable gravity that we test include the GRGS release RL02 GRACE-derived 10-day gravity field models up to degree and order 20 (grgs20x20), a 4 x 4 series of weekly coefficients using GGM03S as a base derived from SLR and DORIS tracking to 11 satellites (tvg4x4), and a harmonic fit to the above 4 x 4 SLR-DORIS time series (goco2s_fit2). These detailed models are compared to GPS orbit simulations using a reference model (stdtvg) based on the International Earth Rotation Service (IERS) and International GNSS Service (IGS) repro1 standards. We find that the new TVG modeling produces significant along, cross-track orbit differences as well as annual, semi-annual, draconitic and long-period effects in the Helmert translation parameters (Tx, Ty, Tz) of the GPS orbits with magnitudes of several mm. We show that the simplistic TVG modeling approach used by all of the IGS Analysis Centers, which is based on the models provided by the IERS standards, becomes progressively less adequate following 2006 when compared to the seasonal and long-period TVG models.

  13. Making Organisms Model Human Behavior: Situated Models in North-American Alcohol Research, 1950-onwards

    PubMed Central

    Leonelli, Sabina; Ankeny, Rachel A.; Nelson, Nicole C.; Ramsden, Edmund

    2014-01-01

    Argument: We examine the criteria used to validate the use of nonhuman organisms in North-American alcohol addiction research from the 1950s to the present day. We argue that this field, where the similarities between behaviors in humans and non-humans are particularly difficult to assess, has addressed questions of model validity by transforming the situatedness of non-human organisms into an experimental tool. We demonstrate that model validity does not hinge on the standardization of one type of organism in isolation, as is often the case with genetic model organisms. Rather, organisms are viewed as necessarily situated: they cannot be understood as a model for human behavior in isolation from their environmental conditions. Hence the environment itself is standardized as part of the modeling process, and model validity is assessed with reference to the environmental conditions under which organisms are studied. PMID:25233743

  14. Reducing RANS Model Error Using Random Forest

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the work-horse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A
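    The regression step described above can be sketched as follows, with a synthetic, hypothetical discrepancy function standing in for the offline database of flows (scikit-learn's RandomForestRegressor is assumed; the features and target are illustrative, not the paper's mean-flow invariants):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the workflow: learn a Reynolds-stress "discrepancy" as a
# function of mean-flow features on training flows, then predict it on new points.
rng = np.random.default_rng(2)
X_train = rng.uniform(-1, 1, size=(2000, 3))     # e.g. normalized mean-flow features
X_test = rng.uniform(-1, 1, size=(500, 3))

def discrepancy(X):
    # Hypothetical smooth discrepancy field used only to generate training data
    return np.sin(np.pi * X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, discrepancy(X_train))
pred = rf.predict(X_test)
r2 = 1.0 - np.mean((pred - discrepancy(X_test)) ** 2) / discrepancy(X_test).var()
print(f"out-of-sample R^2 = {r2:.3f}")
```

In the paper's setting the predicted discrepancies would then be added to the baseline RANS Reynolds stresses and propagated through the RANS equations; that propagation step is not sketched here.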

  15. Simulation of gaseous pollutant dispersion around an isolated building using the k-ω SST (shear stress transport) turbulence model.

    PubMed

    Yu, Hesheng; Thé, Jesse

    2017-05-01

    The dispersion of gaseous pollutants around buildings is complex due to turbulence features such as flow detachment and zones of high shear. Computational fluid dynamics (CFD) models are one of the most promising tools to describe the pollutant distribution in the near field of buildings. Reynolds-averaged Navier-Stokes (RANS) models are the most commonly used CFD techniques to address turbulent transport of the pollutant. This research work studies the use of the k-ω SST closure model for gas dispersion around a building by fully resolving the viscous sublayer for the first time. The performance of the standard k-ε model is also included for comparison, along with results of an extensively validated Gaussian dispersion model, the U.S. Environmental Protection Agency (EPA) AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model). This study's CFD models apply the standard k-ε and the k-ω SST turbulence models to obtain the wind flow field. A passive concentration transport equation is then solved on the resolved flow field to simulate the distribution of pollutant concentrations. The resulting wind flow and concentration fields are validated rigorously against extensive data using multiple validation metrics. The wind flow field can be acceptably modeled by the k-ε model. However, the k-ε model fails to simulate the gas dispersion. The k-ω SST model outperforms k-ε in both flow and dispersion simulations, with higher hit rates for dimensionless velocity components and a higher "factor of 2" of observations (FAC2) for normalized concentration. All these validation metrics of the k-ω SST model pass the quality assurance criteria recommended by The Association of German Engineers (Verein Deutscher Ingenieure, VDI) guideline.
    Furthermore, these metrics are better than or comparable to those in the literature. Comparison between the performances of k-ω SST and AERMOD shows that the CFD simulation is superior to the Gaussian-type model for pollutant dispersion in the near wake of obstacles. AERMOD can serve as a screening tool for near-field gas dispersion due to its expeditious calculation and ability to handle complicated cases. The utilization of k-ω SST to simulate gaseous pollutant dispersion around an isolated building is appropriate and is expected to be suitable for complex urban environments. Multiple validation metrics of the k-ω SST turbulence model in CFD quantitatively indicated that this turbulence model was appropriate for the simulation of gas dispersion around buildings. CFD is, therefore, an attractive alternative to wind-tunnel testing for modeling gas dispersion in urban environments due to its excellent performance and lower cost.
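    The two validation metrics named above can be computed in a few lines; note that the hit-rate tolerances below are illustrative placeholders, not the VDI guideline values.

```python
import numpy as np

def fac2(obs, pred):
    """Fraction of pairs with 0.5 <= pred/obs <= 2 (factor-of-2 agreement)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

def hit_rate(obs, pred, rel=0.25, abs_tol=0.05):
    """Fraction of pairs within a relative OR absolute tolerance of the observation.
    The tolerances here are placeholders, not the VDI values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ok = (np.abs(pred - obs) <= rel * np.abs(obs)) | (np.abs(pred - obs) <= abs_tol)
    return np.mean(ok)

obs  = np.array([1.0, 2.0, 4.0, 0.5, 3.0])
pred = np.array([1.1, 1.0, 9.0, 0.6, 2.6])
print(fac2(obs, pred), hit_rate(obs, pred))   # 0.8 and 0.6 for this toy data
```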

  16. Novel Texture-based Visualization Methods for High-dimensional Multi-field Data Sets

    DTIC Science & Technology

    2013-07-06

    ...visualisation [18]. Novel image acquisition and simulation techniques have made it possible to record a large number of co-located data fields... function, structure, anatomical changes, metabolic activity, blood perfusion, and cellular remodelling. In this paper we investigate texture-based

  17. The Bean model and ac losses in Bi{sub 2}Sr{sub 2}Ca{sub 2}Cu{sub 3}O{sub 10}/Ag tapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suenaga, M.; Chiba, T.; Wiesmann, H.J.

    The Bean model is almost solely used to interpret ac losses in the powder-in-tube processed composite conductor Bi{sub 2}Sr{sub 2}Ca{sub 2}Cu{sub 3}O{sub 10}/Ag. In order to examine the limits of the applicability of the model, a detailed comparison was made between the values of critical current density J{sub c} for Bi(2223)/Ag tapes which were determined by a standard four-probe dc measurement and those deduced from the field dependence of the ac losses utilizing the model. A significant inconsistency between these values of J{sub c} was found, particularly at high fields. Possible sources of the discrepancies are discussed.

  18. Chaplygin gas inspired scalar fields inflation via well-known potentials

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Butt, Sadaf; Rani, Shamaila

    2016-08-01

    Brane inflationary universe models in the context of modified Chaplygin gas and generalized cosmic Chaplygin gas are studied. We develop these models for both standard scalar and tachyon fields. In both models, the relevant inflationary parameters, such as the scalar and tensor power spectra, the scalar spectral index and the tensor-to-scalar ratio, are derived under the slow-roll approximation. We also use chaotic and exponential potentials in the high-energy limit and discuss the behavior of the inflationary parameters for both potentials. These models are compatible with recent astronomical observations provided by WMAP7+9 and Planck data, i.e., ηs=1.027±0.051, 1.009±0.049, 0.096±0.025 and r<0.38, 0.36, 0.11.
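    For orientation, in the simplest single-field case (without the brane and Chaplygin-gas corrections used in the paper) the slow-roll observables quoted above follow from the potential as

$$ \epsilon = \frac{M_p^2}{2}\left(\frac{V'}{V}\right)^2, \qquad \eta = M_p^2\,\frac{V''}{V}, $$
$$ n_s \simeq 1 - 6\epsilon + 2\eta, \qquad r \simeq 16\epsilon, $$

    with the brane high-energy limit modifying these expressions through extra factors of V relative to the brane tension.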

  19. Compton Scattering Polarimetry for the Determination of the Proton's Weak Charge Through Measurements of the Parity-Violating Asymmetry of 1H(e,e')p

    NASA Astrophysics Data System (ADS)

    Cornejo, Juan Carlos

    The Standard Model has been a theory of unparalleled success in describing the fundamental interactions of particles. As of the writing of this dissertation, the Standard Model has not been shown to make a false prediction. However, the limitations of the Standard Model have long been suspected from its lack of a description of gravity and of dark matter. Its largest challenge to date has been the observation of neutrino oscillations, and the implication that neutrinos may not be massless, as required by the Standard Model. The growing consensus is that the Standard Model is simply a low-energy effective field theory and that new physics lies at much higher energies. The Qweak experiment tests the electroweak theory of the Standard Model by making a precise determination of the weak charge of the proton (Q_W^p). Any sign of "new physics" would appear as a deviation from the Standard Model prediction. The weak charge is determined via a precise measurement of the parity-violating asymmetry in elastic scattering of a longitudinally polarized electron beam off an unpolarized proton target. The experiment required that the electron beam polarization be measured to an absolute uncertainty of 1%. At this level the electron beam polarization was projected to contribute the single largest experimental uncertainty to the parity-violating asymmetry measurement. This dissertation details the use of Compton scattering to determine the electron beam polarization via detection of the scattered photon. I conclude with an independent analysis of the blinded Qweak data.

  20. Simulated workplace neutron fields

    NASA Astrophysics Data System (ADS)

    Lacoste, V.; Taylor, G.; Röttger, S.

    2011-12-01

    The use of simulated workplace neutron fields, which aim at replicating radiation fields at practical workplaces, is an alternative solution for the calibration of neutron dosemeters. They offer more appropriate calibration coefficients when the mean fluence-to-dose-equivalent conversion coefficients of the simulated and practical fields are comparable. Intensive Monte Carlo modelling has become indispensable for the design and/or the characterization of the produced mixed neutron/photon fields, and the use of Bonner sphere systems and proton recoil spectrometers is also mandatory for a reliable experimental determination of the neutron fluence energy distribution over the whole energy range. The establishment of a calibration capability with a simulated workplace neutron field is not an easy task; to date only a few facilities are available that provide such standard calibration fields.

  1. A Comparison of Methods for Computing the Residual Resistivity Ratio of High-Purity Niobium

    PubMed Central

    Splett, J. D.; Vecchia, D. F.; Goodrich, L. F.

    2011-01-01

    We compare methods for estimating the residual resistivity ratio (RRR) of high-purity niobium and investigate the effects of using different functional models. RRR is typically defined as the ratio of the electrical resistances measured at 273 K (the ice point) and 4.2 K (the boiling point of helium at standard atmospheric pressure). However, pure niobium is superconducting below about 9.3 K, so the low-temperature resistance is defined as the normal-state (i.e., non-superconducting state) resistance extrapolated to 4.2 K and zero magnetic field. Thus, the estimated value of RRR depends significantly on the model used for extrapolation. We examine three models for extrapolation based on temperature versus resistance, two models for extrapolation based on magnetic field versus resistance, and a new model based on the Kohler relationship that can be applied to combined temperature and field data. We also investigate the possibility of re-defining RRR so that the quantity is not dependent on extrapolation. PMID:26989580
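    One such extrapolation can be sketched as follows (the R(T) = R0 + aT³ form and all numbers are illustrative assumptions, not the paper's fitted models): fit the normal-state data above Tc, extrapolate to 4.2 K, and form the ratio against a measured ice-point resistance.

```python
import numpy as np

# Hypothetical sketch: estimate the normal-state resistance at 4.2 K by fitting
# low-temperature data above Tc ~ 9.3 K to R(T) = R0 + a*T^3, then form
# RRR = R(273 K) / R(4.2 K) using the measured ice-point resistance.
rng = np.random.default_rng(3)
T = np.linspace(10.0, 25.0, 16)              # normal-state measurement temperatures (K)
R0_true, a_true = 0.50, 4.0e-5               # assumed residual term and phonon term
R = R0_true + a_true * T**3 + rng.normal(0.0, 1e-4, T.size)
R273_meas = 150.0                            # assumed measured ice-point resistance

# Linear least squares in the basis {1, T^3}
A = np.column_stack([np.ones_like(T), T**3])
(R0, a), *_ = np.linalg.lstsq(A, R, rcond=None)

R_42 = R0 + a * 4.2**3                       # normal-state value extrapolated to 4.2 K
rrr = R273_meas / R_42
print(f"RRR = {rrr:.0f}")
```

The point made in the abstract is visible here: the quoted RRR depends on which functional form is used for the extrapolation, since R(4.2 K) is never measured directly in the normal state.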

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alanne, Tommi; Kainulainen, Kimmo; Helsinki Institute of Physics, University of Helsinki, P.O. Box 64, FI-00014 Helsinki

    We investigate an extension of the Standard Model containing two Higgs doublets and a singlet scalar field (2HDSM). We show that the model can have a strongly first-order phase transition and give rise to the observed baryon asymmetry of the Universe, consistent with all experimental constraints. In particular, the constraints from the electron and neutron electric dipole moments are less constraining here than in the pure two-Higgs-doublet model (2HDM). The two-step, first-order transition in the 2HDSM, induced by the singlet field, may lead to strong supercooling and low nucleation temperatures in comparison with the critical temperature, T{sub n}≪T{sub c}, which can significantly alter the usual phase-transition pattern in 2HD models with T{sub n}≈T{sub c}. Furthermore, the singlet field can be the dark matter particle. However, in models with a strong first-order transition its abundance is typically only about a thousandth of the observed dark matter abundance.

  3. Goldstone Gauginos.

    PubMed

    Alves, Daniele S M; Galloway, Jamison; McCullough, Matthew; Weiner, Neal

    2015-10-16

    Models of supersymmetry with Dirac gauginos provide an attractive scenario for physics beyond the standard model. The "supersoft" radiative corrections and suppressed supersymmetry production at colliders provide for more natural theories and an understanding of why no new states have been seen. Unfortunately, these models are handicapped by a tachyon which is naturally present in existing models of Dirac gauginos. We argue that this tachyon is absent, with the phenomenological successes of the model preserved, if the right-handed gaugino is a (pseudo-)Goldstone field of a spontaneously broken anomalous flavor symmetry.

  4. Evaluating Mobile Survey Tools (MSTs) for Field-Level Monitoring and Data Collection: Development of a Novel Evaluation Framework, and Application to MSTs for Rural Water and Sanitation Monitoring

    PubMed Central

    Fisher, Michael B.; Mann, Benjamin H.; Cronk, Ryan D.; Shields, Katherine F.; Klug, Tori L.; Ramaswamy, Rohit

    2016-01-01

    Information and communications technologies (ICTs) such as mobile survey tools (MSTs) can facilitate field-level data collection to drive improvements in national and international development programs. MSTs allow users to gather and transmit field data in real time, standardize data storage and management, automate routine analyses, and visualize data. Dozens of diverse MST options are available, and users may struggle to select suitable options. We developed a systematic MST Evaluation Framework (EF), based on International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) software quality modeling standards, to objectively assess MSTs and assist program implementers in identifying suitable MST options. The EF is applicable to MSTs for a broad variety of applications. We also conducted an MST user survey to elucidate needs and priorities of current MST users. Finally, the EF was used to assess seven MSTs currently used for water and sanitation monitoring, as a validation exercise. The results suggest that the EF is a promising method for evaluating MSTs. PMID:27563916

  5. Evaluating Mobile Survey Tools (MSTs) for Field-Level Monitoring and Data Collection: Development of a Novel Evaluation Framework, and Application to MSTs for Rural Water and Sanitation Monitoring.

    PubMed

    Fisher, Michael B; Mann, Benjamin H; Cronk, Ryan D; Shields, Katherine F; Klug, Tori L; Ramaswamy, Rohit

    2016-08-23

    Information and communications technologies (ICTs) such as mobile survey tools (MSTs) can facilitate field-level data collection to drive improvements in national and international development programs. MSTs allow users to gather and transmit field data in real time, standardize data storage and management, automate routine analyses, and visualize data. Dozens of diverse MST options are available, and users may struggle to select suitable options. We developed a systematic MST Evaluation Framework (EF), based on International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) software quality modeling standards, to objectively assess MSTs and assist program implementers in identifying suitable MST options. The EF is applicable to MSTs for a broad variety of applications. We also conducted an MST user survey to elucidate needs and priorities of current MST users. Finally, the EF was used to assess seven MSTs currently used for water and sanitation monitoring, as a validation exercise. The results suggest that the EF is a promising method for evaluating MSTs.

  6. A Vehicular Mobile Standard Instrument for Field Verification of Traffic Speed Meters Based on Dual-Antenna Doppler Radar Sensor

    PubMed Central

    Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue

    2018-01-01

    Traffic speed meters are important legal measuring instruments used for traffic speed enforcement and must be tested and verified in the field every year, using a vehicular mobile standard speed-measuring instrument, to ensure their speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specific requirement on its mounting distance, no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument. PMID:29621142
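    The angle compensation can be sketched with a generic dual-beam (Janus-style) geometry (the geometry, carrier frequency, and angles below are assumptions for illustration, not the paper's design): the sum and difference of the two Doppler shifts separate the true speed from the unknown mounting-error angle.

```python
import math

# Two beams at nominal angle theta0 on either side of the antenna axis;
# delta is an unknown mounting-error angle common to both beams.
c, f0, theta0 = 3.0e8, 24.15e9, math.radians(45.0)   # assumed carrier and beam angle

def doppler(v, angle):
    # Doppler shift of a beam at the given angle to the velocity vector
    return 2.0 * f0 * v / c * math.cos(angle)

# Forward model: true speed 30 m/s with a 2 degree installation error
v_true, delta_true = 30.0, math.radians(2.0)
f1 = doppler(v_true, theta0 - delta_true)
f2 = doppler(v_true, theta0 + delta_true)

# Inversion: f1 - f2 over f1 + f2 equals tan(theta0) * tan(delta),
# so delta and then v can be recovered exactly from the two shifts.
delta = math.atan((f1 - f2) / (f1 + f2) / math.tan(theta0))
v = c * (f1 + f2) / (4.0 * f0 * math.cos(theta0) * math.cos(delta))
print(v, math.degrees(delta))   # recovers 30 m/s and 2 degrees
```

This illustrates why a dual-antenna arrangement can be insensitive to the mounting angle: the error angle is observable from the two shifts and drops out of the recovered speed.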

  7. A Vehicular Mobile Standard Instrument for Field Verification of Traffic Speed Meters Based on Dual-Antenna Doppler Radar Sensor.

    PubMed

    Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue

    2018-04-05

    Traffic speed meters are important legal measuring instruments used for traffic speed enforcement and must be tested and verified in the field every year, using a vehicular mobile standard speed-measuring instrument, to ensure their speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specific requirement on its mounting distance, no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument.

  8. Realizing three generations of the Standard Model fermions in the type IIB matrix model

    NASA Astrophysics Data System (ADS)

    Aoki, Hajime; Nishimura, Jun; Tsuchiya, Asato

    2014-05-01

    We discuss how the Standard Model particles appear from the type IIB matrix model, which is considered to be a nonperturbative formulation of superstring theory. In particular, we are concerned with a constructive definition of the theory, in which we start with finite-N matrices and take the large-N limit afterwards. In that case, it was pointed out recently that realizing chiral fermions in the model is more difficult than had been thought from formal arguments at N = ∞, and that the introduction of a matrix version of the warp factor is necessary. Based on this new insight, we show that two generations of the Standard Model fermions can be realized by considering a rather generic configuration of fuzzy S² and fuzzy S² × S² in the extra dimensions. We also show that three generations can be obtained by squashing one of the S²'s that appear in the configuration. Chiral fermions appear at the intersections of the fuzzy manifolds, with nontrivial Yukawa couplings to the Higgs field, which can be calculated from the overlap of their wave functions.

  9. Global Constraints on Anomalous Triple Gauge Couplings in the Effective Field Theory Approach.

    PubMed

    Falkowski, Adam; González-Alonso, Martín; Greljo, Admir; Marzocca, David

    2016-01-08

    We present a combined analysis of LHC Higgs data (signal strengths) together with LEP-2 WW production measurements. To characterize possible deviations from the standard model (SM) predictions, we employ the framework of an effective field theory (EFT) in which the SM is extended by higher-dimensional operators suppressed by the mass scale of new physics, Λ. The analysis is performed consistently at order Λ⁻² in the EFT expansion, keeping all the relevant operators. While the two data sets suffer from flat directions, together they impose stringent model-independent constraints on the anomalous triple gauge couplings.

  10. Large Hysteresis effect in Synchronization of Nanocontact Vortex Oscillators by Microwave Fields

    PubMed Central

    Perna, S.; Lopez-Diaz, L.; d’Aquino, M.; Serpico, C.

    2016-01-01

    Current-induced vortex oscillations in an extended thin film with point-contact geometry are considered. The synchronization of these oscillations with a microwave external magnetic field is investigated using a reduced-order model that takes into account the dynamical effects associated with the significant deformation of the vortex structure produced by the current, which cannot be captured by the standard rigid-vortex theory. The complete phase diagram of the vortex oscillation dynamics is derived, and it is shown that strong hysteretic behavior occurs in the synchronization with the external field. The complex nonlinear nature of the synchronization also manifests itself through the appearance of asymmetry in the locking frequency bands for moderate microwave field amplitudes. Predictions from the reduced-order model are confirmed by full micromagnetic simulations. PMID:27538476

  11. Value of the Cosmological Constant in Emergent Quantum Gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Craig

    It is suggested that the exact value of the cosmological constant could be derived from first principles, based on entanglement of the Standard Model field vacuum with emergent holographic quantum geometry. For the observed value of the cosmological constant, geometrical information is shown to agree closely with the spatial information density of the QCD vacuum, estimated in a free-field approximation. The comparison is motivated by a model of exotic rotational fluctuations in the inertial frame that can be precisely tested in laboratory experiments. Cosmic acceleration in this model is always positive, but fluctuates with characteristic coherence length ≈ 100 km and bandwidth ≈ 3000 Hz.

  12. Equivalent Hamiltonian for the Lee model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, H. F.

    2008-03-15

    Using the techniques of quasi-Hermitian quantum mechanics and quantum field theory, we use a similarity transformation to construct an equivalent Hermitian Hamiltonian for the Lee model. In the field theory confined to the V/Nθ sector it effectively decouples V, replacing the three-point interaction of the original Lee model by an additional mass term for the V particle and a four-point interaction between N and θ. While the construction is originally motivated by the regime where the bare coupling becomes imaginary, leading to a ghost, it applies equally to the standard Hermitian regime where the bare coupling is real. In that case the similarity transformation becomes a unitary transformation.

  13. The ST environment: Expected charged particle radiation levels

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1978-01-01

    The external (surface incident) charged particle radiation, predicted for the ST satellite at the three different mission altitudes, was determined in two ways: (1) by orbital flux-integration and (2) by geographical instantaneous flux-mapping. The latest standard models of the environment were used in this effort. Magnetic field definitions for three nominal circular trajectories and for the geographic mapping positions were obtained from a current field model. Spatial and temporal variations or conditions affecting the static environment models were considered and accounted for, wherever possible. Limited shielding and dose evaluations were performed for a simple geometry. Results, given in tabular and graphical form, are analyzed, explained, and discussed. Conclusions are included.

  14. 2D massless Dirac Fermi gas model of superconductivity in the surface state of a topological insulator at high magnetic fields

    NASA Astrophysics Data System (ADS)

    Zhuravlev, Vladimir; Duan, Wenye; Maniv, Tsofar

    2017-10-01

    The Nambu-Gorkov Green's function approach is applied to strongly type-II superconductivity in a 2D spin-momentum-locked (Weyl) Fermi gas model at high perpendicular magnetic fields. The resulting phase diagram can be mapped onto that derived for the standard, parabolic band-structure model having the same Fermi surface parameters, E_F and v, but with cyclotron effective mass m* = E_F/2v². Significant deviations from the predicted mapping are found only for very small E_F, when the Landau-level filling factors are smaller than unity and E_F shrinks below the cutoff energy.

  15. Superstatistics model for T₂ distribution in NMR experiments on porous media.

    PubMed

    Correia, M D; Souza, A M; Sinnecker, J P; Sarthour, R S; Santos, B C C; Trevizan, W; Oliveira, I S

    2014-07-01

    We propose analytical functions for the T2 distribution to describe transverse relaxation in high- and low-field NMR experiments on porous media. The method is based on a superstatistics theory and allows one to find the mean and standard deviation of T2 directly from measurements. It is an alternative to multiexponential models for inverting data decay in NMR experiments. We exemplify the method with q-exponential functions and χ²-distributions to describe, respectively, the data decay and the T2 distribution in high-field experiments on fully water-saturated glass microsphere bed packs and sedimentary rocks from outcrop, and in a noisy low-field experiment on rocks. The method is general and can also be applied to biological systems. Copyright © 2014 Elsevier Inc. All rights reserved.
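    The q-exponential decay named in the abstract can be sketched numerically. This is a minimal illustration assuming the usual Tsallis form of the q-exponential, which reduces to a simple exponential as q → 1; the parameter values are hypothetical, not taken from the paper:

    ```python
    import numpy as np

    def q_exponential(t, t2, q):
        """Tsallis q-exponential decay; reduces to exp(-t/t2) as q -> 1."""
        t = np.asarray(t, dtype=float)
        if abs(q - 1.0) < 1e-12:
            return np.exp(-t / t2)
        base = 1.0 - (1.0 - q) * t / t2
        # Guard against raising a non-positive base to a fractional power.
        safe = np.where(base > 0.0, base, 1.0)
        return np.where(base > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

    # Illustrative decay curve with hypothetical T2 = 1.0 and q = 1.5.
    t = np.linspace(0.0, 5.0, 6)
    decay = q_exponential(t, t2=1.0, q=1.5)
    ```

    For q = 1.5 this gives (1 + t/2T2)⁻², a heavier-than-exponential tail, which is the kind of behavior a superposition of exponential decays (the superstatistics picture) produces.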

  16. Electromagnetic backscattering from a random distribution of lossy dielectric scatterers

    NASA Technical Reports Server (NTRS)

    Lang, R. H.

    1980-01-01

    Electromagnetic backscattering from a sparse distribution of discrete lossy dielectric scatterers occupying a region V was studied. The scatterers are assumed to have random position and orientation. Scattered fields are calculated by first finding the mean field and then using it to define an equivalent medium within the volume V. The scatterers are then viewed as being embedded in the equivalent medium, and the distorted Born approximation is used to find the scattered fields. This technique represents an improvement over the standard Born approximation since it takes into account the attenuation of the incident and scattered waves in the equivalent medium. The method is used to model a leaf canopy in which the leaves are modeled as lossy dielectric discs.

  17. Cartan gravity, matter fields, and the gauge principle

    NASA Astrophysics Data System (ADS)

    Westman, Hans F.; Zlosnik, Tom G.

    2013-07-01

    Gravity is commonly thought of as one of the four force fields in nature. However, in standard formulations its mathematical structure is rather different from the Yang-Mills fields of particle physics that govern the electromagnetic, weak, and strong interactions. This paper explores this dissonance with particular focus on how gravity couples to matter from the perspective of the Cartan-geometric formulation of gravity. There the gravitational field is represented by a pair of variables: (1) a 'contact vector' VA which is geometrically visualized as the contact point between the spacetime manifold and a model spacetime being 'rolled' on top of it, and (2) a gauge connection AμAB, here taken to be valued in the Lie algebra of SO(2,3) or SO(1,4), which mathematically determines how much the model spacetime is rotated when rolled. By insisting on two principles, the gauge principle and polynomial simplicity, we shall show how one can reformulate matter field actions in a way that is harmonious with Cartan's geometric construction. This yields a formulation of all matter fields in terms of first order partial differential equations. We show in detail how the standard second order formulation can be recovered. In particular, the Hodge dual, which characterizes the structure of bosonic field equations, pops up automatically. Furthermore, the energy-momentum and spin-density three-forms are naturally combined into a single object here denoted the spin-energy-momentum three-form. Finally, we highlight a peculiarity in the mathematical structure of our first-order formulation of Yang-Mills fields. This suggests a way to unify a U(1) gauge field with gravity into a SO(1,5)-valued gauge field using a natural generalization of Cartan geometry in which the larger symmetry group is spontaneously broken down to SO(1,3)×U(1). The coupling of this unified theory to matter fields and possible extensions to non-Abelian gauge fields are left as open questions.

  18. The SHiP physics program

    NASA Astrophysics Data System (ADS)

    De Lellis, Giovanni

    2018-05-01

    The discovery of the Higgs boson has fully confirmed the Standard Model of particles and fields. Nevertheless, there are still fundamental phenomena, like the existence of dark matter and the baryon asymmetry of the Universe, which deserve an explanation that could come from the discovery of new particles. The SHiP experiment at CERN, meant to search for very weakly coupled particles in the few-GeV mass domain, has recently been proposed. The existence of such particles, foreseen in different theoretical models beyond the Standard Model, is largely unexplored. A beam dump facility using high-intensity 400 GeV protons is a copious source of such unknown particles in the GeV mass range. The beam dump is also a copious source of neutrinos, and in particular it is an ideal source of tau neutrinos, the least known particle in the Standard Model. Indeed, tau anti-neutrinos have not been directly observed so far. We report the physics potential of such an experiment, including the tau neutrino magnetic moment.

  19. Astroparticle physics and cosmology.

    PubMed

    Mitton, Simon

    2006-05-20

    Astroparticle physics is an interdisciplinary field that explores the connections between the physics of elementary particles and the large-scale properties of the universe. Particle physicists have developed a standard model to describe the properties of matter in the quantum world. This model explains the bewildering array of particles in terms of constructs made from two or three quarks. Quarks, leptons, and three of the fundamental forces of physics are the main components of this standard model. Cosmologists have also developed a standard model to describe the bulk properties of the universe. In this new framework, ordinary matter, such as stars and galaxies, makes up only around 4% of the material universe. The bulk of the universe is dark matter (roughly 23%) and dark energy (about 73%). This dark energy drives an acceleration that means that the expanding universe will grow ever larger. String theory, in which the universe has several invisible dimensions, might offer an opportunity to unite the quantum description of the particle world with the gravitational properties of the large-scale universe.

  20. Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2016-01-01

    When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more coannular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a best approximation determined and the shortcomings of the model highlighted.

  1. Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2015-01-01

    When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more co-annular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV (Particle Image Velocimetry) data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a 'best' approximation determined and the shortcomings of the model highlighted.

  2. Fragmentation modeling of a resin bonded sand

    NASA Astrophysics Data System (ADS)

    Hilth, William; Ryckelynck, David

    2017-06-01

    Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, using a rather simple generalized critical-state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model formulation considers a non-associated elasto-plastic formulation within the critical-state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated considering a non-homogeneous 3D medium. Tomography of the compression sample gives access to 3D displacement fields through image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of the correlations at low displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.

  3. Metal-Ferroelectric-Semiconductor Field-Effect Transistor NAND Gate Switching Time Analysis

    NASA Technical Reports Server (NTRS)

    Phillips, Thomas A.; Macleod, Todd C.; Ho, Fat D.

    2006-01-01

    Previous research investigated the modeling of a NAND gate constructed of Metal-Ferroelectric-Semiconductor Field-Effect Transistors (MFSFETs) to obtain voltage transfer curves. The NAND gate was modeled using n-channel MFSFETs with positive polarization in place of the standard CMOS n-channel transistors and n-channel MFSFETs with negative polarization in place of the standard CMOS p-channel transistors. This paper investigates the MFSFET NAND gate switching time propagation delay, one of the other important parameters required to characterize the performance of a logic gate. Initially, the switching time of an inverter circuit was analyzed. The low-to-high and high-to-low propagation time delays were calculated. During the low-to-high transition, the negatively polarized transistor pulls up the output voltage, and during the high-to-low transition, the positively polarized transistor pulls down the output voltage. The MFSFETs were simulated using a previously developed model that utilizes a partitioned ferroelectric layer. The switching time of a 2-input NAND gate was then analyzed in the same way as the inverter gate. Extension of this technique to more complicated logic gates using MFSFETs will be studied.

  4. Testing chameleon theories with light propagating through a magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Philippe; Bruck, Carsten van de; Davis, Anne-Christine

    2007-10-15

    It was recently argued that the observed PVLAS anomaly can be explained by chameleon field theories in which large deviations from Newton's law can be avoided. Here we present the predictions for the dichroism and the birefringence induced in the vacuum by a magnetic field in these models. We show that chameleon particles behave very differently from standard axionlike particles (ALPs). We find that, unlike ALPs, the chameleon particles are confined within the experimental setup. As a consequence, the birefringence is always bigger than the dichroism in PVLAS-type experiments.

  5. Electron heating in the laser and static electric and magnetic fields

    NASA Astrophysics Data System (ADS)

    Zhang, Yanzeng; Krasheninnikov, S. I.

    2018-01-01

    A 2D slab approximation of the interactions of electrons with intense linearly polarized laser radiation and static electric and magnetic fields is widely used both for numerical simulations and for simplified semi-analytical models. It is shown that in this case electron dynamics can be conveniently described in the framework of the 3/2-dimensional Hamiltonian approach. The electron acceleration beyond the standard ponderomotive scaling, caused by the synergistic effects of the laser and static electromagnetic fields, is due to an onset of stochastic electron motion.

  6. Effective-field renormalization-group method for Ising systems

    NASA Astrophysics Data System (ADS)

    Fittipaldi, I. P.; De Albuquerque, D. F.

    1992-02-01

    A new and broadly applicable effective-field renormalization-group (EFRG) scheme for computing critical properties of Ising spin systems is proposed and used to study the phase diagrams of a quenched bond-mixed spin Ising model on square and Kagomé lattices. The present EFRG approach yields results that improve substantially on those obtained from the standard mean-field renormalization-group (MFRG) method. In particular, it is shown that the EFRG scheme correctly distinguishes the geometry of the lattice structure even when working with the smallest possible clusters, namely N'=1 and N=2.

  7. Modelling field scale spatial variation in water run-off, soil moisture, N2O emissions and herbage biomass of a grazed pasture using the SPACSYS model.

    PubMed

    Liu, Yi; Li, Yuefen; Harris, Paul; Cardenas, Laura M; Dunn, Robert M; Sint, Hadewij; Murray, Phil J; Lee, Michael R F; Wu, Lianhai

    2018-04-01

    In this study, we evaluated the ability of the SPACSYS model to simulate water run-off, soil moisture, N2O fluxes and grass growth using data generated from a field of the North Wyke Farm Platform. The field-scale model is adapted via a linked and grid-based approach (grid-to-grid) to account not only for temporal dynamics but also for the within-field spatial variation in these key ecosystem indicators. Spatial variability in nutrient and water presence at the field scale is a key source of uncertainty when quantifying nutrient cycling and water movement in an agricultural system. Results demonstrated that the new spatially distributed version of SPACSYS provided a worthwhile improvement in accuracy over the standard (single-point) version for biomass productivity. No difference in model prediction performance was observed for water run-off, reflecting the closed-system nature of this variable. Similarly, no difference in model prediction performance was found for N2O fluxes, but here the N2O predictions were noticeably poor in both cases. Further developmental work, informed by this study's findings, is proposed to improve model predictions for N2O. Soil moisture results with the spatially distributed version appeared promising, but this promise could not be objectively verified.

  8. Odour assessment in the vicinity of a pig-fattening farm using field inspections (EN 16841-1) and dispersion modelling

    NASA Astrophysics Data System (ADS)

    Oettl, Dietmar; Kropsch, Michael; Mandl, Michael

    2018-05-01

    The assessment of odour annoyance varies vastly among countries, even within the European Union. The use of so-called odour-hour frequencies offers the possibility of applying either dispersion models or field inspections, which are generally assumed to be equivalent. In this study, odour-hours based on field inspections according to the European standard EN 16841-1 (2017) in the vicinity of a pig-fattening farm were compared with modelled ones using the Lagrangian particle model GRAL, which uses odour-concentration variances to compute odour hours as recently proposed by Oettl and Ferrero (2017). Using a threshold of 1 ou m-3 (ou = odour units) for triggering odour hours in the model, as prescribed by the German guideline for odour assessment, led to reasonable agreement between the two methodologies. It is pointed out that the individual odour sensitivity of the qualified panel members who carry out field inspections is of crucial importance for selecting a proper odour-hour model. Statistical analysis of a large number of data stemming from dynamic olfactometry (EN 13725, 2003), covering a wide range of odorants, suggests that the method prescribed in Germany for modelling odour hours may well result in an overestimation, and hence equivalence with field inspections is not given. The dataset is freely available on request.
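    Counting modelled odour hours against a fixed concentration threshold, as done here with 1 ou m-3, can be sketched as follows. The 10%-of-hour exceedance criterion and the 10-minute sub-hourly sampling used below are assumptions modelled on common odour-hour conventions, not details taken from this abstract:

    ```python
    import numpy as np

    def odour_hours(conc, per_hour, threshold=1.0, frac=0.10):
        """Count hours in which at least `frac` of the sub-hourly
        concentration values (ou/m^3) exceed `threshold`.
        The 10% criterion is an assumed convention, for illustration only."""
        conc = np.asarray(conc, dtype=float).reshape(-1, per_hour)
        exceed_frac = np.mean(conc > threshold, axis=1)
        return int(np.sum(exceed_frac >= frac))

    # Two hours of six 10-minute values each: the first hour has two
    # exceedances of 1 ou/m^3, the second hour has none.
    series = [0.2, 1.5, 0.4, 2.0, 0.3, 0.1,
              0.2, 0.3, 0.1, 0.4, 0.2, 0.5]
    n = odour_hours(series, per_hour=6)  # 1 odour hour
    ```

    Dividing such a count by the total number of hours yields the odour-hour frequency that the study compares between model and field inspection.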

  9. Low-temperature behavior of the quark-meson model

    NASA Astrophysics Data System (ADS)

    Tripolt, Ralf-Arno; Schaefer, Bernd-Jochen; von Smekal, Lorenz; Wambach, Jochen

    2018-02-01

    We revisit the phase diagram of strong-interaction matter for the two-flavor quark-meson model using the functional renormalization group. In contrast to standard mean-field calculations, an unusual phase structure is encountered at low temperatures and large quark chemical potentials. In particular, we identify a regime where the pressure decreases with increasing temperature and discuss possible reasons for this unphysical behavior.

  10. Adaptive quantification and longitudinal analysis of pulmonary emphysema with a hidden Markov measure field model.

    PubMed

    Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F

    2014-07-01

    The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols, and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied on a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.
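    The standard thresholding approach that this paper improves upon can be sketched in a few lines. This is a simplified illustration; the -950 HU cutoff is a commonly used value for emphysema scoring, not a detail taken from this abstract:

    ```python
    import numpy as np

    def emphysema_index(lung_hu, threshold=-950):
        """Percentage of lung voxels below the attenuation threshold
        (the 'proportional area' measure described above)."""
        lung_hu = np.asarray(lung_hu)
        return 100.0 * np.mean(lung_hu < threshold)

    # Synthetic lung region: mostly normal parenchyma (~-850 HU), with a
    # quarter of the voxels in the emphysematous range (~-980 HU).
    voxels = np.concatenate([np.full(750, -850.0), np.full(250, -980.0)])
    print(emphysema_index(voxels))  # prints 25.0
    ```

    The abstract's point is that this single fixed threshold is fragile under changes in imaging protocol and inspiration level, which is what the hidden Markov measure field model is designed to absorb.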

  11. Dynamics of relaxed inflation

    NASA Astrophysics Data System (ADS)

    Tangarife, Walter; Tobioka, Kohsaku; Ubaldi, Lorenzo; Volansky, Tomer

    2018-02-01

    The cosmological relaxation of the electroweak scale has been proposed as a mechanism to address the hierarchy problem of the Standard Model. A field, the relaxion, rolls down its potential and, in doing so, scans the squared mass parameter of the Higgs, relaxing it to a parametrically small value. In this work, we promote the relaxion to an inflaton. We couple it to Abelian gauge bosons, thereby introducing the necessary dissipation mechanism which slows down the field in the last stages. We describe a novel reheating mechanism, which relies on the gauge-boson production leading to strong electro-magnetic fields, and proceeds via the vacuum production of electron-positron pairs through the Schwinger effect. We refer to this mechanism as Schwinger reheating. We discuss the cosmological dynamics of the model and the phenomenological constraints from CMB and other experiments. We find that a cutoff close to the Planck scale may be achieved. In its minimal form, the model does not generate sufficient curvature perturbations and additional ingredients, such as a curvaton field, are needed.

  12. Impact of theoretical priors in cosmological analyses: The case of single field quintessence

    NASA Astrophysics Data System (ADS)

    Peirone, Simone; Martinelli, Matteo; Raveri, Marco; Silvestri, Alessandra

    2017-09-01

    We investigate the impact of general conditions of theoretical stability and cosmological viability on dynamical dark energy models. As a powerful example, we study whether minimally coupled, single field quintessence models that are safe from ghost instabilities, can source the Chevallier-Polarski-Linder (CPL) expansion history recently shown to be mildly favored by a combination of cosmic microwave background (Planck) and weak lensing (KiDS) data. We find that in their most conservative form, the theoretical conditions impact the analysis in such a way that smooth single field quintessence becomes significantly disfavored with respect to the standard ΛCDM cosmological model. This is due to the fact that these conditions cut a significant portion of the (w0,wa) parameter space for CPL, in particular, eliminating the region that would be favored by weak lensing data. Within the scenario of a smooth dynamical dark energy parametrized with CPL, weak lensing data favors a region that would require multiple fields to ensure gravitational stability.

  13. Asymmetric kinetic equilibria: Generalization of the BAS model for rotating magnetic profile and non-zero electric field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorville, Nicolas, E-mail: nicolas.dorville@lpp.polytechnique.fr; Belmont, Gérard; Aunai, Nicolas

    Finding kinetic equilibria for non-collisional/collisionless tangential current layers is a key issue, both for their theoretical modeling and for our understanding of the processes that disturb them, such as tearing or Kelvin-Helmholtz instabilities. The famous Harris equilibrium [E. Harris, Il Nuovo Cimento Ser. 10 23, 115–121 (1962)] assumes drifting Maxwellian distributions for ions and electrons, with constant temperatures and flow velocities; these assumptions lead to symmetric layers surrounded by vacuum. This very particular kind of layer is not suited for the general case: asymmetric boundaries between two media with different plasmas and different magnetic fields. The standard method for constructing more general kinetic equilibria consists in using Jeans' theorem, which says that any function depending only on the Hamiltonian constants of motion is a solution to the steady Vlasov equation [P. J. Channell, Phys. Fluids (1958–1988) 19, 1541 (1976); M. Roth et al., Space Sci. Rev. 76, 251–317 (1996); and F. Mottez, Phys. Plasmas 10, 1541–1545 (2003)]. The inverse implication is however not true: when using the motion invariants as variables instead of the velocity components, the general stationary particle distributions keep on depending explicitly on the position, in addition to the implicit dependence introduced by these invariants. The standard approach therefore strongly restricts the class of solutions to the problem and probably does not select the most physically reasonable. The BAS (Belmont-Aunai-Smets) model [G. Belmont et al., Phys. Plasmas 19, 022108 (2012)] used for the first time the concept of particle accessibility to find new solutions: considering the case of a coplanar-antiparallel magnetic field configuration without electric field, asymmetric solutions could be found, while the standard method can only lead to symmetric ones. These solutions were validated in a hybrid simulation [N. Aunai et al., Phys. Plasmas (1994-present) 20, 110702 (2013)], and more recently in a fully kinetic simulation as well [J. Dargent and N. Aunai, Phys. Plasmas (submitted)]. Nevertheless, in most asymmetric layers like the terrestrial magnetopause, one would indeed expect a magnetic field rotation from one direction to another without going through zero [J. Berchem and C. T. Russell, J. Geophys. Res. 87, 8139–8148 (1982)], and a non-zero normal electric field. In this paper, we propose the corresponding generalization: in the model presented, the profiles can be freely imposed for the magnetic field rotation (although restricted to a 180° rotation hitherto) and for the normal electric field. As was done previously, the equilibrium is tested with a hybrid simulation.

  14. Aspect Ratio Model for Radiation-Tolerant Dummy Gate-Assisted n-MOSFET Layout.

    PubMed

    Lee, Min Su; Lee, Hee Chul

    2014-01-01

    In order to acquire radiation-tolerant characteristics in integrated circuits, a dummy gate-assisted n-type metal oxide semiconductor field effect transistor (DGA n-MOSFET) layout was adopted. The DGA n-MOSFET has a different channel shape compared with the standard n-MOSFET. The standard n-MOSFET has a rectangular channel shape, whereas the DGA n-MOSFET has an extended rectangular shape at the edge of the source and drain, which affects its aspect ratio. In order to increase its practical use, a new aspect ratio model is proposed for the DGA n-MOSFET and this model is evaluated through three-dimensional simulations and measurements of the fabricated devices. The proposed aspect ratio model for the DGA n-MOSFET exhibits good agreement with the simulation and measurement results.

  15. Aspect Ratio Model for Radiation-Tolerant Dummy Gate-Assisted n-MOSFET Layout

    PubMed Central

    Lee, Min Su; Lee, Hee Chul

    2014-01-01

    In order to acquire radiation-tolerant characteristics in integrated circuits, a dummy gate-assisted n-type metal oxide semiconductor field effect transistor (DGA n-MOSFET) layout was adopted. The DGA n-MOSFET has a different channel shape compared with the standard n-MOSFET. The standard n-MOSFET has a rectangular channel shape, whereas the DGA n-MOSFET has an extended rectangular shape at the edge of the source and drain, which affects its aspect ratio. In order to increase its practical use, a new aspect ratio model is proposed for the DGA n-MOSFET and this model is evaluated through three-dimensional simulations and measurements of the fabricated devices. The proposed aspect ratio model for the DGA n-MOSFET exhibits good agreement with the simulation and measurement results. PMID:27350975

  16. Global electric field determination in the Earth's outer magnetosphere using energetic charged particles

    NASA Technical Reports Server (NTRS)

    Eastman, Timothy E.; Sheldon, R.; Hamilton, D.

    1995-01-01

    Although many properties of the Earth's magnetosphere have been measured and quantified in the 30 years since it was discovered, one fundamental measurement (for zeroth-order MHD equilibrium) has been made infrequently and with poor spatial coverage: the global electric field. This oversight is due in part to the neglect of theorists. However, there is renewed interest in the convection electric field because it is now realized to be central to many magnetospheric processes, including the global MHD equilibrium, reconnection rates, Region 2 Birkeland currents, magnetosphere-ionosphere coupling, ring current and radiation belt transport, substorm injections, and several acceleration mechanisms. Unfortunately, standard experimental methods have not been able to synthesize a global field (excepting the pioneering work of McIlwain's geostationary models), and we are left with an overly simplistic theoretical field, the Volland-Stern electric field model. Single-point measurements of the plasmapause were used to infer the appropriate amplitudes of this model, parameterized by Kp. Although this result was never intended to be the definitive electric field model, it has gone nearly unchanged for 20 years. The analysis of current data sets requires a great deal more accuracy than can be provided by the Volland-Stern model. The variability of electric field shielding has not been properly addressed, although effects of penetrating magnetospheric electric fields have been seen in mid- and low-latitude ionospheric data sets. The growing interest in substorm dynamics also requires a much better assessment of the electric fields responsible for particle injections. Thus we proposed and developed algorithms for extracting electric fields from particle data taken in the Earth's magnetosphere. As a test of the effectiveness of these new techniques, we analyzed data taken by the AMPTE/CCE spacecraft in equatorial orbit from 1984 to 1989.

  17. Integration of logistic regression and multicriteria land evaluation to simulate the establishment of a sustainable paddy field zone in Indramayu Regency, West Java Province, Indonesia

    NASA Astrophysics Data System (ADS)

    Nahib, Irmadi; Suryanta, Jaka; Niedyawati; Kardono, Priyadi; Turmudi; Lestari, Sri; Windiastuti, Rizka

    2018-05-01

    The Ministry of Agriculture has targeted production of 1.718 million tons of dry grain harvest during the period 2016-2021 to achieve food self-sufficiency, through optimization of special commodities including paddy, soybean and corn. This research was conducted to develop a sustainable paddy field zone delineation model using logistic regression and multicriteria land evaluation in Indramayu Regency. The model was built on the characteristics of local land-function conversion by considering the concept of sustainable development. A spatial data overlay was constructed using available data, and the model was then fitted to the occurrence of paddy field conversion between 1998 and 2015. The equation obtained for the model of paddy field changes was: logit (paddy field conversion) = – 2.3048 + 0.0032*X1 – 0.0027*X2 + 0.0081*X3 + 0.0025*X4 + 0.0026*X5 + 0.0128*X6 – 0.0093*X7 + 0.0032*X8 + 0.0071*X9 – 0.0046*X10, where X1 to X10 are the variables that determine the occurrence of changes in paddy fields, with a Relative Operating Characteristic (ROC) value of 0.8262. The weakest variable in influencing the change of paddy field function was X7 (paddy field price), while the most influential was X1 (distance from river). The result of the logistic regression was used as a weight for the multicriteria land evaluation, which recommended three scenarios of paddy field protection policy: standard, protective, and permissive. From this modelling, the priority paddy fields for the protected scenario were obtained, as well as buffer zones for the surrounding paddy fields.
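    The fitted equation above maps directly to a conversion probability through the inverse logit. A minimal sketch using the published coefficients; the predictor values passed in are hypothetical placeholders, since the abstract does not give the units of X1..X10:

    ```python
    import math

    # Intercept followed by the coefficients of X1..X10, as published above.
    COEF = [-2.3048, 0.0032, -0.0027, 0.0081, 0.0025, 0.0026,
            0.0128, -0.0093, 0.0032, 0.0071, -0.0046]

    def conversion_probability(x):
        """Map predictors X1..X10 to P(paddy field conversion) via the logit link."""
        logit = COEF[0] + sum(c * xi for c, xi in zip(COEF[1:], x))
        return 1.0 / (1.0 + math.exp(-logit))

    # With all predictors zero, the probability is set by the intercept alone,
    # giving a low baseline conversion probability.
    p0 = conversion_probability([0.0] * 10)
    ```

    The signs of the coefficients carry the interpretation in the abstract: a positive coefficient (e.g. X1, distance from river) raises the conversion probability, while a negative one (e.g. X7, paddy field price) lowers it.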

  18. SU-E-T-276: Dose Calculation Accuracy with a Standard Beam Model for Extended SSD Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisling, K; Court, L; Kirsner, S

    2015-06-15

    Purpose: While most photon treatments are delivered at or near 100cm SSD, a subset of patients may benefit from treatment at SSDs greater than 100cm. A proposed rotating chair for upright treatments would enable isocentric treatments at extended SSDs. The purpose of this study was to assess the accuracy of the Pinnacle³ treatment planning system dose calculation for standard beam geometries delivered at extended SSDs with a beam model commissioned at 100cm SSD. Methods: Dose to a water phantom at 100, 110, and 120cm SSD was calculated with the Pinnacle³ CC convolve algorithm for 6X beams for 5×5, 10×10, 20×20, and 30×30cm² field sizes (defined at the water surface for each SSD). PDDs and profiles (depths of 1.5, 12.5, and 22cm) were compared to measurements in water with an ionization chamber. Point-by-point agreement was analyzed, as well as agreement in field size defined by the 50% isodose. Results: The deviations of the calculated PDDs from measurement, analyzed from the depth of maximum dose to 23cm, were all within 1.3% for all beam geometries. In particular, the calculated PDDs at 10cm depth were all within 0.7% of measurement. For profiles, the deviations within the central 80% of the field were within 2.2% for all geometries. The field sizes all agreed within 2mm. Conclusion: The agreement of the PDDs and profiles calculated by Pinnacle³ for extended SSD geometries was within the acceptability criteria defined by Van Dyk (±2% for PDDs and ±3% for profiles). The accuracy of the calculation of more complex beam geometries at extended SSDs will be investigated to further assess the feasibility of using a standard beam model commissioned at 100cm SSD in Pinnacle³ for extended SSD treatments.
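    The acceptability check described in the conclusion is a straightforward point-by-point tolerance test. A minimal sketch, with invented sample depth-dose values standing in for the measured and calculated PDD curves:

```python
# Van Dyk acceptability criteria quoted in the abstract.
PDD_TOLERANCE = 2.0      # percent, for percentage depth doses
PROFILE_TOLERANCE = 3.0  # percent, for profiles

def max_deviation(calculated, measured):
    """Largest point-by-point deviation, in percent of the measured value."""
    return max(abs(c - m) / m * 100.0 for c, m in zip(calculated, measured))

# Hypothetical PDD points (depth of maximum dose down to 23 cm); these
# numbers are illustrative, not data from the study.
measured_pdd   = [100.0, 86.5, 66.8, 51.9, 40.4]
calculated_pdd = [100.0, 87.1, 67.2, 52.3, 40.0]

pdd_ok = max_deviation(calculated_pdd, measured_pdd) <= PDD_TOLERANCE
```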

  19. The phenomenology of maverick dark matter

    NASA Astrophysics Data System (ADS)

    Krusberg, Zosia Anna Celina

    Astrophysical observations from galactic to cosmological scales point to a substantial non-baryonic component to the universe's total matter density. Although very little is presently known about the physical properties of dark matter, its existence offers some of the most compelling evidence for physics beyond the standard model (BSM). In the weakly interacting massive particle (WIMP) scenario, the dark matter consists of particles that possess weak-scale interactions with the particles of the standard model, offering a compelling theoretical framework that allows us to understand the relic abundance of dark matter as a natural consequence of the thermal history of the early universe. From the perspective of particle physics phenomenology, the WIMP scenario is appealing for two additional reasons. First, many theories of BSM physics contain attractive WIMP candidates. Second, the weak-scale interactions between WIMPs and standard model particles imply the possibility of detecting scatterings between relic WIMPs and detector nuclei in direct detection experiments, products of WIMP annihilations at locations throughout the galaxy in indirect detection programs, and WIMP production signals at high-energy particle colliders. In this work, we use an effective field theory approach to study model-independent dark matter phenomenology in direct detection and collider experiments. The maverick dark matter scenario is defined by an effective field theory in which the WIMP is the only new particle within the energy range accessible to the Large Hadron Collider (LHC). Although certain assumptions are necessary to keep the problem tractable, we describe our WIMP candidate generically by specifying only its spin and dominant interaction form with standard model particles. 
Constraints are placed on the masses and coupling constants of the maverick WIMPs using the Wilkinson Microwave Anisotropy Probe (WMAP) relic density measurement and direct detection exclusion data from both spin-independent (XENON100 and SuperCDMS) and spin-dependent (COUPP) experiments. We further study the distinguishability of maverick WIMP production signals at the Tevatron and the LHC---at its early and nominal configurations---using standard simulation packages, place constraints on maverick WIMP properties using existing collider data, and determine projected mass reaches in future data from both colliders. We find ourselves in a unique era of theoretically-motivated, high-precision dark matter searches that hold the potential to give us important insights, not only into the nature of dark matter, but also into the physics that lies beyond the standard model.

  20. Potential GPRS 900/1800-MHz and WCDMA 1900-MHz interference to medical devices.

    PubMed

    Iskra, Steve; Thomas, Barry W; McKenzie, Ray; Rowley, Jack

    2007-10-01

    This study compared the potential for interference to medical devices from radio frequency (RF) fields radiated by GSM 900/1800-MHz, general packet radio service (GPRS) 900/1800-MHz, and wideband code division multiple access (WCDMA) 1900-MHz handsets. The study used a balanced half-wave dipole antenna, which was energized with a signal at the standard power level for each technology, and then brought towards the medical device while noting the distance at which interference became apparent. Additional testing was performed with signals that comply with the requirements of the international immunity standard to RF fields, IEC 61000-4-3. The testing provides a sense of the overall interference impact that GPRS and WCDMA (frequency division duplex) may have, relative to current mobile technologies, and to the internationally recognized standard for radiated RF immunity. Ten medical devices were tested: two pulse oximeters, a blood pressure monitor, a patient monitor, a humidifier, three models of cardiac defibrillator, and two models of infusion pump. Our conclusion from this and a related study on consumer devices is that WCDMA handsets are unlikely to be a significant interference threat to medical electronics at typical separation distances.

  1. Accretion of magnetized matter into a black hole.

    NASA Astrophysics Data System (ADS)

    Bisnovatyj-Kogan, G. S.

    1999-12-01

    Accretion is the main source of energy in binary X-ray sources inside the Galaxy, and most probably in active galactic nuclei, where numerous observational data for the existence of supermassive black holes have been obtained. Standard accretion disk theory is formulated on the basis of local heat balance: all the energy produced by turbulent viscous heating is supposed to be emitted to the sides of the disk. Sources of turbulence in the accretion disk are discussed, including nonlinear hydrodynamic turbulence, convection, and magnetic fields. In the standard theory there are two branches of solution, optically thick and optically thin, which are individually self-consistent. The choice between these solutions should be made on the basis of a stability analysis. Advection in accretion disks is described by differential equations, which makes the theory nonlocal. The low-luminosity optically thin accretion disk model with advection may, under some conditions, become advectively dominated, carrying almost all the energy inside the black hole. Proper account of the magnetic field in the accretion process limits the energy advected into a black hole and does not allow the radiative efficiency of accretion to become lower than about 1/4 of the standard accretion disk model efficiency.

  2. A standard protocol for describing individual-based and agent-based models

    USGS Publications Warehouse

    Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.

    2006-01-01

    Simulation models that describe autonomous individual organisms (individual based models, IBM) or agents (agent-based models, ABM) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.
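    The three-block, seven-element structure of the ODD protocol described above can be captured as a simple data structure (a sketch; element names follow the abstract, everything else is illustrative):

```python
# The ODD protocol: three blocks (Overview, Design concepts, Details)
# subdivided into seven elements, as listed in the abstract.
ODD_PROTOCOL = {
    "Overview": [
        "Purpose",
        "State variables and scales",
        "Process overview and scheduling",
    ],
    "Design concepts": [
        "Design concepts",
    ],
    "Details": [
        "Initialization",
        "Input",
        "Submodels",
    ],
}

def odd_elements():
    """Flat list of the seven ODD elements in their documented order."""
    return [e for block in ODD_PROTOCOL.values() for e in block]
```

A model description following ODD would supply one section of text per element, in this order.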

  3. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business, and quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modeling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established from data acquisition through processing, analysis and visualization, quality management is not yet a standard part of this workflow. Processing data sets with unclear specification leads to erroneous results and application defects; we show that this problem persists even if data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  4. Fermion Cooper pairing with unequal masses: Standard field theory approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He Lianyi; Jin Meng; Zhuang Pengfei

    Fermion Cooper pairing with unequal masses is investigated in a standard field theory approach. We derive the superfluid density and the Meissner mass squared of the U(1) gauge field in a general two-species model and find that the often-used proportionality between the two quantities is broken when the fermion masses are unequal. In the weak-coupling region, the superfluid density is always negative, but the Meissner mass squared becomes mostly positive when the mass ratio between the pairing fermions is large enough. We establish a proper momentum configuration of the LOFF pairing with unequal masses and show that the LOFF state is energetically favored due to the negative superfluid density. The single-plane-wave LOFF state is physically equivalent to an anisotropic state with a spontaneously generated superflow. The extension to a finite-range interaction is briefly discussed.

  5. Coulomb-free and Coulomb-distorted recolliding quantum orbits in photoelectron holography

    NASA Astrophysics Data System (ADS)

    Maxwell, A. S.; Figueira de Morisson Faria, C.

    2018-06-01

    We perform a detailed analysis of the different types of orbits in the Coulomb quantum orbit strong-field approximation (CQSFA), ranging from direct orbits to those undergoing hard collisions. We show that some of them have clear counterparts in the standard formulations of the strong-field approximation for direct and rescattered above-threshold ionization, and that the standard orbit classification commonly used in Coulomb-corrected models is over-simplified. We identify several types of rescattered orbits, such as those responsible for the low-energy structures reported in the literature, and determine the momentum regions in which they occur. We also find formerly overlooked interference patterns caused by backscattered Coulomb-corrected orbits and assess their effect on photoelectron angular distributions. These orbits improve the agreement of photoelectron angular distributions computed with the CQSFA with the outcome of ab initio methods for high-energy photoelectrons perpendicular to the field polarization axis.

  6. Relationship Between Frequency and Deflection Angle in the DNA Prism

    PubMed Central

    Chen, Zhen; Dorfman, Kevin D.

    2013-01-01

    The DNA prism is a modification of the standard pulsed-field electrophoresis protocol to provide a continuous separation, where the DNA are deflected at an angle that depends on their molecular weight. The standard switchback model for the DNA prism predicts a monotonic increase in the deflection angle as a function of the frequency for switching the field until a plateau regime is reached. However, experiments indicate that the deflection angle achieves a maximum value before decaying to a size-independent value at high frequencies. Using Brownian dynamics simulations, we show that the maximum in the deflection angle is related to the reorientation time for the DNA and the decay in deflection angle at high frequencies is due to inadequate stretching. The generic features of the dependence of the deflection angle on molecular weight, switching frequency, and electric field strength explain a number of experimental phenomena. PMID:23410375
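    In the idealized switchback picture, the field alternates between two directions separated by a fixed angle, and the time-averaged displacement sets the deflection angle. The following sketch is my geometric illustration of that picture (not the paper's Brownian dynamics code), with hypothetical speeds and pulse times:

```python
import math

# Idealized switchback geometry: field 1 drives speed v1 for time t1,
# field 2 (rotated by alpha_deg) drives speed v2 for time t2; the mean
# migration direction over one cycle gives the deflection angle.
def deflection_angle(v1, t1, v2, t2, alpha_deg):
    """Angle (degrees) of the mean migration direction, measured from field 1."""
    alpha = math.radians(alpha_deg)
    # Net displacement components over one switching cycle
    x = v1 * t1 + v2 * t2 * math.cos(alpha)
    y = v2 * t2 * math.sin(alpha)
    return math.degrees(math.atan2(y, x))

# Equal speeds and pulse times with fields 120 degrees apart:
# the mean direction bisects the two fields.
angle = deflection_angle(1.0, 1.0, 1.0, 1.0, 120.0)
```

This picture yields the monotonic frequency dependence the abstract attributes to the standard model; the observed maximum and high-frequency decay require the reorientation and stretching effects found in the simulations.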

  7. Scalar field propagation in the ϕ⁴ κ-Minkowski model

    NASA Astrophysics Data System (ADS)

    Meljanac, S.; Samsarov, A.; Trampetić, J.; Wohlgenannt, M.

    2011-12-01

    In this article we use the noncommutative (NC) κ-Minkowski ϕ⁴ model based on the κ-deformed star product (★h). The action is modified by expanding up to linear order in the κ-deformation parameter a, producing an effective model on commutative spacetime. For the computation of the tadpole diagram contributions to the scalar field propagation/self-energy, we anticipate that statistics on κ-Minkowski space is specifically κ-deformed. Our prescription thus represents a hybrid approach between standard quantum field theory (QFT) and NCQFT on the κ-deformed Minkowski spacetime, resulting in a κ-effective model. The propagation is analyzed in the framework of the two-point Green's function for low, intermediate, and Planckian propagation energies. Semiclassical/hybrid behavior of the first-order quantum correction shows up due to the κ-deformed momentum conservation law. For low energies, the dependence of the tadpole contribution on the deformation parameter a drops out completely, while for Planckian energies it tends to a fixed finite value. The mass term of the scalar field is shifted, and these shifts are very different at different propagation energies. At Planckian energies we obtain direction-dependent κ-modified dispersion relations. Thus our κ-effective model for the massive scalar field exhibits a birefringence effect.

  8. How TK-TD and population models for aquatic macrophytes could support the risk assessment for plant protection products.

    PubMed

    Hommen, Udo; Schmitt, Walter; Heine, Simon; Brock, Theo Cm; Duquesne, Sabine; Manson, Phil; Meregalli, Giovanna; Ochoa-Acuña, Hugo; van Vliet, Peter; Arts, Gertie

    2016-01-01

    This case study of the Society of Environmental Toxicology and Chemistry (SETAC) workshop MODELINK demonstrates the potential use of mechanistic effect models for macrophytes to extrapolate from effects of a plant protection product observed in laboratory tests to effects of dynamic exposure on macrophyte populations in edge-of-field water bodies. A standard European Union (EU) risk assessment for an example herbicide based on macrophyte laboratory tests indicated risks for several exposure scenarios. Three of these scenarios are further analyzed using effect models for 2 aquatic macrophytes: the free-floating standard test species Lemna sp. and the sediment-rooted, submerged additional standard test species Myriophyllum spicatum. Both models include a toxicokinetic (TK) part describing uptake and elimination of the toxicant, a toxicodynamic (TD) part describing the internal concentration-response function for growth inhibition, and a description of biomass growth as a function of environmental factors to allow simulation of seasonal dynamics. The TK-TD models were calibrated and tested using laboratory tests, whereas the growth models were assumed to be fit for purpose based on comparisons of predictions with typical growth patterns observed in the field. For the risk assessment, biomass dynamics are predicted for the control situation and for several exposure levels. Based on specific protection goals for macrophytes, preliminary example decision criteria are suggested for evaluating the model outputs. The models refined the risk indicated by lower-tier testing for 2 exposure scenarios, while confirming the risk for the third. Uncertainties related to the experimental and modeling approaches and their application in the risk assessment are discussed.
Based on this case study and the assumption that the models prove suitable for risk assessment once fully evaluated, we recommend that 1) ecological scenarios be developed that are also linked to the exposure scenarios, and 2) quantitative protection goals be set to facilitate the interpretation of model results for risk assessment. © 2015 SETAC.

  9. Multi-agent systems in epidemiology: a first step for computational biology in the study of vector-borne disease transmission.

    PubMed

    Roche, Benjamin; Guégan, Jean-François; Bousquet, François

    2008-10-15

    Computational biology is often associated only with genetic or genomic studies. However, thanks to the increase of computational resources, computational models are appreciated as useful tools in many other scientific fields. Such modeling systems are particularly relevant for the study of complex systems, like the epidemiology of emerging infectious diseases. So far, mathematical models remain the main tool for the epidemiological and ecological analysis of infectious diseases, with SIR models seen as an implicit standard in epidemiology. Unfortunately, these models are based on differential equations and can therefore very rapidly become unmanageable because of the many parameters that need to be taken into consideration. For instance, in the case of zoonotic and vector-borne diseases in wildlife, many different potential host species may be involved in the life cycle of disease transmission, and SIR models might not be the most suitable tool to truly capture the overall disease circulation within that environment. This limitation underlines the necessity to develop a standard spatial model that can cope with the transmission of disease in realistic ecosystems. Computational biology may prove flexible enough to take into account the natural complexity observed in both natural and man-made ecosystems. In this paper, we propose a new computational model to study the transmission of infectious diseases in a spatially explicit context. We developed a multi-agent system model for vector-borne disease transmission in a realistic spatial environment. Here we describe in detail the general behavior of this model, which we hope will become a standard reference for the study of vector-borne disease transmission in wildlife. To conclude, we show how this simple model could easily be adapted and modified to serve as a common framework for further research developments in this field.
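    The differential-equation framework contrasted with the agent-based approach is the classic SIR system, dS/dt = -βSI, dI/dt = βSI - γI, dR/dt = γI. A minimal forward-Euler sketch (parameter values are arbitrary illustration, not from the paper):

```python
# Classic SIR compartment model integrated with forward Euler.
# beta: transmission rate, gamma: recovery rate; populations are fractions.
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR equations."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def run_sir(s0=0.99, i0=0.01, r0=0.0, beta=0.5, gamma=0.1, dt=0.1, steps=1000):
    s, i, r = s0, i0, r0
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r

s, i, r = run_sir()
```

Adding multiple host species multiplies the compartments and coupling parameters, which is exactly the unmanageability the abstract cites as motivation for the multi-agent approach.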

  10. Mechanical Engineering Technology Curriculum.

    ERIC Educational Resources Information Center

    Georgia State Univ., Atlanta. Dept. of Vocational and Career Development.

    This guide offers information and procedures necessary to train mechanical engineering technicians. Discussed first are the rationale and objectives of the curriculum. The occupational field of mechanical engineering technology is described. Next, a curriculum model is set forth that contains information on the standard mechanical engineering…

  11. Optics & Opto-Electronic Systems

    DTIC Science & Technology

    1988-06-01

    its reflection by the cavity boundaries, and its reabsorption by the atom. Multimode corrections to the single-mode Jaynes-Cummings model are...walls. Transients in the Micromaser C. R. Stroud, Jr. The Jaynes-Cummings model of a single two-level atom interacting with a single field mode of a...increasing laser intensity and to be as large as 22 bits/sec. A standard model of self-pumped phase conjugation due to four-wave mixing has been

  12. Toward inflation models compatible with the no-boundary proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Dong-il; Yeom, Dong-han, E-mail: dongil.j.hwang@gmail.com, E-mail: innocent.yeom@gmail.com

    2014-06-01

    In this paper, we investigate various inflation models in the context of the no-boundary proposal. We propose that a good inflation model should satisfy three conditions: observational constraints, plausible initial conditions, and naturalness of the model. For various inflation models, we assign a probability to each initial condition using the no-boundary proposal and define a quantitative standard, typicality, to check whether the model satisfies the observational constraints with probable initial conditions. There are three possible ways to satisfy the typicality criterion: there was pre-inflation near the high energy scale; the potential is finely tuned or the inflationary field space is unbounded; or there is a sufficient number of fields that contribute to inflation. The no-boundary proposal rejects some naive inflation models, explains some traditional doubts about inflation, and may have observational consequences.

  13. Singlet model interference effects with high scale UV physics

    DOE PAGES

    Dawson, S.; Lewis, I. M.

    2017-01-06

    One of the simplest extensions of the Standard Model (SM) is the addition of a scalar gauge singlet, S. If S is not forbidden by a symmetry from mixing with the Standard Model Higgs boson, the mixing will generate non-SM rates for Higgs production and decays. Generally, there could also be unknown high-energy physics that generates additional effective low-energy interactions. We show that interference effects between the scalar resonance of the singlet model and the effective field theory (EFT) operators can have significant effects in the Higgs sector. Here, we examine a non-Z₂-symmetric scalar singlet model and demonstrate that a fit to the 125 GeV Higgs boson couplings and to limits on high-mass resonances, S, exhibits an interesting structure, with possible large cancellations between the resonance contribution and the new EFT interactions that invalidate conclusions based on the renormalizable singlet model alone.

  14. The Flare/CME Connection

    NASA Technical Reports Server (NTRS)

    Moore, Ron; Falconer, David; Sterling, Alphonse

    2008-01-01

    We present evidence supporting the view that, while many flares are produced by a confined magnetic explosion that does not produce a CME, every CME is produced by an ejective magnetic explosion that also produces a flare. The evidence is that the observed heliocentric angular width of the full-blown CME plasmoid in the outer corona (at 3 to 20 solar radii) is about that predicted by the standard model for CME production, from the amount of magnetic flux covered by the co-produced flare arcade. In the standard model, sheared and twisted sigmoidal field in the core of an initially closed magnetic arcade erupts. As it erupts, tether-cutting reconnection, starting between the legs of the erupting sigmoid and continuing between the merging stretched legs of the enveloping arcade, simultaneously produces a growing flare arcade and unleashes the erupting sigmoid and arcade to become the low-beta plasmoid (magnetic bubble) that becomes the CME. The flare arcade is the downward product of the reconnection and the CME plasmoid is the upward product. The unleashed, expanding CME plasmoid is propelled into the outer corona and solar wind by its own magnetic field pushing on the surrounding field in the inner and outer corona. This tether-cutting scenario predicts that the amount of magnetic flux in the full-blown CME plasmoid nearly equals that covered by the full-grown flare arcade. This equality predicts (1) the field strength in the flare region from the ratio of the angular width of the CME in the outer corona to the angular width of the full-grown flare arcade, and (2) an upper bound on the angular width of the CME in the outer corona from the total magnetic flux in the active region from which the CME explodes. We show that these predictions are fulfilled by observed CMEs. This agreement validates the standard model.
The model explains (1) why most CMEs have much greater angular widths than their co-produced flares, and (2) why the radial path of a CME in the outer corona can be laterally far offset from the co-produced flare.

  15. STELLAR DYNAMO MODELS WITH PROMINENT SURFACE TOROIDAL FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonanno, Alfio

    2016-12-20

    Recent spectro-polarimetric observations of solar-type stars have shown the presence of photospheric magnetic fields with a predominant toroidal component. If the external field is assumed to be current-free, it is impossible to explain these observations within the framework of standard mean-field dynamo theory. In this work, it is shown that if the coronal field of these stars is assumed to be harmonic, the underlying stellar dynamo mechanism can support photospheric magnetic fields with a prominent toroidal component even in the presence of axisymmetric magnetic topologies. In particular, it is argued that the observed increase in toroidal energy in low-mass fast-rotating stars can be naturally explained by an underlying αΩ mechanism.

  16. EarthCache as a Tool to Promote Earth-Science in Public School Classrooms

    NASA Astrophysics Data System (ADS)

    Gochis, E. E.; Rose, W. I.; Klawiter, M.; Vye, E. C.; Engelmann, C. A.

    2011-12-01

    Geoscientists often find it difficult to bridge the gap in communication between university research and what is learned in the public schools. Today's schools operate in a high-stakes environment that only allows instruction based on state and national Earth Science curriculum standards. These standards are often unknown to academics or are written in a style that obfuscates the transfer of emerging scientific research to students in the classroom. Earth Science teachers are in an ideal position to make this link because they have a background in science as well as a solid understanding of the required curriculum standards for their grade and the pedagogical expertise to pass on new information to their students. As part of the Michigan Teacher Excellence Program (MiTEP), teachers from the Grand Rapids, Kalamazoo, and Jackson school districts participate in two-week field courses with Michigan Tech University to learn from earth science experts about how the earth works. This course connects the Earth Science Literacy Principles' Big Ideas and common student misconceptions with standards-based education. During the 2011 field course, we developed and began to implement a three-phase EarthCache model that provides a geospatial interactive medium for teachers to translate the material they learn in the field to the students in their standards-based classrooms. MiTEP participants use GPS and Google Earth to navigate to Michigan sites of geo-significance. At each location, academic experts aid participants in making scientific observations about the location's geologic features and in using "reading the rocks" methodology to interpret the area's geologic history. The participants are then expected to develop their own EarthCache site to be used as a pedagogical tool bridging the gap between standards-based classroom learning, contemporary research, and unique outdoor field experiences.
The final phase supports teachers in integrating inquiry-based, higher-level student learning activities into EarthCache sites near their own urban communities, or in regional areas such as nature preserves and National Parks. By working together, MiTEP participants are developing a network of regional EarthCache sites and shared lesson plans that explore places meaningful to students while simultaneously connecting them to geologic concepts they are learning in school. We believe that the MiTEP EarthCaching model will help participants emerge as leaders in inquiry-style and virtual place-based education within their districts.

  17. New chiral fermions, a new gauge interaction, Dirac neutrinos, and dark matter

    DOE PAGES

    de Gouvea, Andre; Hernandez, Daniel

    2015-10-07

    Here, we propose that all light fermionic degrees of freedom, including the Standard Model (SM) fermions and all possible light beyond-the-standard-model fields, are chiral with respect to some spontaneously broken abelian gauge symmetry. Hypercharge, for example, plays this role for the SM fermions. We introduce a new symmetry, U(1)ν, for all new light fermionic states. Anomaly cancellations mandate the existence of several new fermion fields with nontrivial U(1)ν charges. We develop a concrete model of this type, for which we show that (i) some fermions remain massless after U(1)ν breaking — similar to SM neutrinos — and (ii) accidental global symmetries translate into stable massive particles — similar to SM protons. These ingredients provide a solution to the dark matter and neutrino mass puzzles, assuming one also postulates the existence of heavy degrees of freedom that act as "mediators" between the two sectors. The neutrino mass mechanism described here leads to parametrically small Dirac neutrino masses, and the model also requires the existence of at least four Dirac sterile neutrinos. Finally, we describe a general technique to write down chiral-fermions-only models that are at least anomaly-free under a U(1) gauge symmetry.

  19. Spectrum-doubled heavy vector bosons at the LHC

    DOE PAGES

    Appelquist, Thomas; Bai, Yang; Ingoldby, James; ...

    2016-01-19

    We study a simple effective field theory incorporating six heavy vector bosons together with the standard-model field content. The new particles preserve custodial symmetry as well as an approximate left-right parity symmetry. The enhanced symmetry of the model allows it to satisfy precision electroweak constraints and bounds from Higgs physics in a regime where all the couplings are perturbative and where the amount of fine-tuning is comparable to that in the standard model itself. We find that the model could explain the recently observed excesses in di-boson processes at invariant mass close to 2 TeV from LHC Run 1 for a range of allowed parameter space. The masses of all the particles differ by no more than roughly 10%. In a portion of the allowed parameter space only one of the new particles has a production cross section large enough to be detectable with the energy and luminosity of Run 1, both via its decay to WZ and to Wh, while the others have suppressed production rates. Furthermore, the model can be tested at the higher-energy and higher-luminosity run of the LHC even for an overall scale of the new particles higher than 3 TeV.

  20. A comparative study of various inflow boundary conditions and turbulence models for wind turbine wake predictions

    NASA Astrophysics Data System (ADS)

    Tian, Lin-Lin; Zhao, Ning; Song, Yi-Lei; Zhu, Chun-Ling

    2018-05-01

    This work performs a systematic sensitivity analysis of different turbulence models and various inflow boundary conditions in predicting the wake flow behind a horizontal-axis wind turbine represented by an actuator disc (AD). The tested turbulence models are the standard k-ε model and the Reynolds Stress Model (RSM). A single wind turbine immersed both in uniform flow and in modeled atmospheric boundary layer (ABL) flows is studied. Simulation results are validated against field experimental data in terms of wake velocity and turbulence intensity.
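
    For orientation, the eddy-viscosity closure at the heart of the standard k-ε model can be sketched in a few lines. The constant C_μ = 0.09 is the conventional model value; the input numbers below are illustrative only and are not taken from the paper:

```python
# Standard k-epsilon closure: the RANS equations are closed with an eddy
# viscosity nu_t = C_mu * k^2 / epsilon.
C_MU = 0.09  # standard model constant

def eddy_viscosity(k, eps):
    """Turbulent (eddy) viscosity from turbulent kinetic energy k [m^2/s^2]
    and dissipation rate eps [m^2/s^3]."""
    return C_MU * k**2 / eps

# Illustrative values only (not from the study)
nu_t = eddy_viscosity(k=0.5, eps=0.01)
print(f"nu_t = {nu_t:.2f} m^2/s")  # nu_t = 2.25 m^2/s
```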

  1. Finite temperature corrections and embedded strings in noncommutative geometry and the standard model with neutrino mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martins, R. A.

    The recent extension of the standard model to include massive neutrinos in the framework of noncommutative geometry and the spectral action principle involves new scalar fields and their interactions with the usual complex scalar doublet. After ensuring that they bring no unphysical consequences, we address the question of how these fields affect the physics predicted in the Weinberg-Salam theory, particularly in the context of the electroweak phase transition. Applying the Dolan-Jackiw procedure, we calculate the finite temperature corrections, and find that the phase transition is first order. The new scalar interactions significantly improve the stability of the electroweak Z string, through the 'bag' phenomenon described by Vachaspati and Watkins ['Bound states can stabilize electroweak strings', Phys. Lett. B 318, 163-168 (1993)]. (Cosmic strings have recently regained interest in light of new evidence.) Sourced by static embedded strings, an internal space analogy of Cartan's torsion is drawn, and a possible Higgs-force-like 'gravitational' effect of this nonpropagating torsion on the fermion masses is described. We also check that the field generating the Majorana mass for the νR is nonzero in the physical vacuum.

  2. Feynman rules for the Standard Model Effective Field Theory in R ξ -gauges

    NASA Astrophysics Data System (ADS)

    Dedes, A.; Materkowska, W.; Paraskevas, M.; Rosiek, J.; Suxho, K.

    2017-06-01

    We assume that New Physics effects are parametrized within the Standard Model Effective Field Theory (SMEFT) written in a complete basis of gauge invariant operators up to dimension 6, commonly referred to as the "Warsaw basis". We discuss all steps necessary to obtain a consistent transition to the spontaneously broken theory and several other important aspects, including the BRST-invariance of the SMEFT action for linear R ξ -gauges. The final theory is expressed in a basis characterized by SM-like propagators for all physical and unphysical fields. The effect of the non-renormalizable operators appears explicitly in triple or higher multiplicity vertices. In this mass basis we derive the complete set of Feynman rules, without resorting to any simplifying assumptions such as baryon-number, lepton-number or CP conservation. As it turns out, for most SMEFT vertices the expressions are reasonably short, with the notable exception of those involving 4, 5 and 6 gluons. We have also supplemented our set of Feynman rules, given in an appendix here, with a publicly available Mathematica code working with the FeynRules package and producing output which can be integrated with other symbolic algebra or numerical codes for automatic SMEFT amplitude calculations.

  3. Low-derivative operators of the Standard Model effective field theory via Hilbert series methods

    NASA Astrophysics Data System (ADS)

    Lehman, Landon; Martin, Adam

    2016-02-01

    In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we conjecture an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the conjectured technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation of motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration by parts redundancies. While restricted, this technique is sufficient to automatically recreate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For these remaining classes, there is a well-defined procedure to manually determine the number of invariants. Assuming our method is correct, we derive a set of 535 dimension-8 Nf = 1 operators.
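
    The counting idea behind Hilbert series methods can be illustrated with a toy example (a sketch, not the paper's algorithm): for a single complex scalar φ of U(1) charge +1, the only invariant building block is φφ†, so the Hilbert series is 1/(1 − t²) and the number of invariant monomials up to degree d is ⌊d/2⌋ + 1. A brute-force count reproduces this:

```python
from itertools import product

def count_u1_invariants(max_degree):
    """Count monomials phi^a * phibar^b with net U(1) charge a - b = 0
    and total degree a + b <= max_degree (brute force)."""
    return sum(1 for a, b in product(range(max_degree + 1), repeat=2)
               if a - b == 0 and a + b <= max_degree)

# Hilbert series 1/(1 - t^2) predicts floor(d/2) + 1 invariants up to degree d
for d in range(7):
    assert count_u1_invariants(d) == d // 2 + 1
print(count_u1_invariants(6))  # 4: the identity, phi*phibar, and its squares/cubes
```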

  4. An ecological approach to problems of Dark Energy, Dark Matter, MOND and Neutrinos

    NASA Astrophysics Data System (ADS)

    Zhao, Hong Sheng

    2008-11-01

    Modern astronomical data on galaxy and cosmological scales have powerfully revealed the existence of certain dark sectors of fundamental physics, i.e., particles and fields outside the standard models and inaccessible to current experiments. Various approaches are taken to modify/extend the standard models. Generic theories introduce multiple decoupled fields A, B, C, each responsible for the effects of DM (cold supersymmetric particles), DE (Dark Energy), and MG (Modified Gravity), respectively. Some theories adopt vanilla combinations like AB, BC, or CA, and assume A, B, C belong to decoupled sectors of physics. MOND-like MG and Cold DM are often taken as antagonizing frameworks, e.g., in the muddled debate around the Bullet Cluster. Here we argue that these ad hoc divisions of sectors miss important clues from the data. The data actually suggest that the physics of all dark sectors is likely linked together by a self-interacting oscillating field, which governs a chameleon-like dark fluid, appearing as DM, DE and MG in different settings. It is timely to consider an interdisciplinary approach across all semantic boundaries of dark sectors, treating the dark stress as one identity, hence accounting for several "coincidences" naturally.

  5. A minimal scale invariant axion solution to the strong CP-problem

    NASA Astrophysics Data System (ADS)

    Tokareva, Anna

    2018-05-01

    We present a scale-invariant extension of the Standard Model allowing for the Kim-Shifman-Vainshtein-Zakharov (KSVZ) axion solution of the strong CP problem in QCD. We add the minimal number of new particles and show that the Peccei-Quinn scalar might be identified with the complex dilaton field. Scale invariance, together with the Peccei-Quinn symmetry, is broken spontaneously near the Planck scale before inflation, which is driven by the Standard Model Higgs field. We present a set of general conditions which make this scenario viable, and an explicit example of an effective theory possessing spontaneous breaking of scale invariance. We show that this description works both for inflation and for low-energy physics in the electroweak vacuum. This scenario can provide a self-consistent inflationary stage and, at the same time, successfully avoid the cosmological bounds on the axion. Our general predictions are the existence of a colored TeV-mass fermion and of the QCD axion. The latter has all the properties of the KSVZ axion but does not contribute to dark matter. This axion can be searched for via its mixing with a photon in an external magnetic field.

  6. Implications of Neutrino Oscillations on the Dark-Matter World

    NASA Astrophysics Data System (ADS)

    Hwang, W.-Y. Pauchy

    2014-01-01

    According to my own belief that "God wouldn't create a world so boring that a particle knows only the very feeble weak interaction", maybe we underestimate the roles of neutrinos. We note that right-handed neutrinos play no role, or don't exist, in the minimal Standard Model. We discuss the language for writing down an extended Standard Model: using renormalizable quantum field theory as the language; starting with a certain set of basic units under a certain gauge group; in fact, using the three right-handed neutrinos to initiate the family gauge group SUf(3). Specifically, we use the left-handed and right-handed spinors to form the basic units, together with SUc(3) × SUL(2) × U(1) × SUf(3) as the gauge group. The dark-matter SUf(3) world couples with the lepton world, but not with the quark world. Amazingly enough, the space of the Standard-Model Higgs Φ(1, 2), the family Higgs triplet Φ(3, 1), and the neutral part of the mixed family Higgs Φ0(3, 2) undergoes spontaneous symmetry breaking, i.e. the Standard-Model Higgs mechanism and the "project-out" family Higgs mechanism, to give rise to the weak bosons W± and Z0, one Standard-Model Higgs, the eight massive family gauge bosons, and the remaining four massive neutral family Higgs particles, and nothing more. Thus, the roles of neutrinos in this extended Standard Model are extremely interesting in connection with the dark-matter world.

  7. Generalized two-dimensional chiral QED: Anomaly and exotic statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saradzhev, F.M.

    1997-07-01

    We study the influence of the anomaly on the physical quantum picture of the generalized chiral Schwinger model defined on S^1. We show that the anomaly (i) results in a background linearly rising electric field and (ii) makes the spectrum of the physical Hamiltonian nonrelativistic without a massive boson. The physical matter fields acquire exotic statistics. We construct explicitly the algebra of the Poincaré generators and show that it differs from the Poincaré one. We exhibit the role of the vacuum Berry phase in the failure of the Poincaré algebra to close. We prove that, in spite of the background electric field, such a phenomenon as the total screening of external charges, characteristic of the standard Schwinger model, takes place in the generalized chiral Schwinger model, too. © 1997 The American Physical Society

  8. Research and development of energy-efficient appliance motor-compressors. Volume IV. Production demonstration and field test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, M.G.; Sauber, R.S.

    Two models of a high-efficiency compressor were manufactured in a pilot production run. These compressors were for low back-pressure applications. While based on a production compressor, there were many changes that required production process changes. Some changes were performed within our company and others were made by outside vendors. The compressors were used in top-mount refrigerator-freezers and sold in normal distribution channels. Forty units were placed in residences for a one-year field test. Additional compressors were built so that a life test program could be performed. The results of the field test reveal a 27.0% improvement in energy consumption for the 18 ft³ high-efficiency model and a 15.6% improvement for the 21 ft³ high-efficiency model as compared to the standard production unit.

  9. Anatomy of the ATLAS diboson anomaly

    NASA Astrophysics Data System (ADS)

    Allanach, B. C.; Gripaios, Ben; Sutherland, Dave

    2015-09-01

    We perform a general analysis of new physics interpretations of the recent ATLAS diboson excesses over standard model expectations in LHC Run I collisions. First, we estimate a likelihood function in terms of the truth signal in the W W , W Z , and Z Z channels, finding that the maximum has zero events in the W Z channel, though the likelihood is sufficiently flat to allow other scenarios. Second, we survey the possible effective field theories containing the standard model plus a new resonance that could explain the data, identifying two possibilities, viz. a vector that is either a left- or right-handed S U (2 ) triplet. Finally, we compare these models with other experimental data and determine the parameter regions in which they provide a consistent explanation.
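
    The counting-experiment fit described above can be caricatured with a single-channel Poisson model: for observed count n and expected background b, the maximum-likelihood signal is ŝ = max(0, n − b). The channel counts below are hypothetical placeholders, not the ATLAS numbers:

```python
# Single-channel caricature of a diboson counting fit (counts are made up,
# NOT the ATLAS data). For a Poisson model with mean mu = s + b, the
# likelihood in s is maximized at s_hat = max(0, n_obs - b) once the
# physical constraint s >= 0 is imposed.
def mle_signal(n_obs, bkg):
    return max(0.0, n_obs - bkg)

channels = {"WW": (13, 8.0), "WZ": (7, 7.5), "ZZ": (9, 6.0)}
for name, (n, b) in channels.items():
    print(name, mle_signal(n, b))  # with these toy counts, WZ prefers zero signal
```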

  10. Low energy analysis of νN→νNγ in the standard model

    NASA Astrophysics Data System (ADS)

    Hill, Richard J.

    2010-01-01

    The production of single photons in low energy (~1 GeV) neutrino scattering off nucleons is analyzed in the standard model. At very low energies, Eν ≪ GeV, a simple description of the chiral Lagrangian involving baryons and arbitrary SU(2)L × U(1)Y gauge fields is developed. Extrapolation of the process into the ~1-2 GeV region is treated in a simple phenomenological model. Coherent enhancements in compound nuclei are studied. The relevance of single-photon events as a background to experimental searches for νμ → νe is discussed. In particular, single photons are a plausible explanation for excess events observed by the MiniBooNE experiment.

  11. The Beam Dynamics and Beam Related Uncertainties in Fermilab Muon $g-2$ Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wanwei

    The anomaly of the muon magnetic moment, aμ ≡ (g-2)/2, has played an important role in constraining physics beyond the Standard Model for many years. Currently, the Standard Model prediction for aμ is accurate to 0.42 parts per million (ppm). The most recent muon g-2 experiment was done at Brookhaven National Laboratory (BNL) and determined aμ to 0.54 ppm, with a central value that differs from the Standard Model prediction by 3.3-3.6 standard deviations and provides a strong hint of new physics. The Fermilab Muon g-2 Experiment has a goal to measure aμ to unprecedented precision: 0.14 ppm, which could provide an unambiguous answer to the question of whether there are new particles and forces in nature. To achieve this goal, several items have been identified to lower the systematic uncertainties. In this work, we focus on the beam dynamics and beam-associated uncertainties, which are important and must be better understood. We will discuss the electrostatic quadrupole system, particularly the hardware-related quad plate alignment and the quad extension and readout system. We will review the beam dynamics in the muon storage ring, present discussions of the beam-related systematic errors, simulate the 3D electric fields of the electrostatic quadrupoles and examine the beam resonances. We will use a fast rotation analysis to study the muon radial momentum distribution, which provides the key input for evaluating the electric field correction to the measured aμ.
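
    The quoted precisions can be checked with a quick quadrature combination: with a 0.54 ppm measurement and a 0.42 ppm prediction, a central-value difference of roughly 2.3 ppm corresponds to the quoted 3.3-3.6 standard deviations. This is a sketch of the arithmetic only, not the experiment's full error budget:

```python
import math

# Numbers quoted in the abstract: BNL measured a_mu to 0.54 ppm, the SM
# prediction is accurate to 0.42 ppm, and the central values differ by
# 3.3-3.6 sigma.
sigma_exp_ppm = 0.54
sigma_sm_ppm = 0.42
sigma_comb_ppm = math.hypot(sigma_exp_ppm, sigma_sm_ppm)  # ~0.68 ppm combined

def significance(delta_ppm):
    """Discrepancy in standard deviations for a central-value difference
    (in ppm), combining the two uncertainties in quadrature."""
    return delta_ppm / sigma_comb_ppm

# A ~2.3 ppm difference lands inside the quoted 3.3-3.6 sigma range
print(round(significance(2.3), 2))  # 3.36
```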

  12. Possible overexposure of pregnant women to emissions from a walk through metal detector.

    PubMed

    Wu, Dagang; Qiang, Rui; Chen, Ji; Seidman, Seth; Witters, Donald; Kainz, Wolfgang

    2007-10-07

    This paper presents a systematic procedure to evaluate the induced current densities and electric fields due to walk-through metal detector (WTMD) exposure. This procedure is then used to assess the exposure of nine pregnant-woman models to one WTMD model. First, we measured the magnetic field generated by the WTMD; then we extracted the equivalent current source to represent the WTMD emissions; and finally we calculated the induced current densities and electric fields using the impedance method. The WTMD emissions and the induced fields in the pregnant woman and fetus models are then compared to the ICNIRP Guidelines and the IEEE C95.6 exposure safety standard. The results confirm the consistency between maximum permissible exposure (MPE) levels and basic restrictions for the ICNIRP Guidelines and IEEE C95.6. We also found that this particular WTMD complies with the ICNIRP basic restrictions for the month 1-5 models, but leads to overexposure of both the fetus and the pregnant woman for the month 6-9 models. The IEEE C95.6 restrictions (MPEs and basic restrictions) are not exceeded. The fetus overexposure from this particular WTMD calls for carefully conducted safety evaluations of security systems before they are deployed.
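
    As rough context for the induced quantities being evaluated, here is a textbook quasi-static estimate, not the paper's impedance method: for a sinusoidal field B at frequency f, Faraday's law applied to a circular loop of radius r in tissue of conductivity σ gives an induced field E = πfrB and current density J = σE. All numbers below are illustrative, not WTMD measurements:

```python
import math

# Back-of-envelope quasi-static induction estimate (NOT the paper's
# impedance method): E = pi * f * r * B around a circular loop, J = sigma * E.
def induced_current_density(f_hz, r_m, b_tesla, sigma_s_per_m):
    e_field = math.pi * f_hz * r_m * b_tesla  # induced E-field amplitude [V/m]
    return sigma_s_per_m * e_field            # current density [A/m^2]

# Illustrative numbers only: 10 kHz field, 1 uT, 0.1 m loop, sigma = 0.2 S/m
j = induced_current_density(1e4, 0.1, 1e-6, 0.2)
print(f"J ~ {j * 1000:.3f} mA/m^2")  # J ~ 0.628 mA/m^2
```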

  13. Possible overexposure of pregnant women to emissions from a walk through metal detector

    NASA Astrophysics Data System (ADS)

    Wu, Dagang; Qiang, Rui; Chen, Ji; Seidman, Seth; Witters, Donald; Kainz, Wolfgang

    2007-09-01

    This paper presents a systematic procedure to evaluate the induced current densities and electric fields due to walk-through metal detector (WTMD) exposure. This procedure is then used to assess the exposure of nine pregnant-woman models to one WTMD model. First, we measured the magnetic field generated by the WTMD; then we extracted the equivalent current source to represent the WTMD emissions; and finally we calculated the induced current densities and electric fields using the impedance method. The WTMD emissions and the induced fields in the pregnant woman and fetus models are then compared to the ICNIRP Guidelines and the IEEE C95.6 exposure safety standard. The results confirm the consistency between maximum permissible exposure (MPE) levels and basic restrictions for the ICNIRP Guidelines and IEEE C95.6. We also found that this particular WTMD complies with the ICNIRP basic restrictions for the month 1-5 models, but leads to overexposure of both the fetus and the pregnant woman for the month 6-9 models. The IEEE C95.6 restrictions (MPEs and basic restrictions) are not exceeded. The fetus overexposure from this particular WTMD calls for carefully conducted safety evaluations of security systems before they are deployed.

  14. The Impact of Microwave-Derived Surface Soil Moisture on Watershed Hydrological Modeling

    NASA Technical Reports Server (NTRS)

    ONeill, P. E.; Hsu, A. Y.; Jackson, T. J.; Wood, E. F.; Zion, M.

    1997-01-01

    The usefulness of incorporating microwave-derived soil moisture information in a semi-distributed hydrological model was demonstrated for the Washita '92 experiment in the Little Washita River watershed in Oklahoma. Initializing the hydrological model with surface soil moisture fields from the ESTAR airborne L-band microwave radiometer on a single wet day at the start of the study period produced more accurate model predictions of soil moisture than a standard hydrological initialization with streamflow data over an eight-day soil moisture drydown.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Damien P.; Mooij, Sander; Postma, Marieke, E-mail: dpg39@cam.ac.uk, E-mail: sander.mooij@ing.uchile.cl, E-mail: mpostma@nikhef.nl

    We compute the one-loop renormalization group equations for Standard Model Higgs inflation. The calculation is done in the Einstein frame, using a covariant formalism for the multi-field system. All counterterms, and thus the beta functions, can be extracted from the radiative corrections to the two-point functions; the calculation of higher n-point functions then serves as a consistency check of the approach. We find that the theory is renormalizable in the effective field theory sense in the small-, mid- and large-field regimes. In the large-field regime our results differ slightly from those found in the literature, due to a different treatment of the Goldstone bosons.

  16. Electromagnetic pulse (EMP), Part II: Field-expedient ways to minimize its effects on field medical treatment facilities.

    PubMed

    Vandre, R H; Klebers, J; Tesche, F M; Blanchard, J P

    1993-05-01

    Part I of this paper showed that a field commander can expect approximately 65% of his unprotected electronic medical equipment to be damaged by the electromagnetic pulse (EMP) from a single nuclear detonation as far as 2200 km away. Using computer modeling, field-expedient ways to minimize the effects of EMP were studied. The results were: (1) keep wiring near the ground, (2) keep wiring short, (3) unplug unused equipment, (4) run power cabling and tents in a magnetic north-south direction (avoid running power cabling in the east-west direction), and (5) place sensitive equipment in International Organization for Standardization shelters.

  17. Characterization of NiSi nanowires as field emitters and limitations of Fowler-Nordheim model at the nanoscale

    NASA Astrophysics Data System (ADS)

    Belkadi, Amina B.; Gale, E.; Isakovic, A. F.

    2015-03-01

    Nanoscale field emitters are of technological interest because of their anticipated faster turn-on time, better sustainability and compactness. This report focuses on NiSi nanowires as field emitters for two reasons: (a) possible enhancement of field emission in nanoscale field emitters over bulk, and (b) achieving the same field emission properties as in bulk, but at a lower energy cost. To this end, we have grown, fabricated and characterized NiSi nanowires as field emitters. Depending on the geometry of the NiSi nanowires (aspect ratio, shape, etc.), the relevant major field emission parameters, such as (1) the turn-on field, (2) the work function, and (3) the field enhancement factor, can be comparable or even superior to those of other recently explored nanoscale field emitters, such as CdS and ZnO. We also report on the comparative performance of various nanoscale field emitters and on the difficulties of such comparisons in light of the relatively poor applicability of the standard Fowler-Nordheim model for field emission analysis in the case of nanoscale field emitters. Proposed modifications are discussed. This work is supported through SRC-ATIC Grant 2011-KJ-2190. We also acknowledge BNL-CFN and Cornell CNF facilities and staff.
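
    The Fowler-Nordheim analysis whose nanoscale applicability the abstract questions is usually performed via an "F-N plot": with emission current density J = aF² exp(−b/F), plotting ln(J/F²) against 1/F gives a straight line of slope −b. A minimal sketch with made-up coefficients:

```python
import math

# Fowler-Nordheim functional form: J = a * F^2 * exp(-b / F). An "F-N plot"
# of ln(J/F^2) vs 1/F is linear with slope -b, which is how the barrier
# parameter is extracted from I-V data.
a_true, b_true = 1e-6, 50.0  # illustrative coefficients, arbitrary units

fields = [5.0, 10.0, 20.0, 40.0]
currents = [a_true * F**2 * math.exp(-b_true / F) for F in fields]

# Recover b from the slope of the F-N plot between the end points
x = [1.0 / F for F in fields]
y = [math.log(J / F**2) for F, J in zip(fields, currents)]
b_fit = -(y[-1] - y[0]) / (x[-1] - x[0])
print(round(b_fit, 6))  # 50.0, recovering b_true exactly for noiseless data
```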

  18. Scale invariance of the η-deformed AdS5 × S5 superstring, T-duality and modified type II equations

    NASA Astrophysics Data System (ADS)

    Arutyunov, G.; Frolov, S.; Hoare, B.; Roiban, R.; Tseytlin, A. A.

    2016-02-01

    We consider the ABF background underlying the η-deformed AdS5 × S5 sigma model. This background fails to satisfy the standard IIB supergravity equations, which indicates that the corresponding sigma model is not Weyl invariant, i.e. does not define a critical string theory in the usual sense. We argue that the ABF background should still define a UV finite theory on a flat 2d world-sheet, implying that the η-deformed model is scale invariant. This property follows from the formal relation via T-duality between the η-deformed model and the one defined by an exact type IIB supergravity solution that has 6 isometries, albeit broken by a linear dilaton. We find that the ABF background satisfies candidate type IIB scale invariance conditions which for the R-R field strengths are of second order in derivatives. Surprisingly, we also find that the ABF background obeys an interesting modification of the standard IIB supergravity equations that are first order in derivatives of R-R fields. These modified equations explicitly depend on Killing vectors of the ABF background and, although not universal, they imply the universal scale invariance conditions. Moreover, we show that it is precisely the non-isometric dilaton of the T-dual solution that leads, after T-duality, to modification of the type II equations from their standard form. We conjecture that the modified equations should follow from κ-symmetry of the η-deformed model. All our observations apply also to η-deformations of AdS3 × S3 × T4 and AdS2 × S2 × T6 models.

  19. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data

    PubMed Central

    Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-01-01

    Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32 D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
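
    The core statistical point, that ignoring inter-eye correlation understates standard errors, can be demonstrated in a few lines of simulation. This is a toy sketch with made-up variances and a simple cluster-robust variance for the mean, not the paper's SAS mixed-model analysis:

```python
import numpy as np

# Toy demonstration: with strong inter-eye correlation, treating the two
# eyes of each person as independent observations underestimates the
# standard error of the mean.
rng = np.random.default_rng(0)
n_subjects = 200
u = rng.normal(0, 1.0, n_subjects)          # shared per-person effect
e = rng.normal(0, 0.5, (n_subjects, 2))     # independent per-eye noise
y = u[:, None] + e                          # shape (subjects, eyes)

n_eyes = y.size
resid = y - y.mean()

# Naive SE: pretends all 400 eyes are independent
se_naive = y.std(ddof=1) / np.sqrt(n_eyes)

# Cluster-robust SE: sums residuals within each person first (sandwich form)
cluster_sums = resid.sum(axis=1)
se_cluster = np.sqrt((cluster_sums**2).sum()) / n_eyes

print(se_cluster > se_naive)  # True: ignoring the correlation understates the SE
```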

  20. Scale invariance of the η-deformed AdS5 × S5 superstring, T-duality and modified type II equations

    DOE PAGES

    Arutyunov, G.; Frolov, S.; Hoare, B.; ...

    2015-12-23

    We consider the ABF background underlying the η-deformed AdS5 × S5 sigma model. This background fails to satisfy the standard IIB supergravity equations, which indicates that the corresponding sigma model is not Weyl invariant, i.e. does not define a critical string theory in the usual sense. We argue that the ABF background should still define a UV finite theory on a flat 2d world-sheet, implying that the η-deformed model is scale invariant. This property follows from the formal relation via T-duality between the η-deformed model and the one defined by an exact type IIB supergravity solution that has 6 isometries, albeit broken by a linear dilaton. We find that the ABF background satisfies candidate type IIB scale invariance conditions which for the R-R field strengths are of second order in derivatives. Surprisingly, we also find that the ABF background obeys an interesting modification of the standard IIB supergravity equations that are first order in derivatives of R-R fields. These modified equations explicitly depend on Killing vectors of the ABF background and, although not universal, they imply the universal scale invariance conditions. Moreover, we show that it is precisely the non-isometric dilaton of the T-dual solution that leads, after T-duality, to modification of the type II equations from their standard form. We conjecture that the modified equations should follow from κ-symmetry of the η-deformed model. All our observations apply also to η-deformations of AdS3 × S3 × T4 and AdS2 × S2 × T6 models.

  1. Validation of Simplified Load Equations through Loads Measurement and Modeling of a Small Horizontal-Axis Wind Turbine Tower; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana, S.; Damiani, R.; vanDam, J.

    As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, NREL tested a small horizontal axis wind turbine in the field at the National Wind Technology Center (NWTC). The test turbine was a 2.1-kW downwind machine mounted on an 18-meter multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standards. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 do not accurately model fatigue loads. In this project, we compared fatigue loads as measured in the field, as predicted by the aeroelastic model, and as calculated using the simplified design equations.

  2. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
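
    A minimal example of the kind of spinodal-decomposition model such benchmarks target is the 1D Cahn-Hilliard equation; the sketch below uses an explicit Euler step with periodic boundaries and illustrative parameters, and is not the CHiMaD/NIST benchmark specification. A useful sanity check of the discretization is that it conserves total mass:

```python
import numpy as np

# Minimal 1D Cahn-Hilliard sketch: dc/dt = Laplacian(mu) with chemical
# potential mu = c^3 - c - kappa * Laplacian(c). Explicit Euler, periodic BCs.
def laplacian(f, dx=1.0):
    return (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2

def step(c, dt=0.01, kappa=1.0):
    mu = c**3 - c - kappa * laplacian(c)
    return c + dt * laplacian(mu)

rng = np.random.default_rng(1)
c = 0.01 * rng.standard_normal(128)  # small noise about critical composition
mass0 = c.mean()
for _ in range(2000):
    c = step(c)

# Conservative form: the discrete Laplacian sums to zero, so mean(c) is fixed
print(abs(c.mean() - mass0) < 1e-10)  # True
```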

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emami, Razieh; Mukohyama, Shinji; Namba, Ryo

    Many models of inflation driven by vector fields alone have been known to be plagued by pathological behaviors, namely ghost and/or gradient instabilities. In this work, we seek a new class of vector-driven inflationary models that evade all of the mentioned instabilities. We build our analysis on the Generalized Proca Theory with an extension to three vector fields to realize isotropic expansion. We obtain the conditions required for quasi-de Sitter solutions to be an attractor analogous to the standard slow-roll one and those for their stability at the level of linearized perturbations. Identifying the remedy to the existing unstable models, we provide a simple example and explicitly show its stability. This significantly broadens our knowledge on vector inflationary scenarios, reviving potential phenomenological interests for this class of models.

  4. Disordered λφ⁴ + ρφ⁶ Landau-Ginzburg model

    NASA Astrophysics Data System (ADS)

    Diaz, R. Acosta; Svaiter, N. F.; Krein, G.; Zarro, C. A. D.

    2018-03-01

    We discuss a disordered λφ⁴ + ρφ⁶ Landau-Ginzburg model defined in a d-dimensional space. First we adopt the standard procedure of averaging the disorder-dependent free energy of the model. The dominant contribution to this quantity is represented by a series of the replica partition functions of the system. Next, using the replica-symmetry ansatz in the saddle-point equations, we prove that the average free energy represents a system with multiple ground states with different order parameters. For low temperatures we show the presence of metastable equilibrium states for some replica fields for a range of values of the physical parameters. Finally, going beyond the mean-field approximation, the one-loop renormalization of this model is performed in the leading-order replica partition function.
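
For concreteness, the free-energy functional of such a disordered model is conventionally of the schematic form below; the normalization of the couplings and the disorder coupling are illustrative and not necessarily those used by the authors:

```latex
F[\varphi; h] = \int d^{d}x \left[ \tfrac{1}{2}(\partial\varphi)^{2}
  + \tfrac{1}{2}m^{2}\varphi^{2} + \lambda\,\varphi^{4} + \rho\,\varphi^{6}
  + h(x)\,\varphi \right],
\qquad
\overline{\ln Z} = \lim_{n \to 0} \frac{\overline{Z^{n}} - 1}{n},
```

where h(x) is a quenched Gaussian random field and the second relation is the replica trick used to average the free energy over disorder.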

  5. BEAN MODEL AND AC LOSSES IN Bi₂Sr₂Ca₂Cu₃O₁₀/Ag TAPES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SUENAGA,M.; CHIBA,T.; WIESMANN,H.J.

    The Bean model is almost universally used to interpret ac losses in the powder-in-tube processed composite conductor Bi₂Sr₂Ca₂Cu₃O₁₀/Ag. In order to examine the limits of the applicability of the model, a detailed comparison was made between the values of critical current density Jc for Bi(2223)/Ag tapes determined by standard four-probe dc measurements and those deduced from the field dependence of the ac losses using the model. A significant inconsistency between these values of Jc was found, particularly at high fields. Possible sources of the discrepancies are discussed.
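
A sketch of the Bean-model quantities involved in such a comparison, for the idealized infinite-slab geometry; the low-amplitude loss prefactor used here is the textbook slab value and should be treated as schematic for a real tape:

```python
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability [T*m/A]

def full_penetration_field(jc, a):
    """Bean critical-state full-penetration field Bp = mu0*Jc*a for an
    infinite slab of half-width a [m] with critical current density
    jc [A/m^2]."""
    return MU0 * jc * a

def hysteresis_loss_per_cycle(ba, jc, a):
    """Low-amplitude (ba << Bp) Bean-model hysteresis loss per cycle per
    unit volume, W ~ ba^3 / (mu0 * Bp). The 2/3 prefactor is the
    idealized slab result and is schematic for tape geometries."""
    bp = full_penetration_field(jc, a)
    return (2.0 / 3.0) * ba**3 / (MU0 * bp)

jc, a = 1e9, 1e-4                # illustrative Jc [A/m^2], half-width [m]
print("Bp [T]:", full_penetration_field(jc, a))
print("W(10 mT) [J/m^3]:", hysteresis_loss_per_cycle(0.01, jc, a))
```

Inverting the measured loss for Jc and comparing it with the four-probe value is the consistency check the abstract describes; the characteristic cubic field dependence at low amplitude is what makes that inversion possible.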

  6. A Model of Direct Gauge Mediation of Supersymmetry Breaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murayama, H.

    1997-07-01

    We present the first phenomenologically viable model of gauge mediation of supersymmetry breaking without a messenger sector or gauge singlet fields. The standard model gauge groups couple directly to the sector which breaks supersymmetry dynamically. Despite the direct coupling, it can preserve perturbative gauge unification thanks to the inverted hierarchy mechanism. There is no dangerous negative contribution to the squark and slepton masses-squared from the two-loop renormalization group equations. The potentially nonuniversal supergravity contributions to these masses can be suppressed enough. The model is completely chiral, and one does not need to forbid mass terms for the messenger fields by hand. Cosmology of the model is briefly discussed. © 1997 The American Physical Society

  7. Electronic field emission models beyond the Fowler-Nordheim one

    NASA Astrophysics Data System (ADS)

    Lepetit, Bruno

    2017-12-01

    We propose several quantum mechanical models to describe electronic field emission from first principles. These models allow us to correlate quantitatively the electronic emission current with the electrode surface details at the atomic scale. They all rely on electronic potential energy surfaces obtained from three-dimensional density functional theory calculations. They differ in the quantum mechanical methods (exact or perturbative, time dependent or time independent) used to describe tunneling through the electronic potential energy barrier. Comparing these models with one another and with the standard Fowler-Nordheim model in the context of one-dimensional tunneling allows us to assess how the approximations made in each model affect the accuracy of the computed current. Among these methods, the time dependent perturbative one provides a well-balanced trade-off between accuracy and computational cost.
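
For reference, the standard Fowler-Nordheim expression that such first-principles models are benchmarked against can be evaluated directly. This is the elementary form without the image-charge (Schottky-Nordheim) correction, and the example field and work function are illustrative:

```python
import math

# Elementary Fowler-Nordheim current density (no image-charge correction):
#   J = (A * F^2 / phi) * exp(-B * phi^1.5 / F)
# with F in V/m, phi in eV, J in A/m^2.
A_FN = 1.541434e-6   # A eV V^-2
B_FN = 6.830890e9    # eV^-1.5 V m^-1

def fowler_nordheim_j(field_v_per_m, work_function_ev):
    phi = work_function_ev
    f = field_v_per_m
    return (A_FN * f**2 / phi) * math.exp(-B_FN * phi**1.5 / f)

# Example: tungsten-like work function (4.5 eV), 5 GV/m local field.
print("J [A/m^2]:", fowler_nordheim_j(5e9, 4.5))
```

The strong (exponential) sensitivity of J to both field and work function is why atomic-scale surface detail matters so much for the computed current.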

  8. Kinematic solar dynamo models with a deep meridional flow

    NASA Astrophysics Data System (ADS)

    Guerrero, G. A.; Muñoz, J. D.

    2004-05-01

    We develop two different solar dynamo models to verify the hypothesis, proposed recently by Nandy & Choudhuri, that a deep meridional flow can restrict the appearance of sunspots to below 45°. In the first, a single polytropic approximation for the density profile is used for both the radiative and convective zones. In the second, that of Pinzon & Calvo-Mozo, two polytropes are used to distinguish between the zones. The magnetic buoyancy mechanism proposed by Dikpati & Charbonneau was chosen in both models. We find that a deep meridional flow does push the maxima of the toroidal magnetic field towards the solar equator, but, in contrast to Nandy & Choudhuri, a second zone of maximal fields remains at the poles. The second model, although closely resembling the standard solar model of Bahcall et al., gives solar cycles three times longer than observed.

  9. SU-E-T-223: High-Energy Photon Standard Dosimetry Data: A Quality Assurance Tool.

    PubMed

    Lowenstein, J; Kry, S; Molineu, A; Alvarez, P; Aguirre, J; Summers, P; Followill, D

    2012-06-01

    To describe the Radiological Physics Center's (RPC) extensive standard dosimetry data set determined from on-site audit measurements. Measurements were made during on-site audits of institutions participating in NCI-funded cooperative clinical trials over 44 years, using a 0.6 cc cylindrical ionization chamber placed within the RPC's water tank. Measurements were made on Varian, Siemens, and Elekta/Philips accelerators for 11 different energies from 68 models of accelerators. We have measured percent depth dose, output factors, and off-axis factors for 123 different accelerator model/energy combinations for which we have 5 or more sets of measurements. The RPC analyzed these data and determined the 'standard data' for each model/energy combination. The RPC defines 'standard data' as the mean value of 5 or more sets of dosimetry data or agreement with published depth dose data (within 2%). The analysis of these standard data indicates that for modern accelerator models, the dosimetry data for a particular model/energy are within ±2%. The RPC has always found that accelerators of the same make/model/energy combination have the same dosimetric properties in terms of depth dose, field size dependence and off-axis factors. Because of this consistency, the RPC can assign standard data for percent depth dose, average output factors and off-axis factors for a given combination of energy and accelerator make and model. The RPC standard data can be used as a redundant quality assurance tool to help medical physicists gain confidence in their clinical data to within 2%. The next step is for the RPC to provide a way for institutions to submit data to the RPC to determine whether their data agree with the standard data as a redundant check. This work was supported by PHS grant CA10953 awarded by NCI, DHHS. © 2012 American Association of Physicists in Medicine.
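
The "standard data" rule described above (mean of at least five data sets, 2% tolerance) can be sketched as follows; the percent-depth-dose values are hypothetical:

```python
import statistics

# Sketch of the standard-data rule: require at least five measurement
# sets, take their mean, and flag any set deviating by more than 2%
# from that mean. The PDD values below are hypothetical.

def standard_data(measurement_sets, tolerance=0.02):
    if len(measurement_sets) < 5:
        raise ValueError("need at least 5 measurement sets")
    mean = statistics.mean(measurement_sets)
    outliers = [m for m in measurement_sets
                if abs(m - mean) / mean > tolerance]
    return mean, outliers

pdd_10cm = [66.8, 67.1, 66.9, 67.3, 66.7]   # hypothetical % depth doses
mean, outliers = standard_data(pdd_10cm)
print(f"standard value: {mean:.1f}%, outliers: {outliers}")
```

An institution's submitted value could then be checked against `mean` with the same 2% tolerance as a redundant QA step.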

  10. Processing of DMSP magnetic data and its use in geomagnetic field modeling

    NASA Technical Reports Server (NTRS)

    Ridgway, J. R.; Sabaka, T. J.; Chinn, D.; Langel, R. A.

    1989-01-01

    The DMSP F-7 satellite is an operational Air Force meteorological satellite which carried a magnetometer for geophysical measurements. The magnetometer was located within the body of the spacecraft in the presence of large spacecraft fields. In addition to stray magnetic fields, the data have inherent position and time inaccuracies. Algorithms were developed to identify and remove time varying magnetic field noise from the data. Techniques developed for Magsat were then modified and used to attempt determination of the spacecraft fields, of any rotation between the magnetometer axes and the spacecraft axes, and of any scale changes within the magnetometer itself. The corrected data were then used to attempt to model the geomagnetic field. This was done in combination with data from Magsat, from the standard magnetic observatories, from aeromagnetic and other survey data, and from DE-2 spacecraft field data. Future DMSP missions can be upgraded in terms of geomagnetic measurements by upgrading the time and position information furnished with the data, placing the magnetometer at the end of the boom, upgrading the attitude determination at the magnetometer, and increasing the accuracy of the magnetometer.

  11. Electromagnetic field strength prediction in an urban environment: A useful tool for the planning of LMSS

    NASA Technical Reports Server (NTRS)

    Vandooren, G. A. J.; Herben, M. H. A. J.; Brussaard, G.; Sforza, M.; Poiaresbaptista, J. P. V.

    1993-01-01

    A model for the prediction of the electromagnetic field strength in an urban environment is presented. The ray model, which is based on the Uniform Theory of Diffraction (UTD), includes the effects of the non-perfect conductivity of the obstacles and their surface roughness. The urban environment is transformed into a list of standardized obstacles that have various shapes and material properties. The model is capable of accurately predicting the field strength in the urban environment by calculating different types of wave contributions such as reflected, edge and corner diffracted waves, and combinations thereof. Also, antenna weight functions are introduced to simulate the spatial filtering by the mobile antenna. Communication channel parameters such as signal fading, time delay profiles, Doppler shifts and delay-Doppler spectra can be derived from the ray-tracing procedure using post-processing routines. The model has been tested against results from scaled measurements at 50 GHz and proves to be accurate.
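
A full UTD trace is far beyond a snippet, but the core idea of coherently summing ray contributions can be illustrated with a two-ray (direct plus ground-reflected) sketch, assuming a perfectly reflecting ground; the geometry values are illustrative:

```python
import cmath
import math

C = 299_792_458.0  # speed of light [m/s]

def two_ray_field(f_hz, h_tx, h_rx, d):
    """Relative field strength from coherently summing a direct ray and
    a ground-reflected ray (reflection coefficient -1). A drastically
    simplified stand-in for a full UTD ray trace."""
    lam = C / f_hz
    r_direct = math.hypot(d, h_tx - h_rx)
    r_reflect = math.hypot(d, h_tx + h_rx)   # image-source path length
    k = 2 * math.pi / lam
    e = (cmath.exp(-1j * k * r_direct) / r_direct
         - cmath.exp(-1j * k * r_reflect) / r_reflect)
    return abs(e)

# Example: 50 GHz (the scaled-measurement frequency), 10 m / 1.5 m antennas.
print(two_ray_field(50e9, 10.0, 1.5, 100.0))
```

The interference fringes produced by even this two-ray sum are the simplest instance of the signal-fading behavior the full model derives from its many ray contributions.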

  12. Electro-thermo-optical simulation of vertical-cavity surface-emitting lasers

    NASA Astrophysics Data System (ADS)

    Smagley, Vladimir Anatolievich

    A three-dimensional electro-thermal simulator based on the double-layer approximation for the active region was coupled to optical gain and optical field numerical simulators to provide a self-consistent steady-state solution of VCSEL current-voltage and current-output power characteristics. A methodology for VCSEL modeling was established and applied to model a standard 850-nm VCSEL based on a GaAs active region and a novel intracavity-contacted 400-nm GaN-based VCSEL. Results of the GaAs VCSEL simulation were in good agreement with experiment. Correlations between current injection and radiative mode profiles were observed. Physical sub-models of transport, optical gain and cavity optical field were developed. Carrier transport through the DBRs was studied. The problem of optical fields in the VCSEL cavity was treated numerically by the effective frequency method. All the sub-models were connected through a spatially inhomogeneous rate equation system. It was shown that a conventional uncoupled analysis of each separate physical phenomenon would be insufficient to describe VCSEL operation.

  13. Kinklike structures in models of the Dirac-Born-Infeld type

    NASA Astrophysics Data System (ADS)

    Bazeia, D.; Lima, Elisama E. M.; Losano, L.

    2018-01-01

    The present work investigates several models of a single real scalar field with a kinetic term of the Dirac-Born-Infeld type. Such theories introduce nonlinearities into the kinetic part of the Lagrangian, which presents a square root restricting the field evolution and includes additional powers of derivatives of the scalar field, controlled by a real parameter. In order to obtain topological solutions analytically, we propose a first-order framework that simplifies the equation of motion and ensures solutions that are linearly stable. This is implemented using the deformation method, and we introduce examples presenting two categories of potentials, one having polynomial interactions and the other nonpolynomial interactions. We also explore how the Dirac-Born-Infeld kinetic term affects the properties of the solutions. In particular, we note that the kinklike solutions are similar to the ones obtained through models with a standard kinetic term and canonical potential, but their energy densities and stability potentials vary according to the parameter introduced to control the new models.

  14. Fitting a Structured Juvenile-Adult Model for Green Tree Frogs to Population Estimates from Capture-Mark-Recapture Field Data

    USGS Publications Warehouse

    Ackleh, A.S.; Carter, J.; Deng, K.; Huang, Q.; Pal, N.; Yang, X.

    2012-01-01

    We derive point and interval estimates for an urban population of green tree frogs (Hyla cinerea) from capture-mark-recapture field data obtained during the years 2006-2009. We present an infinite-dimensional least-squares approach which compares a mathematical population model to the statistical population estimates obtained from the field data. The model is composed of nonlinear first-order hyperbolic equations describing the dynamics of the amphibian population where individuals are divided into juveniles (tadpoles) and adults (frogs). To solve the least-squares problem, an explicit finite difference approximation is developed. Convergence results for the computed parameters are presented. Parameter estimates for the vital rates of juveniles and adults are obtained, and standard deviations for these estimates are computed. Numerical results for the model sensitivity with respect to these parameters are given. Finally, the above-mentioned parameter estimates are used to illustrate the long-time behavior of the population under investigation. © 2011 Society for Mathematical Biology.
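
The least-squares idea can be sketched with a drastically reduced stand-in for the authors' hyperbolic PDE system: a discrete-time two-stage (juvenile/adult) model fitted to hypothetical abundance estimates by grid search. All rates and data below are invented for illustration:

```python
import numpy as np

# Reduced stand-in for least-squares calibration of a juvenile-adult
# model: fit maturation rate g and fecundity b to hypothetical yearly
# (juvenile, adult) abundance estimates.

def project(j0, a0, g, b, s_j=0.5, s_a=0.7, steps=3):
    """Iterate J' = b*A + s_j*(1-g)*J ; A' = s_j*g*J + s_a*A."""
    j, a = j0, a0
    out = []
    for _ in range(steps):
        j, a = b * a + s_j * (1 - g) * j, s_j * g * j + s_a * a
        out.append((j, a))
    return out

def sse(params, data, j0, a0):
    g, b = params
    model = project(j0, a0, g, b, steps=len(data))
    return sum((mj - dj)**2 + (ma - da)**2
               for (mj, ma), (dj, da) in zip(model, data))

# Hypothetical yearly (juvenile, adult) abundance estimates.
data = [(120.0, 55.0), (130.0, 60.0), (140.0, 66.0)]
grid = [(g, b) for g in np.linspace(0.1, 0.9, 33)
               for b in np.linspace(0.5, 4.0, 36)]
g_hat, b_hat = min(grid, key=lambda p: sse(p, data, 100.0, 50.0))
print(f"fitted maturation g = {g_hat:.2f}, fecundity b = {b_hat:.2f}")
```

The paper's approach replaces this crude grid search with a convergent finite-difference scheme for the PDE model, but the objective (squared misfit between model output and statistical population estimates) has the same shape.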

  15. Experimental rill erosion research vs. model concepts - quantification of the hydraulic and erosional efficiency of rills

    NASA Astrophysics Data System (ADS)

    Wirtz, Stefan

    2014-05-01

    In soil erosion research, rills are believed to be one of the most efficient erosion forms. They act as preferential flow paths for overland flow and hence become the most efficient sediment sources in a catchment. However, their fraction of the overall detachment in a given area, compared to other soil erosion processes, is contentious. Addressing this question requires standardization of the measurement methods used to quantify rill erosion. Only with a standardized method do the results of different studies become comparable and can they be synthesized into one overall statement. In rill erosion research, such a standardized field method was missing until now. Hence, the first aim of this study is to present an experimental setup that enables us to obtain comparable data about process dynamics in eroding rills under standardized conditions in the field. Using this rill experiment, the runoff efficiency of rills (second aim) and the fraction of rill erosion in total soil loss (third aim) in a catchment are quantified. The erosion rate [g m-2] in the rills is between twenty and sixty times higher than on the interrill areas; the specific discharge [L s-1 m-2] in the rills is about 2000 times higher. The identification and quantification of different rill erosion processes is the fourth aim of this project. Gravitative processes like side wall failure, headcut and knickpoint retreat provide up to 94 % of the detached sediment quantity. In soil erosion models, only the incision into the rill's bottom is considered; hence the modelled results are unsatisfactory. Due to the low quality of soil erosion model results, the fifth aim of the study is to review two basic physical assumptions using the rill experiments. In contrast to the model assumptions, there is no clear linear correlation between any hydraulic parameter and the detachment rate, and the transport rate is capable of exceeding the transport capacity.
In conclusion, the results clearly show the need for experimental field data obtained under conditions as close as possible to reality. This is the only way to improve the fundamental knowledge about the function and the impact of the different processes in rill erosion. A better understanding of the process combinations is a fundamental requirement for developing a truly functional soil erosion model. In such a model, spatial and temporal variability as well as the combination of different sub-processes must be considered. Given the experimental results of this study, the simulation of natural processes using simple, static mathematical equations does not appear possible.
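
The two model assumptions reviewed in the study can be checked mechanically. The sketch below does this on invented data: it computes the linear correlation between a hydraulic parameter and the detachment rate, and flags whether the measured transport rate ever exceeds the modelled transport capacity:

```python
import statistics

# Sketch of the two model-assumption checks on hypothetical experiment
# data: (1) how linear is the relation between a hydraulic parameter
# (here shear stress) and the detachment rate, and (2) does the
# measured transport rate exceed the assumed transport capacity?

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx)**2 for a in x) * sum((b - my)**2 for b in y)) ** 0.5
    return num / den

shear  = [2.1, 3.4, 4.0, 5.2, 6.8]   # Pa (hypothetical)
detach = [0.8, 0.5, 2.9, 1.1, 2.2]   # g m^-2 s^-1 (hypothetical)
print("r =", round(pearson_r(shear, detach), 2))

transport = [0.6, 1.4, 0.9]          # measured transport rates
capacity  = [1.0, 1.0, 1.0]          # modelled transport capacity
print("capacity exceeded:", any(t > c for t, c in zip(transport, capacity)))
```

A weak correlation coefficient and any capacity exceedance would contradict the two model closures the study reviews, which is exactly the pattern the field experiments showed.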

  16. 24 CFR 3285.601 - Field assembly.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... parts that are necessary to join all sections of the home and are designed to be located underneath the home. The installation instructions must be designed in accordance with applicable requirements of part... DEVELOPMENT MODEL MANUFACTURED HOME INSTALLATION STANDARDS Ductwork and Plumbing and Fuel Supply Systems...

  17. On the local standard of rest. [comoving with young objects in gravitational field of spiral galaxies

    NASA Technical Reports Server (NTRS)

    Yuan, C.

    1983-01-01

    Under the influence of a spiral gravitational field, there should be differences among the mean motions of different types of objects with different dispersion velocities in a spiral galaxy. The old stars, with high dispersion velocity, should have essentially no mean motion normal to the galactic rotation. On the other hand, young objects and interstellar gas may be moving relative to the old stars at a velocity of a few kilometers per second in both the radial (galactocentric) and circular directions, depending on the spiral model adopted. Such a velocity is usually referred to as the systematic motion or the streaming motion. The conventionally adopted local standard of rest is in fact co-moving with the young objects of the solar vicinity. Therefore, it has a net systematic motion with respect to the circular motion of an equilibrium galactic model, defined by the old stars. Previously announced in STAR as N83-24443

  18. Light Higgs channel of the resonant decay of magnon condensate in superfluid (3)He-B.

    PubMed

    Zavjalov, V V; Autti, S; Eltsov, V B; Heikkinen, P J; Volovik, G E

    2016-01-08

    In superfluids the order parameter, which describes spontaneous symmetry breaking, is an analogue of the Higgs field in the Standard Model of particle physics. Oscillations of the field amplitude are massive Higgs bosons, while oscillations of the orientation are massless Nambu-Goldstone bosons. The 125 GeV Higgs boson, discovered at the Large Hadron Collider, is light compared with the electroweak energy scale. Here, we show that such a light Higgs exists in superfluid (3)He-B, where one of three Nambu-Goldstone spin-wave modes acquires a small mass due to the spin-orbit interaction. The other modes become optical and acoustic magnons. We observe parametric decay of a Bose-Einstein condensate of optical magnons into light Higgs modes and decay of optical into acoustic magnons. Formation of a light Higgs from a Nambu-Goldstone mode observed in (3)He-B opens the possibility that such a scenario can be realized in other systems where violation of some hidden symmetry is possible, including the Standard Model.

  19. Light Higgs channel of the resonant decay of magnon condensate in superfluid 3He-B

    PubMed Central

    Zavjalov, V. V.; Autti, S.; Eltsov, V. B.; Heikkinen, P. J.; Volovik, G. E.

    2016-01-01

    In superfluids the order parameter, which describes spontaneous symmetry breaking, is an analogue of the Higgs field in the Standard Model of particle physics. Oscillations of the field amplitude are massive Higgs bosons, while oscillations of the orientation are massless Nambu-Goldstone bosons. The 125 GeV Higgs boson, discovered at the Large Hadron Collider, is light compared with the electroweak energy scale. Here, we show that such a light Higgs exists in superfluid 3He-B, where one of three Nambu-Goldstone spin-wave modes acquires a small mass due to the spin–orbit interaction. The other modes become optical and acoustic magnons. We observe parametric decay of a Bose-Einstein condensate of optical magnons into light Higgs modes and decay of optical into acoustic magnons. Formation of a light Higgs from a Nambu-Goldstone mode observed in 3He-B opens the possibility that such a scenario can be realized in other systems where violation of some hidden symmetry is possible, including the Standard Model. PMID:26743951

  20. Minimal mirror twin Higgs

    DOE PAGES

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2016-11-29

    In a Mirror Twin World with a maximally symmetric Higgs sector, the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z₂ parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z₂ breaking, can generate the Z₂ breaking in the Higgs sector necessary for the Twin Higgs mechanism. The theory has constrained and correlated signals in Higgs decays, direct Dark Matter detection and Dark Radiation, all within reach of foreseen experiments, over a region of parameter space where the fine-tuning for the electroweak scale is 10-50%. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z₂ breaking from the vacuum expectation values of B-L breaking fields are also discussed.

  1. Numerical modeling of laser-driven experiments aiming to demonstrate magnetic field amplification via turbulent dynamo

    NASA Astrophysics Data System (ADS)

    Tzeferacos, P.; Rigby, A.; Bott, A.; Bell, A. R.; Bingham, R.; Casner, A.; Cattaneo, F.; Churazov, E. M.; Emig, J.; Flocke, N.; Fiuza, F.; Forest, C. B.; Foster, J.; Graziani, C.; Katz, J.; Koenig, M.; Li, C.-K.; Meinecke, J.; Petrasso, R.; Park, H.-S.; Remington, B. A.; Ross, J. S.; Ryu, D.; Ryutov, D.; Weide, K.; White, T. G.; Reville, B.; Miniati, F.; Schekochihin, A. A.; Froula, D. H.; Gregori, G.; Lamb, D. Q.

    2017-04-01

    The universe is permeated by magnetic fields, with strengths ranging from a femtogauss in the voids between the filaments of galaxy clusters to several teragauss in black holes and neutron stars. The standard model behind cosmological magnetic fields is the nonlinear amplification of seed fields via turbulent dynamo to the values observed. We have conceived experiments that aim to demonstrate and study the turbulent dynamo mechanism in the laboratory. Here, we describe the design of these experiments through simulation campaigns using FLASH, a highly capable radiation magnetohydrodynamics code that we have developed, and large-scale three-dimensional simulations on the Mira supercomputer at the Argonne National Laboratory. The simulation results indicate that the experimental platform may be capable of reaching a turbulent plasma state and determining the dynamo amplification. We validate and compare our numerical results with a small subset of experimental data using synthetic diagnostics.

  2. Reproducing Quantum Probability Distributions at the Speed of Classical Dynamics: A New Approach for Developing Force-Field Functors.

    PubMed

    Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán

    2018-04-05

    Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects like Ring Polymer Molecular Dynamics (RPMD) are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of Neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.
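
The FFF idea of replacing a potential by a quantum-corrected effective one can be sketched with a leading-order Wigner-Kirkwood / quadratic Feynman-Hibbs correction applied to a Lennard-Jones model of neon. The LJ parameters below are illustrative round numbers, not values from the paper, and the correction shown is only the lowest-order term of such expansions:

```python
import math

# Leading-order quantum-corrected effective pair potential,
#   V_eff(r) = V(r) + (hbar^2 * beta / (24 * mu)) * Laplacian V(r),
# applied to an illustrative Lennard-Jones model of neon.

HBAR = 1.054571817e-34            # J s
KB = 1.380649e-23                 # J/K
EPS = 36.8 * KB                   # LJ well depth [J] (illustrative)
SIG = 2.79e-10                    # LJ diameter [m] (illustrative)
M_NE = 20.18 * 1.66053906660e-27  # neon atomic mass [kg]

def v_lj(r):
    x = SIG / r
    return 4.0 * EPS * (x**12 - x**6)

def v_eff(r, temperature, h=1e-13):
    beta = 1.0 / (KB * temperature)
    mu = M_NE / 2.0                       # reduced mass of a Ne-Ne pair
    d1 = (v_lj(r + h) - v_lj(r - h)) / (2 * h)
    d2 = (v_lj(r + h) - 2 * v_lj(r) + v_lj(r - h)) / h**2
    lap = d2 + 2.0 * d1 / r               # radial Laplacian of V
    return v_lj(r) + HBAR**2 * beta * lap / (24.0 * mu)

r = 2.0**(1.0 / 6.0) * SIG                # classical LJ minimum
print("classical V(r_min)/kB [K]:", v_lj(r) / KB)
print("effective V_eff(r_min)/kB [K] at 30 K:", v_eff(r, 30.0) / KB)
```

The positive correction at the well minimum (the well becomes shallower as temperature drops) is the qualitative quantum effect an FFF is built to reproduce at classical-simulation cost.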

  3. Nonthermal ions and associated magnetic field behavior at a quasi-parallel earth's bow shock

    NASA Technical Reports Server (NTRS)

    Wilkinson, W. P.; Pardaens, A. K.; Schwartz, S. J.; Burgess, D.; Luehr, H.; Kessel, R. L.; Dunlop, M.; Farrugia, C. J.

    1993-01-01

    Attention is given to ion and magnetic field measurements at the earth's bow shock from the AMPTE-UKS and -IRM spacecraft, which were examined in high time resolution during a 45-min interval when the field remained closely aligned with the model bow shock normal. Dense ion beams were detected almost exclusively in the midst of short-duration periods of turbulent magnetic field wave activity. Many examples of propagation at large elevation angles relative to the ecliptic plane, which is inconsistent with reflection in the standard model shock configuration, were discovered. The associated waves are elliptically polarized and are preferentially left-handed in the observer's frame of reference, but are less confined to the maximum variance plane than other previously studied foreshock waves. The association of the wave activity with the ion beams suggests that the former may be triggered by an ion-driven instability, and possible candidates are discussed.

  4. New Holographic Chaplygin Gas Model of Dark Energy

    NASA Astrophysics Data System (ADS)

    Malekjani, M.; Khodam-Mohammadi, A.

    In this work, we investigate the holographic dark energy model with a new infrared cutoff (new HDE model), proposed by Granda and Oliveros. Using this new definition for the infrared cutoff, we establish the correspondence between the new HDE model and the standard Chaplygin gas (SCG), generalized Chaplygin gas (GCG) and modified Chaplygin gas (MCG) scalar field models in a nonflat universe. The potential and dynamics for these scalar field models, which describe the accelerated expansion of the universe, are reconstructed. According to the evolutionary behavior of the new HDE model, we derive the same form of dynamics and potential for the different SCG, GCG and MCG models. We also calculate the squared sound speed of the new HDE model as well as the SCG, GCG and MCG models, and investigate the new HDE Chaplygin gas models from the viewpoint of linear perturbation theory. In addition, all results in the nonflat universe are discussed in the limiting case of the flat universe, i.e. k = 0.
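
The Granda-Oliveros infrared cutoff referred to above is usually written in the schematic form below (conventions for the constants vary between papers):

```latex
\rho_{\Lambda} = 3 M_{p}^{2}\left(\alpha H^{2} + \beta \dot{H}\right),
\qquad
L = \left(\alpha H^{2} + \beta \dot{H}\right)^{-1/2},
```

where H is the Hubble parameter, M_p the reduced Planck mass, and α, β dimensionless constants; the correspondence with the Chaplygin gas models is set up by equating this ρ_Λ and its equation of state to those of the scalar-field descriptions.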

  5. Adaptive Standard Operating Procedures for Complex Disasters

    DTIC Science & Technology

    2017-03-01

    Developments in Business Simulation and Experiential Learning 33 (2014). 23 Patrick Lagadec and Benjamin Topper, "How Crises Model the Modern World...field of crisis response. Therefore, this experiment supports the argument for implementing the adaptive design proposals. The adaptive SOP enhancement...Kalay. "An Event-Based Model to Simulate Human Behaviour in Built Environments." Proceedings of the 30th eCAADe Conference 1 (2012). Snowden

  6. OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project.

    PubMed

    Bradley, Chris; Bowery, Andy; Britten, Randall; Budelmann, Vincent; Camara, Oscar; Christie, Richard; Cookson, Andrew; Frangi, Alejandro F; Gamage, Thiranja Babarenda; Heidlauf, Thomas; Krittian, Sebastian; Ladd, David; Little, Caton; Mithraratne, Kumar; Nash, Martyn; Nickerson, David; Nielsen, Poul; Nordbø, Oyvind; Omholt, Stig; Pashaei, Ali; Paterson, David; Rajagopal, Vijayaraghavan; Reeve, Adam; Röhrle, Oliver; Safaei, Soroush; Sebastián, Rafael; Steghöfer, Martin; Wu, Tim; Yu, Ting; Zhang, Heye; Hunter, Peter

    2011-10-01

    The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level biophysical processes into organ-level processes. In the Heart Physiome project, for example, the large deformation mechanics of the myocardial wall need to be coupled to both ventricular flow and embedded coronary flow, and the reaction-diffusion equations that govern the propagation of electrical waves through myocardial tissue need to be coupled with equations that describe the ion channel currents that flow through the cardiac cell membranes. In this paper we discuss the design principles and distributed memory architecture behind the OpenCMISS code. We also discuss the design of the interfaces that link the sets of physical equations across common boundaries (such as fluid-structure coupling), or between spatial fields over the same domain (such as coupled electromechanics), and the concepts behind CellML and FieldML that are embodied in the OpenCMISS data structures. We show how all of these provide a flexible infrastructure for combining models developed across the VPH/Physiome community. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Novel Physical Model for DC Partial Discharge in Polymeric Insulators

    NASA Astrophysics Data System (ADS)

    Andersen, Allen; Dennison, J. R.

    The physics of DC partial discharge (DCPD) continues to pose a challenge to researchers. We present a new physically-motivated model of DCPD in amorphous polymers based on our dual-defect model of dielectric breakdown. The dual-defect model is an extension of standard static mean field theories, such as the Crine model, that describe avalanche breakdown of charge carriers trapped on uniformly distributed defect sites. It assumes the presence of both high-energy chemical defects and low-energy thermally-recoverable physical defects. We present our measurements of breakdown and DCPD for several common polymeric materials in the context of this model. Improved understanding of DCPD and how it relates to eventual dielectric breakdown is critical to the fields of spacecraft charging, high voltage DC power distribution, high density capacitors, and microelectronics. This work was supported by a NASA Space Technology Research Fellowship.

  8. Optical Modeling Activities for the James Webb Space Telescope (JWST) Project. II; Determining Image Motion and Wavefront Error Over an Extended Field of View with a Segmented Optical System

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Ha, Kong Q.

    2004-01-01

    This is part two of a series on the optical modeling activities for JWST. Starting with the linear optical model discussed in part one, we develop centroid and wavefront error sensitivities for the special case of a segmented optical system such as JWST, where the primary mirror consists of 18 individual segments. Our approach extends standard sensitivity matrix methods used for systems consisting of monolithic optics, where the image motion is approximated by averaging ray coordinates at the image and residual wavefront error is determined with global tip/tilt removed. We develop an exact formulation using the linear optical model, and extend it to cover multiple field points for performance prediction at each instrument aboard JWST. This optical model is then driven by thermal and dynamic structural perturbations in an integrated modeling environment. Results are presented.
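The monolithic-optics conventions that this paper generalizes can be sketched directly: image motion approximated as the mean of the ray coordinates at the image, and residual wavefront error as the RMS of the optical path difference after a best-fit global piston/tip/tilt plane is removed. The arrays below are hypothetical stand-ins for real ray-trace output, not JWST data:

```python
import numpy as np

def centroid_and_residual_wfe(ray_x, ray_y, pupil_x, pupil_y, opd):
    """Approximate image motion as the mean of the ray coordinates at the
    image plane, and residual wavefront error as the RMS of the OPD after a
    least-squares global piston/tip/tilt plane is removed over the pupil."""
    centroid = (ray_x.mean(), ray_y.mean())
    # Least-squares fit of piston + tip + tilt over the pupil samples
    A = np.column_stack([np.ones_like(pupil_x), pupil_x, pupil_y])
    coeffs, *_ = np.linalg.lstsq(A, opd, rcond=None)
    residual = opd - A @ coeffs
    return centroid, np.sqrt(np.mean(residual**2))

# Pure tip/tilt aberration: the residual WFE should vanish after removal
px, py = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
px, py = px.ravel(), py.ravel()
opd = 0.5 + 0.2 * px - 0.1 * py
centroid, wfe = centroid_and_residual_wfe(px + 0.2, py - 0.1, px, py, opd)
```

For a segmented primary such as JWST's, the paper's point is that this per-aperture tip/tilt removal must be generalized; the sketch only shows the baseline convention.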

  9. Effective electric fields along realistic DTI-based neural trajectories for modelling the stimulation mechanisms of TMS

    NASA Astrophysics Data System (ADS)

    De Geeter, N.; Crevecoeur, G.; Leemans, A.; Dupré, L.

    2015-01-01

In transcranial magnetic stimulation (TMS), an applied alternating magnetic field induces an electric field in the brain that can interact with the neural system. It is generally assumed that this induced electric field is the crucial effect exciting a certain region of the brain. More specifically, it is the component of this field parallel to the neuron’s local orientation, the so-called effective electric field, that can initiate neuronal stimulation. Deeper insight into the stimulation mechanisms can be acquired through extensive TMS modelling. Most models study simple representations of neurons with assumed geometries, whereas we embed realistic neural trajectories computed using tractography based on diffusion tensor images. This way of modelling ensures a more accurate spatial distribution of the effective electric field that is, in addition, patient and case specific. The case study of this paper focuses on the single pulse stimulation of the left primary motor cortex with a standard figure-of-eight coil. Including realistic neural geometry in the model demonstrates the strong and localized variations of the effective electric field between the tracts themselves and along them, due to the interplay of factors such as the tract’s position and orientation in relation to the TMS coil, the neural trajectory and its course along the white and grey matter interface. Furthermore, the influence of changes in the coil orientation is studied. Investigating the impact of tissue anisotropy confirms that its contribution is not negligible. Moreover, assuming isotropic tissues leads to errors of the same size as rotating or tilting the coil by 10 degrees. In contrast, the model proves to be less sensitive to the poorly known tissue conductivity values.
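The effective electric field described above is simply the projection of the induced field onto the local fibre direction. A minimal sketch, with both the field samples and the tract coordinates as hypothetical arrays (in practice they would come from the induced-field solver and DTI tractography):

```python
import numpy as np

def effective_field(E, tract):
    """Project the induced electric field onto the local tract orientation.

    E     : (N, 3) induced electric field sampled at N points along a tract
    tract : (N, 3) coordinates of those points (e.g. from DTI tractography)
    Returns the (N,) effective field E_parallel = E . t_hat.
    """
    # Local unit tangents from finite differences along the trajectory
    t = np.gradient(tract, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    return np.einsum("ij,ij->i", E, t)

# Toy example: straight tract along x, uniform field tilted 45 degrees in x-y;
# the effective field is then just the x component everywhere.
tract = np.column_stack([np.linspace(0.0, 10.0, 50), np.zeros(50), np.zeros(50)])
E = np.tile([1.0, 1.0, 0.0], (50, 1))
E_eff = effective_field(E, tract)
```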

  10. Pairwise-interaction extended point-particle model for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Akiki, G.; Moore, W. C.; Balachandar, S.

    2017-12-01

    In this work we consider the pairwise interaction extended point-particle (PIEP) model for Euler-Lagrange simulations of particle-laden flows. By accounting for the precise location of neighbors the PIEP model goes beyond local particle volume fraction, and distinguishes the influence of upstream, downstream and laterally located neighbors. The two main ingredients of the PIEP model are (i) the undisturbed flow at any particle is evaluated as a superposition of the macroscale flow and a microscale flow that is approximated as a pairwise superposition of perturbation fields induced by each of the neighboring particles, and (ii) the forces and torque on the particle are then calculated from the undisturbed flow using the Faxén form of the force relation. The computational efficiency of the standard Euler-Lagrange approach is retained, since the microscale perturbation fields induced by a neighbor are pre-computed and stored as PIEP maps. Here we extend the PIEP force model of Akiki et al. [3] with a corresponding torque model to systematically include the effect of perturbation fields induced by the neighbors in evaluating the net torque. Also, we use DNS results from a uniform flow over two stationary spheres to further improve the PIEP force and torque models. We then test the PIEP model in three different sedimentation problems and compare the results against corresponding DNS to assess the accuracy of the PIEP model and improvement over the standard point-particle approach. In the case of two sedimenting spheres in a quiescent ambient the PIEP model is shown to capture the drafting-kissing-tumbling process. In cases of 5 and 80 sedimenting spheres a good agreement is obtained between the PIEP simulation and the DNS. For all three simulations, the DEM-PIEP was able to recreate, to a good extent, the results from the DNS, while requiring only a negligible fraction of the numerical resources required by the fully-resolved DNS.
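Ingredient (i) of the PIEP model, the pairwise superposition of neighbor-induced perturbations on top of the macroscale flow, can be sketched as below. The perturbation map here is a hypothetical decaying toy function, not the actual pre-computed PIEP maps, and the force/torque step via the Faxén relations is omitted:

```python
import numpy as np

def piep_undisturbed_velocity(x, u_macro, neighbors, perturbation_map):
    """Sketch of the PIEP superposition: the undisturbed flow at a particle
    is the macroscale flow plus the pairwise sum of the perturbation fields
    induced by each neighbor. `perturbation_map` stands in for the
    pre-computed PIEP maps described in the abstract."""
    u = np.array(u_macro, dtype=float)
    for xn in neighbors:
        # Perturbation evaluated at the position relative to the neighbor
        u += perturbation_map(x - xn)
    return u

def toy_map(r):
    """Hypothetical decaying perturbation (illustrative only)."""
    d = np.linalg.norm(r)
    return np.array([1.0, 0.0, 0.0]) / (1.0 + d**3)

u = piep_undisturbed_velocity(
    x=np.zeros(3),
    u_macro=[1.0, 0.0, 0.0],
    neighbors=[np.array([2.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])],
    perturbation_map=toy_map,
)
```

Because the maps are pre-computed, each neighbor contributes only a table lookup, which is how the model retains the cost of a standard Euler-Lagrange simulation.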

  11. Low temperature electroweak phase transition in the Standard Model with hidden scale invariance

    NASA Astrophysics Data System (ADS)

    Arunasalam, Suntharan; Kobakhidze, Archil; Lagger, Cyril; Liang, Shelley; Zhou, Albert

    2018-01-01

We discuss a cosmological phase transition within the Standard Model which incorporates spontaneously broken scale invariance as a low-energy theory. In addition to the Standard Model fields, the minimal model involves a light dilaton, which acquires a large vacuum expectation value (VEV) through the mechanism of dimensional transmutation. Under the assumption of the cancellation of the vacuum energy, the dilaton develops a very small mass at 2-loop order. As a result, a flat direction is present in the classical dilaton-Higgs potential at zero temperature while the quantum potential admits two (almost) degenerate local minima with unbroken and broken electroweak symmetry. We found that the cosmological electroweak phase transition in this model can only be triggered by a QCD chiral symmetry breaking phase transition at low temperatures, T ≲ 132 MeV. Furthermore, unlike the standard case, the universe settles into the chiral symmetry breaking vacuum via a first-order phase transition which gives rise to a stochastic gravitational background with a peak frequency ∼10⁻⁸ Hz as well as triggers the production of approximately solar mass primordial black holes. The observation of these signatures of cosmological phase transitions together with the detection of a light dilaton would provide a strong hint of the fundamental role of scale invariance in particle physics.

  12. Standard, Random, and Optimum Array conversions from Two-Pole resistance data

    DOE PAGES

    Rucker, D. F.; Glaser, Danney R.

    2014-09-01

We present an array evaluation of standard and nonstandard arrays over a hydrogeological target. We develop the arrays by linearly combining data from the pole-pole (or 2-pole) array. The first test shows that reconstructed resistances for the standard Schlumberger and dipole-dipole arrays are equivalent or superior to the measured arrays in terms of noise, especially at large geometric factors. The inverse models for the standard arrays also confirm what others have presented in terms of target resolvability, namely that the dipole-dipole array has the highest resolution. In the second test, we reconstruct random electrode combinations from the 2-pole data segregated into inner, outer, and overlapping dipoles. The resistance data and inverse models from these randomized arrays show those with inner dipoles to be superior in terms of noise and resolution and that overlapping dipoles can cause model instability and low resolution. Finally, we use the 2-pole data to create an optimized array that maximizes the model resolution matrix for a given electrode geometry. The optimized array produces the highest resolution and target detail. Thus, the tests demonstrate that high quality data and high model resolution can be achieved by acquiring field data from the pole-pole array.
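The linear combination behind this reconstruction is the standard superposition identity for four-electrode transfer resistances. A sketch on synthetic pole-pole data for a homogeneous half-space (the electrode layout and resistivity are illustrative, not the survey in the paper):

```python
from math import pi
from itertools import permutations

def four_electrode_resistance(rpp, A, B, M, N):
    """Reconstruct a four-electrode transfer resistance from pole-pole data
    by superposition: R(AB,MN) = R(A,M) - R(A,N) - R(B,M) + R(B,N),
    where rpp[(s, r)] is the pole-pole resistance (current at s, potential at r)."""
    return rpp[(A, M)] - rpp[(A, N)] - rpp[(B, M)] + rpp[(B, N)]

# Synthetic pole-pole data: homogeneous half-space, R = rho / (2*pi*distance)
rho = 100.0                                    # ohm-m, illustrative
x = {e: float(e) for e in range(4)}            # electrodes at 0, 1, 2, 3 m
rpp = {(s, r): rho / (2 * pi * abs(x[s] - x[r]))
       for s, r in permutations(range(4), 2)}

# Reconstructed dipole-dipole resistance for A=0, B=1, M=2, N=3
R_dd = four_electrode_resistance(rpp, A=0, B=1, M=2, N=3)
```

Because any four-electrode reading is such a linear combination, a complete pole-pole data set can regenerate Schlumberger, dipole-dipole, random, or optimized arrays after the fact, which is the premise of the tests in the abstract.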

  13. ATK-ForceField: a new generation molecular dynamics software package

    NASA Astrophysics Data System (ADS)

    Schneider, Julian; Hamaekers, Jan; Chill, Samuel T.; Smidstrup, Søren; Bulin, Johannes; Thesen, Ralph; Blom, Anders; Stokbro, Kurt

    2017-12-01

ATK-ForceField is a software package for atomistic simulations using classical interatomic potentials. It is implemented as a part of the Atomistix ToolKit (ATK), which is a Python programming environment that makes it easy to create and analyze both standard and highly customized simulations. This paper will focus on the atomic interaction potentials, molecular dynamics, and geometry optimization features of the software; however, many more advanced modeling features are available. The implementation details of these algorithms and their computational performance will be shown. We present three illustrative examples of the types of calculations that are possible with ATK-ForceField: modeling thermal transport properties in a silicon germanium crystal, vapor deposition of selenium molecules on a selenium surface, and a simulation of creep in a copper polycrystal.

  14. Spherical collapse in chameleon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Ph.; Rosenfeld, R.; Steer, D.A., E-mail: brax@spht.saclay.cea.fr, E-mail: rosenfel@ift.unesp.br, E-mail: daniele.steer@apc.univ-paris7.fr

    2010-08-01

We study the gravitational collapse of an overdensity of nonrelativistic matter under the action of gravity and a chameleon scalar field. We show that the spherical collapse model is modified by the presence of a chameleon field. In particular, we find that even though the chameleon effects can be potentially large at small scales, for a large enough initial size of the inhomogeneity the collapsing region possesses a thin shell that shields the modification of gravity induced by the chameleon field, recovering the standard gravity results. We analyse the behaviour of a collapsing shell in a cosmological setting in the presence of a thin shell and find that, in contrast to the usual case, the critical density for collapse in principle depends on the initial comoving size of the inhomogeneity.

  15. Simulations of Cold Electroweak Baryogenesis: quench from portal coupling to new singlet field

    NASA Astrophysics Data System (ADS)

    Mou, Zong-Gang; Saffin, Paul M.; Tranberg, Anders

    2018-01-01

    We compute the baryon asymmetry generated from Cold Electroweak Baryogenesis, when a dynamical Beyond-the-Standard-Model scalar singlet field triggers the spinodal transition. Using a simple potential for this additional field, we match the speed of the quench to earlier simulations with a "by-hand" mass flip. We find that for the parameter subspace most similar to a by-hand transition, the final baryon asymmetry shows a similar dependence on quench time and is of the same magnitude. For more general parameter choices the Higgs-singlet dynamics can be very complicated, resulting in an enhancement of the final baryon asymmetry. Our results validate and generalise results of simulations in the literature and open up the Cold Electroweak Baryogenesis scenario to further model building.

  16. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    PubMed Central

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences of the ecological compensation were significant among all the counties or districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  17. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    PubMed

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences of the ecological compensation were significant among all the counties or districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors.

  18. Exploring the Role of Overlying Fields and Flare Ribbons in CME Speeds

    NASA Astrophysics Data System (ADS)

    Deng, M.; Welsch, B. T.

    2013-12-01

    The standard model of eruptive, two-ribbon flares involves reconnection of overlying magnetic fields beneath a rising ejection. Numerous observers have reported evidence linking this reconnection, indicated by photospheric flux swept out by flare ribbons, to coronal mass ejection (CME) acceleration. This acceleration might be caused by reconnected fields that wrap around the ejection producing an increased outward "hoop force." Other observations have linked stronger overlying fields, measured by the power-law index of the fitted decay rate of field strengths overlying eruption sites, to slower CME speeds. This might be caused by greater downward magnetic tension in stronger overlying fields. So overlying fields might both help and hinder the acceleration of CMEs: reconnection that converts overlying fields into flux winding about the ejection might help, but unreconnected overlying fields might hurt. Here, we investigate the roles of both ribbon fluxes and the decay rates of overlying fields in a set of eruptive events.

  19. Computational Material Processing in Microgravity

    NASA Technical Reports Server (NTRS)

    2005-01-01

Working with Professor David Matthiesen at Case Western Reserve University (CWRU), a computer model of the DPIMS (Diffusion Processes in Molten Semiconductors) space experiment was developed that predicts the thermal field, flow field, and concentration profile within a molten germanium capillary under both ground-based and microgravity conditions. These models are coupled with a novel nonlinear statistical methodology for estimating the diffusion coefficient from measured concentration values after a given time, which yields a more accurate estimate than traditional methods. This code was integrated into a web-based application that has become a standard tool used by engineers in the Materials Science Department at CWRU.

  20. Axionic black branes in the k-essence sector of the Horndeski model

    NASA Astrophysics Data System (ADS)

    Cisterna, Adolfo; Hassaine, Mokhtar; Oliva, Julio; Rinaldi, Massimiliano

    2017-12-01

We construct new black brane solutions in the context of Horndeski gravity, in particular in its k-essence sector. These models are supported by axion scalar fields that depend only on the horizon coordinates. The dynamics of these fields is determined by a k-essence term that includes the standard kinetic term X and a correction of the form X^k. We find both neutral and charged exact and analytic solutions in D dimensions, which are asymptotically anti-de Sitter. Then, we describe in detail the thermodynamical properties of the four-dimensional solutions and we compute the dual holographic DC conductivity.

  1. Quantitative description and modeling of real networks

    NASA Astrophysics Data System (ADS)

    Capocci, Andrea; Caldarelli, Guido; de Los Rios, Paolo

    2003-10-01

We present data analysis and modeling of two particular cases of study in the field of growing networks. We analyze a World Wide Web data set and authorship collaboration networks in order to check for the presence of correlations in the data. The results are reproduced with good agreement through a suitable modification of the standard Albert-Barabási model of network growth. In particular, the intrinsic relevance of sites plays a role in determining the future degree of the vertex.
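One common way to fold "intrinsic relevance" into preferential attachment is to weight each vertex's degree by a fitness factor, so a new vertex attaches to vertex i with probability proportional to k_i·η_i. The sketch below follows that scheme; the paper's actual modification may differ in detail:

```python
import random

def grow_fitness_network(n, m=2, seed=0):
    """Barabasi-Albert-style growth where each vertex i carries an intrinsic
    relevance (fitness) eta_i and attracts new links with probability
    proportional to degree_i * eta_i. Illustrative sketch only."""
    rng = random.Random(seed)
    eta = [rng.random() for _ in range(n)]       # intrinsic relevance of each site
    degree = [m] * (m + 1)                       # small fully connected seed graph
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    for v in range(m + 1, n):
        weights = [degree[i] * eta[i] for i in range(v)]
        targets = set()
        while len(targets) < m:                  # m distinct fitness-weighted targets
            targets.add(rng.choices(range(v), weights=weights)[0])
        for t in targets:
            edges.append((t, v))
            degree[t] += 1
        degree.append(m)                         # the newcomer made m links
    return degree, edges, eta

degrees, edges, eta = grow_fitness_network(200, m=2)
```

With η_i ≡ 1 this reduces to ordinary preferential attachment; heterogeneous η lets a late but highly relevant site overtake older ones, which is the kind of correlation the abstract attributes to intrinsic relevance.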

  2. Including gauge-group parameters into the theory of interactions: an alternative mass-generating mechanism for gauge fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldaya, V.; Lopez-Ruiz, F. F.; Sanchez-Sastre, E.

    2006-11-03

We reformulate the gauge theory of interactions by introducing the gauge group parameters into the model. The dynamics of the new 'Goldstone-like' bosons is accomplished through a non-linear σ-model Lagrangian. They are minimally coupled according to a proper prescription which provides mass terms to the intermediate vector bosons without spoiling gauge invariance. The present formalism is explicitly applied to the Standard Model of electroweak interactions.

  3. Implementing the HL7v3 standard in Croatian primary healthcare domain.

    PubMed

    Koncar, Miroslav

    2004-01-01

    The mission of HL7 Inc. is to provide standards for the exchange, management and integration of data that supports clinical patient care and the management, delivery and evaluation of healthcare services. The scope of this work includes the specifications of flexible, cost-effective approaches, standards, guidelines, methodologies, and related services for interoperability between healthcare information systems. In the field of medical information technologies, HL7 provides the world's most advanced information standards. Versions 1 and 2 of the HL7 standard have on the one hand solved many issues, but on the other demonstrated the size and complexity of the health information sharing problem. As the solution, a complete new methodology has been adopted, which is being encompassed in version 3 recommendations. This approach standardizes the Reference Information Model (RIM), which is the source of all domain models and message structures. Message design is now defined in detail, enabling interoperability between loosely-coupled systems that are designed by different vendors and deployed in various environments. At the start of the Primary Healthcare Information System project, we have decided to go directly to HL7v3. Implementing the HL7v3 standard in healthcare applications represents a challenging task. By using standardized refinement and localization methods we were able to define information models for Croatian primary healthcare domain. The scope of our work includes clinical, financial and administrative data management, where in some cases we were compelled to introduce new HL7v3-compliant models. All of the HL7v3 transactions are digitally signed, using the W3C XML Digital Signature standard.

  4. Characteristics Of Ferroelectric Logic Gates Using a Spice-Based Model

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Phillips, Thomas A.; Ho, Fat D.

    2005-01-01

A SPICE-based model of an n-channel ferroelectric field effect transistor has been developed based on both theoretical and empirical data. This model was used to generate the I-V characteristics of several logic gates. The use of ferroelectric field effect transistors (FFETs) in memory circuits is being developed by several organizations. The use of FFETs in other circuits, both analog and digital, needs to be better understood. The ability of FFETs to have different characteristics depending on the initial polarization can be used to create logic gates. These gates can have properties not available to standard CMOS logic gates, such as memory and reconfigurability. This paper investigates basic properties of FFET logic gates. It models an FFET inverter, a NAND gate, and a multi-input NAND gate. The I-V characteristics of the gates are presented, as well as transfer characteristics and timing. The model used is a SPICE-based model developed from empirical data from actual ferroelectric transistors. It simulates all major characteristics of the ferroelectric transistor, including polarization, hysteresis, and decay. Contrasts are made of the differences between FFET logic gates and CMOS logic gates. FFET parameters are varied to show the effect on the overall gate. A reconfigurable gate is investigated, which is not possible with CMOS circuits. The paper concludes that FFETs can be used in logic gates and have several advantages over standard CMOS gates.

  5. A partial entropic lattice Boltzmann MHD simulation of the Orszag-Tang vortex

    NASA Astrophysics Data System (ADS)

    Flint, Christopher; Vahala, George

    2018-02-01

    Karlin has introduced an analytically determined entropic lattice Boltzmann (LB) algorithm for Navier-Stokes turbulence. Here, this is partially extended to an LB model of magnetohydrodynamics, on using the vector distribution function approach of Dellar for the magnetic field (which is permitted to have field reversal). The partial entropic algorithm is benchmarked successfully against standard simulations of the Orszag-Tang vortex [Orszag, S.A.; Tang, C.M. J. Fluid Mech. 1979, 90 (1), 129-143].

  6. Resonance neutrino bremsstrahlung ν-->νγ in a strong magnetic field

    NASA Astrophysics Data System (ADS)

    Gvozdev, A. A.; Mikheev, N. V.; Vassilevskaya, L. A.

    1997-10-01

High energy neutrino bremsstrahlung ν-->ν+γ in a strong magnetic field (B ≫ Bs) is studied in the framework of the Standard Model (SM). A resonance probability and a four-vector of the neutrino energy and momentum loss are presented. A possible manifestation of the neutrino bremsstrahlung in astrophysical cataclysms such as a supernova explosion or a merger of neutron stars, as an origin of cosmological γ-bursts, is briefly discussed.

  7. Pre- and post-natal exposure of children to EMF generated by domestic induction cookers.

    PubMed

    Kos, Bor; Valič, Blaž; Miklavčič, Damijan; Kotnik, Tadej; Gajšek, Peter

    2011-10-07

Induction cookers are a type of cooking appliance that uses an intermediate-frequency magnetic field to heat the cooking vessel. The magnetic flux density produced by an induction cooker during operation was measured according to the EN 62233 standard, and the measured values were below the limits set in the standard. The measurements were used to validate a numerical model consisting of three vertically displaced coaxial current loops at 35 kHz. The numerical model was then used to compute the electric field (E) and induced current (J) in 26 and 30 weeks pregnant women and 6 and 11 year old children. Both E and J were found to be below the basic restrictions of the 2010 low-frequency and 1998 ICNIRP guidelines. The maximum computed E fields in the whole body were 0.11 and 0.66 V m⁻¹ in the 26 and 30 weeks pregnant women and 0.28 and 2.28 V m⁻¹ in the 6 and 11 year old children (ICNIRP basic restriction 4.25 V m⁻¹). The maximum computed J fields in the whole body were 46 and 42 mA m⁻² in the 26 and 30 weeks pregnant women and 27 and 16 mA m⁻² in the 6 and 11 year old children (ICNIRP basic restriction 70 mA m⁻²).
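A three-coaxial-loop source model of the kind described above superposes the fields of individual current loops; on the axis the textbook loop formula applies. The currents, radii, and vertical offsets below are illustrative placeholders, not the values identified in the paper:

```python
from math import pi

MU0 = 4e-7 * pi  # vacuum permeability (T m / A)

def loop_bz_on_axis(I, a, z):
    """On-axis magnetic flux density of a circular loop of radius a (m)
    carrying current I (A), evaluated a height z (m) above the loop plane."""
    return MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

def cooker_bz(z, loops):
    """Superpose three vertically displaced coaxial loops, as in the
    abstract's source model; each entry is (I, radius, vertical offset).
    The parameter values used below are hypothetical."""
    return sum(loop_bz_on_axis(I, a, z - z0) for (I, a, z0) in loops)

loops = [(30.0, 0.09, 0.0), (30.0, 0.09, 0.005), (30.0, 0.09, 0.01)]
B = cooker_bz(0.30, loops)  # flux density 30 cm above the hob, in tesla
```

Off-axis evaluation (needed to drive anatomical body models) requires the elliptic-integral form of the loop field; the on-axis sketch only illustrates the superposition of the three loops.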

  8. On the cosmology of scalar-tensor-vector gravity theory

    NASA Astrophysics Data System (ADS)

    Jamali, Sara; Roshan, Mahmood; Amendola, Luca

    2018-01-01

We consider the cosmological consequences of a special scalar-tensor-vector theory of gravity, known as MOG (for MOdified Gravity), proposed to address the dark matter problem. This theory introduces two scalar fields G(x) and μ(x), and one vector field φα(x), in addition to the metric tensor. We set the corresponding self-interaction potentials to zero, as in the standard form of MOG. Then, using a phase space analysis in the flat Friedmann-Robertson-Walker background, we show that the theory possesses a viable sequence of cosmological epochs with acceptable time dependency for the cosmic scale factor. We also investigate MOG's potential as a dark energy model and show that extra fields in MOG cannot provide a late time accelerated expansion. Furthermore, using a dynamical system approach to solve the non-linear field equations numerically, we calculate the angular size of the sound horizon, i.e. θs, in MOG. We find that 8 × 10⁻³ rad < θs < 8.2 × 10⁻³ rad, which is far outside the current observational bounds. Finally, we generalize MOG to a modified form called mMOG, and we find that mMOG passes the sound-horizon constraint. However, mMOG also cannot be considered as a dark energy model unless one adds a cosmological constant, and more importantly, the matter dominated era is still slightly different from the standard case.

  9. Pre- and post-natal exposure of children to EMF generated by domestic induction cookers

    NASA Astrophysics Data System (ADS)

    Kos, Bor; Valič, Blaž; Miklavčič, Damijan; Kotnik, Tadej; Gajšek, Peter

    2011-10-01

Induction cookers are a type of cooking appliance that uses an intermediate-frequency magnetic field to heat the cooking vessel. The magnetic flux density produced by an induction cooker during operation was measured according to the EN 62233 standard, and the measured values were below the limits set in the standard. The measurements were used to validate a numerical model consisting of three vertically displaced coaxial current loops at 35 kHz. The numerical model was then used to compute the electric field (E) and induced current (J) in 26 and 30 weeks pregnant women and 6 and 11 year old children. Both E and J were found to be below the basic restrictions of the 2010 low-frequency and 1998 ICNIRP guidelines. The maximum computed E fields in the whole body were 0.11 and 0.66 V m⁻¹ in the 26 and 30 weeks pregnant women and 0.28 and 2.28 V m⁻¹ in the 6 and 11 year old children (ICNIRP basic restriction 4.25 V m⁻¹). The maximum computed J fields in the whole body were 46 and 42 mA m⁻² in the 26 and 30 weeks pregnant women and 27 and 16 mA m⁻² in the 6 and 11 year old children (ICNIRP basic restriction 70 mA m⁻²).

  10. On the Development and Use of Four-Dimensional Data Assimilation in Limited-Area Mesoscale Models Used for Meteorological Analysis.

    NASA Astrophysics Data System (ADS)

    Stauffer, David R.

    1990-01-01

The application of dynamic relationships to the analysis problem for the atmosphere is extended to use a full-physics limited-area mesoscale model as the dynamic constraint. A four-dimensional data assimilation (FDDA) scheme based on Newtonian relaxation or "nudging" is developed and evaluated in the Penn State/National Center for Atmospheric Research (PSU/NCAR) mesoscale model, which is used here as a dynamic-analysis tool. The thesis aims to determine which assimilation strategies and which meteorological fields (mass, wind, or both) have the greatest positive impact on the 72-h numerical simulations (dynamic analyses) of two mid-latitude, real-data cases. The basic FDDA methodology is tested in a 10-layer version of the model with a bulk-aerodynamic (single-layer) representation of the planetary boundary layer (PBL), and refined in a 15-layer version of the model by considering the effects of data assimilation within a multi-layer PBL scheme. As designed, the model solution can be relaxed toward either gridded analyses ("analysis nudging") or toward the actual observations ("obs nudging"). The data used for assimilation include standard 12-hourly rawinsonde data, and also 3-hourly mesoalpha-scale surface data which are applied within the model's multi-layer PBL. Continuous assimilation of standard-resolution rawinsonde data into the 10-layer model successfully reduced large-scale amplitude and phase errors while the model realistically simulated mesoscale structures poorly defined or absent in the rawinsonde analyses and in the model simulations without FDDA. Nudging the model fields directly toward the rawinsonde observations generally produced results comparable to nudging toward gridded analyses. This obs-nudging technique is especially attractive for the assimilation of high-frequency, asynoptic data.
Assimilation of 3-hourly surface wind and moisture data into the 15-layer FDDA system was most effective for improving the simulated precipitation fields because a significant portion of the vertically integrated moisture convergence often occurs in the PBL. Overall, the best dynamic analyses for the PBL, mass, wind and precipitation fields were obtained by nudging toward analyses of rawinsonde wind, temperature and moisture (the latter uses a weaker nudging coefficient) above the model PBL and toward analyses of surface-layer wind and moisture within the model PBL.
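Newtonian relaxation augments the model tendency with a term proportional to the observation-minus-model difference, dx/dt = f(x) + G·(x_obs − x). A minimal scalar sketch (the model, observation function, and coefficient values are illustrative):

```python
def nudge_integrate(f, x0, obs, G, dt, nsteps):
    """Minimal sketch of Newtonian relaxation ('nudging'): the model tendency
    f(x) is augmented with G * (x_obs - x), which relaxes the state toward
    observations (or gridded analyses) with relaxation coefficient G."""
    x = x0
    traj = [x]
    for k in range(nsteps):
        # Forward-Euler step of the nudged tendency
        x = x + dt * (f(x) + G * (obs(k * dt) - x))
        traj.append(x)
    return traj

# Toy model: a decaying scalar nudged toward a constant 'observed' value of 1.0.
# The state settles where the model tendency balances the nudging term.
traj = nudge_integrate(f=lambda x: -0.1 * x, x0=0.0,
                       obs=lambda t: 1.0, G=1.0, dt=0.1, nsteps=200)
```

A weaker G (as the thesis uses for moisture) simply shifts the balance toward the free-running model; G = 0 recovers the unconstrained forecast.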

  11. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    PubMed

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report on an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffuse approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established ones is demonstrated by comparison with Monte-Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
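The VS idea can be sketched by superposing standard semi-infinite diffusion-approximation point-source solutions along the incidence direction. The Green's function below is the textbook image-source form with an index-matched extrapolated boundary; the virtual-source intensities and depths are illustrative placeholders, not the fitted 2VS-DA parameters:

```python
import numpy as np

def da_point_fluence(rho, z, z_src, mua, musp):
    """Diffusion-approximation fluence of an isotropic point source at depth
    z_src in a semi-infinite medium (units: lengths in cm, mua/musp in 1/cm),
    using the standard image-source construction with an extrapolated
    boundary for an index-matched surface."""
    D = 1.0 / (3.0 * (mua + musp))            # diffusion coefficient
    mueff = np.sqrt(mua / D)                  # effective attenuation
    zb = 2.0 * D                              # extrapolated boundary distance
    r1 = np.sqrt(rho**2 + (z - z_src) ** 2)   # real source
    r2 = np.sqrt(rho**2 + (z + z_src + 2 * zb) ** 2)  # negative image source
    return (np.exp(-mueff * r1) / r1 - np.exp(-mueff * r2) / r2) / (4 * np.pi * D)

def vs_fluence(rho, z, sources, mua, musp):
    """VS-DA sketch: the collimated beam is replaced by several isotropic
    virtual sources (weight, depth) whose DA fluences superpose. The
    weights/depths here are hypothetical, not the paper's fitted values."""
    return sum(w * da_point_fluence(rho, z, zs, mua, musp) for w, zs in sources)

phi = vs_fluence(rho=0.5, z=0.2, sources=[(0.7, 0.1), (0.3, 0.3)], mua=0.1, musp=1.0)
```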

  12. Dynamic Reconstruction Algorithm of Three-Dimensional Temperature Field Measurement by Acoustic Tomography

    PubMed Central

Li, Yanqiu; Liu, Shi; Schlaberg, H. Inaki

    2017-01-01

    Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as least square method, algebraic reconstruction technique and standard Tikhonov regularization algorithms. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930
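The standard Tikhonov baseline that the dynamic algorithm is compared against solves a regularized least-squares problem for the slowness field from measured ray travel times, after which temperature follows from the speed of sound. A minimal sketch on a hypothetical two-pixel medium (the path-length matrix is made up for illustration):

```python
import numpy as np

def tikhonov_reconstruct(A, t, lam):
    """Standard Tikhonov-regularized least squares: solve
    min ||A s - t||^2 + lam ||s||^2 for the slowness field s, where A holds
    the path length of each acoustic ray through each pixel and t the
    measured travel times."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ t)

# Toy 2-pixel medium crossed by three rays (path lengths in m, hypothetical)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
s_true = np.array([1 / 340.0, 1 / 360.0])   # slowness = 1 / sound speed (s/m)
t = A @ s_true                              # noiseless synthetic travel times
s_hat = tikhonov_reconstruct(A, t, lam=1e-12)

# For dry air, c = sqrt(gamma * R_air * T), so T = 1 / (gamma * R_air * s^2)
T_hat = 1.0 / (1.4 * 287.0 * s_hat**2)      # kelvin
```

The dynamic algorithm of the paper goes further by adding an evolution model and robust estimation to this measurement term; the sketch shows only the static baseline.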

  13. Extracting the field-effect mobilities of random semiconducting single-walled carbon nanotube networks: A critical comparison of methods

    NASA Astrophysics Data System (ADS)

    Schießl, Stefan P.; Rother, Marcel; Lüttgens, Jan; Zaumseil, Jana

    2017-11-01

    The field-effect mobility is an important figure of merit for semiconductors such as random networks of single-walled carbon nanotubes (SWNTs). However, owing to their network properties and quantum capacitance, the standard models for field-effect transistors cannot be applied without modifications. Several different methods are used to determine the mobility with often very different results. We fabricated and characterized field-effect transistors with different polymer-sorted, semiconducting SWNT network densities ranging from low (≈6 μm-1) to densely packed quasi-monolayers (≈26 μm-1) with a maximum on-conductance of 0.24 μS μm-1 and compared four different techniques to evaluate the field-effect mobility. We demonstrate the limits and requirements for each method with regard to device layout and carrier accumulation. We find that techniques that take into account the measured capacitance on the active device give the most reliable mobility values. Finally, we compare our experimental results to a random-resistor-network model.
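One of the techniques the abstract alludes to, extracting the linear-regime mobility from the transconductance using the capacitance measured on the active device, can be sketched as follows. This is an illustrative sketch of the standard gradual-channel expression, not the authors' analysis code; the function name and arguments are assumptions.

```python
import numpy as np

def linear_mobility(Vg, Id, Vds, C_areal, L, W):
    """Linear-regime field-effect mobility,
        mu = (L / (W * C_areal * Vds)) * max|dId/dVg|.
    C_areal is the gate capacitance per unit area; for SWNT networks it
    should be the value measured on the active device, which includes the
    quantum capacitance rather than a plate-capacitor estimate."""
    gm = np.gradient(Id, Vg)   # transconductance dId/dVg
    return L / (W * C_areal * Vds) * np.max(np.abs(gm))
```

Using a plate-capacitor value for `C_areal` instead of the measured device capacitance overestimates the accumulated charge in sparse networks and hence underestimates the mobility, which is one reason the four techniques compared in the paper can disagree.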

  14. Computational fluid dynamic on the temperature simulation of air preheat effect combustion in propane turbulent flame

    NASA Astrophysics Data System (ADS)

    Elwina; Yunardi; Bindar, Yazid

    2018-04-01

    This paper presents results obtained from the application of the computational fluid dynamics (CFD) code Fluent 6.3 to the modelling of temperature in propane flames with and without air preheat. The study investigates the effect of the air preheat temperature on the flame temperature. A standard k-ε model and the Eddy Dissipation model are used to represent the flow field and the combustion of the flame, respectively. The calculated results are compared with experimental data for propane flames taken from the literature. The study shows that the combination of the standard k-ε turbulence model and the Eddy Dissipation model produces reasonable predictions of temperature, particularly along the axial profiles of all three flames. Both the experimental work and the numerical simulations show that increasing the temperature of the combustion air significantly increases the flame temperature.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, S.; Lewis, I. M.

    One of the simplest extensions of the Standard Model (SM) is the addition of a scalar gauge singlet, S. If S is not forbidden by a symmetry from mixing with the Standard Model Higgs boson, the mixing will generate non-SM rates for Higgs production and decays. Generally, there could also be unknown high-energy physics that generates additional effective low-energy interactions. We show that interference effects between the scalar resonance of the singlet model and the effective field theory (EFT) operators can have significant effects in the Higgs sector. Here, we examine a non-Z2-symmetric scalar singlet model and demonstrate that a fit to the 125 GeV Higgs boson couplings and to limits on high-mass resonances, S, exhibits an interesting structure and possibly large cancellations between the resonance contribution and the new EFT interactions, which invalidate conclusions based on the renormalizable singlet model alone.
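The doublet-singlet mixing underlying this record can be illustrated with a toy calculation (not the paper's fit): diagonalizing the 2x2 scalar mass matrix gives a mixing angle theta, and the lighter, SM-like state couples to SM fields with strength cos(theta), so all of its signal strengths scale as cos^2(theta). The mass-matrix entries below are assumed illustrative inputs.

```python
import numpy as np

def singlet_mixing(m_hh, m_ss, m_hs):
    """Toy doublet-singlet mixing: diagonalize the 2x2 mass-squared matrix
    with diagonal entries m_hh^2, m_ss^2 and off-diagonal mixing m_hs^2.
    Returns the mass eigenvalues, cos(theta) (doublet fraction of the light
    state), and the universal signal-strength rescaling cos^2(theta)."""
    M2 = np.array([[m_hh**2, m_hs**2],
                   [m_hs**2, m_ss**2]])
    evals, evecs = np.linalg.eigh(M2)      # ascending eigenvalues
    masses = np.sqrt(evals)
    cos_theta = abs(evecs[0, 0])           # doublet component of light state
    return masses, cos_theta, cos_theta**2
```

Fits to the measured 125 GeV couplings bound cos^2(theta) from below; the point of the paper is that additional EFT operators can interfere with and partially cancel the resonance contribution, relaxing conclusions drawn from this renormalizable picture alone.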

  16. Sakurai Prize: Extended Higgs Sectors--phenomenology and future prospects

    NASA Astrophysics Data System (ADS)

    Gunion, John

    2017-01-01

    The discovery of a spin-0 state at 125 GeV with properties close to those predicted for the single Higgs boson of the Standard Model does not preclude the existence of additional Higgs bosons. In this talk, models with extended Higgs sectors are reviewed, including two-Higgs-doublet models with and without an extra singlet Higgs field and supersymmetric models. Special emphasis is given to the limit in which the couplings and properties of one of the Higgs bosons of the extended Higgs sector are very close to those predicted for the single Standard Model Higgs boson while the other Higgs bosons are relatively light, perhaps even having masses close to or below the SM-like 125 GeV state. Constraints on this type of scenario given existing data are summarized and prospects for observing these non-SM-like Higgs bosons are discussed. Supported by the Department of Energy.

  17. Accretion Discs Around Black Holes: Development of Theory

    NASA Astrophysics Data System (ADS)

    Bisnovatyi-Kogan, G. S.

    The standard accretion disc theory, based on local heat balance, is formulated. The energy produced by turbulent viscous heating is assumed to be emitted from the sides of the disc. Sources of turbulence in the accretion disc are connected with nonlinear hydrodynamic instability, convection, and the magnetic field. In the standard theory there are two branches of solution: optically thick and optically thin. Advection in accretion discs is described by differential equations, which makes the theory nonlocal. Under certain assumptions, a low-luminosity, optically thin accretion disc model with advection may become advection dominated, carrying almost all the energy into the black hole. A proper account of the magnetic field in the accretion process limits the energy advected into the black hole: the efficiency of accretion should exceed ˜ 1/4 of the standard accretion disc model efficiency.

  18. Building-up a database of spectro-photometric standards from the UV to the NIR

    NASA Astrophysics Data System (ADS)

    Vernet, J.; Kerber, F.; Mainieri, V.; Rauch, T.; Saitta, F.; D'Odorico, S.; Bohlin, R.; Ivanov, V.; Lidman, C.; Mason, E.; Smette, A.; Walsh, J.; Fosbury, R.; Goldoni, P.; Groot, P.; Hammer, F.; Kaper, L.; Horrobin, M.; Kjaergaard-Rasmussen, P.; Royer, F.

    2010-11-01

    We present results of a project aimed at establishing a set of 12 spectro-photometric standards over a wide wavelength range from 320 to 2500 nm. Currently no such set of standard stars covering the near-IR is available. Our strategy is to extend the useful range of existing well-established optical flux standards (Oke 1990, Hamuy et al. 1992, 1994) into the near-IR by means of integral field spectroscopy with SINFONI at the VLT combined with state-of-the-art white dwarf stellar atmospheric models (TMAP, Holberg et al. 2008). As a solid reference, we use two primary HST standard white dwarfs GD71 and GD153 and one HST secondary standard BD+17 4708. The data were collected through an ESO “Observatory Programme” over ~40 nights between February 2007 and September 2008.

  19. Proceedings of the 1993 Particle Accelerator Conference Held in Washington, DC on May 17-20, 1993. Volume 5

    DTIC Science & Technology

    1994-05-18

    1801 Control System Architecture: The Standard and Non-Standard Models (Invited Paper) - M. E. Thuot, L. R. Dalesio, LANL...extracted beam intensity and feedback on ... in the AGS, the non-linear space charge force can blow up the strength of the sextupole field to control ...crossings at the two experimental areas B0 and D0, and ... the mass range accessible for discovery, a menu bar. In the menu bar there are controls to inject

  20. SU-E-T-554: Comparison of Electron Disequilibrium Factor in External Photon Beams for Different Models of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LIU, B; Zhu, T

    Purpose: The dose in the buildup region of a photon beam is usually determined by the transport of the secondary electrons from primary photons and of the contaminating electrons from the accelerator head. This can be quantified by the electron disequilibrium factor, E, defined as the ratio between the total dose and the equilibrium dose (proportional to the total kerma); E = 1 in regions beyond the buildup region. E can differ among accelerators of different models and/or manufacturers of the same machine type. This study compares E in photon beams from different machine models. Methods: Photon beam data such as fractional depth dose (FDD) curves and phantom scatter factors as a function of field size and phantom depth were measured for different Linac machines. E was extrapolated from these fractional depth dose data while taking into account the inverse-square law. The ranges of the secondary electrons were taken as 3 and 6 cm for 6 and 15 MV photon beams, respectively. The field sizes range from 2x2 to 40x40 cm{sup 2}. Results: The comparison indicates that the standard deviations of electron contamination among different machines are about 2.4 - 3.3% at 5 mm depth for 6 MV and 1.2 - 3.9% at 1 cm depth for 15 MV for the same field size. The corresponding maximum deviations are 3.0 - 4.6% and 2 - 4% for 6 and 15 MV, respectively. Both the standard and maximum deviations are independent of field size in the buildup region for 6 MV photons, and decrease slightly with increasing field size at depths up to 1 cm for 15 MV photons. Conclusion: The deviations of the electron disequilibrium factor for all studied Linacs are less than 3% beyond a depth of 0.5 cm for the photon beams over the full range of field sizes (2-40 cm), so long as they are from the same manufacturer.
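The extraction of E from depth-dose data described above can be sketched as follows. This is a hypothetical illustration of the stated procedure, not the authors' code: the measured percent depth dose is corrected for the inverse-square law, the exponential falloff beyond the secondary-electron range is fitted, and that fit is extrapolated back toward the surface as the equilibrium dose.

```python
import numpy as np

def disequilibrium_factor(depth, pdd, ssd=100.0, d_range=3.0):
    """Electron disequilibrium factor E(d) = dose / equilibrium dose.
    depth (cm) and pdd are the measured depth-dose curve; ssd is the
    source-surface distance (cm) used for the inverse-square correction;
    d_range is the secondary-electron range (e.g. 3 cm for 6 MV)."""
    depth, pdd = np.asarray(depth, float), np.asarray(pdd, float)
    dose = pdd * ((ssd + depth) / ssd) ** 2        # remove inverse-square falloff
    eq = depth >= d_range                          # equilibrium region
    slope, intercept = np.polyfit(depth[eq], np.log(dose[eq]), 1)
    dose_eq = np.exp(intercept + slope * depth)    # extrapolated equilibrium dose
    return dose / dose_eq                          # -> 1 beyond the buildup region
```

By construction E approaches 1 beyond the buildup region, while values below 1 at shallow depths quantify the electron disequilibrium the study compares across machines.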
