Physics Yesterday and Today: From the Metal Rod to the High-Energy Laser
NASA Astrophysics Data System (ADS)
Heering, Peter
2002-05-01
In May 1752, at Marly near Paris, the electrical nature of lightning was demonstrated for the first time, at the suggestion of the American scientist and politician Benjamin Franklin. At that time Franklin also described a technical device intended to protect buildings from lightning strikes: the lightning rod. This device, seemingly trivial from today's perspective, was by no means accepted immediately. And research on protecting installations against lightning strikes continues to this day.
NASA Astrophysics Data System (ADS)
Ebert, Karl-Herbert; Ammer, Daniel; Hoffstetter, Marc; Wintermantel, Erich
Looking at current product developments across all industries, a clear trend toward miniaturization and the integration of functions in the smallest possible space can be observed. The use of engineering plastics, which are predominantly processed by thermoplastic injection molding, makes an important contribution to turning these product developments into marketable articles. Compared with standard injection molding, however, several particularities regarding mold making, machine and process technology, and quality assurance must be taken into account.
Chips Made of Plastic: Organic Electronics
NASA Astrophysics Data System (ADS)
Kiy, Michael
2003-01-01
Synthetic organic materials will increasingly be used in electronics in the future. Although they are good insulators, a sufficiently large electric field can make an electric current flow in them. To this end, metal electrodes inject free charge carriers such as electrons or holes into the organic material. This charge-carrier injection can tune the electrical properties of suitable organic materials from insulator to conductor. One future domain of such plastics will be simple, cheap chips. In displays, they could soon technically surpass conventional liquid-crystal displays.
Titanization of Implant Surfaces
NASA Astrophysics Data System (ADS)
Zimmermann, Hanngörg; Heinlein, Markus; Guldner, Norbert W.
Titanium has been regarded for decades as one of the most important implant materials in medicine. In addition to its good mechanical properties (low weight, high strength, etc.), titanium implants above all exhibit excellent biocompatibility, so that the implants integrate optimally into the human organism [1]. If, however, the requirements on the implant demand high flexibility and/or elasticity, titanium is ruled out because of its brittle and inflexible material properties. The consequence is the use of implant materials, of both synthetic and biological origin, which not infrequently show insufficient biocompatibility and can thus lead to foreign-body and immunological reactions and to encapsulation of the implant. Increased bodily tolerance, adaptation to the biological environment, and high biocompatibility are therefore the most important properties in the tailored production of implants and implant surfaces. Different technical approaches are available for the design of innovative, biocompatible surfaces. On the one hand, suitable surface properties can be optimized from the base material itself, for instance by modifying the material surface in the form of texturing and surface roughness. On the other hand, the surface properties can be designed independently of those of the carrier material. By functionalizing the surfaces with suitable coatings or by adding drugs (drug eluting), polymer implants are modified in such a way that increased acceptance by the body is achieved. Titanium coating of implant surfaces combines the positive material properties of titanium and polymer.
Stenting and the Technical Stent Environment
NASA Astrophysics Data System (ADS)
Hoffstetter, Marc; Pfeifer, Stefan; Schratzenstaller, Thomas; Wintermantel, Erich
According to the World Health Organization (WHO), cardiovascular diseases, and coronary heart disease (CHD) in particular, rank first among the causes of death in highly developed industrial countries. In Germany, the number of registered persons suffering from CHD, not counting unreported cases, exceeded 473,000 in 2001 alone. With 92,673 registered deaths, CHD was still the most frequent cause of death in 2003, even though the number of coronary interventions for the treatment of CHD in Germany rose almost 80-fold between 1984 and 2003, from 2,809 to 221,867 procedures per year [1]. Besides the high number of deaths, those affected also suffer a severe loss of quality of life through chronic pain and reduced physical capacity [2]. As a consequence, patients frequently become dependent on nursing care, which, beyond the health aspects, also entails a socioeconomic component in the form of lost labor and accruing care costs. According to the Federal Statistical Office, the cost of treating CHD in Germany amounted to around 6.9 billion euros in 2002. Compared with similar figures from the USA, the resulting damage to the German economy is likely in the range of tens to hundreds of billions [3].
Einführung in die Technische Chemie (Introduction to Industrial Chemistry)
NASA Astrophysics Data System (ADS)
Behr, Arno; Agar, David W.; Jörissen, Jakob
Die "Technische Chemie" ist ein Lehrfach an Universitäten und Hochschulen. Nach dem die Studierenden der Chemie in den ersten Semestern ihres Studiums ausrei chen de theoretische Kenntnisse in Allgemeiner, Anorganischer, Organischer und Physikalischer Chemie erlangt haben, soll die Technische Chemie einen Blick auf die praktische Anwendung dieser Naturwissenschaft in unserer Wirtschaft lenken. Es gibt keine "biologische Industrie", "physikalische Industrie" oder "mathematische Industrie", wohl aber seit über 150 Jahren eine "chemische Industrie", die in dieser lan gen Zeit zahlreiche chemische Prozesse entwickelt und dazu vielfältige Methoden erarbeitet hat. Das Lehrfach Technische Chemie gibt einen Überblick über diese Pro zesse und Methoden und erleichtert dadurch den Schritt von der Universität zur be ruflichen Praxis.
Educational Service Quality in Zanjan University of Medical Sciences from Students' Point of View
ERIC Educational Resources Information Center
Mohammadi, Ali; Mohammadi, Jamshid
2014-01-01
This study aims at evaluating perceived service quality in Zanjan University of Medical Sciences (ZUMS). This study was cross-sectional and authors surveyed educational services at ZUMS. Through stratified random sampling, 384 students were selected and an adapted SERVQUAL instrument was used for data collection. Data analysis was performed by…
ZumBeat: Evaluation of a Zumba Dance Intervention in Postmenopausal Overweight Women
Rossmeissl, Anja; Lenk, Soraya; Hanssen, Henner; Donath, Lars; Schmidt-Trucksäss, Arno; Schäfer, Juliane
2016-01-01
Physical inactivity is a major public health concern since it increases individuals’ risk of morbidity and mortality. A subgroup at particular risk is postmenopausal overweight women. The aim of this study was to assess the feasibility and effect of a 12-week ZumBeat dance intervention on cardiorespiratory fitness and psychosocial health. Postmenopausal women with a body mass index (BMI) >30 kg/m2 or a waist circumference >94 cm who were not regularly physically active were asked to complete a 12-week ZumBeat dance intervention with instructed and home-based self-training sessions. Before and after the intervention, peak oxygen consumption (VO2peak) was assessed on a treadmill; and body composition and several psychometric parameters (including quality of life, sports-related barriers and menopausal symptoms) were investigated. Of 17 women (median age: 54 years; median BMI: 30 kg/m2) enrolled in the study, 14 completed the study. There was no apparent change in VO2peak after the 12-week intervention period (average change score: −0.5 mL/kg/min; 95% confidence interval: −1.0, 0.1); but, quality of life had increased, and sports-related barriers and menopausal symptoms had decreased. A 12-week ZumBeat dance intervention may not suffice to increase cardiorespiratory fitness in postmenopausal overweight women, but it increases women’s quality of life. PMID:29910253
Of Donuts and Sugar: Studying Biological Macromolecules with Neutrons
NASA Astrophysics Data System (ADS)
May, Roland P.
2003-05-01
Neutrons offer unique properties for the study of biomolecules. Above all, their different interaction with natural hydrogen and its heavy isotope deuterium allows deep insights into the structure, function, and dynamics of proteins, nucleic acids, and biomembranes. For many questions of structure determination there is little or no alternative to the neutron. The Institut Laue-Langevin contributes groundbreaking work to the success of neutron methods in biology.
NASA Astrophysics Data System (ADS)
Prabowo, D. W.; Mulyani, S.; van Pée, K.-H.; Indriyanti, N. Y.
2018-05-01
This research aims to investigate: (1) the shape of tetrahedral chemistry education, which has been called the future of chemistry education, and (2) the comprehensive understanding that first-year chemistry students of the Technische Universität Dresden have, according to the tetrahedral shape of chemistry education, of the mole concept subject matter. The research used quantitative and qualitative methods: a paper-and-pencil test and interviews. The former was conducted as a test containing an objective test instrument. The results of this study are: (1) learning based on the tetrahedral shape of chemistry education places the chemical substance (macroscopic), its symbolic representation (symbolic), and its process (molecular) in the context of human beings (the human element) by integrating content and context, without emphasizing one aspect and weakening another; (2) first-year chemistry students of the Technische Universität Dresden have comprehensively understood the mole concept in association with everyday contexts: they are able to extract macroscopic information from statements that are contextual to human life and then, using symbols and formulas, to comprehend the molecular components as well as to interpret and analyse problems effectively.
Understanding Geometry: Static - Kinematic
NASA Astrophysics Data System (ADS)
Kroll, Ekkehard
Conceptually, the particular stands opposed to the general. In this sense, general considerations on understanding mathematics need to be complemented by investigations into the understanding of the individual mathematical disciplines, geometry in particular. Many students have problems here. These stem mainly from the fact that a finished geometric construction, in its static presentation on paper, no longer reveals the individual construction steps; to allow them to be retraced, the steps must therefore additionally be recorded in a construction description.
NASA Astrophysics Data System (ADS)
Vogel, Helmut
The popular workbook "Probleme aus der Physik" now accompanies the 17th edition of Gerthsen/Vogel "Physik" (ISBN 3-540-56638-4) as well, offering more than 1150 solved problems from physics and its applications in technology, astrophysics, and the geo- and biosciences: a wealth of material for practice and further learning, for exam preparation, and for self-study. A chapter on nonlinear dynamics has been newly added. Problems of all levels of difficulty make "Probleme aus der Physik" indispensable for students majoring or minoring in physics; students in advanced physics courses at school will find it an excellent supplement.
Conventional Steam Power Plants
NASA Astrophysics Data System (ADS)
Zahoransky, Richard; Allelein, Hans-Josef; Bollin, Elmar; Oehler, Helmut; Schelling, Udo
As a thermal power plant, the steam power plant is based on a thermodynamic cycle that converts heat into technical work. The work is delivered as electrical energy by means of an electric generator.
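The heat-to-work conversion mentioned in the abstract follows the standard textbook energy balance of a cyclic process (a generic relation added here for illustration, not taken from the abstract itself):

```latex
% Net work and thermal efficiency of a cyclic heat-to-work process
W_{\mathrm{net}} = Q_{\mathrm{in}} - Q_{\mathrm{out}}, \qquad
\eta_{\mathrm{th}} = \frac{W_{\mathrm{net}}}{Q_{\mathrm{in}}}
                   = 1 - \frac{Q_{\mathrm{out}}}{Q_{\mathrm{in}}}
                   \;\le\; 1 - \frac{T_{\mathrm{min}}}{T_{\mathrm{max}}}
```

The last inequality is the Carnot bound: the efficiency of any such cycle, including the steam (Rankine) cycle, is limited by the ratio of the lowest to the highest process temperature.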
Clinical investigation of vestibular damage by antituberculous drugs.
Nakayama, M; Natori, Y; Tachi, H; Yoshizawa, M; Takayama, S; Miura, H; Kanayama, M; Kamei, T
1986-01-01
Vestibular function testing was performed regularly on patients given streptomycin, kanamycin, or enviomycin, and a method for detecting cases of vestibular dysfunction at an early stage was discussed, as well as the time at which these drugs should be discontinued. Subjects were 85 tuberculosis patients treated with streptomycin, kanamycin, or enviomycin who were admitted to our hospital from December 1984 to May 1986. The equilibrium examinations performed at regular intervals were as follows: the standing test (Romberg test), the stepping test, and Meyer zum Gottesberge's head-shaking test were done once a week for a month after starting antituberculous injections, then repeated once every 2 weeks for at least 3 months after beginning the injections; after the 3 months, these tests were done once a month. Eight cases of vestibular damage due to streptomycin or enviomycin could easily be detected at an early stage by performing Meyer zum Gottesberge's head-shaking test together with the standing test and the stepping test. Vestibular dysfunction tends to occur after about 1 month, or within a month, from the start of daily injections, especially with streptomycin. We therefore suggest that Meyer zum Gottesberge's head-shaking test, the standing test (Romberg test), and the stepping test be performed once a week during the first month after the start of these drugs. When the result of Meyer zum Gottesberge's head-shaking test is less than 50% and swaying and/or rotation occur in the stepping test, the drugs being given should be discontinued.
NASA Astrophysics Data System (ADS)
Bauer, Jürgen
Technically oriented business administration supports technicians and engineers in planning and implementing economic processes (manufacturing processes, development processes in R&D, sales processes, procurement processes),
Innovative BI Solutions as the Basis for a Successful Transformation to Utility 4.0
NASA Astrophysics Data System (ADS)
Phillipp, Daniel; Ebert, Sebastian
For a successful transformation from a pure energy supplier to an energy service provider, innovative business intelligence solutions will be necessary and will play a central role. It is first of all essential to know the challenges and to meet them with suitable analyses. The basis for this is an architecture and approach that are coordinated and aligned with the company's strategic goals. Two examples illustrate how a holistic approach can optimize operational processes even with diverse data and high complexity, and how advanced analytics can contribute to business success in the future.
NASA Astrophysics Data System (ADS)
Kramer, Florian
To protect occupants in frontal collisions, about 90% of today's passenger cars are equipped with airbags on the driver's side and about 70% on the front passenger's side, whereas side airbags protecting the head and thorax of occupants in side collisions are present in only about 40 to 50% [1]. Further protective measures such as foot and rear-seat airbags are in the development stage; their use in series production is disputed and will prevail, if at all, only in individual cases. Figure C3-1 shows airbags that are found today as standard equipment in passenger cars.
Milestones in the Study of Compact Objects
NASA Astrophysics Data System (ADS)
Camenzind, Max
Compact objects have, on the one hand, a very high density; on the other, they are characterized by the fact that no nuclear reactions can take place in their interior any more. For this reason, unlike ordinary stars, they can no longer withstand gravity with the pressure of a thermal gas. In white dwarfs and neutron stars, gravity is opposed by the quantum pressure of an electron gas or a neutron fluid, respectively. Such a gas consists of electrons or neutrons that have been compressed into their lowest energy levels. The resulting high kinetic energy of the fermions produces the so-called quantum pressure.
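The "quantum pressure" described above can be made quantitative. For a cold, non-relativistic electron gas of number density $n_e$, the standard degeneracy-pressure expression (added here as a textbook illustration, not from the abstract) is:

```latex
% Non-relativistic electron degeneracy pressure
P_{\mathrm{deg}} = \frac{(3\pi^{2})^{2/3}}{5}\,\frac{\hbar^{2}}{m_e}\,n_e^{5/3}
```

Note that $P_{\mathrm{deg}}$ does not depend on temperature, which is why it can support a white dwarf even as the star cools; for neutron stars the analogous relation holds with $m_e$ replaced by the neutron mass $m_n$ (plus strong-interaction corrections).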
NASA Astrophysics Data System (ADS)
Flügge, Jens; Köning, Rainer; Schötka, Eugen; Weichert, Christoph; Köchert, Paul; Bosse, Harald; Kunzmann, Horst
2014-12-01
The paper describes recent improvements of the Physikalisch-Technische Bundesanstalt's (PTB) reference measuring instrument for length graduations, the so-called nanometer comparator, intended to achieve a measurement uncertainty in the domain of 1 nm for lengths up to 300 mm. The improvements are based on the design and realization of a new sample carriage, integrated into the existing structure, and on the optimization of the coupling of this new device to the vacuum interferometer, which provides the length measuring range of approximately 540 mm with sub-nm resolution. First measurement results of the enhanced nanometer comparator are presented and discussed; they show the improved measuring capabilities and verify the step toward the sub-nm accuracy level.
Magnetoseed - Vascular Tissue Engineering
NASA Astrophysics Data System (ADS)
Perea Saavedra, Héctor; Methe, Heiko; Wintermantel, Erich
At present, cardiovascular diseases, above all arteriosclerosis of the coronary and cerebral vessels, account for 38% of all deaths in North America and are the most frequent cause of death in European men under 65 years of age and the second most frequent cause of death in women [4]. It is predicted that within the next 10-15 years cardiovascular diseases and their complications will become the most frequent cause of death worldwide. This is a consequence, on the one hand, of the rising prevalence of cardiovascular diseases in Eastern Europe and increasingly in the developing countries, and on the other hand of the continuously rising incidence of obesity and diabetes mellitus in Western countries.
Reich-Schupke, Stefanie; Schmeller, Wilfried; Brauer, Wolfgang Justus; Cornely, Manuel E; Faerber, Gabriele; Ludwig, Malte; Lulay, Gerd; Miller, Anya; Rapprich, Stefan; Richter, Dirk Frank; Schacht, Vivien; Schrader, Klaus; Stücker, Markus; Ure, Christian
2017-07-01
The present revised guideline on lipedema was compiled and funded under the lead management of the German Society of Phlebology (Deutsche Gesellschaft für Phlebologie, DGP). Its contents are based on a systematic literature search and on the consensus of eight medical societies and professional associations. The guideline contains recommendations on the diagnosis and therapy of lipedema. The diagnosis is to be made on the basis of medical history and clinical findings. Characteristic is a circumscribed, symmetrically localized increase of subcutaneous fat tissue on the extremities with a marked disproportion to the trunk. In addition, there are edema, a tendency to hematoma formation, and increased painfulness of the affected parts of the body. Further instrument-based examinations have so far been reserved for special questions. The disease is chronically progressive, with an individually variable and unpredictable course. The therapy consists of four pillars, which should be combined individually and adapted to the current symptoms: complex decongestive therapy (manual lymphatic drainage, compression therapy, exercise therapy, skin care), liposuction and plastic-surgical interventions, nutrition and physical activity, and, if necessary, additive psychotherapy. Surgical measures are indicated in particular when symptoms persist despite consistently performed conservative therapy, or when the findings and/or symptoms progress. Morbid obesity accompanying the lipedema should be addressed therapeutically before liposuction. © 2017 The Authors | Journal compilation © Blackwell Verlag GmbH, Berlin.
Use of Molecular Methods for Starter Cultures
NASA Astrophysics Data System (ADS)
Ehrmann, Matthias A.; Pavlovic, Melanie
Starter cultures are microorganisms (bacteria, yeasts, molds) that are added to raw materials of plant or animal origin to change their chemical composition in a targeted way. They serve essentially for flavor formation, structural modification, and preservation of foods, and are selected on the basis of special functional properties. Starter cultures are usually added at relatively high cell counts in the form of pure or mixed cultures. The microorganisms used are as numerous as the resulting products, ranging from the fermentation of dairy products, meat, and vegetables by lactic acid bacteria, through vinegar production, to the use of yeasts in the brewing and wine industries. From this also follows the growing importance of fast and reliable methods for taxonomic identification, but also for characterizing the genetic potential of the respective starter cultures.
S2k guideline on the use of preparations for local application to the skin (topicals).
Wohlrab, Johannes; Staubach, Petra; Augustin, Matthias; Eisert, Lisa; Hünerbein, Andreas; Nast, Alexander; Reimann, Holger; Strömer, Klaus; Mahler, Vera
2018-03-01
This guideline is addressed to residents and specialists in dermatology as well as to payers and political decision-making bodies. The guideline was compiled in a formal consensus process (S2k) by dermatologists with the involvement of pharmacists. It presents general aspects of pharmacokinetics and of regulatory terminology. Recommendations are given on the indications for extemporaneous (magistral) formulations and on their quality assurance. The importance of the galenic vehicles and the problems of substituting different vehicles for one another are presented. The guideline includes criteria for selecting an adequate vehicle as well as specific aspects of therapy planning. It gives recommendations on the management of intolerance to components of the vehicles or to excipients. © 2018 The Authors | Journal compilation © Blackwell Verlag GmbH, Berlin.
Report on the Evaluation Results of the Course "Mechanic Undercarriage Leopard 2"
1998-01-14
Microdata and Statistical Analysis Methods
NASA Astrophysics Data System (ADS)
Hujer, Reinhard
With the increasing availability of ever larger cross-sectional and longitudinal data sets for individuals, households, and firms, as well as their linkages, microeconometric research has developed rapidly in recent years. This holds from a methodological as well as from an empirical, application-oriented point of view. Microdata and microeconometric approaches serve to take up current, policy-relevant questions, analyze them, and provide well-founded policy recommendations, for example in labor market and social policy, financial analysis, and marketing research. The German Statistical Society (Deutsche Statistische Gesellschaft, DStatG) and its members have continuously dealt with the further development of microeconometric methodology and its empirical applications in the society's committees and general meetings. Numerous publications by members of the DStatG have contributed decisively to the critical discourse and to scientific progress in this field.
NASA Astrophysics Data System (ADS)
Schiwietz, Gregor; Klaumünzer, Siegfried; Mahnke, Heinz-Eberhard
2007-03-01
This NIM-B issue contains the Proceedings of the 22nd International Conference on Atomic Collisions in Solids (ICACS-22) held in the main building of the Technische Universität Berlin (Strasse des 17.Juni 135, 10623 Berlin, Germany) from the 21st until the 26th of July 2006.
A Course on Reconfigurable Processors
ERIC Educational Resources Information Center
Shoufan, Abdulhadi; Huss, Sorin A.
2010-01-01
Reconfigurable computing is an established field in computer science. Teaching this field to computer science students demands special attention due to limited student experience in electronics and digital system design. This article presents a compact course on reconfigurable processors, which was offered at the Technische Universität Darmstadt,…
Motivating First-Year University Students by Interdisciplinary Study Projects
ERIC Educational Resources Information Center
Koch, Franziska D.; Dirsch-Weigand, Andrea; Awolin, Malte; Pinkelman, Rebecca J.; Hampe, Manfred J.
2017-01-01
In order to increase student commitment from the beginning of students' university careers, the Technische Universität Darmstadt has introduced interdisciplinary study projects involving first-year students from the engineering, natural, social and history, economics and/or human sciences departments. The didactic concept includes sophisticated…
Science and Technology Libraries Section. Special Libraries Division. Papers.
ERIC Educational Resources Information Center
International Federation of Library Associations, The Hague (Netherlands).
Papers on science and technology library and information services presented at the 1982 International Federation of Library Associations (IFLA) conference include: (1) "The Central Subject Libraries of the Federal Republic of Germany--For Example: The Technische Informationsbibliothek Hannover" by Gerhard Schlitt and Jobst Tehnzen; (2)…
Extrasolar Moons - Brave New Worlds?
NASA Astrophysics Data System (ADS)
Heller, René
2013-10-01
While around 950 planets outside the solar system have now been found, the detection of extrasolar moons is still pending. Recent studies show that with today's technology this is possible for the first time.
Physics Yesterday and Today: Visualization with the Schlieren Method
NASA Astrophysics Data System (ADS)
Heering, Peter
2006-07-01
The name of the Austrian scientist Ernst Mach is still associated today with the speed of sound. This distinction results from Mach's investigations of how projectiles move through the air at supersonic speed. Especially in recent times, the application of such methods has experienced a revival through technical modifications.
Sentinel lymph node biopsy of melanoma using indocyanine green and the "FOVIS" system.
Göppner, Daniela; Nekwasil, Stephan; Jellestad, Anne; Sachse, Alexander; Schönborn, Karl-Heinz; Gollnick, Harald
2017-02-01
The detection of metastatic infiltrates in the sentinel lymph node (SLN) is considered an essential prognostic factor in melanoma. As an alternative to the dye method with patent blue, which complements the gold standard of SLN biopsy (SLNB) by radiocolloid, fluorescence-optical imaging using indocyanine green (ICG) and a near-infrared (NIR) camera system has been reported. Compared with the conventional method, the value of the ICG/NIR procedure was examined as a function of the patient's body mass index (BMI) and of the ICG concentration with respect to visualization of the lymphatic drainage and of the SLN. In ten patients, SLNB was performed using technetium-99m, patent blue, and ICG. Fluorescence imaging of the lymphatic vessels and the SLN was carried out in real time with the NIR camera system "FOVIS". Depending on the image quality achieved, ICG was applied intracutaneously at a dose of 0.25 mg to 2.5 mg. Nine of the ten SLNs were identified fluorescence-optically (90%), all ten radioactively (100%), and only eight (80%) by ICG green staining or patent blue marking. One SLN was visualized transdermally (10%). In correlation with BMI, higher ICG amounts, up to 2.5 mg intracutaneously in total, were advantageous for visualizing the lymphatic vessels. SLN fluorescence marking with the ICG/NIR camera system "FOVIS" is a safe alternative to the dye method with patent blue, complementing the radiocolloid method with technetium-99m. Further studies on the optimal ICG dose and on transdermal imaging in relation to BMI are needed. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Labisch, Susanna
In practice, design and manufacturing are carried out almost exclusively with computer support. With this use of computers in design (CAD, computer-aided design) and manufacturing (CAM, computer-aided manufacturing), the technical drawing appears to be losing importance, since communication between the design and manufacturing departments can take place primarily through the exchange of digital data.
"Crafts and Technology" and "Technical Education" in Austria
ERIC Educational Resources Information Center
Seiter, Josef
2009-01-01
In Austria, the syllabus for "Technisches Werken/Crafts and Technology" for all types of school in general education was issued more than 30 years ago. The authors believed that it might lay the foundations for technical literacy. The paper is about how the situation of the subject and, with it, technical education has developed since…
ERIC Educational Resources Information Center
Stratigakos, Despina
2007-01-01
This article reconstructs women's entry into the architecture classrooms of Germany's "Technische Hochschulen," which were, and remain, the nation's primary institutions for training architects. Created in the 1860s and '70s to supply an industrializing nation with well-educated engineers and building officials, these elite colleges…
NASA Astrophysics Data System (ADS)
Kostmann, Dirk
Standardization is an indispensable part of daily life. One encounters standards in all areas of life; the activities range from specifications for child car seats, through implants for joint replacement, to screw sizes and methods for optimizing companies.
On the Problem of University Reform in Spain: Some Selected Data (Zum Problem der Hochschulreform in Spanien: Einige ausgewählte Daten).
ERIC Educational Resources Information Center
Val, Jose Cajide; Philipp, Rita Radl; Castro, Ana Porto
1998-01-01
Investigates the teaching, research, and management entailed in four new degree programs--physics, agricultural engineering, agricultural food-processing technology, and pharmacy courses--at Spain's University of Santiago de Compostela. Reports students' opinions of reforms in these courses, revealing dissatisfaction with facilities for practical…
Fundamentals of Graphical Representation
NASA Astrophysics Data System (ADS)
Döring, Peter
According to DIN 6774 Part 1, a technical drawing must be prepared in such a way that it is clear and unambiguous, remains legible even at reduced scale, can be reproduced inexpensively, and can be archived permanently. For this purpose, suitable paper and matching drawing instruments are required. Today, drawings can be produced with appropriate computer programs.
Company Decontamination Systems (Compagnieontsmettingssystemen)
1998-06-01
TNO-rapport PML 1998-A108, Compagnieontsmettingssystemen; TNO Prins Maurits Laboratorium, Lange Kleiweg 137, Postbus 45, 2280 AA Rijswijk. Cover-page fragment: the laboratory is part of the TNO Defence Research group, which also comprises the TNO Physics and Electronics Laboratory and TNO Human Factors (Technische Menskunde), within the Netherlands Organisation for Applied Scientific Research (TNO).
The Technical Information Library: TIB
NASA Technical Reports Server (NTRS)
Rosemann, Uwe
1994-01-01
The Technische Informationsbibliothek Hannover (TIB) is the German national central library for all areas of technology and related sciences, especially chemistry, computer science, mathematics, and physics. The TIB acquires and makes available a comprehensive collection of conventional and non-conventional literature, especially foreign material, with particular emphasis on specialized new publications which are difficult to obtain or in difficult languages.
ERIC Educational Resources Information Center
Hollinshead, Graham
2006-01-01
This study is set in the rapidly changing higher educational environment that has ensued in Serbia and Montenegro in the post-Milosevic era. Its primary focus is a "Training Trainers" initiative, mounted by the GTZ (Deutsche Gesellschaft für Technische Zusammenarbeit / Society for Technical Co-operation), designed to upgrade the teaching…
Toekomstige Radiocommunicatie in OVO (Soon-to-be Radiocommunication in OVG)
2004-12-01
Report TD04-0463; TNO, Brassersplein 2, Delft, The Netherlands. Sponsoring agency: KCenGM, Prins Bernhardkazerne, Barchman Wuytierslaan 198, Amersfoort, The Netherlands. Text in Dutch. Recoverable contents fragments: operational-technical criteria; Wireless Local Area Networks (WLANs) and military…
Theoretische Konzepte der Physik (Theoretical Concepts in Physics)
NASA Astrophysics Data System (ADS)
Longair, Malcolm S.; Simon, B.; Simon, H.
"This is not a textbook of theoretical physics, nor a compendium of the history of physics …, but rather a quite demanding collection of historical miniatures on the past of theoretical physics, its 'finest hours', if you will. Free from the obligation to present something exhaustive, the author achieves something rare: he uncovers a 'living' approach to the edifice of ideas of modern physics, … showing how physics arises in practice… As a vehicle for his intentions the author uses historical case studies, seven in all. From them he extracts what he considers instructive, taking care to avoid mathematical anachronisms wherever possible… As a student I would have wished for these astute essays on the genesis of our present physical world view. They are original, didactically clever, and are not embarrassed to speak of the fascination that … emanates from physics. Needless to say, they neither wish nor are able to replace a thorough 'conventional' course of study, but they can encourage one to pursue it." Astronomische Nachrichten (on the English edition)
Quantensprung Digitalisierung - Energiewirtschaft im 21. Jahrhundert (Quantum Leap Digitalization: The Energy Industry in the 21st Century)
NASA Astrophysics Data System (ADS)
Thyen, Elmar
Without comprehensive digitalization of the energy industry, the energy transition will remain piecemeal. The historically evolved energy supply, driven by hundreds of large fossil-fuel power plants, has changed radically over the past 15 years through the addition of more than one million decentralized generating units. Digital technology is indispensable for balancing load and generation, but also for building up new business areas. Its possibilities appear almost unlimited, and its role will become increasingly important in a society that will in future be almost fully electrified. New providers are pushing into the market and putting the traditional energy industry under pressure. Energy suppliers that do not face up to this change risk being left behind. For now, integrated utilities and municipal utilities are still protected by far-reaching regulatory requirements; examples from other industries show, however, that digitalization is capable of overriding regulatory mechanisms. At the same time, tight regulation and a misconceived notion of data protection in Germany prevent the development of new business models that would make sense both for the energy sector and for the economy as a whole.
ERIC Educational Resources Information Center
Cortina, Regina
2010-01-01
Working in Latin America for several decades to address the educational needs of poor and indigenous groups, the GTZ (Gesellschaft für Technische Zusammenarbeit) has helped to develop the knowledge base of intercultural bilingual education. The goal of this article is to analyze Germany's impact from the mid-1970s to the present as the GTZ has…
The Planck-Balance—using a fixed value of the Planck constant to calibrate E1/E2-weights
NASA Astrophysics Data System (ADS)
Rothleitner, C.; Schleichert, J.; Rogge, N.; Günther, L.; Vasilyan, S.; Hilbrunner, F.; Knopf, D.; Fröhlich, T.; Härtig, F.
2018-07-01
A balance is proposed which allows the calibration of weights over a continuous range from 1 mg to 1 kg using a fixed value of the Planck constant, h. This so-called Planck-Balance (PB) uses the physical approach of Kibble balances, which link mass to the Planck constant. With the PB, calibrated mass standards are no longer required during weighing, because all measurements are traceable via electrical quantities to the Planck constant, the metre and the second. This enables a new class of balances after the expected redefinition of the SI units at the end of 2018. In contrast to many science-oriented developments, the PB is designed for robust, everyday use. Two balances will therefore be developed, PB2 and PB1, which will achieve relative measurement uncertainties comparable to the accuracies of class E2 and E1 weights, respectively, as specified in OIML R 111-1. The balances will be developed in a cooperation between the Physikalisch-Technische Bundesanstalt (PTB) and the Technische Universität Ilmenau in a project funded by the German Federal Ministry of Education and Research.
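The traceability chain described in the abstract can be illustrated with the core Kibble-balance relation. This is a simplified sketch, not the PB's actual measurement model; the function name and all numerical values are invented for illustration.

```python
# Simplified sketch of the Kibble-balance relation underlying the PB.
# Velocity mode: moving the coil at speed v induces the voltage U = B*l*v.
# Force mode: the weight m*g is balanced by the coil force B*l*I.
# Eliminating the geometry factor B*l links mass to electrical quantities:
#     m = U * I / (g * v)
# where U and I are themselves traceable to the Planck constant h via the
# Josephson and quantum-Hall effects.

def kibble_mass(U, I, g, v):
    """Mass (kg) from voltage U (V), current I (A), local gravity g (m/s^2)
    and coil velocity v (m/s)."""
    return U * I / (g * v)

# Invented example values: 1 V induced at 2 mm/s, 20 mA balancing current.
m = kibble_mass(U=1.0, I=0.02, g=9.81, v=0.002)  # ~ 1.019 kg
```

In a real instrument the two modes are measured separately, precisely so that the hard-to-characterize geometry factor B*l cancels out.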
Normzahlen, Toleranzen, Passungen (Preferred Numbers, Tolerances, Fits)
NASA Astrophysics Data System (ADS)
Böge, Gert; Böge, Wolfgang
Above all for cost reasons, it is sensible to restrict oneself to preferred numbers when specifying dimensions of any kind (frame sizes, rotational speeds, torques, power ratings, pressures, etc.). For this purpose a geometrically stepped sequence of numbers is used (see the Mathematics part). Fig. 40.1 shows that with geometric stepping the values are finely graded in the lower range and coarsely graded in the upper range. This makes sense not only technically.
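The geometric stepping behind such preferred numbers can be sketched in a few lines; this is an illustrative sketch of the Renard-series construction, with a helper function invented here for demonstration.

```python
# Sketch of the geometric stepping behind preferred numbers (Renard series).
# An R-n series steps by the constant factor 10**(1/n); the standardized
# values (e.g. R10: 1, 1.25, 1.6, 2, 2.5, 3.15, 4, 5, 6.3, 8, 10) are these
# raw values rounded for practical use.

def renard_series(n, decades=1):
    """Raw geometric values of an R-n preferred-number series."""
    return [10 ** (k / n) for k in range(n * decades + 1)]

r10 = renard_series(10)
# The ratio of neighbours is constant (10**0.1, about 1.259), so the absolute
# steps are fine at the low end (~0.26) and coarse at the top (~2.06) of a
# decade -- exactly the behaviour described above.
```

Constant ratios also mean that products and quotients of preferred numbers are again preferred numbers, which is why graded series of torques, speeds and powers stay consistent with one another.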
Transition Control with Dielectric Barrier Discharge Plasmas
2010-10-01
AFRL-AFOSR-UK-TR-2011-0007, Transition Control with Dielectric Barrier Discharge Plasmas. Principal investigator: Cameron Tropea, Technische… Contract FA8655-08-1-3032, Grant 08-3032, Program Element 61102F. Abstract fragment: the objective is to control natural boundary-layer transition through the use of plasma actuators; transition delay or even suppression has its merits not only in…
Lokalisatie Maskergelaatslekkage (Localization of Face-Seal Leak Sites)
2004-03-01
TNO-rapport PML 2004-A12, Lokalisatie Maskergelaatslekkage; TNO Prins Maurits Laboratorium, Lange Kleiweg 137, Rijswijk. Distribution Statement A: Approved for Public Release. Cover-page fragment: the laboratory is part of the TNO Defence Research group, which also comprises the TNO Physics and Electronics Laboratory and TNO Human Factors (Technische Menskunde), within the Netherlands Organisation for Applied Scientific Research (TNO).
Ein statistisches Modell zum Einfluß der thermischen Bewegung auf NMR-Festkörperspektren (A Statistical Model for the Influence of Thermal Motion on Solid-State NMR Spectra)
NASA Astrophysics Data System (ADS)
Ploss, W.; Freude, D.; Pfeifer, H.; Schmiedel, H.
A statistical model for the influence of thermal motion on the NMR line shape is presented, which describes the narrowing of solid-state spectra with increasing temperature. The model starts from the assumption that, after a change of position of a nucleus due to thermal motion, any resonance frequency can be adopted with the probability given by the line shape of the rigid-lattice solid-state spectrum. Using the solid-state Gaussian line as an example, the difference from the well-known model of Anderson and Weiss is demonstrated.
Oberflächenstrukturierung metallischer Werkstoffe, z. B. für Stents (Surface Structuring of Metallic Materials, e.g. for Stents)
NASA Astrophysics Data System (ADS)
Stöver, Michael; Wintermantel, Erich
A topological surface modification of metallic implants can be useful for various reasons. In general, two main goals can be distinguished. First, surfaces serve to promote particular cell reactions. Applications range from very rough surfaces, in cases where good integration of a permanent implant into the tissue is desired, to smoothly polished surfaces; the latter are used primarily where the implant is in direct contact with blood. An example requiring high roughness (Rz > 100 μm) is the stems of joint implants, usually made of titanium [1, 2]. The authors proposed roughening titanium and stainless-steel stents analogously to the roughening of hip prosthesis stems in order to achieve even better biocompatibility [7, 8, 9]. Very smooth surfaces, typically with Rz values below 0.1 μm, are required, for example, for heart valve prostheses and the inner side of vascular stents. Intermediate roughnesses are often used for temporary implants, into which tissue is to grow in a controlled manner without forming a permanent bond. Surface topographies must also be set very precisely for implants in highly sensitive areas such as the brain region: good anchoring in the tissue is needed to prevent the implant from slipping, yet excess cell proliferation must be avoided to prevent ingrowth into sensitive regions. Second, surface microstructures are used to load implants with active agents that are to be released locally over a defined period. The most important agents here are antibiotics and anti-inflammatory drugs, as well as antiproliferative agents in the cardiovascular field.
1998-01-01
TNO-rapport PML 1997-A81, Eerste NATO/SIBCA-oefening in monstername van chemische strijdmiddelen (First NATO/SIBCA exercise in sampling of chemical warfare agents); TNO Prins Maurits Laboratorium, Lange Kleiweg, Rijswijk. 53 pages (incl. appendices, excl. RDP and distribution list), 4 appendices. The laboratory is part of the TNO Defence Research group, which also comprises the TNO Physics and Electronics Laboratory and TNO Human Factors (Technische Menskunde).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturrock, P. A.; Fischbach, E.; Jenkins, J.
2014-10-10
We present the results of an analysis of measurements of the beta-decay rates of Ag 108, Ba 133, Eu 152, Eu 154, Kr 85, Ra 226, and Sr 90 acquired at the Physikalisch-Technische Bundesanstalt from 1990 through 1995. Although the decay rates vary over a range of 165 to 1 and the measured detector current varies over a range of 19 to 1, the detrended and normalized count-rate measurements exhibit a sinusoidal annual variation with amplitude in the small range 0.068%-0.088% (mean 0.081%, standard deviation 0.0072%, a rejection of the zero-amplitude hypothesis) and phase-of-maximum in the small range 0.062-0.083 (January 23 to January 30). In comparing these results with those of other related experiments that yield different results, it may be significant that this experiment, at a standards laboratory, seems to be unique in using a 4π detector. These results are compatible with a solar influence, and do not appear to be compatible with an experimental or environmental influence. It is possible that Ba 133 measurements are also subject to a non-solar (possibly cosmic) influence.
ERIC Educational Resources Information Center
Weisgerber, Leo
1972-01-01
Discussion of two basic conceptions: Wilhelm von Humboldt's idea of language as "energeia" existing within and without man, and Noam Chomsky's idea of language generated by the speaker according to an innate apparatus. Revised version of lectures presented at the University of Bonn, West Germany, in August 1971. (RS)
Ein Erfahrungsbericht zum Thema Interaktion (A Report of Experience on the Theme of Interaction)
ERIC Educational Resources Information Center
Schroedter-Albers, Henning
1977-01-01
Describes work at a Goethe Institute branch with two groups of foreign students learning German, in which radio news, after preparatory work by the teacher, was used to induce question-and-answer dialogue. Many types of teaching aids and exercises used are described, including three-way conversation. (Text is in German.) (IFS/WGA)
MODEL TESTS ON BALL LIGHTNING; Modellversuche zum Kugelblitz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nauer, H.
1959-10-31
Ball lightning phenomena and properties gleaned from a collection of observations are examined. The observations of a diffusion combustion of minute gas admixtures in air are thoroughly examined because they display the greatest resemblance to natural ball lightning. A comparison of properties with the qualities of the luminous clouds during diffusion combustion shows very good agreement. (W.D.M.)
Proceedings of the Second European Conference on Cognitive Modelling
1998-04-04
Recoverable text fragments: a student record contains a field for the student's Social Security Number (SSN), which is displayed on the screen; to retrieve information about this… Reference fragments: Chandler, M. J., & Boyes, M. (1982). Social-Cognitive Development. In B. B. Wolman (Ed.), Handbook of… Cambridge, Massachusetts: MIT Press; and "Effecten van vermoeidheid als functie van soort taak en sociale omgeving" (Effects of fatigue as a function of task type and social environment), TNO-rapport TM 1994 A-9, TNO Technische Menskunde, Postbus 23, 3769 ZG.
Payload specialist Wubbo Ockels in new sleeping restraint
1985-10-30
61A-08-018 (30 Oct.-6 Nov. 1985) --- Wubbo J. Ockels, a Dutch scientist representing the European Space Agency (ESA), crawls from a unique sleeping restraint in the D-1 science module. Unlike the other crewmembers on STS 61-A, Ockels did not sleep in the middeck of the Challenger. Ockels proposed this sleeping-facility concept, and the actual hardware was developed by TNO (Nederlandse Organisatie voor Toegepast Natuurwetenschappelijk Onderzoek), a Dutch government organization.
2007-11-01
TWO-WAY SATELLITE TIME AND FREQUENCY TRANSFER (TWSTFT) INCLUDING A TROPOSPHERE DELAY MODEL. D. Piester, A. Bauch, Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100… Abstract fragments: two-way satellite time and frequency transfer (TWSTFT) is one of the leading techniques for remote comparisons of atomic frequency standards… at the nanosecond level. These achievements are due to the fact that many delay variations of the transmitted signals cancel out in TWSTFT because of the…
Sizing Determination Final Report
1988-02-01
OCR-garbled fragments of anthropometric survey data: a subject record (Name: ERIC WHEATLEY; Subject No.: 4; Sex: M; Race: BLACK; Age…) and measurements such as the minimum frontal arc and bitragion arcs, taken with a tape and marker tool.
NASA Astrophysics Data System (ADS)
Plaßmann, Wilfried
To transmit speech and music over larger distances, the audible sound is converted by a microphone into a proportional electrical signal and transmitted to the receiver either wirelessly (e.g. radio broadcasting) or by wire (e.g. telephone or cable radio, often in combination with the wireless path). At the receiving end, a loudspeaker, for example, converts the electrical signal back into an acoustic signal.
Forschungspolitik: USA weisen zunehmend ausländische Physikstudenten ab (Research Policy: The USA Increasingly Turns Away Foreign Physics Students)
NASA Astrophysics Data System (ADS)
Bührke, Thomas
2003-11-01
In no other country in the world is the share of foreign students in physics greater than in the USA. Since the terrorist attacks of September 11, however, foreign students, particularly from China and the Middle East, have increasingly been turned away, to the detriment of both the students and the universities. http://www.aip.org/statistics
Standardization Today and Tomorrow
1998-05-01
Text fragments: …products and for occupational safety and health protection, into a majority rule, the elaboration and adoption of directives was additionally speeded up… Safety standards serve to protect life, health and material goods and so, for the field of technology, express the requirements laid… Press release 05-97: Entscheidung zum Arbeitsschutzmanagement (Decision on industrial safety management), http://www.din.de
NASA Astrophysics Data System (ADS)
Hahn, Hans Jürgen; Gutjahr, Simon
2014-09-01
In his commentary, Traugott Scheytt rules out faunistic bioindication via groundwater monitoring wells on methodological grounds. He also postulates that faunistic indication of hydrogeological relationships is impossible because of the animals' limited dispersal capability in porous aquifers, and he fundamentally questions the findings of our investigations at the Kaiserstuhl. In doing so, Mr Scheytt transfers his experience from abiotic hydrogeology directly to groundwater as a habitat. His argumentation takes into account neither the principles of ecology nor the current state of groundwater-ecological research. We maintain that, for the investigations at the Kaiserstuhl, our working hypothesis, the methods applied, and the interpretation of the results are appropriate to the research question and meet international scientific standards. For the reasons given above, we stand by our position: bioindication in groundwater works, and it offers excellent possibilities, not least for hydrogeology.
NASA Astrophysics Data System (ADS)
Schulz, Detlef
Sustainable societal development of industrial nations is possible only with a reliable and economical electrical energy supply. At the same time, this supply itself must be made so sustainable, i.e. environmentally compatible, that future generations are not hindered in their development. A secure, economical and environmentally compatible energy supply does not necessarily result from technical evolution alone, nor has it to date been demonstrably achievable with any single one of the known conversion technologies.
NASA Technical Reports Server (NTRS)
Rued, Klaus
1987-01-01
The requirements for fundamental experimental studies of the influence of free stream turbulence, pressure gradients and wall cooling are discussed. Under turbine-like free stream conditions, comprehensive tests of transitional boundary layers with laminar, reversing and turbulent flow increments were performed to decouple the effects of the parameters and to determine the effects during mutual interaction.
Evidence for Solar Influences on Nuclear Decay Rates
2010-07-01
Text fragments: …collected at Brookhaven National Laboratory (BNL) measuring 32Si and 36Cl, and 226Ra data collected at the Physikalisch-Technische Bundesanstalt… In what follows we present several arguments against a simplistic, systematic explanation of the BNL and PTB data fluctuations in terms of… any known seasonal environmental effect. (2) In both the BNL experiment, which studied 32Si and 36Cl in the same detector, and the CNRC (Children's…
Bewertung von Fahrzeuggeräuschen (Evaluation of Vehicle Noise)
NASA Astrophysics Data System (ADS)
Genuit, Klaus; Schulte-Fortkamp, Brigitte; Fiebig, André; Haverkamp, Michael
Countless features and properties matter in the perception and evaluation of an automobile. Some characteristics can be described in objective technical terms, such as engine specifications, top speed, torque, permissible payload, fuel consumption, etc. Other properties, however, elude a simple objective technical description. These include safety, overall perceived quality, design, ergonomics, comfort, haptics, driving dynamics and reliability, which are considerably more difficult to capture and describe objectively (Fig. 4.1).
RADIATION BIOLOGY AND ISOTOPE UTILIZATION IN APPLIED BOTANY (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glubrecht, H.
A short survey is given of the problems being studied in the Institute for Radiation Biology in the Technische Hochschule in Hannover. Extensive data are presented on a method of studying the effect of ionizing radiation on plant cells with the aid of UV microspectroscopy. From the change in UV absorption in ranges of a few mu , conclusions can be drawn as to the probability of primary ionizations in protein molecules and nucleic acids. (auth)
NASA Astrophysics Data System (ADS)
Sturrock, P. A.; Buncher, J. B.; Fischbach, E.; Gruenwald, J. T.; Javorsek, D.; Jenkins, J. H.; Lee, R. H.; Mattes, J. J.; Newport, J. R.
2010-12-01
Evidence for an anomalous annual periodicity in certain nuclear-decay data has led to speculation on a possible solar influence on nuclear processes. We have recently analyzed data concerning the decay rates of 36Cl and 32Si, acquired at the Brookhaven National Laboratory (BNL), to search for evidence that might be indicative of a process involving solar rotation. Smoothing of the power spectrum by weighted-running-mean analysis leads to a significant peak at frequency 11.18 year⁻¹, which is lower than the equatorial synodic rotation rates of the convection and radiative zones. This article concerns measurements of the decay rates of 226Ra acquired at the Physikalisch-Technische Bundesanstalt (PTB) in Germany. We find that a similar (but not identical) analysis yields a significant peak in the PTB dataset at frequency 11.21 year⁻¹, and a peak in the BNL dataset at 11.25 year⁻¹. The change in the BNL result is not significant, since the uncertainties in the BNL and PTB analyses are estimated to be 0.13 year⁻¹ and 0.07 year⁻¹, respectively. Combining the two running means by forming the joint power statistic leads to a highly significant peak at frequency 11.23 year⁻¹. We will briefly comment on the possible implications of these results for solar physics and for particle physics.
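The kind of spectral search the abstract describes can be sketched with a plain periodogram. This is a hedged illustration, not the authors' analysis code: the time series below is synthetic, and the modulation amplitude and noise level are invented.

```python
# Sketch: how a periodicity near 11.2 cycles/year shows up in the power
# spectrum of a (synthetic) decay-rate time series.
import numpy as np

t = np.arange(0, 10, 1 / 365.25)                 # 10 years of daily samples (in years)
rng = np.random.default_rng(0)
rate = 1 + 1e-3 * np.sin(2 * np.pi * 11.2 * t)   # invented modulation at 11.2 / year
rate += rng.normal(0, 1e-4, t.size)              # invented measurement noise

freqs = np.fft.rfftfreq(t.size, d=1 / 365.25)    # frequency axis in cycles/year
power = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
peak_freq = freqs[np.argmax(power)]              # lands near 11.2 / year
```

The published analyses go further (detrending, weighted running means, joint power statistics across datasets), but the underlying idea is this: a genuine periodic modulation concentrates power in a narrow frequency bin that stands well above the noise floor.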
Nanomeasuring and nanopositioning engineering
NASA Astrophysics Data System (ADS)
Jäger, G.; Hausotte, T.; Manske, E.; Büchner, H.-J.; Mastylo, R.; Dorozhovets, N.; Hofmann, N.
2006-11-01
The paper describes traceable nanometrology based on a nanopositioning machine with integrated nanoprobes. The operation of a high-precision, long-range, three-dimensional nanopositioning and nanomeasuring machine (NPM-Machine) having a resolution of 0.1 nm over the positioning and measuring range of 25 mm x 25 mm x 5 mm is explained. An Abbe offset-free design of three miniature plane-mirror interferometers and a new concept for compensating systematic errors resulting from mechanical guide systems provide very small measurement uncertainties. The NPM-Machine has been developed by the Institute of Process Measurement and Sensor Technology of the Technische Universitaet Ilmenau and manufactured by SIOS Messtechnik GmbH Ilmenau. The machines are operating successfully in several German and foreign research institutes, including the Physikalisch-Technische Bundesanstalt (PTB), Germany. The integration of several optical and tactile probe systems and nanotools makes the NPM-Machine suitable for various tasks, such as large-area scanning probe microscopy, mask and wafer inspection, nanostructuring, biotechnology and genetic engineering, as well as measuring mechanical precision workpieces, precision machining and engineering new materials. Various probe systems have been developed and integrated into the NPM-Machine. The measurement results of a focus sensor, a metrological AFM, a white-light sensor, a tactile stylus probe and a 3D micro touch probe are presented. Single-beam, double-beam and triple-beam interferometers built into the NPM-Machine for measurements in six degrees of freedom are described.
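The value of the Abbe offset-free design mentioned above can be made concrete with a back-of-envelope estimate; this sketch is not from the paper, and the offset and tilt values are invented for illustration.

```python
# Back-of-envelope sketch of why an Abbe offset-free design matters:
# to first order, the Abbe error is (offset between measurement axis and
# probe point) * tan(guide tilt angle).
import math

def abbe_error(offset_m, tilt_rad):
    """First-order measurement error from an Abbe offset under a guide tilt."""
    return offset_m * math.tan(tilt_rad)

arcsec = math.pi / (180 * 3600)          # 1 arcsecond in radians
err = abbe_error(0.010, 1 * arcsec)      # invented: 10 mm offset, 1 arcsec tilt
# err is on the order of 5e-8 m (tens of nanometres) -- far above a 0.1 nm
# resolution, which is why the interferometer axes are arranged to intersect
# at the probe point.
```

With the offset reduced to zero, the first-order term vanishes and only much smaller second-order (cosine) errors remain.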
ERIC Educational Resources Information Center
Breitung, H. A.; And Others
1974-01-01
New placement procedure at Humboldt University includes interviews and placement tests. Interviews reveal response ability, tempo, pronunciation, comprehension, etc. The 60-minute test that follows is described and results discussed, as well as difficulty level and grading. Results: better grouping of students, less shifting, better work. (Text is…
Mobile Number Portability in Europe
2005-08-01
Text fragments: a bibliography entry, "…Anmerkungen zum Balassa-Samuelson-Effekt" (Remarks on the Balassa-Samuelson Effect), Nr. 3/2002, in: Stefan Reitz (ed.): Theoretische und wirtschaftspolitische Aspekte der internationalen… However, the argument is slightly more complex. Using a simple model with differentiated networks, Buehler and Haucap (2004) show that the incumbent's… The above arguments suggest that it is more difficult to gain market share in the presence of switching costs, as undercutting needs to be…
Unanimous Constitutional Consent and the Immigration Problem
2004-12-01
Text fragments: a bibliography entry, "…Wolf, EU-Erweiterung: Anmerkungen zum Balassa-Samuelson-Effekt" (EU Enlargement: Remarks on the Balassa-Samuelson Effect), Nr. 3/2002, in: Stefan Reitz (ed.): Theoretische und wirtschaftspolitische… the individualistic norm. Their main argument is that the contradiction between collective coercion and individual freedom cannot be dissolved at the… arguments against this way of reasoning, for there usually will be holes in that veil, so that it may be possible to draw conclusions from the past with…
ERIC Educational Resources Information Center
Gramberg, Anne-Kathrin; Heinze, Karin U.
1993-01-01
This article talks about the subjunctive of indirect speech, in which its important functions and meanings are depicted. An analysis of the instructional materials used in the first and second years of language study, followed by practical curriculum recommendations, demonstrates how this grammatical phenomenon can be established in an advanced…
ERIC Educational Resources Information Center
Hellwig, Karlheinz
1980-01-01
The questionnaire dealt with: earmarks of weaker students, parents' attitude, the teachers' training and working conditions, problems of individual instruction, time-frame, existence of professional associations, evaluation of professional conferences, and the use of media. Some evaluations are appended. (IFS/WGA)
PREFACE: 15th International Conference on the Strength of Materials (ICSMA-15)
NASA Astrophysics Data System (ADS)
Skrotzki, Werner; Oertel, Carl-Georg; Biermann, Horst; Heilmaier, Martin
2010-04-01
The 15th International Conference on the Strength of Materials (ICSMA 15) took place in Dresden, Germany, August 16-21, 2009. It belongs to the triennial series of ICSMA meetings with a long tradition, starting in 1967 - Tokyo, 1970 - Asilomar, 1973 - Cambridge, 1976 - Nancy, 1979 - Aachen, 1982 - Melbourne, 1985 - Montreal, 1988 - Tampere, 1991 - Haifa, 1994 - Sendai, 1997 - Prague, 2000 - Asilomar, 2003 - Budapest, 2006 - Xian. ICSMA 15 was hosted by the Dresden University of Technology, Institute of Structural Physics. Following the tradition of this conference series, it was the main focus of ICSMA 15 to promote and strengthen the fundamental understanding of the basic processes that govern the strength of materials. Nonetheless, it was the aim to forge links between basic research on model materials and applied research on engineering materials of technical importance. Thus, ICSMA 15 provided a forum for the presentation and discussion of research on the mechanical properties of all materials which are of interest to materials scientists and engineers from many different areas. The topics covered by ICSMA 15 were: 1.Atomistic and microstructural aspects of plastic deformation 2.Atomistic and microstructural aspects of fracture 3.Adhesion and interfacial strength 4.Cyclic deformation and fatigue 5.High temperature deformation and creep 6.Mechanical properties related to phase transformations 7.Large and severe plastic deformation 8.Nano- and microscale phenomena in plasticity and fracture 9.Strength issues in biological systems and biomaterials 10.Mechanical behaviour of glasses and non-crystalline solids 11.Multiscale modelling and experimental validation 12.Insight through new experimental methods 13.Other new developments related to the field While there was large interest in the new topics 7 and 8, contributions to topic 9 were much less than expected. ICSMA 15 attracted 352 scientists from 30 countries with one fourth of the participants being students. 
This is a very good ratio, showing that we could attract the young generation. There were 272 oral and 135 poster presentations. It is our pleasure to thank the members of the International ICSMA Committee for their valuable help, especially for proposing and choosing the 18 plenary speakers. 187 papers were submitted for publication in the proceedings; 167 were accepted after reviewing. We would like to express our thanks to all referees for their efficient and prompt efforts. We particularly acknowledge support from the German Research Foundation (DFG), the Saxon Ministry for Science and Art and the City of Dresden. We are also grateful for industrial support from PLANSEE Metall GmbH, Goodfellow GmbH, MTS Systems GmbH, Nagoya University and IOP Publishing. Finally we thank all members of the Local Organizing Committee, Intercom Dresden and Conwerk / Laboratory Ten for the excellent organization of ICSMA 15 and the very pleasant collaboration. During the conference the International ICSMA Committee decided to convene the next conference in Bangalore, India, in 2012. We wish the organizers of ICSMA 16 great success and look forward to meeting you in Bangalore. Werner Skrotzki (Technische Universität Dresden) Carl-Georg Oertel (Technische Universität Dresden) Horst Biermann (Technische Universität Bergakademie Freiberg) Martin Heilmaier (Technische Universität Darmstadt) Guest Editors Dresden, June 23, 2009 (* Corresponding author; e-mail address: werner.skrotzki@physik.tu-dresden.de)
1989-12-09
OCR fragments: …and anodized aluminum to suppress emission on the remainder of the cathode… stability of the prebunching cavities is a difficult constraint… by means of a thick aluminum anode plate, and a thin stainless-steel anode plate for field shaping… a wiggler has been utilized… S. Yoshimori and M. Kawamura; K. Schünemann, Technische Universität Hamburg-Harburg.
Digitalisierung in der Energiewirtschaft - empirische Untersuchung und Wertschöpfungskette (Digitalization in the Energy Industry: Empirical Study and Value Chain)
NASA Astrophysics Data System (ADS)
Dell, Timo
The energy industry has always used digital structures to implement its processes. However, the newly adopted political regulatory framework (the Act on the Digitalization of the Energy Transition) and the rapid advance of technological structures open up innovative possibilities that extend and diversify the value chain, enabling energy utilities (EVU) to expand existing business areas or develop new ones. The digital (r)evolution is not a purely technical undertaking; it is also, in particular, an internal strategic and cultural challenge for companies.
2015-06-01
…structure at the micro- and nanoscale. In other words, development of nanocomposites, multilayers, and superlattices via appropriate design and control of… …C-B and C-N bonds as C-C and B-N bonds. Later, the same research group, based on first-principles total-energy and dynamic phonon calculations… …Vickers hardness values. Another research group employed an ab initio evolutionary algorithm to resolve the crystal structure of the observed…
ERIC Educational Resources Information Center
Ludwig, Peter H.
2003-01-01
Argues that the thesis of discrimination against girls in coeducation schools has been replaced by a belief that, during certain phases or in specific subjects, abandonment of coeducation would promote equal opportunities. Questions whether classic or recent surveys provide empirical evidence for this moderate skeptical attitude towards…
Ebenen des Verstehens: Überlegungen zu einem Verfahren zum Wurzelziehen
NASA Astrophysics Data System (ADS)
Winter, Martin
Particularly with children, we try to support the learning process in mathematics lessons through the use of hands-on materials. The working steps often serve to prepare or derive procedures, in the hope that the visualization will help pupils understand the underlying relationships better. What that understanding actually consists of when, in the end, the children successfully carry out a procedure is not immediately apparent.
ERIC Educational Resources Information Center
Lehner, H.; Weingartz, M.
The typical traits of a ready-made system as well as those of an individualized system are constituent characteristics of distance education. Within the framework set by these qualities and the extent to which they differ from one another, one seeks to achieve the fundamental educational aim of distance students' autonomy. The way institutions see…
Middle-Class Consensus, Social Capital and the Mechanics of Economic Development
2005-01-01
…Michael, Social Capital and Regional Mobility, Nr. 4/2002. * Schäfer, Wolf, EU-Erweiterung: Anmerkungen zum Balassa-Samuelson-Effekt, Nr. 3/2002… …variations, while the argument of both the present paper and most of the previous literature on inequality and growth refers to long-run growth effects of… …Diskussionsbeiträge zur Finanzwissenschaft: Josten, Stefan, Crime, Inequality, and Economic Growth. A Classical Argument for Distributional Equality…
ERIC Educational Resources Information Center
Koppensteiner, Juergen
1978-01-01
Nearly all German textbooks currently used in the USA either entirely fail to mention Austria, or else portray it by means of myths, cliches (Apfelstrudel, Sachertorte, Gemuetlichkeit), and often with outright untruths. The American student gets a picture of Austria as an "underdeveloped Disneyland of Europe". (WGA)
ERIC Educational Resources Information Center
Bauer, Hans-Ludwig
1976-01-01
Reports on the introduction of video recorders at various branches of the Goethe Institute. The characteristics of video are compared with those of films, tapes and printed matter. Video's advantage is that through it authentic, unprepared material can be presented, and it provides strong motivation. (Text is in German.) (IFS/WGA)
ERIC Educational Resources Information Center
Zielsprache Englisch, 1976
1976-01-01
The phonetic symbols in the "Advanced Learners Dictionary" (Oxford University Press, London) are discussed critically in articles by L. Alfes, H. Arndt, E. Bauch, G. Dahlmann-Resing, W. Friedrich, E. Germer, B. Haycraft, H. P. Kelz. Reference is made to an earlier article "Neue Zeichen", by H. G. Hoffmann. (Text is in German.)…
ERIC Educational Resources Information Center
Bielefeld Univ. (West Germany).
The ten papers in this document were developed for a meeting prepared for the Third International Congress on Mathematical Education. Each paper is concerned with research from 1970-1975 related to the mathematical learning process. The first paper describes projects conducted in England on both content and process learning. The second paper…
Chirurgie angeborener Herzfehler
NASA Astrophysics Data System (ADS)
Schreiber, Christian; Libera, Paul; Lange, Rüdiger
Disturbances of embryonic development in the early phase of pregnancy can lead to malformations of the heart and vascular system. The incidence is 0.8-1% of all live-born children; in Germany, about 6,000 children are born with a heart defect every year (source: http://www.kompetenznetzahf.de). The spectrum ranges from simple defects with little effect on the cardiovascular system to very severe heart diseases that are fatal if left untreated. Advances in pediatric cardiology, cardiac surgery and anesthesia now allow more than 90% of patients to survive. Specialized prenatal diagnostics also make it possible to set the course early for potential therapeutic options. With regard to surgical therapy, however, it must be noted that a heart defect is either treated correctively or can only be "palliated". In the latter case, a medical intervention is performed that does not aim to restore normal body function but, adapted to the patient's physiological peculiarities, merely stabilizes and optimizes the patient's condition. This may be necessary, for example, in a non-correctable congenital malformation in which only one functional ventricle is present (e.g. hypoplastic left heart). Here, a prosthetic connection to the pulmonary circulation must be removed at a later stage.
ERIC Educational Resources Information Center
Endt, Ernst
This bibliography lists publications concerned with bilingual education and immersion programs and how they are used in and outside of Canada. In the beginning, an overview is provided of publications from related disciplines that have brought crucial recognition to the fields of bilingual and immersion education. These include: second and foreign…
Hey, Christiane
2018-04-01
Bock JM et al. Evaluation of the natural history of patients who aspirate. Laryngoscope 2017; 127: S1–S10. The clinical progression from aspiration to eventual pulmonary disorders is not fully understood. Recommendations on dietary modification, the severity of penetration and aspiration according to the PAS, and the etiology of the dysphagia may influence the time to the first pulmonary event as well as the overall survival of patients with VFS-documented, asymptomatic penetration and aspiration.
NASA Astrophysics Data System (ADS)
1981-04-01
The main topics discussed were related to nonparametric statistics, plane and antiplane states in finite elasticity, free-boundary-variational inequalities, the numerical solution of free boundary-value problems, discrete and combinatorial optimization, mathematical modelling in fluid mechanics, a survey and comparison regarding thermodynamic theories, invariant and almost invariant subspaces in linear systems with applications to disturbance isolation, nonlinear acoustics, and methods of function theory in the case of partial differential equations, giving particular attention to elliptic problems in the plane.
Optical-Fiber Power Meter Comparison Between NIST and PTB.
Vayshenker, I; Haars, H; Li, X; Lehman, J H; Livigni, D J
2003-01-01
We describe the results of a comparison of reference standards between the National Institute of Standards and Technology (NIST, USA) and the Physikalisch-Technische Bundesanstalt (PTB, Germany) at nominal wavelengths of 1300 nm and 1550 nm using an optical-fiber cable. Both laboratories used thermal detectors as reference standards. A novel temperature-controlled optical-trap detector was used as a transfer standard to compare the two reference standards. Measurement results showed differences of less than 1.5 × 10^-3, which is within the combined uncertainty of the two laboratories.
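The acceptance criterion described in this abstract - the measured difference between the two laboratories' standards being smaller than their combined uncertainty - can be sketched as follows. This is a minimal illustration only; the function names and the numerical values are assumptions for demonstration, not NIST/PTB data, and the quadrature combination assumes independent uncertainty contributions:

```python
import math

def combined_uncertainty(u_a: float, u_b: float, k: float = 2.0) -> float:
    """Combined expanded uncertainty of two independent standards,
    added in quadrature and scaled by a coverage factor k."""
    return k * math.sqrt(u_a**2 + u_b**2)

def agree(relative_difference: float, u_a: float, u_b: float, k: float = 2.0) -> bool:
    """True if the measured relative difference lies within the
    combined expanded uncertainty of the two laboratories."""
    return abs(relative_difference) <= combined_uncertainty(u_a, u_b, k)

# Illustrative (assumed) numbers: a 1.2e-3 relative difference and a
# relative standard uncertainty of 7e-4 per laboratory.
print(agree(1.2e-3, 7e-4, 7e-4))
```

With these assumed inputs the combined expanded uncertainty (about 2.0 × 10^-3) exceeds the difference, so the sketch reports agreement.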
ERIC Educational Resources Information Center
Bach, Gerhard
1975-01-01
Using Richard Wright's novel "Black Boy" as a model to work on, the author illustrates a 4-dimensional approach to the teaching of foreign literature: (1) scientific definition of the goals, (2) making a teaching plan, (3) defining methods, and (4) actual use on a teaching model. (Text is in German.) (IFS/WGA)
Effects of Sediment Composition on Growth of Submersed Aquatic Vegetation
1986-01-01
…environmental conditions, the growth of Hydrilla and Myriophyllum is relatively poor on low-density, highly organic sediments and on high-density sands…
ERIC Educational Resources Information Center
Kerschgens, Edda
1977-01-01
Reports on an 18-hour teaching project with students at a teachers' college. Four Baldwin texts were used. Questions considered included whether Baldwin's treatment of the race problem reveals any changes or shifts of emphasis. Suggestions are made for adaptation to teaching in grades 11-13. (Text is in German.) (IFS/WGA)
Umsetzung der Unternehmensstrategie mit der Balanced Scorecard
NASA Astrophysics Data System (ADS)
Crespo, Isabel; Bergmann, Lars; Portmann, Stefan; Lacker, Thomas; Lacker, Michael; Fleischmann, Jürgen; Kozó, Hans
The Balanced Scorecard (BSC) is an approach to strategic management that, alongside orienting the company toward financial targets, places equal weight on so-called soft factors, which are what make a company's economic success possible in the first place. The decisive feature of the Balanced Scorecard is that it establishes a balanced system of strategic objectives, aligning the company strategically along the four perspectives of finance, customers, internal processes, and employees and potentials (Kaplan and Norton 1997).
Polynomials with Restricted Coefficients and Their Applications
1987-01-01
…sums of exponentials of quadratics, he reduced such sums to exponentials of linears (geometric sums!) by simply multiplying by their conjugates… …coefficients. These random polynomials represent the deviation in frequency response of a linear, equispaced antenna array caused by coefficient…
Betriebsführung multimodaler Energiesysteme
NASA Astrophysics Data System (ADS)
Mackensen, Reinhard
The transformation of the energy system - from a centralized, unidirectional structure separated into distinct sectors toward a comprehensive, multimodal, decentralized and flexible landscape of producers and consumers - is taking place on several levels. A constant boundary condition of this upheaval is compliance with the sub-goals of security of supply, economic viability and efficiency. In detail, the transformation manifests itself in a diversification of the actor landscape through the mechanisms of unbundling. Furthermore, generation is becoming decentralized: predominantly fossil-fired large power plant technology is being replaced by a multitude of decentralized, mostly renewable generators. This change has two main consequences. On the one hand, the decentralized, area-wide distribution of generators creates new requirements for energy exchange, for example the extension of the power grids for bidirectional energy exchange; on the other hand, coordination mechanisms are needed that balance fluctuating feed-in against consumption in such a way that grid restrictions, quality of supply, and aspects of energy efficiency and thus economic viability are all taken into account. Possible answers to the questions raised by this view lie in the conception of a multimodal energy system, i.e. in considering the electricity, heat and transport sectors as a whole. This chapter sets out mechanisms and shows how such a concept can be designed and how such complex systems can be operated in practice.
Long-range nanopositioning and nanomeasuring machine for application to micro- and nanotechnology
NASA Astrophysics Data System (ADS)
Jäger, Gerd; Hausotte, Tino; Büchner, Hans-Joachim; Manske, Eberhard; Schmidt, Ingomar; Mastylo, Rostyslav
2006-03-01
The paper describes the operation of a high-precision, long-range, three-dimensional nanopositioning and nanomeasuring machine (NPM machine). The NPM machine has been developed by the Institute of Process Measurement and Sensor Technology of the Technische Universität Ilmenau. The machine was successfully tested and continually improved over the last few years, and such machines are operating successfully in several German and foreign research institutes, including the Physikalisch-Technische Bundesanstalt (PTB). Three miniature plane-mirror interferometers are installed in the NPM machine, with a resolution of less than 0.1 nm over the entire positioning and measuring range of 25 mm x 25 mm x 5 mm. An Abbe offset-free arrangement of the three miniature plane-mirror interferometers, together with a new concept for compensating systematic errors arising from the mechanical guide systems, provides extraordinary accuracy, with an expanded uncertainty of only 5-10 nm. The integration of several optical and tactile probe systems and nanotools makes the NPM machine suitable for various tasks, such as large-area scanning probe microscopy, mask and wafer inspection, nanostructuring, biotechnology and genetic engineering, as well as for measuring precision mechanical workpieces, precision machining and engineering new materials. Various probe systems have been developed and integrated into the NPM machine; measurement results of a focus sensor, a metrological AFM, a white-light sensor, a tactile stylus probe and a 3D micro touch probe are presented. Single-beam, double-beam and triple-beam interferometers built into the NPM machine for measurements in six degrees of freedom are described.
NASA Astrophysics Data System (ADS)
Jäger, Kristina
2017-04-01
Local actors, such as municipal authorities, can promote and shape the settlement of foreign companies in a particular manner. In many instances, actions of economic promotion can lead to several conflicts; firstly between foreign investors and their national representatives, and secondly with (supra)regional industries or business circles. Here, flexible and long-term settlement projects, the permission of migrant infrastructure, and moreover, their high-profile marketing may attract foreign companies and long-term investors.
Sustainability Gaps in Municipal Solid Waste Management: The Case of Landfills
2006-02-01
…Regional Mobility, Nr. 4/2002. * Schäfer, Wolf, EU-Erweiterung: Anmerkungen zum Balassa-Samuelson-Effekt, Nr. 3/2002, erschienen in: Stefan Reitz (Hg.)… …generations with high long-term external costs, as is the case in the dry-tomb technology. Besides the efficiency aspect, an argument of justice is… …the argument of time-consistency according to Strotz does not play any role, because the assumed "rational" way of planning cannot be realized due to the non…
Die Grundlagen der Fernsehtechnik: Systemtheorie und Technik der Bildübertragung
NASA Astrophysics Data System (ADS)
Mahler, Gerhard
A comprehensive introduction to the fundamentals of moving-image transmission, from the beginnings to the present state of digital television, with a system-theoretical analysis that grew out of practice. The compact, vividly illustrated presentation with elementary mathematical descriptions makes it easy for the reader to work into image transmission technology. Thematic units extend the material - among others on visual perception, multidimensional signal representation, colorimetry, digitization and electron optics - and show its application to electronic image transmission.
1980-08-11
Fig 2.8: Fatigue strength of CFRP; relationship between residual strength, strain increase and development of damage as a function of the number of cycles (Ref 732). Glossary: Bruttoquerschnitt = gross cross-section; Zahl der Flugstunden = number of flight hours.
ERIC Educational Resources Information Center
Loeffler, Renate
1979-01-01
Describes work at an American college with students who are receiving an introduction to colloquial German. Role-playing and picture stories prove useful in learning, in both productive and receptive aspects. Describing a picture on three levels--factual, psychological and contemplative--is shown to be very useful. (IFS/WGA)
Visualisierung analoger Schaltungen durch 3-D Animation von transienten SPICE-Simulationen
NASA Astrophysics Data System (ADS)
Becker, J.; Manoli, Y.
2007-06-01
When drawing analog circuit schematics, one often tries to exploit the potential distribution in the circuit and place the components in order of decreasing potential. With computer support, a generalized three-dimensional placement strategy can be applied that, based solely on the potential values of a circuit, automatically generates a technically exact representation of the potentials. This makes it possible to display the results of transient SPICE simulations at every time step and to produce an animation of the circuit's behavior over time. The implementation of this method for embedding in a web-based learning and working platform is explained below.
2004-08-01
Grundlagen und Grundbegriffe der Messtechnik
NASA Astrophysics Data System (ADS)
Plaßmann, Wilfried
An essential task of measurement technology is to record technical processes quantitatively and to control functional sequences on the basis of the measured quantities. A power plant for energy generation may serve as an example: only by measuring temperatures, powers, pressures and other quantities can the momentary state of the plant be assessed and, in the event of deviations from the setpoint, suitable interventions be made in the system. To make unambiguous communication possible, the terms, measurement methods and units used in measurement technology are laid down in corresponding standards and regulations.
NASA Astrophysics Data System (ADS)
Lacalli, Christina; Jähne, Marion; Wesarg, Stefan
In this contribution we present new, automated methods for visualizing the coronary arteries on the one hand and for direct comparability with conventional angiograms on the other. Our approach comprises methods for the automatic extraction of the heart from contrast-enhanced CT data and for masking the large contrast-filled cavities of the heart in order to improve the visibility of the coronary arteries in volume rendering. For direct comparison with conventional angiographies, a method for the automatic generation of projection views from the CT data was developed.
Einsteins Spuren in den Archiven der Wissenschaft: Physikgeschichte
NASA Astrophysics Data System (ADS)
Marx, Werner
2005-07-01
The mentions and citations of Einstein's papers document only the quantifiable share of Einstein's contribution to physics. Nevertheless, they attest to the extraordinary resonance and long-term impact of his work. The frequency of citation does not correspond to the general assessment of the papers' importance: the pioneering papers in particular are now taken for granted and no longer cited explicitly. Interestingly, his most-cited paper after 1945 is not one of the pioneering papers on quantum physics or relativity, but the 1935 paper on the famous Einstein-Podolsky-Rosen paradox.
Wildner, Manfred
2018-06-01
"But I only threatened him with my finger," said the defendant, unfairly omitting to mention that this finger had been resting on the trigger of a pistol. In what follows, the relevance of this bon mot to the health care system lies less in the concrete health risks posed by firearms than in the no less significant role of context for health, illness and well-being. That "context", understood as a systemic and semantic frame, influences both the objective occurrence of diseases in their frequency and severity and the subjective experience of states of health is well established.
Plattformbasierte Dienste als technologische Notwendigkeit im disruptiven Marktwandel
NASA Astrophysics Data System (ADS)
Elsner, Daniel
The digitization of the energy industry is leading to disruptive market change. Tomorrow's smart, networked energy market involves new players, new communication requirements, changed customer behavior and more data. Established market participants are forced to rethink their existing business models, and IT is increasingly becoming a competitive factor. Successfully managing these technological change processes is an indispensable prerequisite for coping with the digitization of the energy industry in a sustainable way. In this context, the data- and development-specific synergies of platform-based services prove to be the central added value of an innovation-driven strategic market positioning, and thus a technological necessity.
Beitrag zum Mechanismus der Oxydation von Freiblei in Bleiakkumulatorpaste bei der Reifung
NASA Astrophysics Data System (ADS)
Duc Hung, Nguyen; Garche, J.; Wiesener, K.
The kinetics of lead oxidation during curing were studied by chemical analysis of the free-lead content as well as by gas volumetry of oxygen. A distinctive result of this work is the evolution of hydrogen as a curing reaction, which agrees with the results of curing the paste. The optimal water content of the paste for lead oxidation was determined. A model of the electrolyte film during curing has been developed which allows the results to be interpreted satisfactorily.
Alternative Antriebe für Automobile: Hybridsysteme, Brennstoffzellen, alternative Energieträger
NASA Astrophysics Data System (ADS)
Stan, Cornel
Well-founded criteria of drive quality will decide which future drive concepts can be realized - from hybrid electric/combustion-engine systems and fuel cells to alternative energy carriers such as hydrogen or alcohol. Power density, torque curve, acceleration characteristics, specific energy consumption, and the emission of chemical substances and noise are important attributes for assessing that quality. The availability and storability of the envisaged energy carriers, technical complexity, cost, safety, infrastructure and service will set the boundary conditions for introducing feasible concepts of alternative automobile drives.
Hydrogel research in Germany: the priority programme, Intelligent Hydrogels
NASA Astrophysics Data System (ADS)
Wallmersperger, Thomas; Sadowski, Gabriele
2009-03-01
The priority programme "Intelligent Hydrogels" was established by the German Research Foundation (DFG) in 2006 in order to strengthen hydrogel-related research in Germany. The programme is coordinated by Gabriele Sadowski, Technische Universität Dortmund. Its aim is to develop new methods for the synthesis and characterization of smart hydrogels, and new modelling strategies, in order to (a) prepare the hydrogels for special applications and/or (b) develop and extend their capabilities for any desired use. In this programme, 73 scientists (36 professors and 37 scientific assistants/PhD students) from all over Germany are involved, working on 23 projects.
NASA Technical Reports Server (NTRS)
Urlichs, K.
1983-01-01
Self-excited rotor whirl represents a serious hazard in the operation of turbomachines. The reported investigation therefore has the objective of measuring the lateral forces acting on the rotor and determining the characteristic pressure distribution in the rotor clearance area. An approach is described for calculating the leakage flow for an eccentric rotor position on the basis of empirical loss coefficients. The results of an experimental investigation with a turbine stage are reported, taking into account a variation of the clearance characteristics. The measured pressure data are consistent with the theoretical considerations.
Integriertes Informationsmanagement am KIT: Was bleibt? Was kommt?
NASA Astrophysics Data System (ADS)
Labitzke, Sebastian; Nussbaumer, Martin; Hartenstein, Hannes; Juling, Wilfried
This contribution describes the principal results achieved since 2005 at the Universität Karlsruhe and, subsequently, the Karlsruher Institut für Technologie with respect to technical and organizational integration for information management. The portal services and identity management in particular are the focus of the technical innovation. In addition, two organizational innovations are presented that are dedicated to questions of IT governance and IT compliance. Finally, key lessons learned in the course of building an integrated information management are discussed. Looking ahead, we relate these to the ever-present problems and associated challenges of such undertakings - challenges that were, are, and will remain.
Long-term comparisons between two-way satellite and geodetic time transfer systems.
Plumb, John F; Larson, Kristine M
2005-11-01
Global Positioning System (GPS) observations recorded in the United States and Europe were used to evaluate time transfer capabilities of GETT (geodetic time transfer). Timing estimates were compared with two-way satellite time and frequency transfer (TWSTFT) systems. A comparison of calibrated links at the U.S. Naval Observatory, Washington, D.C., and Colorado Springs, CO, yielded agreement of 2.17 ns over 6 months with a standard deviation of 0.73 ns. An uncalibrated link between the National Institute of Standards and Technology (NIST) and Physikalisch-Technische Bundesanstalt, Braunschweig, Germany, has a standard deviation of 0.79 ns over the same time period.
Neue Laser und Strahlquellen - alte und neue Risiken?
Paasch, Uwe; Schwandt, Antje; Seeber, Nikolaus; Kautz, Gerd; Grunewald, Sonja; Haedersdal, Merete
2017-05-01
Developments in dermatological lasers, high-energy flash lamps, LEDs and new energy and beam sources in recent years have shown that new wavelengths, concepts and combinations open up additional therapeutic options for the dermatologist, some of which extend beyond the aesthetic field. Where fractional lasers were previously used, for example, to treat wrinkles, the same systems, in combination with drugs, are today important tools in the treatment of scars, field cancerization and epithelial tumors. The demands on the physician establishing the indication and preferably also performing the therapy are growing with the increasingly complex technology and the increasing comorbidities and comedications of an aging patient clientele. In parallel, devices for home use have become established, initially for a few indications; they are characterized by low power and special safety precautions to avoid accidents, risks and side effects. Despite the reduced efficacy of such self-treatment, the probability of misuse increases, since the basic prerequisite for correct therapy - the exact diagnosis and indication - cannot be taken for granted. Thus pigmented tumors may be targeted during hair removal, or neoplastic skin lesions during wrinkle treatment, inducing expected, unforeseen and new side effects and complications. In this scenario it is important to qualify all potential users of these new technologies before their deployment, so that those treated are guaranteed maximum therapeutic safety with the highest efficacy, under the guiding principle diagnosis certa - ullae therapiae fundamentum. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Spirituelles Wohlbefinden und Coping bei Sklerodermie, Lupus erythematodes und malignem Melanom.
Pilch, Michaela; Scharf, Sabina Nadine; Lukanz, Martin; Wutte, Nora Johanna; Fink-Puches, Regina; Glawischnig-Goschnik, Monika; Unterrainer, Human-Friedrich; Aberer, Elisabeth
2016-07-01
Religious-spiritual well-being is associated with greater vitality and a reduced tendency toward depression. In our study we examined strategies for coping with disease and the role of religiosity-spirituality (R-S) in improving subjective well-being. 149 patients (107 women) - 44 with systemic scleroderma (SKL), 48 with lupus erythematosus (LE) and 57 with malignant melanoma (MM), stage I-II - were surveyed with a self-developed questionnaire on subjective well-being and the circumstances accompanying their illness, as well as with the Multidimensional Inventory (MI-RSB) on R-S. LE patients are more burdened at the time of diagnosis than SKL and MM patients. SKL and LE patients come to accept their disease only after years. The overall score of religious-spiritual well-being of LE patients is significantly below that of the general population. In LE patients, photosensitivity and joint pain are negatively associated with the ability to forgive. SKL patients with facial changes and lung involvement show higher general religiosity; MM patients have higher values for transcendent hope. Lectures about the disease and psychological support are the most important needs that patients with SKL, LE and MM express toward their caregivers. Religious-spiritual offerings for coping with disease currently appear to play a subordinate role, but could be an important resource deserving more attention in the future. © 2016 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Quantenwelt im Nanozylinder: Elektronische Eigenschaften von Kohlenstoff-Nanoröhrchen
NASA Astrophysics Data System (ADS)
Strunk, Christoph
2005-07-01
Carbon nanotubes are molecular hollow cylinders, occurring singly or nested several deep, in which carbon atoms form a graphite-like crystal lattice. These fullerenes are distinguished by extraordinarily high elasticity and tensile strength. In their electronic properties they behave either as semiconductors or as metallic conductors. Tiny field-effect transistors have already been made from semiconducting nanotubes - a first step toward molecular electronics. Basic researchers are interested above all in the behavior of metallic nanotubes at low temperatures, whose electronic systems allow the study of, for example, quantum interference phenomena and electron-electron interactions.
NASA Astrophysics Data System (ADS)
Tschirschwitz, Christian
On a rural federal highway, a Toyota Corolla carrying four occupants began to skid for reasons that ultimately could not be fully clarified. After the vehicle had rotated considerably counterclockwise, an oncoming VW T4 van crashed head-on into the right side of the Toyota. The van was spun around, lifted off the ground and underridden by a Ford Escort. All vehicles came to rest near the collision site. The four Toyota occupants were killed. Six people in the other vehicles were injured, most of them severely. There were no independent witnesses.
Imagineering the astronomical revolution - Essay review
NASA Astrophysics Data System (ADS)
Jardine, Nicholas
2006-11-01
Concerning the following books: (I) Transmitting knowledge - words, images, and instruments in early modern Europe. Kusukawa and Maclean (eds.), OUP, Oxford, 2006; (II) Widmung, Welterklärung und Wissenschaftslegitimierung: Titelbilder und ihre Funktionen in der wissenschaftlichen Revolution. Remmert, Harrassowitz, Wiesbaden, 2005; (III) The power of images in early modern science. Lefevre, Renn and Schoepflin (eds.), Birkhäuser, Basel, 2003; (IV) Immagini per conoscere - dal Rinascimento alla rivoluzione scientifica. Meroi and Pogliano (eds.), Olschki, Florenz, 2001; (V) Erkenntnis Erfindung Konstruktion - Studien zur Bildgeschichte von Naturwissenschaften und Technik vom 16. bis zum 19. Jahrhundert. Holländer (ed.), Mann, Berlin, 2000.
Gebändigtes Knallgas: Brennstoffzellen im mobilen und stationären Einsatz
NASA Astrophysics Data System (ADS)
Waidhas, Manfred; Landes, Harald
2001-07-01
From a technical standpoint, the fuel cell has reached an advanced stage. The PEMFC has proven its reliability in a number of niche applications, as well as in the form of first mobile and decentralized prototypes. The SOFC and the MCFC have already entered trials in plants of 100 kW and more. However, to become economically competitive with the established technologies of mobile and decentralized energy conversion, a drastic cost reduction must still be achieved both for the fuel-cell stack and for the auxiliary units required for its operation. For vehicle propulsion, an answer must also be found to the still open fuel question (infrastructure, H2 production and H2 storage).
237Np absolute delayed neutron yield measurements
NASA Astrophysics Data System (ADS)
Doré, D.; Ledoux, X.; Nolte, R.; Gagnon-Moisan, F.; Thulliez, L.; Litaize, O.; Roettger, S.; Serot, O.
2017-09-01
237Np absolute delayed neutron yields have been measured at incident neutron energies from 1.5 to 16 MeV. The experiment was performed at the Physikalisch-Technische Bundesanstalt (PTB) facility, where the Van de Graaff accelerator and the cyclotron CV28 delivered 9 different neutron energy beams using the p+T, d+D and d+T reactions. The detection system consists of twelve 3He tubes inserted into a polyethylene cylinder. In this paper, the experimental setup and the data analysis method are described. The evolution of the absolute DN yields as a function of the incident neutron energy is presented and compared to experimental data found in the literature and to data from the evaluated libraries.
Sicherung mathematischer Grundkompetenzen am Beispiel des österreichischen Zentralabiturs
NASA Astrophysics Data System (ADS)
Peschek, Werner
In the summer of 2009, the Austrian Nationalrat (parliament) passed a reform of the Reifeprüfung (school-leaving examination); the most significant change is that the tasks of the written examination (sRP), which is compulsory for all students in German, mathematics and one modern foreign language, are now set centrally rather than, as before, by the respective class teacher. This new regulation is to apply to the general secondary schools ("Gymnasien") from the 2013/14 school year and to the vocational secondary schools (including higher technical or commercial schools leading to the Abitur) from the 2014/15 school year.
Digitalisierung des Bösen: Energiewirtschaft als Cyberopfer
NASA Astrophysics Data System (ADS)
Bartsch, Michael; Frey, Stefanie
The energy supply is a critical infrastructure, since all other sectors depend on the electricity supply and a disruption would have catastrophic consequences with unforeseeable cascade effects. The operators' paramount goal is therefore to ensure that measures for an uninterrupted electricity supply are taken. Cyber attacks pose a high risk to the energy supply. The energy industry can be affected by mass phenomena such as cybercrime, but can also become the target of technically complex cyber sabotage, as in the attack on a Ukrainian energy supplier at the end of 2015. The power supply failed for several hours, with far-reaching consequences for the population and for businesses.
Grote, Mathias; Keuck, Lara
2015-06-01
Historical analyses of how metabolism has been conceived, how concepts of metabolism were related to disciplines such as nineteenth-century nutritional physiology or twentieth-century biochemistry, and how their genealogies relate to current developments may help in understanding the various, at times polemical, ways in which the boundaries between metabolism and heredity have been re-drawn. Against this background, a small number of scholars gathered in Berlin for a workshop that aimed equally at bringing new stories to the fore and at considering seemingly familiar ones in a new light. Some aspects of the discussions are summarized in this paper.
Re-evaluation of the correction factors for the GROVEX
NASA Astrophysics Data System (ADS)
Ketelhut, Steffen; Meier, Markus
2018-04-01
The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measuring of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k = 2).
Experimental investigation of a newly designed supersonic wind tunnel
NASA Astrophysics Data System (ADS)
Wu, J.; Radespiel, R.
2015-06-01
The flow characteristics of the tandem nozzle supersonic wind tunnel at the Institute of Fluid Mechanics, Technische Universität Braunschweig, are investigated. Conventional measurement techniques were utilized. The flow development is examined by pressure sensors installed at various streamwise positions. The temperature is measured in the storage tube and in the settling chamber. The influence of flow treatment in the settling chamber on the flow quality is also studied. The flow quality in the test section is evaluated by a 6-probe Pitot rake. The pressure fluctuations in the test section are studied with a sharp cone model. Overall, good agreement between the measurements and the numerical simulation of the tunnel design is achieved.
NASA Astrophysics Data System (ADS)
2008-11-01
Mohab Abou ZeidInstitut des Hautes Études Scientifiques, Bures-sur-Yvette Ido AdamMax-Planck-Institut für Gravitationsphysik (AEI), Potsdam Henrik AdorfLeibniz Universität Hannover Mohammad Ali-AkbariIPM, Tehran Antonio Amariti Università di Milano-Bicocca Nicola Ambrosetti Université de Neuchâtel Martin Ammon Max-Planck-Institut für Physik, München Christopher AndreyÉcole Polytechnique Fédérale de Lausanne (EPFL) Laura AndrianopoliPolitecnico di Torino David AndriotLPTHE, Université UPMC Paris VI Carlo Angelantonj Università di Torino Pantelis ApostolopoulosUniversitat de les Illes Balears, Palma Gleb ArutyunovInstitute for Theoretical Physics, Utrecht University Davide AstolfiUniversità di Perugia Spyros AvramisUniversité de Neuchâtel Mirela BabalicChalmers University, Göteborg Foday BahDigicom Ioannis Bakas University of Patras Igor BandosUniversidad de Valencia Jose L F BarbonIFTE UAM/CSIC Madrid Till BargheerMax-Planck-Institut für Gravitationsphysik (AEI), Potsdam Marco Baumgartl Eidgenössische Technische Hochschule (ETH), Zürich James BedfordImperial College London Raphael BenichouLaboratoire de Physique Théorique, École Normale Supérieure, Paris Francesco Benini SISSA, Trieste Eric Bergshoeff Centre for Theoretical Physics, University of Groningen Alice BernamontiVrije Universiteit, Brussel Julia BernardLaboratoire de Physique Théorique, École Normale Supérieure, Paris Adel Bilal Laboratoire de Physique Théorique, École Normale Supérieure, Paris Marco Billo' Università di Torino Matthias Blau Université de Neuchâtel Guillaume BossardAlbert-Einstein-Institut, Golm Leonardo BriziÉcole Polytechnique Fédérale de Lausanne (EPFL) Johannes BroedelLeibniz Universität Hannover (AEI) Tom BrownQueen Mary, University of London Ilka BrunnerEidgenössische Technische Hochschule (ETH), Zürich Erling BrynjolfssonUniversity of Iceland Dmitri BykovSteklov Institute, Moscow and Trinity College, Dublin Joan CampsUniversitat de Barcelona Davide CassaniLaboratoire de Physique 
Théorique, École Normale Supérieure, Paris Alejandra CastroUniversity of Michigan Claudio Caviezel Max-Planck-Institut für Physik, München Alessio Celi Universitat de Barcelona Anna Ceresole Istituto Nazionale di Fisica Nucleare, Università di Torino Athanasios ChatzistavrakidisNational Technical University of Athens Wissam ChemissanyCentre for Theoretical Physics, University of Groningen Eugen-Mihaita CioroianuUniversity of Craiova Andres CollinucciTechnische Universität Wien Paul CookUniversità di Roma, Tor Vergata Lorenzo CornalbaUniversità di Milano-Bicocca Aldo CotroneKatholieke Universiteit Leuven Ben Craps Vrije Universiteit, Brussel Stefano Cremonesi SISSA, Trieste Riccardo D'AuriaPolitecnico di Torino Gianguido Dall'AgataUniversity of Padova Jose A de AzcarragaUniversidad de Valencia Jan de BoerInstituut voor Theoretische Fysica, Universiteit van Amsterdam Sophie de BuylInstitut des Hautes Études Scientifiques, Bures-sur-Yvette Marius de LeeuwUtrecht University Frederik De RooVrije Universiteit, Brussel Jan De Rydt Katholieke Universiteit Leuven and CERN, Geneva Bernard de WitInstitute for Theoretical Physics, Utrecht University Stephane DetournayIstituto Nazionale di Fisica Nucleare, Sezione di Milano Paolo Di Vecchia Niels Bohr Institute, København Eugen DiaconuUniversity of Craiova Vladimir Dobrev Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Sofia Nick DoreyUniversity of Cambridge Hajar Ebrahim NajafabadiIPM, Tehran Federico Elmetti Università di Milano Oleg Evnin Vrije Universiteit, Brussel Francesco Fiamberti Università di Milano Davide Forcella SISSA, Trieste and CERN, Geneva Valentina Forini Humboldt-Universität zu Berlin Angelos Fotopoulos Università di Torino Denis Frank Université de Neuchâtel Marialuisa Frau Università di Torino Matthias Gaberdiel Eidgenössische Technische Hochschule (ETH), Zürich Diego Gallego SISSA/ISAS, Trieste Maria Pilar Garcia del MoralIstituto Nazionale di Fisica Nucleare, Università 
di Torino Valentina Giangreco Marotta PulettiUppsala University Valeria L GiliQueen Mary, University of London Luciano GirardelloUniversità di Milano-Bicocca Gian GiudiceCERN, Geneva Kevin Goldstein Institute for Theoretical Physics, Utrecht University Joaquim Gomis Universitat de Barcelona Pietro Antonio GrassiUniversità del Piemonte Orientale, Alessandria Viviane GraßLudwig-Maximilians-Universität, München Gianluca Grignani Università di Perugia Luca Griguolo Università di Parma Johannes GrosseJagiellonian University, Krakow Umut Gursoy École Polytechnique, Palaiseau Norberto Gutierrez RodriguezUniversity of Oviedo Babak HaghighatPhysikalisches Institut, Universität Bonn Troels Harmark Niels Bohr Institute, København Robert HaslhoferEidgenössische Technische Hochschule (ETH), Zürich Tae-Won HaPhysikalisches Institut, Universität Bonn Alexander HauptImperial College London and Max-Planck-Institut für Gravitationsphysik (AEI), Potsdam Marc HenneauxUniversité Libre de Bruxelles Johannes HennLAPTH, Annecy-le-Vieux Shinji HiranoNiels Bohr Institute, København Stefan HoheneggerEidgenössische Technische Hochschule (ETH), Zürich Jan HomannLudwig-Maximilians-Universität, München Gabriele Honecker CERN, Geneva Joost HoogeveenInstituut voor Theoretische Fysica, Universiteit van Amsterdam Mechthild HuebscherUniversidad Autónoma de Madrid Chris HullImperial College London Carmen-Liliana IonescuUniversity of Craiova Ella JasminUniversité Libre de Bruxelles Konstantin KanishchevInstitute of Theoretical Physics, University of Warsaw Stefanos Katmadas Utrecht University Alexandros KehagiasNational Technical University of Athens Christoph Keller Eidgenössische Technische Hochschule (ETH), Zürich Patrick Kerner Max-Planck-Institut für Physik, München Rebiai KhaledLaboratoire de Physique Mathématique et Physique Subatomique, Université Mentouri, Constantine Elias Kiritsis Centre de Physique Théorique, École Polytechnique, Palaiseau and University of Crete Denis KleversPhysikalisches 
Institut, Universität Bonn Paul Koerber Max-Planck-Institut für Physik, München Simon Koers Max-Planck-Institut für Physik, München Karl KollerLudwig-Maximilians-Universität, München Peter Koroteev Institute for Theoretical and Experimental Physics (ITEP), Moscow and Max-Planck-Institut für Gravitationsphysik (AEI), Potsdam Alexey KoshelevVrije Universiteit, Brussel Costas KounnasÉcole Normale Supérieure, Paris Daniel KreflCERN, Geneva Charlotte KristjansenNiels Bohr Institute, København Finn LarsenCERN, Geneva and University of Michigan Arnaud Le DiffonÉcole Normale Supérieure, Lyon Michael LennekCentre de Physique Théorique, École Polytechnique, Palaiseau Alberto Lerda Università del Piemonte Orientale, Alessandria Andreas LiberisUniversity of Patras Maria A Lledo Universidad de Valencia Oscar Loaiza-Brito CINVESTAV, Mexico Florian Loebbert Max-Planck-Institut für Gravitationsphysik (AEI), Potsdam Yolanda Lozano University of Oviedo Dieter Luest Ludwig-Maximilians-Universität, München Tomasz Łukowski Jagiellonian University, Krakow Diego Mansi University of Crete Alberto Mariotti Università di Milano-Bicocca Raffaele Marotta Istituto Nazionale di Fisica Nucleare, Napoli Alessio Marrani Istituto Nazionale di Fisica Nucleare and LNF, Firenze Andrea Mauri University of Crete Liuba Mazzanti École Polytechnique, Palaiseau Sean McReynoldsUniversità di Milano-Bicocca AKM Moinul Haque Meaze Chittagong University Patrick Meessen Instituto de Física Teórica, Universidad Autónoma de Madrid Carlo MeneghelliUniversità di Parma and Albert-Einstein-Institut, Golm Lotta Mether University of Helsinki and CERN, Geneva René Meyer Max-Planck-Institut für Physik, München Georgios MichalogiorgakisCenter de Physique Théorique, École Polytechnique, Palaiseau Giuseppe Milanesi Eidgenössische Technische Hochschule (ETH), Zürich Samuel Monnier Université de Genève Wolfgang MueckUniversità di Napoli Federico II Elena Méndez Escobar University of Edinburgh Iulian Negru University of Craiova 
Emil NissimovInstitute for Nuclear Research and Nuclear Energy, Sofia Teake NutmaCentre for Theoretical Physics, University of Groningen Niels Obers Niels Bohr Institute, København Olof Ohlsson SaxUppsala University Rodrigo OleaIstituto Nazionale di Fisica Nucleare, Sezione di Milano Domenico OrlandoUniversité de Neuchâtel Marta Orselli Niels Bohr Institute, København Tomas OrtinInstituto de Física Teórica, Universidad Autónoma de Madrid Yaron OzTel Aviv University Enrico PajerLudwig-Maximilians-Universität, München Angel Paredes GalanUtrecht University Sara PasquettiUniversité de Neuchâtel Silvia PenatiUniversità di Milano-Bicocca Jan PerzKatholieke Universiteit Leuven Igor PesandoUniversità di Torino Tassos PetkouUniversity of Crete Marios PetropoulosCenter de Physique Théorique, École Polytechnique, Palaiseau Franco PezzellaIstituto Nazionale di Fisica Nucleare, Sezione di Napoli Moises Picon PonceUniversity of Padova Marco PirroneUniversità di Milano-Bicocca Andrea PrinslooUniversity of Cape Town Joris RaeymaekersKatholieke Universiteit Leuven Alfonso RamalloUniversidade de Santiago de Compostela Carlo Alberto RattiUniversità di Milano-Bicocca Marco RauchPhysikalisches Institut, Universität Bonn Ronald Reid-EdwardsUniversity of Hamburg Patricia RitterUniversity of Edinburgh Peter RoenneDESY, Hamburg Jan RosseelUniversità di Torino Clement RuefService de Physique Théorique, CEA Saclay Felix RustMax-Planck-Institut für Physik, München Thomas RyttovNiels Bohr Institute, København and CERN, Geneva Agustin Sabio VeraCERN, Geneva Christian SaemannTrinity College, Dublin Houman Safaai SISSA, Trieste Henning SamtlebenÉcole Normale Supérieure, Lyon Alberto SantambrogioIstituto Nazionale di Fisica Nucleare, Sezione di Milano Silviu Constantin SararuUniversity of Craiova Ricardo SchiappaCERN, Geneva Ionut Romeo SchiopuChalmers University, Göteborg Cornelius Schmidt-ColinetEidgenössische Technische Hochschule (ETH), Zürich Johannes SchmudeSwansea University Waldemar 
SchulginLaboratoire de Physique Théorique, École Normale Supérieure, Paris Domenico SeminaraUniversità di Firenze Alexander SevrinVrije Universiteit, Brussel Konstadinos SfetsosUniversity of Patras Igor ShenderovichSt Petersburg State University Jonathan ShockUniversidade de Santiago de Compostela Massimo SianiUniversità di Milano-Bicocca Christoph SiegUniversità Degli Studi di Milano Joan SimonUniversity of Edinburgh Paul SmythUniversity of Hamburg Luca SommovigoUniversidad de Valencia Dmitri Sorokin Istituto Nazionale di Fisica Nucleare, Padova Christos SourdisUniversity of Patras Wieland StaessensVrije Universiteit, Brussel Ivan StefanovUniversity of Patras Sigurdur StefanssonUniversity of Iceland Kellogg Stelle Imperial College London Giovanni Tagliabue Università di Milano Laura Tamassia Katholieke Universiteit Leuven Javier TarrioUniversidade de Santiago de Compostela Dimitri TerrynVrije Universiteit, Brussel Larus Thorlacius University of Iceland Mario ToninDipartimento Di Fisica, Sezione Di Padova Mario Trigiante Politecnico di Torino Efstratios TsatisUniversity of Patras Arkady TseytlinImperial College London Pantelis TziveloglouCornell University, New York and CERN, Geneva Angel Uranga CERN, Geneva Dieter Van den Bleeken Katholieke Universiteit Leuven Ernst van Eijk Università di Napoli Federico II Antoine Van Proeyen Katholieke Universiteit Leuven Maaike van ZalkUtrecht University Pierre Vanhove Service de Physique Théorique, CEA Saclay Silvia Vaula Instituto de Física Teórica, Universidad Autónoma de Madrid Cristian Vergu Service de Physique Théorique, CEA Saclay Alessandro VichiÉcole Polytechnique Fédérale de Lausanne (EPFL) Marlene WeissCERN, Geneva and Eidgenössische Technische Hochschule (ETH), Zürich Sebastian Weiss Université de Neuchâtel Alexander WijnsUniversity of Iceland Linus WulffUniversity of Padova Thomas WyderKatholieke Universiteit Leuven Ahmed YoussefAstroParticule et Cosmologie (APC), Université Paris Diderot Daniela ZanonUniversità 
Degli Studi di Milano Andrea ZanziPhysikalisches Institut, Universität Bonn Andrey ZayakinInstitute for Theoretical and Experimental Physics (ITEP), Moscow Tobias ZinggUniversity of Iceland Dimitrios ZoakosUniversidade de Santiago de Compostela Emanuele ZorzanUniversità di Milano Konstantinos ZoubosNiels Bohr Institute, København
NASA Astrophysics Data System (ADS)
Scheler, Fabian; Mitzlaff, Martin; Schröder-Preikschat, Wolfgang
The decision to use a time-triggered or an event-triggered approach for a real-time system is difficult and far-reaching. Far-reaching above all because these two approaches are tied to extremely different control-flow abstractions, which make a later migration to the other paradigm very hard or even impossible. We therefore propose the use of an intermediate representation that is independent of the particular control-flow abstraction. For this purpose we use Atomic Basic Blocks (ABB), based on basic blocks, and build on them a tool, the Real-Time Systems Compiler (RTSC), which supports the migration between time-triggered and event-triggered systems.
NASA Astrophysics Data System (ADS)
Greiner, Katharina; Egger, Jan; Großkopf, Stefan; Kaftan, Jens N.; Dörner, Ralf; Freisleben, Bernd
In this contribution, Active Appearance Models (AAMs) are used to segment the outer contour of aortic aneurysms. Because of the low contrast to the surrounding tissue and the structure of the partly thrombosed or calcified vessel walls in the region of an aneurysm, this task is so complex that the variety of contour shapes in CT angiography images justifies the use of a statistical model of shape and enclosed texture. To evaluate the method, several statistical models were trained on slices of nine CTA data sets, and the segmentation was verified by means of leave-one-out tests.
NASA Astrophysics Data System (ADS)
Barkleit, Gerhard
According to the largely unanimous view of politicians and scientists, it is thanks to the nuclear stalemate between the Eastern Bloc and the Western alliance that the Cold War of the second half of the 20th century did not escalate into a global conflagration. Two Dresden physicists played a decisive part in the rapid establishment of this stalemate; one of them had worked on the Manhattan Project in the USA and was later convicted in England of spying for the Soviet Union and betraying the know-how of the atomic bomb.
Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility
NASA Astrophysics Data System (ADS)
Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu
2013-11-01
Distortions caused by the neutron spectrum and by scattered neutrons are major problems in fast neutron radiography and must be addressed to improve image quality. This paper focuses on removing these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM II at Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct the influence of the neutron spectrum, and the Point Scattered Function (PScF), simulated with the Monte Carlo program MCNPX, is used to evaluate scattering effects from the object and improve image quality. The analysis demonstrates the effectiveness of these two corrections.
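A scatter correction of the kind described above can be sketched as follows: the measured image is modeled as the direct (unscattered) signal plus the direct signal convolved with a point scattered function, and the direct signal is recovered by fixed-point iteration. This is a minimal illustrative sketch; the profile values, the kernel and the iteration count are invented, not NECTAR data, and the real PScF is two-dimensional and object-dependent.

```python
# Minimal sketch: measured = direct + direct * pscf (1-D convolution),
# solved for the direct signal by fixed-point iteration.
# All numbers below are illustrative assumptions, not NECTAR data.

def convolve(signal, kernel):
    """Discrete 1-D convolution, same length as input, zero-padded borders."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def correct_scatter(measured, pscf, iterations=20):
    """Iteratively solve measured = direct + convolve(direct, pscf)."""
    direct = list(measured)  # initial guess: no scatter at all
    for _ in range(iterations):
        scatter = convolve(direct, pscf)
        direct = [max(m - s, 0.0) for m, s in zip(measured, scatter)]
    return direct

measured = [1.00, 0.95, 0.40, 0.38, 0.42, 0.96, 1.00]  # object attenuating the beam
pscf = [0.05, 0.10, 0.05]                              # assumed scatter kernel
direct = correct_scatter(measured, pscf)
print([round(v, 3) for v in direct])
```

Because the kernel integral is well below one, the iteration is a contraction and converges quickly; the corrected profile is everywhere at or below the measured one, as expected when scatter only adds signal.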
Auf Proteinjagd in der T-Zelle: Einzelmolekül-Mikroskopie
NASA Astrophysics Data System (ADS)
Brameshuber, Mario; Mörtelmaier, Manuel
2004-07-01
Observing single molecules in living cells with fluorescence microscopes is still a technical challenge. But it is worth the effort: for the first time, the molecular building blocks of life, such as proteins, can be followed directly in their activities. This permits entirely new insights into the functional mechanisms of biological cells. An interesting research object are the T cells that control the immune system. Observing single proteins can help solve the riddle of how T cells distinguish dangerous from harmless pathogens. Before single-molecule microscopes can be used routinely in biology labs, however, they must become more robust, cheaper and easier to handle.
Neuausrichtung und Konsolidierung
NASA Astrophysics Data System (ADS)
Grohmann, Heinz
With the election of Wolfgang Wetzel as chairman of the Deutsche Statistische Gesellschaft in 1972, a 32-year era began in which practical and theoretical statistics were cultivated in a balanced relationship. A regular four-year rotation of the chairmanship strengthened the community and the practical as well as the scientific work in equal measure. The annual general meetings dealt with socially topical as well as future-oriented themes, and the committees and further events provided opportunities to promote and cultivate a multitude of fields of statistics. This is reported not only in this chapter but also in Parts II and III of the volume.
Zum Wissenschaftsverständnis der modernen Evolutionsbiologie
NASA Astrophysics Data System (ADS)
Sommer, Ralf J.
Modern evolutionary biology has its origin in the work of Charles Darwin and Alfred Wallace (Darwin 1963). The common starting point of evolutionary thought is the observation that the biological world is not constant. Biological systems and all organisms living in them undergo continuous change over long periods of time. This fundamental property of biological systems makes biology a historical science and marks an important contrast to large parts of physics. Although the statement that species are mutable sounds trivial today, in the 19th century it was a revolution, since the constancy of species and of the world held a dominant position in the worldview of the time (Amundson 2005).
NASA Astrophysics Data System (ADS)
Eppmann, Helmut; Fürnrohr, Michael
Many tasks in politics, the economy and society require not only global but also regional solutions. Regional statistics are therefore indispensable for many planning and decision-making processes. The Committee for Regional Statistics of the Deutsche Statistische Gesellschaft has set itself the goal of promoting their development and use. This chapter first presents some foundations of regional statistics and the tasks of the committee. It then describes the extensive regional statistical data offered by the statistical offices of the Federation and the Länder, and its use. A supplementary section is devoted to the work of the Institut für Bau-, Stadt- und Raumforschung. The chapter closes with an outlook on the further development of the regional statistical data supply from the perspective of official statistics.
elecTUM: Umsetzung der eLearning-Strategie der Technischen Universität München
NASA Astrophysics Data System (ADS)
Rathmayer, Sabine; Gergintchev, Ivan
At TUM, a comprehensive and integrated eLearning concept was implemented that interlocks classroom study and eLearning across all areas of the university's activities. A particular focus was on creating an efficient and competitive integrated eLearning infrastructure, in view of the further rising numbers of first-year students from 2011 onward and the implementation of eBologna. Establishing a university-wide learning platform was an essential basis for implementing the eLearning strategy. The scientific and economic exploitability of the project results was ensured by active participation in numerous inter-university working groups, conferences and cooperations, above all via organizational and service models as well as innovative technical developments.
NASA Astrophysics Data System (ADS)
Haupt, Sebastian; Edler, Frank
2018-06-01
The characterization of thermoelectric materials as reference materials for Seebeck coefficients at the Physikalisch-Technische Bundesanstalt (PTB) is based on the use of gold/platinum differential thermocouples. In the case of thermoelectric materials containing silicon, gold/platinum thermocouples are unsuitable because they react with the silicon when the samples are at higher temperatures. To overcome this limitation and to expand the temperature range for the certification process, platinum/palladium thermocouples were incorporated into the measurement setup. This paper discusses the influence of the different differential thermocouples used for the measurement of the Seebeck coefficients. Results of a comparative investigation of Seebeck coefficient measurements of one metallic and two semiconducting reference materials in the temperature range from 300 K to 870 K are presented.
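The differential-thermocouple measurement described above rests on a simple working relation: the voltage measured across the sample through wires of known Seebeck coefficient gives the sample coefficient as S_sample = U/ΔT + S_wire. The sketch below illustrates that relation; all numerical values (voltage, temperatures, wire coefficient) are invented for illustration and are not PTB data.

```python
# Hedged sketch of the standard differential-thermocouple relation
#   S_sample = U / (T_hot - T_cold) + S_wire,
# correcting for the Seebeck coefficient of the measurement wires.
# All numbers are illustrative assumptions, not PTB reference data.

def seebeck_coefficient(voltage_v, t_hot_k, t_cold_k, s_wire_v_per_k):
    """Sample Seebeck coefficient, corrected for the measurement wires."""
    delta_t = t_hot_k - t_cold_k
    if delta_t == 0:
        raise ValueError("temperature difference must be non-zero")
    return voltage_v / delta_t + s_wire_v_per_k

# Example: 90 µV measured across the sample for a 5 K difference,
# with wires of an assumed Seebeck coefficient of -5 µV/K.
s = seebeck_coefficient(90e-6, 305.0, 300.0, -5e-6)
print(f"S = {s * 1e6:.1f} µV/K")  # 90/5 - 5 = 13.0 µV/K
```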
NASA Astrophysics Data System (ADS)
Bock, Th.; Ahrendt, H.; Jousten, K.
2009-10-01
This paper describes the metrological characterization of a new large area piston gauge (FRS5, Furness Rosenberg Standard) installed at the vacuum metrology laboratory of the Physikalisch-Technische Bundesanstalt (PTB). The operational procedure and the uncertainty budget for pressures between 30 Pa and 11 kPa are given. Comparisons between the FRS5 and a mercury manometer, a rotary piston gauge and a force-balanced piston gauge are described. We show that the reproducibility of the calibration values of capacitance diaphragm gauges is enhanced by a factor of 6 compared with a static expansion primary standard (SE2). Improvements of the SE2 performance by reducing the number of expansions and smaller uncertainties of expansion ratios are discussed.
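The principle behind a piston gauge of the kind described above is the pressure balance: a known mass on a piston of known effective area generates the pressure p = m·g/A_eff, with the effective area corrected for thermal expansion. The sketch below shows this working equation; the mass, local gravity, area and expansion coefficient are illustrative assumptions, not FRS5 parameters.

```python
# Sketch of the basic pressure-balance equation of a piston gauge:
#   p = m * g_loc / (A0 * (1 + alpha * (t - t_ref)))
# with the effective piston area corrected for thermal expansion.
# All numbers below are illustrative, not FRS5 data.

def piston_gauge_pressure(mass_kg, g_loc, area_m2,
                          alpha_per_k=0.0, t_c=20.0, t_ref_c=20.0):
    """Pressure generated by a mass on a piston of known effective area."""
    area_t = area_m2 * (1.0 + alpha_per_k * (t_c - t_ref_c))
    return mass_kg * g_loc / area_t

# A large-area gauge: 1 kg on a piston of 10 cm^2 effective area.
p = piston_gauge_pressure(1.0, 9.81260, 10.0e-4)
print(f"p = {p:.1f} Pa")  # about 9.8 kPa, i.e. within the 30 Pa .. 11 kPa range
```

A large effective area is what makes such low pressures accessible: for a fixed minimum practical mass load, the generated pressure scales inversely with the piston area.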
Novel reference radiation fields for pulsed photon radiation installed at PTB.
Klammer, J.; Roth, J.; Hupe, O.
2012-09-01
Currently, ∼70 % of the occupationally exposed persons in Germany work in pulsed radiation fields, mainly in the medical sector. It has been known for a few years that active electronic dosemeters exhibit considerable deficits, or can even fail completely, in pulsed fields. Type test requirements for dosemeters exist only for continuous radiation. Owing to the need for a reference field for pulsed photon radiation, and in view of the upcoming type test requirements for dosemeters in pulsed radiation, the Physikalisch-Technische Bundesanstalt has developed a novel X-ray reference field for pulsed photon radiation in cooperation with a manufacturer. This reference field, geared to the main applications in the field of medicine, has been well characterised and is now available for research and for type testing of dosemeters in pulsed photon radiation.
Alpha-induced reactions on selenium between 11 and 15 MeV
NASA Astrophysics Data System (ADS)
Fiebiger, Stefan; Slavkovská, Zuzana; Giesen, Ulrich; Göbel, Kathrin; Heftrich, Tanja; Heiske, Annett; Reifarth, René; Schmidt, Stefan; Sonnabend, Kerstin; Thomas, Benedikt; Weigand, Mario
2017-07-01
The production of 77,79,85,85mKr and 77Br via the Se(α,x) reaction was investigated between Eα = 11 and 15 MeV using the activation technique. The irradiation of natural selenium targets on aluminum backings was conducted at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany. The spectroscopic analysis of the reaction products was performed using a high-purity germanium detector located at PTB and a low-energy photon spectrometer at the Goethe University Frankfurt, Germany. Thick-target yields were determined, and the corresponding energy-dependent production cross sections of 77,79,85,85mKr and 77Br were calculated from them. Good agreement between the experimental data and theoretical predictions using the TALYS-1.6 code was found.
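The step from thick-target yields to energy-dependent cross sections can be sketched from the standard activation relation: for a beam fully stopped in the target, Y(E) = n ∫ σ(E')/(dE'/dx) dE', so σ(E) ≈ (1/n)·(dE/dx)·dY/dE. The finite-difference sketch below illustrates this; the yields, stopping powers and atom density are invented numbers, not the selenium data of the paper.

```python
# Hedged sketch: cross sections from thick-target yields via
#   sigma(E) ≈ (1/n) * (dE/dx)(E) * dY/dE
# All input numbers are illustrative, not the paper's selenium data.

def cross_sections(energies_mev, yields, stopping_mev_cm, n_atoms_cm3):
    """Finite-difference estimate of sigma (cm^2) between grid points."""
    sigmas = []
    for i in range(len(energies_mev) - 1):
        dy_de = (yields[i + 1] - yields[i]) / (energies_mev[i + 1] - energies_mev[i])
        s_mid = 0.5 * (stopping_mev_cm[i] + stopping_mev_cm[i + 1])
        sigmas.append(dy_de * s_mid / n_atoms_cm3)
    return sigmas

energies = [11.0, 12.0, 13.0, 14.0, 15.0]          # MeV
yields = [1.0e-9, 3.0e-9, 7.0e-9, 1.4e-8, 2.4e-8]  # reactions per projectile
stopping = [500.0, 480.0, 460.0, 440.0, 420.0]     # MeV/cm (illustrative)
n_atoms = 3.7e22                                   # atoms/cm^3 (illustrative)

sigmas = cross_sections(energies, yields, stopping, n_atoms)
print(["%.4f mb" % (s / 1e-27) for s in sigmas])   # rising toward 15 MeV
```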
NASA Astrophysics Data System (ADS)
Hornbogen, Erhard; Eggeler, Gunther; Werner, Ewald
Learning objective: This chapter conveys a first impression of materials, which must possess certain technical properties, should be easy to manufacture and must satisfy the demand for economic efficiency. We discuss materials in simple, general and specific contexts and become acquainted with the field of materials science and engineering, which comprises both materials science and materials technology. We gain a first impression of the microscopic structure of the four material groups: metals, glasses/ceramics, polymers and composites. We get to know some important material properties. We then turn to reliable data on material properties, and in this connection the testing, standardization and designation of materials are considered. Finally, we briefly address the historical development of materials and introduce the concept of sustainability.
NASA Astrophysics Data System (ADS)
Guerra, M.; Sampaio, J. M.; Madeira, T. I.; Parente, F.; Indelicato, P.; Marques, J. P.; Santos, J. P.; Hoszowska, J.; Dousse, J.-Cl.; Loperetti, L.; Zeeshan, F.; Müller, M.; Unterumsberger, R.; Beckhoff, B.
2015-08-01
Fluorescence yields (FYs) for the Ge L shell were determined by a theoretical and two experimental groups within the framework of the International Initiative on X-Ray Fundamental Parameters Collaboration. Calculations were performed using the Dirac-Fock method, including relativistic and QED corrections. The experimental value of the L3 FY, ωL3, was determined at the Physikalisch-Technische Bundesanstalt undulator beamline of the synchrotron radiation facility BESSY II in Berlin, Germany, and the Lα1,2 and Lβ1 line widths were measured at the Swiss Light Source, Paul Scherrer Institute, Switzerland, using monochromatized synchrotron radiation and a von Hamos x-ray crystal spectrometer. The measured fluorescence yields and line widths are compared to the corresponding calculated values.
Ginger: Measuring Gravitomagnetic Effects by Means of Light
NASA Astrophysics Data System (ADS)
Tartaglia, Angelo
2015-01-01
GINGER is a proposal for a new experiment aimed at detecting the gravito-magnetic Lense-Thirring effect at the surface of the Earth. A three-dimensional set of ring lasers will be mounted on a rigid "monument". In a ring laser, a light beam traveling counterclockwise is superposed on another beam traveling in the opposite sense. The anisotropy in the propagation leads to standing waves with slightly different frequencies in the two directions; the resulting beat frequency is proportional to the absolute rotation rate in space, including the gravito-magnetic drag. The experiment is planned to be built in the Gran Sasso National Laboratories in Italy and is based on an international collaboration among four Italian groups, the Technische Universität München and the University of Canterbury in Christchurch (NZ).
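The rotation sensitivity described above is the Sagnac effect: for a ring of enclosed area A, perimeter P and wavelength λ, the beat frequency is Δf = 4A(n̂·Ω)/(λP). A minimal sketch, with an assumed square ring and an approximate Gran Sasso latitude (both numbers illustrative, not GINGER's actual design):

```python
import math

# Sagnac beat frequency of a ring laser: df = 4 * A * (n . Omega) / (lambda * P),
# where A is the enclosed area, P the perimeter, lambda the optical wavelength
# and (n . Omega) the rotation-rate component along the ring normal.
def sagnac_beat(area_m2, perimeter_m, wavelength_m, omega_normal):
    return 4.0 * area_m2 * omega_normal / (wavelength_m * perimeter_m)

OMEGA_EARTH = 7.2921e-5        # rad/s, Earth's sidereal rotation rate
lat = math.radians(42.4)       # approximate Gran Sasso latitude (assumed)
side = 4.0                     # m, assumed square-ring side length
wavelength = 632.8e-9          # m, typical HeNe ring-laser wavelength

# For a horizontal ring the normal is the local vertical, so the projected
# rotation rate is Omega * sin(latitude).
df = sagnac_beat(side ** 2, 4 * side, wavelength, OMEGA_EARTH * math.sin(lat))
print(f"beat frequency ~ {df:.0f} Hz")
```

For these assumed numbers the Earth-rotation beat is a few hundred hertz; the Lense-Thirring drag is many orders of magnitude smaller, which is why GINGER requires extreme frequency stability.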
Roudbari, Masoud; Yaghmaei, Minoo
2007-09-01
One of the aims of management priorities in medical universities is the evaluation of learning in educational departments in order to prevent educational retardation and to improve the quality of education. The aim of this study was to evaluate the interns' learning in the obstetrics and gynecology (O&G) department at Zahedan University of Medical Sciences (ZUMS). The study was performed in ZUMS, Iran, in 2002-2003 on all interns at the O&G department, including 30 men and 40 women. For data collection, a questionnaire was used and included some questions regarding the common emergencies and diseases in O&G, together with different learning indicators such as reading, observation, hearing, management, and the capability of management. The data were analyzed using descriptive statistics, tables, t test, and chi-square test using the SPSS software. The mean percentages of learning indicators of observation, bedside teaching, supervised management, and personal management in the common emergencies and diseases of O&G in male interns were significantly lower than those in female interns. Also, the mean percentages of managing capabilities were 12% and 70.5% in common emergencies and 14.2% and 59.3% in common diseases for male and female interns, respectively. The chi-square test showed a significant difference between the mean percentages of the managing capabilities in male and female interns for the majority of the common emergencies and diseases. Also, the chi-square test revealed a significant relationship between the learning indicators and the interns' managing capabilities for common emergencies and diseases. Some learning indicators in the male interns were very low. This needs urgent improvement of the learning quality in the O&G department, especially for the male interns, particularly those who are supposed to work in the deprived areas of the country after graduation in the public service.
Degitz, Klaus; Ochsendorf, Falk
2017-07-01
Acne is a chronic disease with a high prevalence among adolescents. Its principal pathogenic factors (and their clinical correlates) are increased sebum production (seborrhoea), follicular hyperkeratosis (comedones) and perifollicular inflammatory processes (papulopustules). The disease is modulated by endogenous influences (androgens, IGF-1, neuroendocrine factors) and exogenous ones (Propionibacterium acnes, diet, mechanical irritation, ingredients of medical or cosmetic topical preparations). Acne is associated with considerable morbidity and can markedly impair quality of life even in mild cases. Effective topical and systemic treatment modalities are available. Optimal treatment requires stage-appropriate management and continuous medical supervision of the patient over the required treatment period. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Victor F. Weisskopf (1908 - 2002): Physikgeschichte
NASA Astrophysics Data System (ADS)
Jacobi, Manfred
2002-11-01
Universally educated and broadly interested people have become rare in our time. Victor F. Weisskopf was one of them. His intellectual horizon encompassed not only his own field, physics, where he distinguished himself through fundamental work in quantum field theory, nuclear physics and elementary particle physics. Beyond that, art and music, as well as an extraordinary commitment to social and political causes, formed the cornerstones of his life. Working for understanding between the power blocs during the Cold War was as natural to him as the desire to bring the new ideas of physics to a broad audience. Science appeared to him the appropriate means of working for the good of humanity.
Zeitspiel ist keine Alternative - Warum der Wandel zur Pflicht wird
NASA Astrophysics Data System (ADS)
Dieper, Stephan
"Paths are made by walking." (Franz Kafka) The world of digitalization is full of paths that someone walked before there was a path. Some of these paths turned out to be dead ends, some to be shortcuts, and others grew into entire road networks and cities. The energy world will not be spared by the digital transformation. Smart metering systems and the new structures that come with them open up market-entry opportunities for competitors from outside the energy sector. Energy utilities (EVUs) must adjust to the fact that this permanent change will never end. Yet options open up for the EVUs as well. To succeed, they must learn to set out without knowing the exact destination.
NASA Astrophysics Data System (ADS)
Reiche, Katherina
Digitalization is reaching every area of life and business. Municipal enterprises too, and the municipal energy sector in particular, face potentially disruptive developments. Stadtwerke (municipal utilities) have already mastered many challenges successfully and take a positive view of digitalization as well. In many places, municipal enterprises are already actively helping to shape the digital transformation. This article works out the assets of municipal enterprises and presents strategies and options for municipal energy suppliers in dealing with the digital transformation. It becomes apparent that the political and regulatory environment is decisive for the success of digitalization. Municipal enterprises need the same market-access conditions as other players. Moreover, municipal enterprises benefit from several competitive advantages, such as the high level of trust placed in them by their customers and extensive know-how in data management.
NASA Astrophysics Data System (ADS)
Fließbach, Torsten
In statistical physics we deal with systems of very many particles. Examples are the atoms of a gas or a liquid, the phonons of a solid, or the photons in a plasma. The laws governing the motion of individual particles are given by mechanics or quantum mechanics. Because of the large number of particles (for example, N = 6·10^23 for one mole of a gas), however, the equations of motion cannot be evaluated. The result of such an evaluation, say the trajectories of 6·10^23 particles, would in any case be uninteresting and irrelevant. These systems are therefore treated statistically, that is, on the basis of assumptions about the probability of different trajectories or states.
NASA Astrophysics Data System (ADS)
Grulich, M.; Koop, A.; Ludewig, P.; Gutsmiedl, J.; Kugele, J.; Ruck, T.; Mayer, I.; Schmid, A.; Dietmann, K.
2015-09-01
SMARD (Shape Memory Alloy Reusable Deployment Mechanism) is an experiment for a sounding rocket developed by students at Technische Universität München (TUM). It was launched in March 2015 on REXUS 18 (Rocket Experiments for University Students). The goal of SMARD was to develop a solar panel hold-down and release mechanism (HDRM) for a CubeSat that uses shape memory alloys (SMA) for repeatable actuation and can be reset quickly. This paper describes the technical approach as well as the technological development and design of the experiment platform, which is capable of proving the functionality of the deployment mechanism. Furthermore, the realization of the experiment and the results of the flight campaign are presented. Finally, future applications of the developed HDRM and its possible further developments are discussed.
NASA Technical Reports Server (NTRS)
Paul, Klaus G.
1995-01-01
This paper describes the work done at the Lehrstuhl für Raumfahrttechnik (LRT) at the Technische Universität München to examine particle impacts into germanium surfaces that were flown on board the LDEF satellite. Besides the description of the processing of the samples, a brief overview of the particle launchers at our institute is given, together with descriptions of the impact morphology of high- and hypervelocity particles in germanium. Since germanium is a brittle, almost glass-like material, the impact morphology may also be of interest to anyone dealing with materials such as optics and solar cells. The main focus of our investigations is to learn about the impacting particles' properties, for example mass, velocity and direction. This is done by examining the morphology, various geometry parameters, crater obliqueness and crater volume.
Status and Plans for the Vienna VLBI and Satellite Software (VieVS 3.0)
NASA Astrophysics Data System (ADS)
Gruber, Jakob; Böhm, Johannes; Böhm, Sigrid; Girdiuk, Anastasiia; Hellerschmied, Andreas; Hofmeister, Armin; Krásná, Hana; Kwak, Younghee; Landskron, Daniel; Madzak, Matthias; Mayer, David; McCallum, Jamie; Plank, Lucia; Schartner, Matthias; Shabala, Stas; Teke, Kamil; Sun, Jing
2017-04-01
The Vienna VLBI and Satellite Software (VieVS) is a geodetic analysis software package developed and maintained at Technische Universität Wien (TU Wien) with contributions from groups all over the world. It is used both for academic purposes in university courses and for providing Very Long Baseline Interferometry (VLBI) analysis results to the geodetic community. Written in a modular structure in Matlab, VieVS offers easy access to the source code and the possibility to adapt the programs for particular purposes. The new version 3.0, released in early 2017, includes several new features, e.g., improved scheduling capabilities for observing quasars and satellites. This poster gives an overview of all VLBI-related activities in Vienna and provides an outlook on future plans concerning the Vienna VLBI and Satellite Software (VieVS).
Reinraumtechnik für die Medizintechnik
NASA Astrophysics Data System (ADS)
Petek, Max; Jungbluth, Martin; Krampe, Erhard
Cleanroom technology is today an indispensable part of manufacturing life-science products in the pharmaceutical, food, cosmetics and medical-device sectors. Measured against the long history of medical technology, however, it is a very young discipline. Although the significance of germs, and a correct appreciation of their size, was recognized very early by Paracelsus, no specific or consistently applied hygiene regulations were derived from this insight. The first known technical implementation of hygiene recommendations goes back to the Frenchman François Nicolas Appert, who developed an aseptic filling method for foodstuffs and published it in 1810 [1]. The first documented medical implementation was the set of hygiene rules for physicians that Ignaz Philipp Semmelweis introduced after 1847 at the Vienna maternity clinic [2].
Internet discussion forums as part of a student-centred teaching concept of pharmacology.
Sucha, Michael; Engelhardt, Stefan; Sarikas, Antonio
2013-01-01
The world wide web opens up new opportunities to interconnect electronic and classroom teaching and to promote active student participation. In this project article we describe the use of internet discussion forums as part of a student-centred teaching concept of pharmacology and discuss its advantages and disadvantages based on evaluation data and the current literature. Final-year medical students at the Technische Universität München (Munich, Germany) taking the pharmacology elective moderated an internet forum in which all students could discuss pharmacology-related questions. Evaluation results from forum participants and elective students demonstrated a learning benefit of internet forums in pharmacology teaching. Internet discussion forums offer an easy-to-implement and effective way to actively engage students and increase the learning benefit of electronic and classroom teaching in pharmacology.
PREFACE: International Conference on Topics in Astroparticle and Underground Physics (TAUP 2011)
NASA Astrophysics Data System (ADS)
Oberauer, Lothar; Raffelt, Georg; Wagner, Robert
2012-07-01
The 12th edition of the International Conference on Topics in Astroparticle and Underground Physics (TAUP 2011) was held 5-9 September 2011 in Munich, for the first time in Germany. It was organized by the Max Planck Institute for Physics (MPP), the Technical University Munich (TUM) and the Cluster of Excellence 'Origin and Structure of the Universe'. The conference was held in the 'Künstlerhaus', a traditional downtown venue for artistic festivities. The meeting attracted 317 participants (61 of whom were women) from 29 countries, see figure below. The topics covered by the meeting were cosmology and particle physics, dark matter and its detection, neutrino physics and astrophysics, gravitational waves, high-energy astrophysics and cosmic rays, and the various interfaces between these areas. The scientific sessions consisted of five mornings of plenary talks, four afternoons of parallel sessions, and an evening poster session. The co-founder of the conference series, Alessandro Bottino, decided to retire from the position of chairman of the TAUP Steering Committee after the completion of TAUP 2011. On behalf of all followers of this series, we thank him for having started these inspiring events and for his many years of dedicated service. We thank all speakers, conveners and participants, as well as the members of the organizing, steering and international advisory committees, for making this a successful and memorable meeting.
Lothar Oberauer, Georg Raffelt, Robert Wagner
Proceedings editors
[The preface is followed by the lists of the International Advisory Committee, the TAUP Steering Committee (chaired by A Bottino), the parallel session conveners and the Organizing Committee (co-chaired by L Oberauer, Technische Universität München, and G Raffelt, Max-Planck-Institut für Physik), together with the conference photograph.]
NASA Astrophysics Data System (ADS)
Schreiner, Klaus
The chapter "Combustion Engines" provides an introduction to the important and extensive field of combustion engines. Readers who have never heard anything about combustion engines are advised to read Section 1 first. It is deliberately written in a simple and illustrative manner to ease access to the subject. More in-depth information can then be found selectively in Sections 2 to 9, which are written so that they need not be read in sequence; each builds only on Section 1. Section 1 itself is arranged so that the most important material comes right at the beginning, with the topics becoming more specialised the further one reads. Readers with only limited interest in combustion engines can therefore stop reading Section 1 at any point and will still have learned the most important material up to there.
NASA Astrophysics Data System (ADS)
Tabelow, Karsten
In recent years, imaging methods have secured a firm place in medicine and revolutionized medical research and diagnostics. They give physicians and researchers a view into living tissue. With advancing technical development, the methods deliver ever higher resolutions, sharper images and more detail. Imaging methods are inconceivable without mathematics, from the reconstruction of images from the measured signals to the evaluation of the image information. Analysing the large number of image data points (voxels, volume elements, as opposed to two-dimensional pixels, picture elements) frequently requires methods of mathematical statistics in particular. Random measurement errors manifest themselves as image noise: the images appear blurred and distorted, which makes diagnostic decisions more difficult.
NASA Astrophysics Data System (ADS)
Voigt, Annette
In 1859, Charles Darwin published "On the Origin of Species". His theory of evolution is probably the most spectacular example of a scientific theory of great societal relevance. Its various facets were discussed controversially in public, among them its application to explaining the conditions and processes of human societies. In part, the supposed nature of the natural world (seemingly independent of societal interests) was invoked to explain and legitimize social conditions or to legitimize political ideologies (social Darwinism): society, the argument ran, works just as Darwin had explained nature, governed for instance by competitive struggle, selection and division of labour, with success going to those who adapt best to the prevailing conditions.
Motivating first-year university students by interdisciplinary study projects
NASA Astrophysics Data System (ADS)
Koch, Franziska D.; Dirsch-Weigand, Andrea; Awolin, Malte; Pinkelman, Rebecca J.; Hampe, Manfred J.
2017-01-01
In order to increase student commitment from the beginning of students' university careers, the Technische Universität Darmstadt has introduced interdisciplinary study projects involving first-year students from the engineering, natural, social and history, economics and/or human sciences departments. The didactic concept includes sophisticated task design, individual responsibility and a differentiated support system. Using a self-determination theory framework, this study examined the effects of the projects based on survey findings from two projects with more than 1000 students. The results showed that the projects were successful in fulfilling students' basic psychological needs and in promoting students' academic engagement. Basic psychological needs were found to be significant predictors of academic engagement. These findings suggest that interdisciplinary study projects can potentially contribute to improving higher education as they fulfil students' basic psychological needs for competence, relatedness and autonomy and enhance students' academic engagement.
Cross sections for ionization of tetrahydrofuran by protons at energies between 300 and 3000 keV
NASA Astrophysics Data System (ADS)
Wang, Mingjie; Rudek, Benedikt; Bennett, Daniel; de Vera, Pablo; Bug, Marion; Buhr, Ticia; Baek, Woon Yong; Hilgers, Gerhard; Rabus, Hans
2016-05-01
Double-differential cross sections for ionization of tetrahydrofuran by protons with energies from 300 to 3000 keV were measured at the Physikalisch-Technische Bundesanstalt ion accelerator facility. The electrons emitted at angles between 15° and 150° relative to the ion-beam direction were detected with an electrostatic hemispherical electron spectrometer. Single-differential and total ionization cross sections have been derived by integration. The experimental results are compared to the semiempirical Hansen-Kocbach-Stolterfoht model as well as to the recently reported method based on the dielectric formalism. The comparison to the latter showed good agreement with experimental data in a broad range of emission angles and energies of secondary electrons. The scaling property of ionization cross sections for tetrahydrofuran was also investigated. Compared to molecules of different size, the ionization cross sections of tetrahydrofuran were found to scale with the number of valence electrons at large impact parameters.
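The integration step mentioned above (double-differential to single-differential to total cross sections) can be sketched with synthetic data; the grid and shape function below are invented and do not reproduce the measured values:

```python
import numpy as np

# Synthetic sketch (not the measured PTB data): integrating double-differential
# ionization cross sections d2s/(dW dOmega) over emission angle, with the
# sin(theta) solid-angle weight, gives the single-differential cross section
# ds/dW; a further integration over electron energy W gives the total cross
# section.  The grid and shape function are invented for illustration.
trapz = getattr(np, "trapezoid", None) or np.trapz  # NumPy 1.x/2.x compatible

W = np.linspace(10.0, 500.0, 50)                    # secondary-electron energy, eV
theta = np.radians(np.linspace(15.0, 150.0, 28))    # emission angle, rad
ddcs = np.exp(-W[:, None] / 100.0) * (1.0 + np.cos(theta) ** 2)  # arbitrary units

sdcs = 2.0 * np.pi * trapz(ddcs * np.sin(theta), theta, axis=1)  # ds/dW
total = trapz(sdcs, W)                              # total ionization cross section
print(total)
```

In a real analysis the angular range must also be extrapolated beyond the measured 15°-150° interval before integrating, which the sketch omits.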
Photomask linewidth comparison by PTB and NIST
NASA Astrophysics Data System (ADS)
Bergmann, D.; Bodermann, B.; Bosse, H.; Buhr, E.; Dai, G.; Dixson, R.; Häßler-Grohne, W.; Hahm, K.; Wurm, M.
2015-10-01
We report the initial results of a recent bilateral comparison of linewidth or critical dimension (CD) calibrations on photomask line features between two national metrology institutes (NMIs): the National Institute of Standards and Technology (NIST) in the United States and the Physikalisch-Technische Bundesanstalt (PTB) in Germany. For the comparison, a chrome on glass (CoG) photomask was used which has a layout of line features down to 100 nm nominal size. Different measurement methods were used at both institutes. These included: critical dimension atomic force microscopy (CD-AFM), CD scanning electron microscopy (CD-SEM) and ultraviolet (UV) transmission optical microscopy. The measurands are CD at 50 % height of the features as well as sidewall angle and line width roughness (LWR) of the features. On the isolated opaque features, we found agreement of the CD measurements at the 3 nm to 5 nm level on most features - usually within the combined expanded uncertainties of the measurements.
Predicting water supply and actual evapotranspiration of street trees
NASA Astrophysics Data System (ADS)
Wessolek, Gerd; Heiner, Moreen; Trinks, Steffen
2017-04-01
It is well known that street trees cool the air in summer by transpiration and shading, and that they also reduce runoff. It is difficult, however, to determine whether or not trees suffer from water shortage. This contribution focuses on predicting the water supply, actual evapotranspiration and runoff of street trees using easily available climate data (precipitation, potential evapotranspiration) and site characteristics (water retention, space, sealing degree, groundwater depth). These parameters were used as input data for Hydro-Pedotransfer-Functions (HPTFs) that allow the annual water budget to be estimated. The results yield statements on the water supply of trees, drought stress, and the additional water demand to be met by irrigation. The procedure also analyses to what extent the surrounding, partly sealed surfaces deliver water to the trees. Four representative street canyons of the city of Berlin were analysed and evaluated within a training programme for M.A. students of "Urban Eco-system Science" at the Technische Universität Berlin.
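Underlying such estimates is the annual water balance P = ETa + R + ΔS, which any pedotransfer approach must satisfy. A minimal bookkeeping sketch with assumed numbers (not the study's HPTFs):

```python
# Minimal annual water-budget bookkeeping for a partly sealed street-tree site.
# The study's Hydro-Pedotransfer-Functions are empirical regression equations;
# this sketch only illustrates the balance they must satisfy:
#   precipitation P = actual evapotranspiration ETa + runoff R + storage change dS.
# All numbers are assumed for illustration.
def water_budget(precip_mm, sealing_degree, runoff_coeff_sealed, eta_mm):
    """Return (runoff, storage_change) in mm/a; sealed area sheds part of P."""
    runoff = precip_mm * sealing_degree * runoff_coeff_sealed
    storage_change = precip_mm - eta_mm - runoff
    return runoff, storage_change

P = 570.0   # mm/a, roughly Berlin's annual precipitation (assumed)
R, dS = water_budget(P, sealing_degree=0.6, runoff_coeff_sealed=0.8, eta_mm=330.0)
print(R, round(dS, 1))   # negative dS: the tree draws on soil or groundwater storage
```

A negative storage change is the quantitative signature of drought stress or of an additional irrigation demand in such a balance.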
Investigation of Periodic Nuclear Decay Data with Spectral Analysis Techniques
NASA Astrophysics Data System (ADS)
Javorsek, D.; Sturrock, P.; Buncher, J.; Fischbach, E.; Gruenwald, T.; Hoft, A.; Horan, T.; Jenkins, J.; Kerford, J.; Lee, R.; Mattes, J.; Morris, D.; Mudry, R.; Newport, J.; Petrelli, M.; Silver, M.; Stewart, C.; Terry, B.; Willenberg, H.
2009-12-01
We provide the results of a spectral analysis of nuclear decay experiments displaying unexplained periodic fluctuations. The analyzed data were from 56Mn decay reported by the Children's Nutrition Research Center in Houston, 32Si decay reported by an experiment performed at the Brookhaven National Laboratory, and 226Ra decay reported by an experiment performed at the Physikalisch-Technische Bundesanstalt in Germany. All three data sets possess the same primary frequency mode, consisting of an annual period. Additionally, a spectral comparison of the local ambient temperature, atmospheric pressure, relative humidity, Earth-Sun distance, and the plasma speed and latitude of the heliospheric current sheet (HCS) was performed. Following analysis of these six possible causal factors, their reciprocals, and their linear combinations, a possible link between nuclear decay rate fluctuations and the linear combination of the HCS latitude and 1/R motivates the search for a possible mechanism with such properties.
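A simple version of such a spectral analysis, a discrete-Fourier periodogram picking out an annual period in synthetic decay-rate residuals (not the study's data or exact method), might look like this:

```python
import numpy as np

# Synthetic daily decay-rate residuals with a 0.1% annual modulation plus
# noise, followed by a discrete-Fourier periodogram in cycles per year.
# Purely illustrative; this is neither the study's data nor its exact method.
rng = np.random.default_rng(0)
t_days = np.arange(5 * 365)                        # five years of daily data
rate = 1.0 + 1e-3 * np.cos(2 * np.pi * t_days / 365.25) \
           + 2e-4 * rng.standard_normal(t_days.size)

resid = rate - rate.mean()
power = np.abs(np.fft.rfft(resid)) ** 2
freq_cpy = np.fft.rfftfreq(t_days.size, d=1.0 / 365.25)  # cycles per year

peak = freq_cpy[np.argmax(power[1:]) + 1]          # skip the DC bin
print(f"dominant period ~ {1.0 / peak:.2f} yr")
```

Real decay records are unevenly sampled and gappy, so analyses of this kind typically use Lomb-Scargle-type periodograms rather than a plain FFT.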
NASA Astrophysics Data System (ADS)
Michotte, C.; Courte, S.; Ratel, G.; Kossert, K.; Nähle, O. J.
2009-01-01
In 2009, the Physikalisch-Technische Bundesanstalt (PTB), Germany, submitted a sample of known activity of 64Cu to the International Reference System (SIR) for activity comparison at the Bureau International des Poids et Mesures (BIPM). The value of the activity submitted was about 9.3 MBq. The result of this new comparison has been approved for publication by Section II of the Consultative Committee for Ionizing Radiation (CCRI(II)), comparison identifier BIPM.RI(II)-K1.Cu-64. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCRI Section II, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Novel spectrometers for environmental dose rate monitoring.
Kessler, P; Behnke, B; Dabrowski, R; Dombrowski, H; Röttger, A; Neumaier, S
2018-07-01
A new generation of dosemeters, based on the scintillators LaBr3, CeBr3 and SrI2 read out with conventional photomultipliers, to be used in the field of environmental gamma-radiation monitoring, was investigated. The main features of these new instruments, and especially their outdoor performance, studied in long-term investigations under real weather conditions, are presented. The systems were tested at the reference sites for environmental radiation of the Physikalisch-Technische Bundesanstalt. The measurements are compared with those of well characterized classical dose rate reference instruments to demonstrate the suitability of the new spectrometers for environmental dose rate monitoring even in adverse weather conditions. Their potential to replace the (mainly Geiger-Müller based) dose rate meters operated in about 5000 European early warning network stations, as well as in environmental radiation monitoring in general, is shown. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Stereo particle image velocimetry set up for measurements in the wake of scaled wind turbines
NASA Astrophysics Data System (ADS)
Campanardi, Gabriele; Grassi, Donato; Zanotti, Alex; Nanos, Emmanouil M.; Campagnolo, Filippo; Croce, Alessandro; Bottasso, Carlo L.
2017-08-01
Stereo particle image velocimetry measurements were carried out in the boundary layer test section of the Politecnico di Milano large wind tunnel to survey the wake of a scaled wind turbine model designed and developed by Technische Universität München. The stereo PIV instrumentation was set up to survey the three velocity components on cross-flow planes at different longitudinal locations. The area of investigation covered the entire extent of the wind turbine's wake, which was scanned using two separate traversing systems for the laser and the cameras. This instrumentation set-up made it possible to rapidly obtain high-quality results suitable for characterising the behaviour of the flow field in the wake of the scaled wind turbine. This is very useful for evaluating the performance of wind farm control methodologies based on wake redirection and for validating CFD tools.
Traceable terahertz power measurement from 1 THz to 5 THz.
Steiger, Andreas; Kehrt, Mathias; Monte, Christian; Müller, Ralf
2013-06-17
Germany's national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), calibrates the spectral responsivity of THz detectors at 2.52 THz traceably to the International System of Units. The terahertz detector calibration facility is equipped with a standard detector calibrated against a cryogenic radiometer at this frequency. In order to extend this service to a broader spectral range in the THz region, a new standard detector was developed. This detector is based on a commercial thermopile detector. Its absorber was modified and characterized by spectroscopic methods with respect to its absorptance and reflectance from 1 THz to 5 THz and at the wavelength of a helium-neon laser in the visible spectral range. This offers the possibility of tracing the THz power responsivity scale back to the more accurate responsivity scale in the visible spectral range and thereby significantly reducing the uncertainty of detector calibrations in the THz range.
Measuring and interpreting X-ray fluorescence from planetary surfaces.
Owens, Alan; Beckhoff, Burkhard; Fraser, George; Kolbe, Michael; Krumrey, Michael; Mantero, Alfonso; Mantler, Michael; Peacock, Anthony; Pia, Maria-Grazia; Pullan, Derek; Schneider, Uwe G; Ulm, Gerhard
2008-11-15
As part of a comprehensive study of X-ray emission from planetary surfaces, and in particular from the planet Mercury, we have measured fluorescent radiation from a number of planetary analog rock samples using monochromatized synchrotron radiation provided by the BESSY II electron storage ring. The experiments were carried out using a purpose-built X-ray fluorescence (XRF) spectrometer chamber developed by the Physikalisch-Technische Bundesanstalt, Germany's national metrology institute. The XRF instrumentation is absolutely calibrated and allows for reference-free quantitation of rock sample composition, taking into account secondary photon- and electron-induced enhancement effects. The fluorescence data, in turn, have been used to validate a planetary fluorescence simulation tool based on the GEANT4 transport code. This simulation can be used as a mission analysis tool to predict the time-dependent orbital XRF spectral distributions from planetary surfaces throughout the mapping phase.
Comparison of Fixed Point Realisations between Inmetro and PTB
NASA Astrophysics Data System (ADS)
Santiago, J. F. N.; Petkovic, S. G.; Teixeira, R. N.; Noatsch, U.; Thiele-Krivoj, B.
2003-09-01
An interlaboratory comparison in the temperature range between -190 °C and 420 °C was organised between the National Institute of Metrology, Normalisation and Industrial Quality (Inmetro), Brazil, and the Physikalisch-Technische Bundesanstalt (PTB), Germany. This comparison followed the same protocol as the EUROMET project 552 comparison and was carried out in the years 2001-2002. A 25 Ω standard platinum resistance thermometer (SPRT) was calibrated at the temperature fixed points of Ar, Hg, the triple point of water (TPW), Ga, In, Sn and Zn, with at least three realisations of each fixed point at both institutes. The uncertainty evaluation given by Inmetro is presented, and some differences in the calibration procedures and in the measuring instruments used are described. The agreement between the laboratories' results was not in all cases within the combined uncertainties. Results of other comparisons are presented, which give additional information on the equivalence of the realised temperature scales.
Final Report on the Key Comparison CCM.P-K4.2012 in Absolute Pressure from 1 Pa to 10 kPa
Ricker, Jacob; Hendricks, Jay; Bock, Thomas; Dominik, Pražák; Kobata, Tokihiko; Torres, Jorge; Sadkovskaya, Irina
2017-01-01
The report summarizes the Consultative Committee for Mass (CCM) key comparison CCM.P-K4.2012 for absolute pressure spanning the range of 1 Pa to 10 000 Pa. The comparison was carried out at six National Metrology Institutes (NMIs): the National Institute of Standards and Technology (NIST), the Physikalisch-Technische Bundesanstalt (PTB), the Czech Metrology Institute (CMI), the National Metrology Institute of Japan (NMIJ), the Centro Nacional de Metrología (CENAM), and the D.I. Mendeleyev Institute for Metrology (VNIIM). The comparison was made via a calibrated transfer standard measured at each NMI's facility using its laboratory standard during the period May 2012 to September 2013. The transfer package constructed for this comparison performed as designed and provided a stable artifact for comparing laboratory standards. Overall, the participants were found to be statistically equivalent to the key comparison reference value. PMID:28216793
Characterization of the UV detector of Solar Orbiter/Metis
NASA Astrophysics Data System (ADS)
Uslenghi, Michela; Schühle, Udo H.; Teriaca, Luca; Heerlein, Klaus; Werner, Stephan
2017-08-01
Metis, one of the instruments of the ESA mission Solar Orbiter (to be launched in February 2019), is a coronagraph able to perform broadband polarization imaging in the visible range (580-640 nm) and narrow-band imaging in the UV (H I Lyman-α, 121.6 nm). The detector of the UV channel is an intensified camera, based on a Star-1000 rad-hard CMOS APS coupled via a 2:1 fiber-optic taper to a single-stage microchannel plate intensifier, sealed with an entrance MgF2 window and provided with an opaque KBr photocathode. Before integration into the instrument, the UVDA (UV Detector Assembly) Flight Model was characterized at the MPS laboratory and calibrated in the UV range using the detector calibration beamline at the Metrology Light Source synchrotron of the Physikalisch-Technische Bundesanstalt (PTB). Linearity, spectral calibration, and response uniformity at 121.6 nm have been measured. Preliminary results are reported in this paper.
ESPACE - a geodetic Master's program for the education of Satellite Application Engineers
NASA Astrophysics Data System (ADS)
Hedman, K.; Kirschner, S.; Seitz, F.
2012-04-01
In the last decades there has been a rapid development of new geodetic and other Earth observation satellites. Applications of these satellites such as car navigation systems, weather predictions, and digital maps (such as Google Earth or Google Maps) play an increasingly important role in our daily life. For the geosciences, satellite applications such as remote sensing and precise positioning/navigation have turned out to be extremely useful and are by now indispensable. Today, researchers in geodesy, climatology, oceanography, meteorology, and Earth system science all depend on up-to-date satellite data. The design, development, and handling of these missions require experts with knowledge not only in space engineering, but also in the specific applications. That gives rise to a new kind of engineer: the satellite application engineer. The study program for these engineers combines parts of different classical disciplines such as geodesy, aerospace engineering, and electrical engineering. The satellite application engineering program Earth Oriented Space Science and Technology (ESPACE) was founded in 2005 at the Technische Universität München, mainly by institutions involved in geodesy and aerospace engineering. It is an international, interdisciplinary Master's program, and is open to students with a BSc in both science (e.g. geodesy, mathematics, informatics, geophysics) and engineering (e.g. aerospace, electrical, and mechanical engineering). The program is conducted entirely in English. ESPACE benefits from and utilizes its location in Munich, with its unique concentration of expertise related to space science and technology.
Teaching staff from three universities (Technische Universität München, Ludwig-Maximilian University, University of the Federal Armed Forces), research institutions (such as the German Aerospace Center, DLR, and the German Geodetic Research Institute, DGFI) and the space industry (such as EADS or Kayser-Threde) are involved in ESPACE. This paper first gives the background and objectives of ESPACE, with a focus on its specific position among geodetic education programmes. Second, we introduce the interdisciplinary study program and explain the involvement of external teaching staff. Furthermore, we give an up-to-date description of current students and ESPACE alumni. The job market and international demand for satellite application engineers are discussed, especially with a focus on geodetic fields.
Werner Heisenberg on his 100th Birthday: Pioneer of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Jacobi, Manfred
2001-11-01
Werner Heisenberg was one of the most formative figures of 20th-century physics. His most important achievements include laying the foundations of quantum mechanics, formulating the uncertainty relations, and contributing to the development of the Copenhagen interpretation of quantum mechanics. Beyond this, he produced work of fundamental character on the theory of the atomic nucleus, cosmic radiation, and quantum field theory. During the war he was involved in the work of the Uranverein, which investigated the possibility of developing nuclear weapons but never progressed beyond preliminary work on reactor physics. Because of this activity he was interned in England for several months at the end of the war. After his return he devoted himself above all to rebuilding physics in Germany, which had been stripped of nearly all its substance during the Nazi era.
Cooperation from the Perspective of an Outsourced Maintenance Provider
NASA Astrophysics Data System (ADS)
Grüßer, Stefan; Loeven, Heinz-Wilhelm
Lasting business success can only be achieved with advanced maintenance. The enormous cost pressure brought about by globalisation, together with leaps of innovation on the technical side, also raises the question of the right modern organisational form for maintenance. One way of optimising costs is the outsourcing of maintenance services; here it is essential that the staff develop into service providers. This contribution describes the development of InfraServ Knapsack from an internal maintenance department into an industrial service provider, and depicts aspects of the cooperation with external customers from the perspective of the outsourced maintenance provider. The key development steps towards the service orientation of the former in-house maintenance unit are presented. This contribution is not to be understood as the one "royal road"; rather, drawing on the experience of an outsourced in-house maintenance unit, it is intended to provide suggestions for developing one's own maintenance organisation.
Implants and Procedures in Ophthalmology
NASA Astrophysics Data System (ADS)
Neuhann, Tobias H.
The implant most frequently used in medicine worldwide is the intraocular lens (IOL). The reasons for this are manifold: on the one hand, surgical techniques have become substantially more consistent, successful, and efficient over the past 30 years; on the other, the increased demands of everyday life in industrialised nations and of working life have raised expectations of visual performance. When the human lens is the cause of poor vision, there is usually a clouding of the lens protein. This clouding is colloquially known as "Grauer Star" and scientifically as cataract (cataracta). There are different forms, such as congenital (congenita) or acquired, traumatic, disease-related, or age-related forms [45]. Once the clouded lens has been removed by modern surgical procedures, a replacement for this refractive medium must be provided [2].
The Schönbuchbahn: From Branch Line to S-Bahn Standard
NASA Astrophysics Data System (ADS)
Brauer, Tobias
2017-09-01
Since the successful reactivation of the Schönbuch railway between Böblingen and Dettenhausen, the number of passengers has increased continuously. Owing to this high demand of over 8000 passengers per day and the line's growing popularity, the current expansion projects include the electrification of the 17 km track and the installation of double-track sections to provide a 15-minute service between Böblingen and Holzgerlingen. To realize the new operational concept, Zweckverband Schönbuchbahn (ZVS), the public transport authority, has ordered nine light-rail-style vehicles (at a cost of around 51.3 million euros for development and construction). In this way the Schönbuch railway is an important factor in guaranteeing sustainable, forward-looking mobility in the Stuttgart metropolitan area and in restraining car traffic.
NASA Astrophysics Data System (ADS)
Elsässer, Hans
The activity of galaxies is a relatively rare and evidently short-lived phenomenon, which manifests itself above all in enhanced emission, often across the entire spectrum from the X-ray to the radio range. New findings on infrared galaxies are discussed which suggest that gravitational interaction between galaxies plays an essential role. The book is a comprehensive account of the current state of knowledge on "active galaxies", a topic presently at the centre of astronomical interest and current research. After an overview of the long-known phenomena and the problems of their interpretation, new results obtained at the Max-Planck-Institut für Astronomie with the telescopes of the Calar Alto observatory (southern Spain) are presented.
NASA Astrophysics Data System (ADS)
Steffen, Horst
Electrical engineering and mechanics are increasingly intertwined, for example in machine design. A robot may serve as an example: its motion sequences are stored electrically and executed by electric drives. The motion of a robot arm carrying mass involves concepts such as velocity, acceleration, motion in a coordinate system, and the energy of moving masses. This chapter covers the fundamental laws of mechanics. The following topics are treated: kinematics of a point mass (velocity, acceleration, free fall, vertical projection); composite motions (projectile motion, circular motion); dynamics; Newton's axioms; force; decomposition and composition of forces; momentum; conservation of momentum; work; power; efficiency; energy; collisions (elastic and inelastic); rotation (torque, centre of mass, angular momentum, moment of inertia, rotational energy); gravitation; elastic deformation of solid bodies; mechanics of fluids and gases at rest (pressure, compressibility); hydrostatic pressure; pressure due to weight in gases (buoyancy); hydrodynamics (continuity equation, Bernoulli equation).
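As a minimal illustration of the point-mass kinematics covered in this chapter (free fall and projectile motion), the standard textbook relations can be sketched as follows; the function names and example values are illustrative, not taken from the chapter:

```python
import math

G = 9.81  # standard gravitational acceleration in m/s^2

def free_fall_velocity(t_s: float) -> float:
    """Velocity after free fall from rest: v = g * t (air resistance neglected)."""
    return G * t_s

def free_fall_distance(t_s: float) -> float:
    """Distance covered in free fall from rest: s = g * t^2 / 2."""
    return 0.5 * G * t_s ** 2

def projectile_range(v0_ms: float, angle_deg: float) -> float:
    """Range of an ideal projectile on level ground: R = v0^2 * sin(2*theta) / g."""
    return v0_ms ** 2 * math.sin(math.radians(2.0 * angle_deg)) / G

# After 2 s of free fall: v = 19.62 m/s, s = 19.62 m; a 10 m/s launch at 45°
# reaches the maximum range v0^2/g.
v = free_fall_velocity(2.0)
s = free_fall_distance(2.0)
r = projectile_range(10.0, 45.0)
```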
NASA Astrophysics Data System (ADS)
Vortanz, Karsten; Zayer, Peter
The German act on the digitalisation of the energy transition has been passed. From 2017 onwards, modern metering devices (mME) and intelligent metering systems (iMSys) are to be installed and operated. The "German way" of introducing smart meters provides for a staged rollout and a maximum of information and data security. iMSys and mME play an important role in reshaping intelligent grids (smart grids) and the new market model (smart market). This contribution deals with the new legislation, the market roles and their tasks, data protection and data security, the iMSys as a secure solution, the secure operation of smart meter gateways, smart grid versus smart market, the interplay between the regulated domain and the market, the areas of application of iMSys, and the effects on processes and systems, and gives recommendations for action.
Studies on the Uric Acid Metabolism of Littorina littorea (Gastropoda)
NASA Astrophysics Data System (ADS)
Heil, K. P.; Eichelberg, D.
1983-12-01
Periwinkles, as typical inhabitants of sea-shores, are subjected to extreme changes of environmental conditions, which affect their excretion. In Littorina littorea, uric acid, urea and ammonium were detected particularly in the kidney, but the only metabolite excreted was ammonium. Only the concentration of uric acid was dependent on the availability of water: decreasing periods of submersion during low tide and raised salinities caused a higher concentration of uric acid, while increasing periods of submersion and lowered salinities had the opposite effect. Transfer of periwinkles within their intertidal habitat and laboratory experiments on the effect of salinity showed that the concentration of uric acid in the kidney is adaptable. The dependence of the uric acid concentration in the kidney on environmental conditions and the ammoniotelic excretion of L. littorea are discussed with regard to its particular living conditions. It is suggested that uric acid serves as a nitrogen depot and has a particular function in osmoregulation.
NASA Astrophysics Data System (ADS)
Pell, Wolfgang
Energy is becoming an everyday good, a commodity, and yet is moving into the spotlight. Macroeconomic, political, societal, and business demands are giving rise to services around energy supply (energy-related services). Convenience services that meet consumers' expectations, such as the visualisation of (decentralised) energy generation and consumption on the basis of digital smart meters, which replace the analogue Ferraris meter, as well as optimised energy use, are finding their way into households as digitalised sites (smart sites). Energy optimisation based on the paradigm "consumption follows generation" puts the demand-side flexibility of industrial processes (demand response) in the foreground as an energy-efficiency factor and gives rise to services such as its marketing as balancing power for stabilising the grid frequency. An innovation action plan provides an outlook on where the integration of new technologies, greater customer proximity, and the development of new business models may take the energy industry. Eco-Home and Power-Pool are presented as two concrete examples of Energy as a Service.
NASA Astrophysics Data System (ADS)
Lenke, Nils; Roudet, Nicolas
Until now, Philipp Feselius has been perceived only indirectly, as Kepler's antagonist. Little is known about his life beyond his work as a private physician in Baden and his book against astrology, which is cited extensively in Kepler's «Tertius Interveniens». This paper traces the stations of his career as a physician: his presumed origins and education in Strasbourg, his academic career in Tübingen, Strasbourg, Rostock, and Padua, his doctorate in Basel in 1592, and finally his employment, in 1599, as a court physician in Sulzburg and later in Durlach. Further handwritten and printed traces of Feselius are presented, and his social environment is investigated, so that his personality becomes clearer and connections can be drawn between his education and his writing against astrology.
From Big Business to Smart Business in the Energy Industry
NASA Astrophysics Data System (ADS)
Klaus, Jürgen; Anthonijsz, Jos
Hardly any industry has undergone a more profound transformation over the past ten years than the energy sector. Once dominated by large corporations that provided all forms of energy services across the board, today's landscape looks entirely different. Not only is there now a multitude of generators, increasingly decentralised, but the classical market roles are also shifting: generators are becoming traders, consumers are becoming generators. In addition, new market participants, formerly from outside the industry, are now pushing into the energy sector. It is easy to see that a new kind of communication is needed. The goal is to connect all actors in order to enable novel business models. The Internet of Things (IoT) and service platforms provide a suitable foundation and solutions for new and future processes in the energy industry. Topics such as big data, data formats, data usage, and data security are also in focus here.
[Altered hip muscle activation in patients with chronic non-specific low back pain].
Nötzel, D; Puta, C; Wagner, H; Anders, C; Petrovich, A; Gabriel, H H W
2011-04-01
The aim of this study was to examine postural control in patients with chronic non-specific low back pain (CNRS). Furthermore, the influence of visual information (eyes open versus eyes closed) was analyzed. A total of 8 patients with CNRS and 12 healthy control subjects were examined. Surface electromyography (SEMG) recordings were made from 5 trunk and 5 lower limb muscles as well as one hip muscle during application of distal lateral perturbations. Healthy controls (mean ± standard deviation: 96.42±64.77 µV) showed a significantly higher maximum amplitude of the gluteus medius muscle than patients with CNRS (56.29±39.63 µV). Furthermore, the activation of several lower limb muscles was found to depend on visual information. Patients showed an altered reflex response of the gluteus medius muscle, which could be associated with reduced hip stability. © Deutsche Gesellschaft zum Studium des Schmerzes
NASA Astrophysics Data System (ADS)
Rückert, G.
1984-03-01
Representatives of the family Myxococcaceae, Myxococcus fulvus and M. virescens, as well as Archangium gephyra, could be isolated from marine sediments (depth range 5-58 m) collected near the island of Helgoland (North Sea); dunes and rudiments of salt marshes additionally yielded M. coralloides and the rare species Melittangium lichenicola and M. boletus (Cystobacteriaceae). In soil samples from the island, M. fulvus, M. virescens, M. coralloides, A. gephyra, Cystobacter fuscus and Stigmatella erecta were found. These results were confirmed by data obtained from the coastal zone of the island of Amrum and from marine sediments of various regions. Samples from shallow fresh water (depth range 0.3-1 m), on the other hand, proved to be richer in species. It is assumed that the myxobacteria found in marine sediments occur as resting cells.
Digitalisation as an Incubator for Tomorrow's Energy Supply
NASA Astrophysics Data System (ADS)
Arnold, Christian; Postina, Matthias
New drivers of digitalisation are shaping the fortunes of established companies in the energy sector, even though in many cases these companies have not yet responded adequately to the familiar drivers of the energy transition. An entire industry is in upheaval, old revenue models are collapsing, and those responsible seem to agree only on the question of how money can still be earned in the future. This contribution explains the defining drivers of change, analyses the starting position, and names the challenges facing the industry. It furthermore focuses on the new digital value creation, introducing a model that ranges from the classical value chain through data-based business models in the energy system to the Internet of Smart Services. With this model the authors also give an insight into the planned energy-transition project enera.
ARGon³: "3D appearance robot-based gonioreflectometer" at PTB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoepe, A.; Atamas, T.; Huenerhoff, D.
At the Physikalisch-Technische Bundesanstalt, the National Metrology Institute of Germany, a new facility for measuring visual appearance-related quantities has been built up. The acronym ARGon³ stands for "3D appearance robot-based gonioreflectometer". Compared to standard gonioreflectometers, there are two main new features within this setup. First, a photometric luminance camera with a spatial resolution of 28 µm on the device under test (DUT) enables spatially high-resolved measurements of luminance and color coordinates. Second, a line-scan CCD camera mounted to a spectrometer provides measurements of the radiance factor, respectively the bidirectional reflectance distribution function, in the full V(λ) range (360 nm-830 nm) with arbitrary angles of irradiation and detection relative to the surface normal, on a time scale of about 2 min. First goniometric measurements of diffuse reflection within the 3D space above the DUT, with subsequent colorimetric representation of the obtained data for special effect pigments based on the interference effect, are presented.
Ganter, Camille
2010-01-01
In the Closing Remarks at the Symposium on 'Frontiers in Bioorganic Chemistry' (Friday, February 6, 2009, Pharmacenter, University of Basel) in honour of Daniel Bellus, his arrival in Zürich in fall 1967 and especially his postdoctoral work at the Laboratorium für Organische Chemie at the Eidgenössische Technische Hochschule (ETH) in Zürich throughout the year 1967/68 were mentioned. In his most remarkable paper (published in 1969 in Helv. Chim. Acta), the photochemistry of the α,β-unsaturated cyclohexenones O-acetyl-testosterone and 10-methyl-Δ1,9-octalone-(2) is described in detail. A change of solvent lowers or raises the n,π*- and (π,π*)-triplet energies, resulting in a crossing of the two energy levels. Personal remarks on Daniel Bellus and warmest thanks to him, to Profs. Beat Ernst and Bernd Giese (the organizers of the symposium) and to all the speakers concluded this most special event.
Electromagnetic DM technology meets future AO demands
NASA Astrophysics Data System (ADS)
Hamelinck, Roger; Rosielle, Nick; Steinbuch, Maarten; Doelman, Niek
New deformable mirror technology has been developed by the Technische Universiteit Eindhoven, Delft University of Technology, and TNO Science and Industry. Several prototype adaptive deformable mirrors have been realized, with up to 427 actuators and ∅150 mm diameter, whose characteristics are suitable for future AO systems. The prototypes consist of a 100 µm thick continuous facesheet on which low-voltage, electromagnetic, push-pull actuators impose out-of-plane displacements. The variable reluctance actuators, with ±10 µm stroke and nanometer resolution, are located in a standard actuator module. Each module with 61 actuators connects to a single PCB with dedicated 16-bit PWM-based drivers, and an LVDS multi-drop cable connects up to 32 actuator modules. With the actuator module, accompanying PCB, and multi-drop system, the deformable mirror technology is modular in both its mechanics and its electronics. An Ethernet-LVDS bridge enables any commercial PC to control the mirror using the UDP standard. The latest results of the deformable mirror technology development are presented.
Simulation of a GOX-kerosene subscale rocket combustion chamber
NASA Astrophysics Data System (ADS)
Höglauer, Christoph; Kniesner, Björn; Knab, Oliver; Kirchberger, Christoph; Schlieben, Gregor; Kau, Hans-Peter
2011-12-01
In view of future film cooling tests at the Institute for Flight Propulsion (LFA) at Technische Universität München, the Astrium in-house spray combustion CFD tool Rocflam-II was validated against first test data obtained from this rocket test bench without film cooling. The subscale rocket combustion chamber uses GOX and kerosene as propellants, injected through a single double swirl element. In particular, the modeling of the double swirl element and the measured wall roughness were adapted to the LFA hardware. Additionally, new liquid kerosene fluid properties were implemented and verified in Rocflam-II, and the influences of soot deposition and hot gas radiation on the wall heat flux were estimated analytically and numerically. In the context of reviewing the evaporation model implemented in Rocflam-II, the binary diffusion coefficient and its pressure dependency were analyzed. Finally, simulations were performed for different load points with Rocflam-II, showing good agreement with the test data.
Study of glass hydrometer calibration by hydrostatic weighing
NASA Astrophysics Data System (ADS)
Chen, Chaoyun; Wang, Jintao; Li, Zhihao; Zhang, Peiman
2016-01-01
Glass hydrometers are simple but effective instruments for measuring the density of liquids. Based on Archimedes' principle, a glass hydrometer calibration system using the hydrostatic weighing method was designed, with a silicon ring as the solid density reference standard and n-tridecane, which offers good density stability and low surface tension, as the standard working liquid. The calibration system uses a CCD image measurement system to align the hydrometer scale mark with the liquid surface, with a positioning accuracy of 0.01 mm. The surface tension of the working liquid is measured by the Wilhelmy plate method. From two weighings of the hydrometer, in air and in the liquid, the correction value of the current scale mark can be calculated. To verify the validity of the hydrostatic weighing principle, a hydrometer covering the density range of (770-790) kg/m3 with a resolution of 0.2 kg/m3 was calibrated. The measurement results were compared with those of the Physikalisch-Technische Bundesanstalt (PTB), verifying the validity of the calibration system.
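The two-weighing principle described above can be sketched as follows. The function names and the example numbers are hypothetical, and corrections discussed in the paper (air buoyancy, surface tension) are omitted for brevity:

```python
def displaced_volume_m3(mass_in_air_kg: float,
                        apparent_mass_in_liquid_kg: float,
                        liquid_density_kgm3: float) -> float:
    """Archimedes' principle: the mass deficit between the weighing in air and
    the weighing immersed up to a scale mark equals the displaced volume times
    the liquid density."""
    return (mass_in_air_kg - apparent_mass_in_liquid_kg) / liquid_density_kgm3

def indicated_density_kgm3(mass_in_air_kg: float, volume_to_mark_m3: float) -> float:
    """Density a freely floating hydrometer would indicate at that mark: at
    equilibrium the hydrometer's weight equals the buoyant force, so
    rho = m / V(mark)."""
    return mass_in_air_kg / volume_to_mark_m3

# Hypothetical example: an 80 g hydrometer shows an apparent mass of 2.2 g when
# immersed to a scale mark in n-tridecane of density 780 kg/m^3.
v = displaced_volume_m3(0.0800, 0.0022, 780.0)
rho = indicated_density_kgm3(0.0800, v)
```

The correction of a scale mark is then the difference between this weighing-derived density and the density printed on the hydrometer scale at that mark.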
Research on new dynamic force calibration system
NASA Astrophysics Data System (ADS)
Zhang, Li
2008-06-01
The sinusoidal force calibration method based on an electrodynamic shaker and an interferometric system was studied several years ago at the Physikalisch-Technische Bundesanstalt (PTB). In that system a load mass is screwed onto the top of the force transducer; the sinusoidal forces realized by the accelerated load mass are traceable to acceleration and mass according to the force definition F(t) = ma(t), where m is the total mass acting on the sensing element of the force transducer and a is the time- and spatially-dependent acceleration of the mass, which is measured directly by a laser interferometer. This paper introduces a new dynamic force calibration system developed at the Changcheng Institute of Metrology and Measurement (CIMM). It uses electrodynamic shakers to generate dynamic forces in the range from 1 N to 20 kN, and heterodyne laser interferometers are used for the acceleration measurement. A new air bearing system has been developed to improve the performance of the shakers, and an active vibration isolator is used to reduce environmental disturbance to the interferometric system.
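The traceability chain F(t) = ma(t) described above can be sketched for a purely sinusoidal motion; the function names and example values are illustrative, not from either laboratory's procedure:

```python
import math

def accel_amplitude(disp_amplitude_m: float, freq_hz: float) -> float:
    """For a displacement s(t) = S * sin(2*pi*f*t) measured interferometrically,
    differentiating twice gives an acceleration amplitude A = S * (2*pi*f)^2."""
    return disp_amplitude_m * (2.0 * math.pi * freq_hz) ** 2

def force_amplitude(load_mass_kg: float, accel_amplitude_ms2: float) -> float:
    """Peak sinusoidal force from the definition F(t) = m * a(t)."""
    return load_mass_kg * accel_amplitude_ms2

# Hypothetical example: a 1 kg load mass with a 10 um displacement amplitude
# at 100 Hz excitation frequency.
a = accel_amplitude(10e-6, 100.0)
f = force_amplitude(1.0, a)
```

This is the sense in which the realized sinusoidal force is traceable to mass (the load mass) and acceleration (the interferometric displacement measurement and the excitation frequency).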
NASA Astrophysics Data System (ADS)
Heintze, Joachim
So far we have discussed only the thermal behaviour of substances of uniform consistency. One of the most striking phenomena in the physics of heat, however, is that a substance can exist in different states of aggregation, solid, liquid, or gaseous, and that the supply and removal of heat brings about transitions between these different phases. First we discuss the liquid-gas and solid-gas phase transitions in detail. We then show that the solid-liquid phase transition can be described in a very similar way. A summarising representation of the phase transitions is possible in the form of phase diagrams; the study of such diagrams leads to the interesting phenomenon of the critical point. At the end of the chapter we consider phase transitions in two-component systems. They exhibit several properties that are not only curious and physically interesting but, above all, also technically important.
NASA Astrophysics Data System (ADS)
Groenig, Hans
Topics discussed in this volume include shock wave structure, propagation, and interaction; shocks in condensed matter, dusty gases, and multiphase media; chemical processes and related combustion and detonation phenomena; shock wave reflection, diffraction, and focusing; computational fluid dynamic code development and shock wave application; blast and detonation waves; advanced shock tube technology and measuring technique; and shock wave applications. Papers are presented on dust explosions, the dynamics of shock waves in certain dense gases, studies of condensation kinetics behind incident shock waves, the autoignition mechanism of n-butane behind a reflected shock wave, and a numerical simulation of the focusing process of reflected shock waves. Attention is also given to the equilibrium shock tube flow of real gases, blast waves generated by planar detonations, modern diagnostic methods for high-speed flows, and interaction between induced waves and electric discharge in a very high repetition rate excimer laser.
Time-resolved Fast Neutron Radiography of Air-water Two-phase Flows
NASA Astrophysics Data System (ADS)
Zboray, Robert; Dangendorf, Volker; Mor, Ilan; Tittelmeier, Kai; Bromberger, Benjamin; Prasser, Horst-Michael
Neutron imaging, in general, is a useful technique for visualizing low-Z materials (such as water or plastics) obscured by high-Z materials. However, when significant amounts of both materials are present and full-bodied samples have to be examined, cold and thermal neutrons rapidly reach their applicability limit as the samples become opaque. In such cases one can benefit from the high penetrating power of fast neutrons. In this work we demonstrate the feasibility of time-resolved, fast neutron radiography of generic air-water two-phase flows in a 1.5 cm thick flow channel with aluminum walls and a rectangular cross section. The experiments have been carried out at the high-intensity, white-beam facility of the Physikalisch-Technische Bundesanstalt, Germany. Exposure times down to 3.33 ms have been achieved at reasonable image quality and with acceptable motion artifacts. Different two-phase flow regimes such as bubbly, slug and churn flows have been examined. Two-phase flow parameters such as the volumetric gas fraction, bubble sizes and bubble velocities have been measured.
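The volumetric gas fraction mentioned above is typically obtained from the attenuation contrast between the two-phase image and reference images of the fully liquid- and fully gas-filled channel. A minimal sketch of this standard radiographic evaluation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def void_fraction(img, img_liquid, img_gas):
    """Pixel-wise void fraction from neutron transmission images.

    Beer-Lambert attenuation makes ln(I) linear in the traversed
    water thickness, so the gas fraction interpolates linearly
    (in log space) between the all-liquid and all-gas references.
    """
    img = np.asarray(img, dtype=float)
    alpha = np.log(img / img_liquid) / np.log(img_gas / img_liquid)
    return np.clip(alpha, 0.0, 1.0)  # clamp measurement noise

# Example: a transmission halfway (in log space) between the two
# reference exposures corresponds to a void fraction of 0.5.
alpha = void_fraction([np.sqrt(0.2)], img_liquid=0.2, img_gas=1.0)
```

Averaging such pixel maps over time and cross section yields the channel-averaged gas fraction; bubble sizes and velocities then follow from segmenting and tracking the gas regions between frames.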
Supplementary comparison EURAMET.L-S24 on involute gear standards
NASA Astrophysics Data System (ADS)
Kniel, K.; Chanthawong, N.; Eastman, N.; Frazer, R.; Kupko, V.; Osawa, S.; Xue, Z.
2014-01-01
At its meeting in 2007, the EURAMET TC-Length decided to run an intercomparison of involute gear standards as a regional comparison with non-European involvement. The Physikalisch-Technische Bundesanstalt, Braunschweig und Berlin, Germany (PTB) was identified as the pilot laboratory, responsible for planning, organizing and analyzing the comparison. In all, seven participants from China, Germany, Japan, Thailand, Ukraine, the United Kingdom and the USA were asked to measure three different gear standards specialized for profile, helix and pitch measurements. The measurements started in mid-2008 and were finished at the end of 2010. The comparison was registered as the supplementary comparison EURAMET.L-S24. The final report has been peer-reviewed and approved for publication by the CCL WG-MRA, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Carrier-phase two-way satellite frequency transfer over a very long baseline
NASA Astrophysics Data System (ADS)
Fujieda, M.; Piester, D.; Gotoh, T.; Becker, J.; Aida, M.; Bauch, A.
2014-06-01
In this paper we report that carrier-phase two-way satellite time and frequency transfer (TWSTFT) was successfully demonstrated over a very long baseline of 9000 km, established between the National Institute of Information and Communications Technology (NICT) and the Physikalisch-Technische Bundesanstalt (PTB). We verified that the carrier-phase TWSTFT (TWCP) result agreed with those obtained by conventional TWSTFT and GPS carrier-phase (GPSCP) techniques. Moreover, a much improved short-term instability for frequency transfer of 2 × 10⁻¹³ at 1 s was achieved, which is at the same level as previously confirmed over a shorter baseline within Japan. The precision achieved was so high that the effects of ionospheric delay became significant; they are ignored in conventional TWSTFT even over a long link. We compensated for these effects using ionospheric delays computed from regional vertical total electron content maps. The agreement between the TWCP and GPSCP results was improved because of this compensation.
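The ionospheric contribution addressed by the compensation follows from the first-order group delay of a microwave signal, 40.3·TEC/f² metres of excess path. A small sketch of the magnitudes involved (the frequencies and TEC value are illustrative, not those of the actual NICT-PTB link):

```python
C = 299_792_458.0  # speed of light, m/s

def iono_delay_ns(tec_tecu, f_hz):
    """First-order ionospheric group delay in nanoseconds.

    tec_tecu: slant total electron content in TEC units
              (1 TECU = 1e16 electrons/m^2).
    """
    tec = tec_tecu * 1e16
    path_m = 40.3 * tec / f_hz**2   # excess path length, metres
    return path_m / C * 1e9         # convert to nanoseconds

# Because uplink and downlink use different Ku-band frequencies,
# the delays do not cancel in a two-way comparison; this residual
# non-reciprocity is what carrier-phase TWSTFT must correct for.
residual_ns = iono_delay_ns(20, 11e9) - iono_delay_ns(20, 14e9)
```

At a frequency-dependent level of roughly a tenth of a nanosecond per 20 TECU, this residual is negligible for conventional TWSTFT but significant at the precision reported above.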
A new Ultra Precision Interferometer for absolute length measurements down to cryogenic temperatures
NASA Astrophysics Data System (ADS)
Schödel, R.; Walkov, A.; Zenker, M.; Bartl, G.; Meeß, R.; Hagedorn, D.; Gaiser, C.; Thummes, G.; Heltzel, S.
2012-09-01
A new Ultra Precision Interferometer (UPI) was built at the Physikalisch-Technische Bundesanstalt. Like its precursor, the precision interferometer, it is designed for highly precise absolute length measurements of prismatic bodies, e.g. gauge blocks, under well-defined temperature and pressure conditions, making use of phase-stepping imaging interferometry. The UPI offers a number of enhanced features: for example, it is designed for much better lateral resolution and better temperature stability. In addition to the original concept, the UPI is equipped with an external measurement pathway (EMP) in which a prismatic body can alternatively be placed. The temperature of the EMP can be controlled over a much wider range than the temperature of the interferometer's main chamber. An appropriate cryostat system, a precision temperature measurement system and improved imaging interferometry were established to permit absolute length measurements down to cryogenic temperatures, demonstrated here for the first time. Results of such measurements are important for studying the thermal expansion of materials from room temperature down to less than 10 K.
NASA Astrophysics Data System (ADS)
Arenz, M.; Baek, W.-J.; Beck, M.; Beglarian, A.; Behrens, J.; Bergmann, T.; Berlev, A.; Besserer, U.; Blaum, K.; Bode, T.; Bornschein, B.; Bornschein, L.; Brunst, T.; Buzinsky, N.; Chilingaryan, S.; Choi, W. Q.; Deffert, M.; Doe, P. J.; Dragoun, O.; Drexlin, G.; Dyba, S.; Edzards, F.; Eitel, K.; Ellinger, E.; Engel, R.; Enomoto, S.; Erhard, M.; Eversheim, D.; Fedkevych, M.; Fischer, S.; Formaggio, J. A.; Fränkle, F. M.; Franklin, G. B.; Friedel, F.; Fulst, A.; Gil, W.; Glück, F.; Ureña, A. Gonzalez; Grohmann, S.; Grössle, R.; Gumbsheimer, R.; Hackenjos, M.; Hannen, V.; Harms, F.; Haußmann, N.; Heizmann, F.; Helbing, K.; Herz, W.; Hickford, S.; Hilk, D.; Hillesheimer, D.; Howe, M. A.; Huber, A.; Jansen, A.; Kellerer, J.; Kernert, N.; Kippenbrock, L.; Kleesiek, M.; Klein, M.; Kopmann, A.; Korzeczek, M.; Kovalík, A.; Krasch, B.; Kraus, M.; Kuckert, L.; Lasserre, T.; Lebeda, O.; Letnev, J.; Lokhov, A.; Machatschek, M.; Marsteller, A.; Martin, E. L.; Mertens, S.; Mirz, S.; Monreal, B.; Neumann, H.; Niemes, S.; Off, A.; Osipowicz, A.; Otten, E.; Parno, D. S.; Pollithy, A.; Poon, A. W. P.; Priester, F.; Ranitzsch, P. C.-O.; Rest, O.; Robertson, R. G. H.; Roccati, F.; Rodenbeck, C.; Röllig, M.; Röttele, C.; Ryšavý, M.; Sack, R.; Saenz, A.; Schimpf, L.; Schlösser, K.; Schlösser, M.; Schönung, K.; Schrank, M.; Seitz-Moskaliuk, H.; Sentkerestiová, J.; Sibille, V.; Slezák, M.; Steidl, M.; Steinbrink, N.; Sturm, M.; Suchopar, M.; Suesser, M.; Telle, H. H.; Thorne, L. A.; Thümmler, T.; Titov, N.; Tkachev, I.; Trost, N.; Valerius, K.; Vénos, D.; Vianden, R.; Hernández, A. P. Vizcaya; Weber, M.; Weinheimer, C.; Weiss, C.; Welte, S.; Wendel, J.; Wilkerson, J. F.; Wolf, J.; Wüstling, S.; Zadoroghny, S.
2018-05-01
The neutrino mass experiment KATRIN requires a stability of 3 ppm for the retarding potential at −18.6 kV of the main spectrometer. To monitor the stability, two custom-made ultra-precise high-voltage dividers were developed and built in cooperation with the German national metrology institute Physikalisch-Technische Bundesanstalt (PTB). Until now, regular absolute calibration of the voltage dividers required bringing the equipment to the specialised metrology laboratory. Here we present a new method based on measuring the energy difference of two ⁸³ᵐKr conversion electron lines with the KATRIN setup, which was demonstrated during KATRIN's commissioning measurements in July 2017. The measured scale factor M = 1972.449(10) of the high-voltage divider K35 is in agreement with the last PTB calibration 4 years ago. This result demonstrates the utility of the calibration method, as well as the long-term stability of the voltage divider.
NASA Technical Reports Server (NTRS)
Selbach, H. J.
1984-01-01
The controlled oxidation in air of Raney nickel electrocatalysts was studied, with special attention paid to the quantitative analysis of nickel hydroxide. The content of the latter was determined through X-ray studies, thermogravimetric measurements, and spectral photometric examinations, and its dependence on the drying of the activated catalyst is determined. The influence of nickel hydroxide on the electrochemical parameters of the catalyst, such as diffusion polarization, is studied, including a measurement of the exchange current density using the potential drop method. Conservation by oxidation in air, with ancillary stabilization of the oxide in an H2 flow at 300 C, is explored, including reduction by H2, the influence of tempering time, and structural studies on the conserved and stabilized catalyst. Long-term research on the catalyst, including the influence of aging on the reduced catalyst, and the results of impedance measurements are also presented.
Realization of the medium and high vacuum primary standard in CENAM, Mexico
NASA Astrophysics Data System (ADS)
Torres-Guzman, J. C.; Santander, L. A.; Jousten, K.
2005-12-01
A medium and high vacuum primary standard, based on the static expansion method, has been set up at the Centro Nacional de Metrología (CENAM), Mexico. This system has four volumes and covers a measuring range of 1 × 10⁻⁵ Pa to 1 × 10³ Pa of absolute pressure. As part of its realization, a characterization was performed, which included volume calibrations, several tests and a bilateral key comparison. To determine the expansion ratios, two methods were applied: the gravimetric method and the method with a linearized spinning rotor gauge. The outgassing rates for the whole system were also determined. A comparison was performed with the Physikalisch-Technische Bundesanstalt (comparison SIM-Euromet.M.P-BK3). By means of this comparison, a link has been achieved with the Euromet comparison Euromet.M.P-K1.b. As a result, it is concluded that the value obtained at CENAM is equivalent to the Euromet reference value, and therefore the design, construction and operation of CENAM's SEE-1 vacuum primary standard were successful.
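In the static expansion method, a known starting pressure is reduced by letting the gas expand from a small volume into a large evacuated one; each stage multiplies the pressure by its expansion ratio f = V_small/(V_small + V_large). A minimal, idealized sketch (isothermal ideal gas; the numbers are invented, not CENAM's calibrated ratios):

```python
def expanded_pressure(p_start_pa, expansion_ratios):
    """Pressure after a chain of static expansions.

    Assumes an isothermal ideal gas: each stage scales the
    pressure by f_i = V_small / (V_small + V_large).
    """
    p = p_start_pa
    for f in expansion_ratios:
        p *= f
    return p

# Two successive 1:100 expansions take 1 kPa down to 0.1 Pa,
# illustrating how a few stages span many decades of pressure.
p_final = expanded_pressure(1000.0, [0.01, 0.01])
```

This is why accurately determining the expansion ratios (gravimetrically, or with a linearized spinning rotor gauge, as in the abstract) is the heart of such a standard: any error in f propagates multiplicatively through every stage.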
Improved GPS-based time link calibration involving ROA and PTB.
Esteban, Héctor; Palacio, Juan; Galindo, Francisco Javier; Feldmann, Thorsten; Bauch, Andreas; Piester, Dirk
2010-03-01
The calibration of time transfer links is mandatory in the context of international collaboration for the realization of International Atomic Time. In this paper, we present the results of the calibration of the GPS time transfer link between the Real Instituto y Observatorio de la Armada (ROA) and the Physikalisch-Technische Bundesanstalt (PTB) by means of a traveling geodetic-type GPS receiver, and an evaluation of the achieved type A and B uncertainty. The time transfer results were achieved by using C/A, P3, and also carrier-phase PPP comparison techniques. We finally use these results to re-calibrate the two-way satellite time and frequency transfer (TWSTFT) link between ROA and PTB, using one month of data. We show that a TWSTFT link can be calibrated by means of GPS time comparisons with an uncertainty below 2 ns, and that potentially even sub-nanosecond uncertainty can be achieved. This is a novel and cost-effective approach compared with the more common calibration using a traveling TWSTFT station.
Comparison of the NIST and PTB Air-Kerma Standards for Low-Energy X-Rays.
O'Brien, Michelle; Bueermann, Ludwig
2009-01-01
A comparison has been made of the air-kerma standards for low-energy x rays at the National Institute of Standards and Technology (NIST) and the Physikalisch-Technische Bundesanstalt (PTB). The comparison involved a series of measurements at the PTB and the NIST using the air-kerma standards and two NIST reference-class transfer ionization chamber standards. Results are presented for the reference radiation beam qualities in the range from 25 kV to 50 kV for low-energy x rays, including the techniques used for mammography dose traceability. The tungsten-generated reference radiation qualities between 25 kV and 50 kV used for this comparison are new to NIST; therefore, this comparison serves both as a preliminary comparison for NIST and as a verification of the primary-standard correction factors. The mammography comparison repeats two previously unpublished comparisons between PTB and NIST. The results show the standards to be in reasonable agreement within the standard uncertainty of the comparison of about 0.4 %.
Effects of anisotropy in permeability on the two-phase flow and heat transfer in a porous cavity
NASA Astrophysics Data System (ADS)
Zhang, X. L.; Nguyen, T. Hung; Kahawita, R.
This paper reports the results of a numerical study of steady convective flow and heat transfer in a rectangular cavity filled with a porous phase-change material (PCM). The two vertical walls of the cavity are held at prescribed temperatures bracketing the melting point of the PCM, while the two horizontal walls are kept adiabatic. The porous medium is characterized by an anisotropic permeability tensor whose principal axes may be arbitrarily oriented with respect to the gravity vector. The problem is governed by the aspect ratio A, the Rayleigh number Ra, the anisotropy ratio R, and the orientation angle Θ of the permeability tensor. The main focus is the influence of the anisotropic permeability on the flow behavior and heat transfer during the solid-liquid phase-change process. The solution method is based on the control-volume approach combined with the Landau transformation, which maps the irregular flow domain onto a rectangular one. Results for the flow field, temperature distribution, position of the phase front and heat transfer are reported for A = 2.5, Ra = 40, 0 ≤ Θ ≤ π and 0.25 ≤ R ≤ 4. It was found that the equilibrium state of the solid-liquid phase-change process can be significantly influenced both by the anisotropy ratio R and by the orientation angle Θ of the permeability tensor. First, for fixed parameters A, Ra and R there exists an optimal orientation Θmax at which the flow strength, the liquid volume and the heat flow reach their maxima, while Θmin = Θmax + π/2 yields the minima.
Second, if the anisotropic medium is oriented along the optimal direction Θmax, increasing the permeability component along this direction increases the flow strength and the heat flow to the same degree, whereas increasing the other permeability component has only a negligible influence. In the parameter ranges investigated, the optimal direction lay between the gravity vector and the main flow direction.
Immunotoxicity and genotoxicity testing for in-flight experiments under microgravity
NASA Astrophysics Data System (ADS)
Hansen, Peter-Diedrich; Hansen, Peter-Diedrich; Unruh, Eckehardt
Session: Life Sciences as Related to Space (F), Influence of Spaceflight Environment on Biological Systems (F44). Affiliation: Technische Universität Berlin, Faculty VI - Planen, Bauen, Umwelt, Institute for Ecological Research and Technology, Department for Ecotoxicology, Berlin, Germany (peter-diedrich.hansen@tu-berlin.de). An immune response by mussel hemocytes is the selective reaction to particles that are identified as foreign by the immune system, demonstrated by phagocytosis. Phagocytotic activity is based on chemotaxis and adhesion, ingestion and phagosome formation. The attachment of particles or bacteria to the surface of the hemocytes, and consequently their uptake, can be quantified directly in the format of a fluorescence assay. Another relevant endpoint of phagocytosis is the oxidative burst, measured by luminescence; phagocytosis-related production of ROS will be stimulated with opsonised zymosan. The hemocytes will be stored frozen at −80 °C and reconstituted in flight for the experiment. The assay system of the TRIPLELUX-B experiment has been established with a well-defined quantification and evaluation of the immune function phagocytosis. The indicator cells are the hemocytes of blue mussels (Mytilus edulis). The signals of the immunocellular responses are translated into luminescence as a rapid optical reporter system. The results are expected to determine whether the observed responses are caused by microgravity and/or radiation (change in permeability; genotoxicity endpoints: DNA unwinding). The samples for genotoxicity will be processed after returning to Earth. The immune system of invertebrates has not been studied in space so far.
The choice of phagocytes from invertebrates is justified in order to study the universal validity of innate immune responses. The TRIPLELUX-B experiment contributes to risk assessment concerning immunotoxicity under spaceflight conditions. The components of the phagocytosis test system for BIOLAB are now established, and the technical realization of the TRIPLELUX-B experiment is in final progress. The components of the fully automated AEC (Advanced Experimental Containment) will be demonstrated in the poster. There will be two AECs for reference measurements at 1 g and 0 g. The AEC of the TRIPLELUX-B experiment will contribute to real-time operational monitoring for immunotoxicity testing on Earth, providing automated observations of immunotoxicity in coastal and inland waters.
Sonification of acoustic emission data
NASA Astrophysics Data System (ADS)
Raith, Manuel; Große, Christian
2014-05-01
While loading different specimens, acoustic emissions appear due to micro-crack formation or friction of already existing crack edges. These acoustic emissions can be recorded using suitable ultrasonic transducers and transient recorders, and their analysis can be used to investigate the mechanical behavior of different specimens under load. Our working group has undertaken several experiments monitored with acoustic emission techniques. Different materials such as natural stone, concrete, wood, steel, carbon composites and bone were investigated, and the experimental setup was also varied: fire-spalling experiments on ultra-high-performance concrete and pullout experiments on bonded anchors have been carried out, and uniaxial compression tests on natural stone and animal bone were conducted. The analysis tools include not only the counting of events but also the analysis of full waveforms. Powerful localization algorithms and automatic onset-picking techniques (based on Akaike's Information Criterion) were established to handle the huge amount of data; up to several thousand events were recorded during experiments of a few minutes. More sophisticated techniques like moment tensor inversion have been established on this relatively small scale as well. Problems are related to the amount of data, but also to signal-to-noise quality, boundary conditions (reflections), sensor characteristics, and the unknown and changing Green's functions of the media. Some of the acoustic emissions recorded during these experiments were transferred into the audio range; the transformation was done using Matlab. It is the aim of the sonification to establish a tool that, on the one hand, helps to control the experiment in situ and possibly to adjust the load parameters according to the number and intensity of the acoustic emissions.
On the other hand, sonification can help to improve the understanding of acoustic emission techniques for training purposes (students, co-workers). One goal is to establish a real-time frequency transformation into the audio range to avoid time-consuming visual data processing during the experiments. It is also our intention to analyze the signals using psycho-acoustic methods with the help of specialists from electrical engineering. References: Raith, M. (2013): Schallemissionsanalyse bei Pulloutexperimenten an Verbunddübeln. Master's thesis, Technische Universität München, Lehrstuhl für Zerstörungsfreie Prüfung. Malm, F. (2012): Schallemissionsanalyse am humanen Femur. Master's thesis, Technische Universität München, Lehrstuhl für Zerstörungsfreie Prüfung. Richter, R. (2009): Einsatz der Schallemissionsanalyse zur Detektion des Riss- und Abplatzungsverhaltens von Beton unter Brandeinwirkung. Diploma thesis, Materialprüfungsanstalt, Universität Stuttgart. Keywords: acoustic emission, bonded anchors, femur, pullout test, fire-spalling
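The simplest transfer of ultrasonic acoustic-emission records into the audio range is a replay-rate shift: playing the identical samples back at a reduced sample rate divides every frequency by the same factor. A minimal Python sketch of that idea (the burst parameters are illustrative; the cited work used Matlab):

```python
import numpy as np

# Synthetic acoustic-emission burst: 200 kHz, sampled at 2 MHz.
fs_ae = 2_000_000
t = np.arange(0, 0.005, 1.0 / fs_ae)
burst = np.exp(-t * 2e3) * np.sin(2 * np.pi * 200e3 * t)

# Replaying the same samples at 1/1000 of the acquisition rate
# shifts all spectral content down by that factor:
# the 200 kHz burst becomes an audible 200 Hz tone.
shift = 1000
fs_audio = fs_ae / shift   # 2 kHz playback rate
f_heard = 200e3 / shift    # 200 Hz

# Verify: dominant spectral line of the replayed signal.
spec = np.abs(np.fft.rfft(burst))
freqs = np.fft.rfftfreq(burst.size, d=1.0 / fs_audio)
f_peak = freqs[np.argmax(spec)]   # close to 200 Hz
```

The trade-off of this approach is that time is stretched by the same factor, which is why the real-time ambition stated above calls for a frequency transformation (e.g. heterodyning or vocoder-style pitch shifting) rather than simple slowed playback.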
Magnetic jets from accretion disks : field structure and X-ray emission
NASA Astrophysics Data System (ADS)
Memola, Elisabetta
2002-06-01
Jets are highly collimated flows of matter. They are present in a large variety of astrophysical sources: young stars, stellar-mass black holes (microquasars), galaxies with an active nucleus (AGN), and presumably also the intense flashes of gamma rays known as gamma-ray bursts. In particular, the jets of microquasars, powered by accretion disks, are probably small-scale versions of the outflows from AGN. Besides observations of astrophysical jet sources, theoretical considerations have also shown that magnetic fields play an important role in jet formation, acceleration and collimation. Collimated jets seem to be systematically associated with the presence of an accretion disk around a star or a collapsed object. If the central object is a black hole, the surrounding accretion disk is the only possible location for magnetic field generation. We are interested in the formation process of the highly relativistic jets observed from microquasars and AGN. We theoretically investigate the jet collimation region, whose physical dimensions are extremely tiny even compared to the spatial resolution of radio telescopes. Thus, for most jet sources, global theoretical models are, at the moment, the only possibility to gain information about the physical processes in the innermost jet region. For the first time, we determine the global two-dimensional field structure of stationary, axisymmetric, relativistic, strongly magnetized (force-free) jets collimating into an asymptotically cylindrical jet (taken as a boundary condition) and anchored in a differentially rotating accretion disk. This approach allows for a direct connection between the accretion disk and the asymptotically collimated jet. Therefore, assuming that the foot points of the field lines rotate with Keplerian speed, we are able to achieve a direct scaling of the jet magnetosphere in terms of the size of the central object. We find a close compatibility between the results of our model and radio observations of the innermost jet of the galaxy M87.
We also calculate the X-ray emission in the energy range 0.2-10.1 keV from a relativistic microquasar jet close to its source of 5 solar masses. To do so, we apply the jet flow parameters (densities, velocities and temperatures of each volume element along the collimating jet) derived in the literature from the relativistic magnetohydrodynamic equations. We obtain theoretical thermal X-ray spectra of the innermost jet as the composition of the spectral contributions of the single volume elements along the jet. Since relativistic effects such as Doppler shift and Doppler boosting due to the motion of the jet toward us may be important, we investigate how the spectra are affected by them, considering different inclinations of the line of sight to the jet axis. Emission lines of highly ionized iron are clearly visible in our spectra, and have probably also been observed in the Galactic microquasars GRS 1915+105 and XTE J1748-288. The Doppler shift of the emission lines is always evident. Due to the chosen geometry of the magnetohydrodynamic jet, the inner X-ray emitting part is not yet collimated; hence, depending on the viewing angle, Doppler boosting does not play a major role in the total spectra. This is the first time that X-ray spectra have been calculated from the numerical solution of a magnetohydrodynamic jet.
Power spectrum analyses of nuclear decay rates
NASA Astrophysics Data System (ADS)
Javorsek, D.; Sturrock, P. A.; Lasenby, R. N.; Lasenby, A. N.; Buncher, J. B.; Fischbach, E.; Gruenwald, J. T.; Hoft, A. W.; Horan, T. J.; Jenkins, J. H.; Kerford, J. L.; Lee, R. H.; Longman, A.; Mattes, J. J.; Morreale, B. L.; Morris, D. B.; Mudry, R. N.; Newport, J. R.; O'Keefe, D.; Petrelli, M. A.; Silver, M. A.; Stewart, C. A.; Terry, B.
2010-10-01
We provide the results from a spectral analysis of nuclear decay data displaying annually varying periodic fluctuations. The analyzed data were obtained from three distinct data sets: 32Si and 36Cl decays reported by an experiment performed at the Brookhaven National Laboratory (BNL), 56Mn decay reported by the Children's Nutrition Research Center (CNRC), but also performed at BNL, and 226Ra decay reported by an experiment performed at the Physikalisch-Technische Bundesanstalt (PTB) in Germany. All three data sets exhibit the same primary frequency mode consisting of an annual period. Additional spectral comparisons of the data to local ambient temperature, atmospheric pressure, relative humidity, Earth-Sun distance, and their reciprocals were performed. No common phases were found between the factors investigated and those exhibited by the nuclear decay data. This suggests that either a combination of factors was responsible, or that, if it was a single factor, its effects on the decay rate experiments are not a direct synchronous modulation. We conclude that the annual periodicity in these data sets is a real effect, but that further study involving additional carefully controlled experiments will be needed to establish its origin.
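The annual periodicity reported above can be exhibited with an ordinary power spectrum of an evenly sampled count-rate record. A sketch on synthetic data (the modulation amplitude and noise level are invented, not the BNL or PTB values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Four years of synthetic daily rates with a small annual modulation.
days = np.arange(4 * 365)
rate = (1.0
        + 1e-3 * np.cos(2 * np.pi * days / 365.25)   # annual term
        + 1e-4 * rng.standard_normal(days.size))     # measurement noise

# Power spectrum with the mean removed; expressing frequency in
# cycles per year puts the annual line at 1.
power = np.abs(np.fft.rfft(rate - rate.mean()))**2
freq = np.fft.rfftfreq(days.size, d=1.0 / 365.25)    # cycles/year

f_peak = freq[np.argmax(power[1:]) + 1]   # skip the DC bin
```

For unevenly sampled or gappy decay records, a Lomb-Scargle periodogram is the usual substitute for the plain FFT; the phase comparisons against temperature, pressure and Earth-Sun distance described in the abstract then test whether such a spectral peak is environmental or intrinsic.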
Qualitätsmanagement in molekularbiologischen Laboratorien
NASA Astrophysics Data System (ADS)
Schulze, Manuela
Every laboratory carries out its analyses to the best of its knowledge and conscience, and every analyst is convinced that he or she does good work. Nevertheless, analyses of the same sample material in different laboratories can lead to different results. Unless these concern measurements near the detection limit, and thus ultimately statistically caused differences or inhomogeneities in the sample material, such discrepancies do not foster confidence in analytical results. With the increasing globalization of markets, the mutual recognition of analytical results is becoming ever more important. The comparability of laboratory results is made easier when laboratories agree on the same guidelines for the conduct and handling of their work. In the field of laboratory analyses of food, feed and seed, EN ISO/IEC 17025 [1] is recognized as such a guideline. This standard contains all the requirements that testing laboratories must fulfil if they wish to demonstrate that they operate a quality management system, are technically (that is, professionally) competent, and are able to produce sound results.
Cluster of the Technische Universität Dresden for greenhouse gas and water fluxes
NASA Astrophysics Data System (ADS)
Moderow, Uta; Eichelmann, Uwe; Grünwald, Thomas; Prasse, Heiko; Queck, Ronald; Spank, Uwe; Bernhofer, Christian
2017-04-01
How different land uses change CO2 fluxes under similar climatic conditions is a core question in the estimation of carbon sinks. Here the TUD cluster forms an excellent basis, since it provides long-term eddy-covariance flux measurements for different land uses. Measurements started at the Anchor Station Tharandter Wald (spruce) in 1996. Since then, the TUD cluster has been successively complemented by continuous greenhouse gas flux observatories at Grillenburg (grassland), Klingenberg (crop rotation) and Spreewald (wetland), operated since 2002, 2004 and 2010, respectively. The results of the TUD cluster have been shared internationally in research frameworks such as EUROFLUX and its successors, and the cluster is now part of ICOS-D (Integrated Carbon Observation System), the German branch of ICOS Europe. This contribution focuses on presenting the different sites, which have comparatively similar climatic conditions but different CO2, water and energy fluxes. Influences of management and climatic conditions that are apparent in the long-term data will be shown, as well as interesting aspects of distinct land uses.
Status and plans for the future of the Vienna VLBI Software
NASA Astrophysics Data System (ADS)
Madzak, Matthias; Böhm, Johannes; Böhm, Sigrid; Girdiuk, Anastasiia; Hellerschmied, Andreas; Hofmeister, Armin; Krasna, Hana; Kwak, Younghee; Landskron, Daniel; Mayer, David; McCallum, Jamie; Plank, Lucia; Schönberger, Caroline; Shabala, Stanislav; Sun, Jing; Teke, Kamil
2016-04-01
The Vienna VLBI Software (VieVS) is a VLBI analysis software developed and maintained at Technische Universität Wien (TU Wien) since 2008, with contributions from groups all over the world. It is used both in university courses and for providing VLBI analysis results to the geodetic community. Written in a modular structure in Matlab, VieVS offers easy access to the source code and the possibility to adapt the programs for particular purposes. The new version 2.3, released in December 2015, includes several new parameters to be estimated in the global solution, such as tidal ERP variation coefficients. The graphical user interface was slightly modified to improve usability and to add, for example, the possibility of deriving baseline length repeatabilities. The scheduling of satellite observations was refined, and the simulator now includes the effect of source structure, which can also be corrected for in the analysis. This poster gives an overview of all VLBI-related activities in Vienna and provides an outlook on future plans concerning the Vienna VLBI Software.
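A baseline length repeatability of the kind mentioned above is essentially the session-to-session scatter of one baseline's estimated length. A minimal sketch, with made-up session lengths for a roughly 6000 km baseline:

```python
import math

# Hedged sketch of a baseline length repeatability: the sample standard
# deviation of the lengths of one baseline estimated in several VLBI sessions.
# The session values below are invented.

def repeatability(lengths_m):
    """Sample standard deviation of session-wise baseline lengths (metres)."""
    n = len(lengths_m)
    mean = sum(lengths_m) / n
    return math.sqrt(sum((x - mean) ** 2 for x in lengths_m) / (n - 1))

sessions = [6000000.010, 6000000.025, 5999999.995, 6000000.020]
print(round(repeatability(sessions) * 1000.0, 2), "mm")  # → 13.23 mm
```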
NASA Technical Reports Server (NTRS)
Rebstock, Rainer
1987-01-01
Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.
Beta/gamma and alpha backgrounds in CRESST-II Phase 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, R.; Angloher, G.; Ferreiro Iachellini, N.
2015-06-01
The CRESST-II experiment aims at the detection of dark matter with scintillating CaWO4 crystals operated as cryogenic detectors. Recent results on spin-independent WIMP-nucleon scattering from CRESST-II Phase 2 made it possible to probe a new region of parameter space for WIMP masses below 3 GeV/c². This sensitivity was achieved after background levels were reduced significantly. We present extensive background studies of a CaWO4 crystal, called TUM40, grown at the Technische Universität München. The average beta/gamma rate of 3.51 counts/(kg keV day) (1-40 keV) and the total intrinsic alpha activity from natural decay chains of 3.08±0.04 mBq/kg are the lowest reported for CaWO4 detectors. Contributions from cosmogenic activation, surface-alpha decays, external radiation and intrinsic alpha/beta emitters are investigated in detail. A Monte Carlo based background decomposition allows the origin of the majority of beta/gamma events in the energy region relevant for the dark matter search to be identified.
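A background level in counts/(kg keV day) is simply the event count normalised by detector mass, live time and energy window. A minimal sketch; the exposure numbers below are invented and are not those of TUM40:

```python
# Hedged sketch of how an average background rate in counts/(kg keV day)
# is obtained from raw counts. All inputs are illustrative.

def differential_rate(counts, mass_kg, live_days, e_lo_kev, e_hi_kev):
    """Average differential rate over the energy window [e_lo, e_hi] keV."""
    return counts / (mass_kg * live_days * (e_hi_kev - e_lo_kev))

# 3420 events in 1-40 keV from a 0.25 kg crystal over 100 live days:
print(round(differential_rate(3420, 0.25, 100.0, 1.0, 40.0), 2))  # → 3.51
```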
NASA Astrophysics Data System (ADS)
Zuber, Ralf; Sperfeld, Peter; Riechelmann, Stefan; Nevas, Saulius; Sildoja, Meelis; Seckmeyer, Gunther
2018-04-01
A compact array spectroradiometer that enables precise and robust measurements of solar UV spectral direct irradiance is presented. We show that this instrument can accurately retrieve the total ozone column (TOC). The internal stray light, which is often the limiting factor for measurements in the UV spectral range and increases the uncertainty of TOC analysis, is reduced physically, so that no further stray-light reduction methods, such as mathematical corrections, are necessary. The instrument was extensively characterised at the Physikalisch-Technische Bundesanstalt (PTB) in Germany. During an international total ozone measurement intercomparison at the Izaña Atmospheric Observatory in Tenerife, the suitability of the instrument was verified with measurements of the direct solar irradiance and subsequent TOC evaluations based on the spectral data measured between 12 and 30 September 2016. The results showed TOC deviations of less than 1.5 % from most other instruments in most situations, and not exceeding 3 % from established TOC measurement systems such as Dobson or Brewer.
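The underlying retrieval idea is a Beer-Lambert inversion of the attenuated direct-sun signal. A hedged single-wavelength sketch; operational Dobson/Brewer retrievals use wavelength pairs and further corrections, and all numbers below are illustrative:

```python
import math

# Hedged single-channel sketch of a Beer-Lambert total-ozone retrieval from
# direct solar irradiance. alpha_per_du and the air mass are invented values.

def toc_du(i_measured, i_extraterrestrial, alpha_per_du, mu_ozone):
    """Total ozone column in Dobson units from one UV channel:
    TOC = ln(I0/I) / (alpha * mu)."""
    return math.log(i_extraterrestrial / i_measured) / (alpha_per_du * mu_ozone)

# A 300 DU column with alpha = 0.004 per DU and ozone air mass 1.5
# attenuates the signal by exp(-1.8):
print(round(toc_du(math.exp(-1.8), 1.0, 0.004, 1.5)))  # → 300
```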
Vienna VLBI and Satellite Software (VieVS) for Geodesy and Astrometry
NASA Astrophysics Data System (ADS)
Böhm, Johannes; Böhm, Sigrid; Boisits, Janina; Girdiuk, Anastasiia; Gruber, Jakob; Hellerschmied, Andreas; Krásná, Hana; Landskron, Daniel; Madzak, Matthias; Mayer, David; McCallum, Jamie; McCallum, Lucia; Schartner, Matthias; Teke, Kamil
2018-04-01
The Vienna VLBI and Satellite Software (VieVS) is a state-of-the-art Very Long Baseline Interferometry (VLBI) analysis software package for geodesy and astrometry. VieVS has been developed at Technische Universität Wien (TU Wien) since 2008, where it is used for research purposes and for teaching space geodetic techniques. In the past decade, it has been successfully applied to VLBI observations for the determination of celestial and terrestrial reference frames, as well as for the estimation of celestial pole offsets, Universal Time (UT1-UTC), and polar motion based on least-squares adjustment. Furthermore, VieVS is equipped with tools for scheduling and simulating VLBI observations of extragalactic radio sources as well as of satellites and spacecraft, features which have proved very useful for a variety of applications. VieVS is now available as version 3.0, and we provide the software to all interested persons and institutions. A wiki with more information about VieVS is available at http://vievswiki.geo.tuwien.ac.at/.
Giacomelli, L; Zimbal, A; Reginatto, M; Tittelmeier, K
2011-01-01
A compact NE213 liquid scintillation neutron spectrometer with a new digital data acquisition (DAQ) system is now in operation at the Physikalisch-Technische Bundesanstalt (PTB). With the DAQ system, developed by ENEA Frascati, neutron spectrometry at high count rates on the order of 5×10^5 s^-1 is possible, roughly an order of magnitude higher than with an analog acquisition system. To validate the DAQ system, a new data analysis code was developed, and tests were performed using measurements with 14-MeV neutrons made at the PTB accelerator. Additional analysis was carried out to optimize the two-gate method used for neutron-gamma (n-γ) discrimination. The best results were obtained with gates of 35 ns and 80 ns. This indicates that the fast and medium decay time components of the NE213 light emission are the ones relevant for n-γ discrimination with the digital acquisition system. This differs from what is normally implemented in analog pulse shape discrimination modules, namely the fast and long decay emissions of the scintillating light.
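The two-gate method compares the charge integrated over a short gate with that over a long gate: neutron (proton-recoil) pulses have a stronger slow scintillation component and thus a larger tail fraction. A minimal sketch with synthetic pulse samples, not real digitizer data:

```python
# Hedged sketch of two-gate charge comparison for n-gamma discrimination.
# The two pulses below are synthetic; gates are given in sample counts.

def tail_to_total(pulse, short_gate, long_gate):
    """Fraction of the integrated charge arriving after the short gate."""
    total = sum(pulse[:long_gate])
    return (total - sum(pulse[:short_gate])) / total

gamma_like   = [100, 50, 20, 8, 3, 1, 0, 0]   # fast decay only
neutron_like = [100, 60, 35, 20, 12, 8, 5, 3]  # extra slow component

# The neutron-like pulse has the larger tail-to-total ratio:
print(tail_to_total(gamma_like, 3, 8) < tail_to_total(neutron_like, 3, 8))  # → True
```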
Compact NE213 neutron spectrometer with high energy resolution for fusion applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimbal, A.; Reginatto, M.; Schuhmacher, H.
Neutron spectrometry is a tool for obtaining important information on the fuel ion composition, velocity distribution and temperature of fusion plasmas. A compact NE213 liquid scintillator, fully characterized at the Physikalisch-Technische Bundesanstalt, was installed and operated at the Joint European Torus (JET) during two experimental campaigns (C8-2002 and trace tritium experiment-TTE 2003). The results show that this system can operate in a real fusion experiment as a neutron (1.5 MeV
Patrick Couvreur: inspiring pharmaceutical innovation.
Stanwix, Hannah
2014-05-01
Patrick Couvreur speaks to Hannah Stanwix, Managing Commissioning Editor: Professor Patrick Couvreur received his pharmacy degree from the Université Catholique de Louvain (Louvain-la-Neuve, Belgium) in 1972. He holds a PhD in pharmaceutical technology from the same university and completed a postdoctoral fellowship at the Eidgenössische Technische Hochschule (Zürich, Switzerland). Since 1984, Professor Couvreur has been Full Professor of Pharmacy at the Paris-Sud University (Paris, France) and was holder of the Chair of Innovation Technologique at the prestigious Collège de France (Paris, France). He has published more than 450 peer-reviewed articles and has an H-index of 73, with over 19,000 citations. Professor Couvreur has been recognized by numerous national and international awards, including the 2004 Pharmaceutical Sciences World Congress Award, the prestigious Host Madsen Medal, the Prix Galien, the European Pharmaceutical Scientist Award 2011 from the European Federation of Pharmaceutical Sciences, the Médaille de l'Innovation from the Centre National de la Recherche Scientifique, and recently the European Inventor Award 2013 from the European Patent Office.
NASA Astrophysics Data System (ADS)
Dopheide, D.; Taux, G.; Krey, E.-A.
1990-01-01
At the Physikalisch-Technische Bundesanstalt (PTB), a research test facility for the accurate measurement of gas (volume and mass) flowrates has been set up over the last few years on the basis of a laser Doppler anemometer (LDA), with a view to directly measuring gas flowrates with a relative uncertainty of only 0.1%. To achieve this, it was necessary to develop laser Doppler anemometry into a precision measuring technique and to carry out detailed investigations of stationary low-turbulence nozzle flow. The process-computer-controlled test facility covers the flowrate range from 100 to 4000 m³/h (about 0.03-1.0 m³/s), any flowrate being measured directly, immediately and without a staggered arrangement of several flow meters. After the development was completed, several turbine-type gas meters were calibrated and international comparisons carried out. The article surveys the most significant aspects of the work and provides an outlook on future developments with regard to the miniaturization of optical flow and flowrate sensors for industrial applications.
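Getting a volume flowrate from an LDA-measured axial velocity profile amounts to integrating v(r) over the circular nozzle cross-section. A minimal sketch with an illustrative flat profile, where the annular sum can be checked against v·πR²:

```python
import math

# Hedged sketch: volume flowrate by annular integration of an axial velocity
# profile v(r) across a circular cross-section. Profile and geometry invented.

def flowrate_m3s(radii_m, velocities_ms, dr_m):
    """Q = sum over annuli of v(r) * 2*pi*r*dr, in m^3/s."""
    return sum(v * 2.0 * math.pi * r * dr_m
               for r, v in zip(radii_m, velocities_ms))

dr = 0.01
radii = [dr * (i + 0.5) for i in range(10)]  # annulus midpoints, R = 0.1 m
uniform = [10.0] * 10                        # flat 10 m/s profile
print(round(flowrate_m3s(radii, uniform, dr), 4))  # → 0.3142 (= v * pi * R^2)
```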
[The development of self-esteem of children in Germany between 1989 and 2009].
Schauder, Thomas
2012-01-01
While establishing new norms for the questionnaire Aussagen-Liste zum Selbstwertgefühl für Kinder und Jugendliche (ALS; Schauder, 1991, 1996, 2011), data from 1989 were compared with new data from 2009. The expected differences in the areas of school, leisure and family, and a certain trend towards a decrease in self-esteem during puberty between the age groups 10/11 and 12/13, were found at both times of examination. The difference between boys and girls in experienced self-esteem is no longer relevant in 2009: girls now show higher scores and express self-esteem as high as that of boys. Changes in self-esteem over time can be summarised as follows: overall, the children tested in 2009 express higher self-esteem than those tested in 1989. This applies to all age groups, to boys and girls, and to all tested areas, and is in part statistically highly significant. Girls show the most pronounced improvement in self-esteem.
How to Create Value from Smart Data
NASA Astrophysics Data System (ADS)
Schüller, Katharina; Fritsch, Stefan
Using a concrete research project, this contribution discusses how algorithms were developed from the monitoring data of photovoltaic systems that can in future enable automated fault detection and thus improved plant operation. Four stages are necessary to get from data to an optimised process: data integration, followed by quality assurance, then analysis, and finally implementation in an operationally usable application. For the development of valid, practically relevant models it proved indispensable that the data-generating processes, and thus the physical fundamentals of the plants, were understood early on not only by the process experts but equally by the data scientists. It is not enough to consolidate data and feed it into an analysis tool; value creation from data succeeds only through interdisciplinary collaboration across domains and competences, in which both sides are prepared to learn from each other continuously.
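One simple rule of the kind such an analysis stage might produce: flag a possible fault when measured PV output stays well below the irradiance-based expectation for several consecutive intervals. A minimal sketch; thresholds, names and data are invented, not taken from the project described:

```python
# Hedged sketch of a residual-based PV fault flag: report time indices where
# measured output is more than `shortfall` (fraction) below expectation for at
# least `min_run` consecutive steps. All values are illustrative.

def fault_indices(expected_kw, measured_kw, shortfall=0.2, min_run=3):
    """Indices at which the shortfall has persisted for >= min_run steps."""
    hits, run = [], 0
    for i, (e, m) in enumerate(zip(expected_kw, measured_kw)):
        run = run + 1 if m < (1.0 - shortfall) * e else 0
        if run >= min_run:
            hits.append(i)
    return hits

expected = [5.0, 5.0, 5.0, 5.0, 5.0, 5.0]
measured = [5.0, 4.9, 3.5, 3.4, 3.3, 5.0]
print(fault_indices(expected, measured))  # → [4]
```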
Groundwater inflow to Lake Steißlingen and its consequences for water chemistry
NASA Astrophysics Data System (ADS)
Gilfedder, Benjamin; Peiffer, Stefan; Pöschke, Franziska; Spirkaneder, Andrea
2018-06-01
Groundwater can play an important role in lake water and chemical budgets. The aim of this study was to map and quantify groundwater discharge to Lake Steißlingen, a small lake in south-west Germany, using 222Rn as a natural groundwater tracer. 222Rn, nutrients, temperature and oxygen concentration were measured in depth profiles and in near-sediment water samples during April and June 2016. The spatial distribution of Rn activities (max. 944 Bq m-3) showed that groundwater discharges into the middle of the lake and along the northeastern shore from springs in the hypolimnion (11-14 m depth). Based on a Rn mass balance, groundwater accounts for 70% of the total water input (GWflux = 11 l s-1) to the lake. There were significant positive correlations of methane and nitrite with Rn activity, suggesting groundwater as a common source. Overall, the inflow of groundwater causes a deterioration in lake water quality.
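At steady state, a Rn mass balance of this kind equates the groundwater-borne Rn input with radioactive decay of the lake inventory plus losses to the atmosphere. A minimal sketch; the numbers below are illustrative, not those of Lake Steißlingen:

```python
# Hedged steady-state sketch of a 222Rn mass balance for quantifying
# groundwater discharge to a lake. All inputs are invented.

RN222_DECAY = 2.1e-6  # 222Rn decay constant in 1/s (half-life ~3.8 days)

def groundwater_inflow_m3s(lake_inventory_bq, atm_loss_bq_s, c_gw_bq_m3):
    """Groundwater inflow (m^3/s) that closes the lake Rn budget:
    Q = (lambda * inventory + atmospheric loss) / groundwater Rn activity."""
    return (RN222_DECAY * lake_inventory_bq + atm_loss_bq_s) / c_gw_bq_m3

# 5e6 Bq lake inventory, 5 Bq/s evasion, 5000 Bq/m^3 in groundwater:
q = groundwater_inflow_m3s(5.0e6, 5.0, 5000.0)
print(round(q * 1000.0, 1), "l/s")  # → 3.1 l/s
```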
NASA Astrophysics Data System (ADS)
Graf, Dittmar; Soran, Haluk
A study is presented in which the knowledge and beliefs of student teachers of all subjects regarding evolution were surveyed at two universities in Germany and Turkey. The survey was conducted in Dortmund and Ankara. It emerged that there are pronounced deficits in the understanding of evolutionary mechanisms. Many students, particularly from Turkey, are not convinced of the factuality of evolution. This applies both to students of biology and to students of other subjects. The factors that can influence beliefs about evolution were examined more closely, which is of particular interest in view of the high rate of rejection of evolution. Trust in science plays a special role here: those who trust science are more likely to be convinced of evolution than those who are sceptical of science.
A Decision Model for the Disclosure of Personal Data on the Internet
NASA Astrophysics Data System (ADS)
Treiblmaier, Horst
Over the past two decades, the Internet has evolved from a playground for technology-enthusiastic computer specialists into a versatile worldwide network for private individuals and companies. A decisive factor in this was the rapid development of the World Wide Web (WWW), which, through its ability to convey multimedia content, has become an essential part of daily life for a large share of the population of industrialised countries. That this development is far from complete is shown by the current discussion of Web 2.0 and 3.0. Whereas in recent years it was high revenue growth in e-commerce and multimedia websites combined with elaborate applications that drove the steadily rising number of World Wide Web users, this surge of innovation is now being continued by a multitude of applications characterised by the increasing interconnection of users with one another.
NASA Astrophysics Data System (ADS)
2007-11-01
Mohab Abou ZeidVrije Universiteit, Brussel Joke AdamKatholieke Universiteit Leuven Nikolas AkerblomMax-Planck-Institut für Physik, München Luis Fernando Alday Utrecht University Stelios Alexandris University of Patras Antonio Amariti Università di Milano-Bicocca Nicola Ambrosetti Université de Neuchâtel Pascal Anastasopoulos Università di Roma Tor Vergata Laura Andrianopoli Enrico Fermi Center Carlo Angelantonj Università di Torino Lilia Anguelova Queen Mary, University of London Daniel AreanUniversidade de Santiago de Compostela Gleb ArutyunovUtrecht University Spyros Avramis NTU Athens—University of Patras Ioannis Bakas University of Patras Subrata Bal Dublin Institute for Advanced Studies Igor Bandos Valencia University Jessica Barrett University of Iceland Marco Baumgartl Eidgenössische Technische Hochschule, Zürich Jacopo Bechi Università di Firenze James Bedford Queen Mary, University of London Jorge Bellorin Universidad Autonoma de Madrid Francesco Benini SISSA, Trieste Eric Bergshoeff Centre for Theoretical Physics, University of Groningen Gaetano BertoldiUniversity of Wales, Swansea Adel Bilal Laboratoire de Physique Théorique, École Normale Superieure, Paris Matthias Blau Université de Neuchâtel Johannes BroedelUniversität Hannover Felix Brümmer Universität Heidelberg Julio Cesar Bueno de Andrade São Paulo State University—UNESP Cliff Burgess McMaster University Agostino Butti Laboratoire de Physique Théorique, École Normale Superieure, Paris Marco Caldarelli Universitat de Barcelona Pablo G Camara Centre de Physique Théorique, École Polytechnique, Palaiseau Joan Camps Universitat de Barcelona Felipe Canoura FernandezUniversidade de Santiago de Compostela Luigi Cappiello Università di Napoli Federico II Luca Carlevaro École Polytechnique, Palaiseau Roberto Casero Centre de Physique Théorique, École Polytechnique, Palaiseau Claudio Caviezel Max-Planck-Institut für Physik, München Alessio Celi Universitat de Barcelona Anna Ceresole Istituto Nazionale di 
Fisica Nucleare and Università di Torino Kang Sin Choi University of Bonn Michele Cirafici University of Patras Andres Collinucci Katholieke Universiteit Leuven Aldo Cotrone Universitat de Barcelona Ben Craps Vrije Universiteit, Brussel Stefano Cremonesi SISSA, Trieste Gianguido Dall'Agata Padova University Sanjit Das Indian Institute of Technology, Kharagpur Forcella Davide SISSA, Trieste Jose A de Azcarraga Valencia University and Instituto de Fìsica Corpuscular (CSIC-UVEG), Valencia Sophie de BuylInstitut des Hautes Études Scientifiques, Bures-sur-Yvette Jean-Pierre Derendinger Université de Neuchâtel Stephane Detournay Università Degli Studi di Milano Paolo Di Vecchia NORDITA, København Oscar Dias Universitat de Barcelona Vladimir Dobrev Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Sofia Joel Ekstrand Department of Theoretical Physics, Uppsala University Federico Elmetti Università di Milano I Diaconu Eugen University of Craiova Oleg Evnin Vrije Universiteit, Brussel Bo Feng Imperial College, London Livia Ferro Università di Torino Pau Figueras Universitat de Barcelona Raphael Flauger University of Texas at Austin Valentina Forini Università di Perugia Angelos Fotopoulos Università di Torino Denis Frank Université de Neuchâtel Lisa Freyhult Albert-Einstein-Institut, Golm Carlos Fuertes Instituto de Física Teórica, Madrid Matthias Gaberdiel Eidgenössische Technische Hochschule, Zürich Maria Pilar Garcia del Moral Università di Torino Daniel Gerber Instituto de Física Teórica, Madrid Valentina Giangreco Marotta Puletti Uppsala University Joaquim Gomis Universitat de Barcelona Gianluca Grignani Università di Perugia Luca Griguolo Università di Parma Umut Gursoy École Polytechnique, Palaiseau and École Normale Supérieure, Paris Michael Haack Ludwig-Maximilians-Universität, München Troels Harmark Niels Bohr Institute, København Alexander Haupt Imperial College, London Michal Heller Jagiellonian University, Krakow Samuli Hemming 
University of Iceland Yasuaki Hikida DESY, Hamburg Christian Hillmann Max-Planck-Institut für Gravitationsphysik, Potsdam Stephan Hoehne Max-Planck-Institut für Physik, München Gabriele Honecker CERN, Geneva Carlos Hoyos University of Wales, Swansea Mechthild Huebscher Consejo Superior de Investigaciones Cientificas, Madrid Matthias Ihl University of Texas at Austin Emiliano Imeroni University of Wales, Swansea Nikos Irges University of Crete Negru Iulian University of Craiova Matthias Kaminski Ludwig-Maximilians-Universität, München Stefanos Katmadas Universiteit Utrecht Shoichi Kawamoto Oxford University Christoph Keller Eidgenössische Technische Hochschule, Zürich Arjan Keurentjes Vrije Universiteit, Brussel Sadi Khodaee Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran Michael Kiermaier Massachusetts Institute of Technology, Cambridge, MA Elias Kiritsis Centre de Physique Théorique, École Polytechnique, Palaiseau and University of Crete Ingo KirschEidgenössische Technische Hochschule, Zürich Johanna Knapp CERN, Geneva Paul Koerber Max-Planck-Institut für Physik, München Simon Koers Max-Planck-Institut für Physik, München Anatoly Konechny Heriot-Watt University, Edinburgh Peter Koroteev Institute for Theoretical and Experimental Physics (ITEP), Moscow Daniel KreflLudwig-Maximilians-Universität and Max-Planck-Institut für Physik, München Chethan KrishnanUniversité Libre de Bruxelles Stanislav Kuperstein Université Libre de Bruxelles Alberto Lerda Università del Piemonte Orientale, Alessandria Roman Linares Universidad Autonoma Metropolitana, Iztapalapa, México Maria A Lledo Universidad de Valencia Dieter Luest Ludwig-Maximilians-Universität and Max-Planck-Institut für Physik, München Joseph Lykken Fermi National Accelerator Laboratory (Fermilab), Batavia, IL Carlo Maccaferri Vrije Universiteit, Brussel Oscar Macia Universidad de Valencia Tristan Maillard Centre de Physique Théorique, École Polytechnique, Palaiseau Diego Mansi Università Degli 
Studi di Milano Matteo Marescotti Università del Piemonte Orientale, Alessandria Alberto Mariotti Università di Milano-Bicocca Raffaele Marotta Istituto Nazionale di Fisica Nucleare, Napoli Alessio Marrani Istituto Nazionale di Fisica Nucleare and LNF, Firenze Luca Martucci Instituto de Física Teórica, Madrid and Katholieke Universiteit Leuven David Mateos University of California, Santa Barbara Andrea Mauri Università di Milano Liuba Mazzanti Università di Milano-Bicocca Patrick Meessen Instituto de Física Teórica, Universidad Autónoma de Madrid Lotta Mether Helsinki Institute of Physics Rene Meyer Max-Planck-Institut für Physik, München Giuseppe Milanesi SISSA, Trieste Cesar Miquel-Espanya Universitat de Valencia and Instituto de Física Corpuscular, Valencia Alexander Monin Institute for Theoretical and Experimental Physics (ITEP), Moscow and Moscow State University (MSU) Samuel Monnier Université de Genève Sergio Montero Instituto de Física Teórica, Madrid Nicola Mori Università di Firenze Alexander Marcel Morisse University of California, Santa Cruz Sebastian Moster Max-Planck-Institut für Physik, München Adele Nasti Queen Mary, University of London Vasilis Niarchos École Polytechnique, Palaiseau Emil Nissimov Institute for Nuclear Research and Nuclear Energy, Sofia Francesco Nitti École Polytechnique, Palaiseau Eoin O'Colgain Imperial College, London Niels Obers Niels Bohr Institute, København Rodrigo Olea Università Degli Studi di Milano Marta Orselli Niels Bohr Institute, København Enrico PajerLudwig-Maximilians-Universität, München Eran PaltiOxford University Georgios PapathanasiouBrown University, Providence, RI Angel ParedesCentre de Physique Théorique, École Polytechnique, Palaiseau Jeong-Hyuck ParkMax-Planck-Institut für Physik, München Sara PasquettiUniversità di Parma Silvia PenatiUniversità di Milano-Bicocca Igor PesandoUniversità di Torino Marios PetropoulosÉcole Polytechnique, Palaiseau Roberto PettorinoUniversità di Napoli Federico II Franco 
PezzellaIstituto Nazionale di Fisica Nucleare, Napoli Moises Picon PonceIstituto Nazionale di Fisica Nucleare, Padova Marco PirroneUniversità di Milano-Bicocca Erik PlauschinnMax-Planck-Institut für Physik, München Andre PloeghCentre for Theoretical Physics, University of Groningen Giuseppe PolicastroLaboratoire de Physique Théorique, École Normale Superieure, Paris Josep PonsUniversitat de Barcelona S Prem KumarUniversity of Wales, Swansea Nikolaos PrezasCERN, Geneva Carlo Alberto RattiUniversità di Milano-Bicocca Riccardo RicciImperial College, London Alejandro RiveroEscuela Universitaria Politécnica de Teruel, Universidad de Zaragoza Irene RodriguezInstituto de Física Teórica, Madrid Maria Jose RodriguezUniversitat de Barcelona Diederik RoestUniversitat de Barcelona Alberto RomagnoniLaboratoire de Physique Théorique d'Orsay, Paris Christian RomelsbergerDublin Institute for Advanced Studies Jan RosseelKatholieke Universiteit Leuven Sebastiano RossiEidgenössische Technische Hochschule, Zürich Felix RustMax-Planck-Institut für Physik, München Cheol RyouPohang University of Science and Technology (POSTECH) Christian SaemannDublin Institute for Advanced Studies Houman Safaai SISSA, Trieste Alberto SantambrogioIstituto Nazionale di Fisica Nucleare, Sezione di Milano Frank SaueressigUniversiteit Utrecht Ricardo SchiappaCERN, Geneva Cornelius Schmidt-ColinetEidgenössische Technische Hochschule, Zürich Maximilian Schmidt-SommerfeldMax-Planck-Institut für Physik, München Waldemar SchulginMax-Planck-Institut für Physik, München Claudio ScruccaUniversité de Neuchâtel Nathan SeibergInstitute of Advanced Studies, Princeton, NJ Domenico SeminaraUniversità di Firenze Alexander SevrinVrije Universiteit, Brussel Konstadinos SfetsosUniversity of Patras Kostas SiamposUniversity of Patras Christoph SiegUniversità Degli Studi di Milano Vaula Silvia Instituto de Física Teórica, Madrid Aaron Sim Imperial College, London Woojoo Sim Pohang University of Science and Technology (POSTECH) 
Sergey Slizovskiy Department of Theoretical Physics, Uppsala University Paul Smyth Katholieke Universiteit Leuven Corneliu Sochichiu Laboratori Nazionali di Frascati Dmitri Sorokin Istituto Nazionale di Fisica Nucleare, Padova Kellogg Stelle Imperial College, London Piotr Surowka Jagiellonian University, Krakow Yasutoshi Takayama Niels Bohr Institute, København Laura Tamassia Katholieke Universiteit Leuven Radu Tatar University of Liverpool Larus Thorlacius University of Iceland Paavo Tiitola Helsinki Institute of Physics Diego Trancanelli Stony Brook University, NY Michele TraplettiInstitut für Theoretische Physik, Universität Heidelberg Mario Trigiante Politecnico di Torino Angel Uranga CERN, Geneva and Instituto de Física Teórica, Madrid Roberto Valandro SISSA, Trieste Dieter Van den Bleeken Katholieke Universiteit Leuven Antoine Van Proeyen Katholieke Universiteit Leuven Thomas Van Riet Centre for Theoretical Physics, University of Groningen Pierre Vanhove Service de Physique Théorique, Saclay Oscar Varela Universidad de Valencia Alessandro Vichi Scuola Normale Superiore di Pisa Massimiliano VinconQueen Mary, University of London John Ward Queen Mary, University of London and CERN, Geneva Brian Wecht Massachusetts Institute of Technology, Cambridge, MA Marlene Weiss Eidgenössische Technische Hochschule, Zürich and CERN, Geneva Sebastian Weiss Université de Neuchâtel Alexander Wijns Vrije Universiteit, Brussel Przemek Witaszczyk Jagiellonian University, Krakow Timm Wrase University of Texas at Austin Jun-Bao Wu SISSA, Trieste Amos Yarom Ludwig-Maximilians-Universität, München Marco Zagermann Max-Planck-Institut für Physik, München Daniela Zanon Dipartimento di Fisica, Università di Milano Andrea Zanzi University of Bonn Andrey Zayakin Moscow State University (MSU) and Institute for Theoretical and Experimental Physics (ITEP), Moscow Konstantinos Zoubos Queen Mary, University of London
Development and production of a multilayer-coated x-ray reflecting stack for the Athena mission
NASA Astrophysics Data System (ADS)
Massahi, S.; Ferreira, D. D. M.; Christensen, F. E.; Shortt, B.; Girou, D. A.; Collon, M.; Landgraf, B.; Barriere, N.; Krumrey, M.; Cibik, L.; Schreiber, S.
2016-07-01
The Advanced Telescope for High-Energy Astrophysics, Athena, selected as the European Space Agency's second large-class mission, is based on the novel Silicon Pore Optics X-ray mirror technology. DTU Space has been working for several years on the development of multilayer coatings on the Silicon Pore Optics in an effort to optimize the throughput of the Athena optics. A linearly graded Ir/B4C multilayer has been deposited on the mirrors via the direct-current magnetron sputtering technique at DTU Space. Simulations have demonstrated that this specific multilayer produces the highest reflectivity at 6 keV, which is a goal for the scientific objectives of the mission. A critical aspect of the coating process concerns the use of photolithography techniques, for which we present the most recent developments, in particular related to the cleanliness of the plates. Experiments regarding the lift-off and stacking of the mirrors have been performed, and the results obtained will be presented. Furthermore, the deposited thin films were characterized with X-ray reflectometry at DTU Space and in the laboratory of the Physikalisch-Technische Bundesanstalt at the synchrotron radiation facility BESSY II.
Angle comparison using an autocollimator
NASA Astrophysics Data System (ADS)
Geckeler, Ralf D.; Just, Andreas; Vasilev, Valentin; Prieto, Emilio; Dvorácek, František; Zelenika, Slobodan; Przybylska, Joanna; Duta, Alexandru; Victorov, Ilya; Pisani, Marco; Saraiva, Fernanda; Salgado, Jose-Antonio; Gao, Sitian; Anusorn, Tonmueanwai; Leng Tan, Siew; Cox, Peter; Watanabe, Tsukasa; Lewis, Andrew; Chaudhary, K. P.; Thalmann, Ruedi; Banreti, Edit; Nurul, Alfiyati; Fira, Roman; Yandayan, Tanfer; Chekirda, Konstantin; Bergmans, Rob; Lassila, Antti
2018-01-01
Autocollimators are versatile optical devices for the contactless measurement of the tilt angles of reflecting surfaces. An international key comparison (KC) on autocollimator calibration, EURAMET.L-K3.2009, was initiated by the European Association of National Metrology Institutes (EURAMET) to provide information on the capabilities in this field. The Physikalisch-Technische Bundesanstalt (PTB) acted as the pilot laboratory, with a total of 25 international participants from EURAMET and from the Asia Pacific Metrology Programme (APMP) providing measurements. This KC was the first to utilise a high-resolution electronic autocollimator as a standard. In contrast to KCs in angle metrology, which usually involve the full plane angle, it focused on relatively small angular ranges (±10 arcsec and ±1000 arcsec) and step sizes (0.1 arcsec and 10 arcsec, respectively). This document represents the approved final report on the results of the KC; the text is that which appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCL, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
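Key comparison results are commonly evaluated with the En number, which relates a laboratory's deviation from the reference value to the combined expanded uncertainties. A minimal sketch with illustrative values:

```python
# Hedged sketch of the En number used to evaluate key comparison results:
# |En| <= 1 indicates agreement with the reference within the expanded (k=2)
# uncertainties. All numbers below are illustrative.

def en_number(x_lab, u_lab, x_ref, u_ref):
    """En = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2)."""
    return (x_lab - x_ref) / (u_lab ** 2 + u_ref ** 2) ** 0.5

# A lab measures an angle step as 10.003 arcsec (U = 0.004 arcsec) against a
# reference of 10.000 arcsec (U = 0.003 arcsec):
print(round(en_number(10.003, 0.004, 10.000, 0.003), 2))  # → 0.6 (agreement)
```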
New x-ray parallel beam facility XPBF 2.0 for the characterization of silicon pore optics
NASA Astrophysics Data System (ADS)
Krumrey, Michael; Müller, Peter; Cibik, Levent; Collon, Max; Barrière, Nicolas; Vacanti, Giuseppe; Bavdaz, Marcos; Wille, Eric
2016-07-01
A new X-ray parallel beam facility (XPBF 2.0) has been installed in the laboratory of the Physikalisch-Technische Bundesanstalt at the synchrotron radiation facility BESSY II in Berlin to characterize silicon pore optics (SPOs) for the future X-ray observatory ATHENA. Like the existing XPBF, which has been in operation since 2005, the new beamline provides a pencil beam of very low divergence, a vacuum chamber with a hexapod system for accurate positioning of the SPO under investigation, and a vertically movable CCD-based camera system to register the direct and the reflected beam. In contrast to the existing beamline, a multilayer-coated toroidal mirror is used for beam monochromatization at 1.6 keV and for collimation, enabling beam sizes between about 100 μm and at least 5 mm. Thus the quality of individual pores as well as the focusing properties of large groups of pores can be investigated. The new beamline also features increased travel ranges for the hexapod, to cope with larger SPOs, and a sample-to-detector distance of 12 m, corresponding to the envisaged focal length of ATHENA.
Microgravity collisions of dust aggregates as an analogue to early planetesimal formation
NASA Astrophysics Data System (ADS)
Whizin, Akbar; Blum, Jürgen; Colwell, Joshua
2014-11-01
During the early stages of planet formation, the dusty progenitors of planetesimals collided with each other continuously to form the seeds of planets. These collisions could result in growth or disruption depending on the individual impact velocities. Based on input from solar nebula models, a laboratory-based microgravity dust collision experiment was developed for a drop tower at the Technische Universität Braunschweig, Germany. We collided 1.0-1.6 mm SiO2 dust aggregates with clusters of these aggregates at a range of velocities and mass ratios to determine the thresholds between bouncing, sticking, and fragmentation. Presented here are the results of 264 microgravity collisions occurring at velocities of 1-160 cm/s with target-impactor mass ratios of 5:1 to 400:1. We also present the coefficients of restitution for low-velocity collisions, and we find the specific collision energy of fragmentation Q* for aggregates of this size. We find that sticking occurs at mass ratios larger than 40:1, but only at low velocities (≤ 3 cm/s); clear boundaries exist for bouncing up to 30 cm/s and for fragmentation from ~50 cm/s upward, with total disruption occurring above 1 m/s.
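As a toy illustration, the outcome boundaries quoted above can be encoded as a simple classifier. The numeric thresholds are the abstract's reported values, treated here as sharp cuts; the function names and the sharp-cutoff treatment are illustrative assumptions, not the authors' analysis.

```python
def restitution(v_in_cm_s: float, v_out_cm_s: float) -> float:
    """Coefficient of restitution: ratio of rebound speed to impact speed."""
    return v_out_cm_s / v_in_cm_s

def classify_collision(v_cm_s: float, mass_ratio: float) -> str:
    """Classify a dust-aggregate collision outcome using the velocity and
    mass-ratio boundaries reported in the abstract (treated as sharp cuts)."""
    if mass_ratio >= 40 and v_cm_s <= 3:
        return "sticking"          # large targets, very slow impacts
    if v_cm_s <= 30:
        return "bouncing"
    if v_cm_s >= 50:
        return "fragmentation"     # total disruption above ~100 cm/s
    return "transition"            # regime between the clear boundaries
```

For example, `classify_collision(2, 100)` falls in the sticking regime, while `classify_collision(80, 10)` falls in the fragmentation regime.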
Influence of ionospheric disturbances onto long-baseline relative positioning in kinematic mode
NASA Astrophysics Data System (ADS)
Wezka, Kinga; Herrera, Ivan; Cokrlic, Marija; Galas, Roman
2013-04-01
Ionospheric disturbances are fast, random variations in the ionosphere that are difficult to detect and model. Strong disturbances can cause, among other effects, interruption of the GNSS signal or even loss of signal lock. These phenomena are especially harmful for kinematic real-time applications, where system availability is one of the most important parameters influencing positioning reliability. Our investigations were conducted using long time series of GNSS observations gathered at high latitude, where ionospheric disturbances occur more frequently. A selected processing strategy was used to monitor ionospheric signatures in the time series of the coordinates. The quality of the input data and of the processing results was examined and described by a set of proposed parameters. Variations in the coordinates were compared with available information about the state of the ionosphere derived from the Neustrelitz TEC Model (NTCM) and with the time series of raw observations. Some selected parameters were also calculated with the "iono-tools" module of the TUB-NavSolutions software developed by the Precise Navigation and Positioning Group at Technische Universitaet Berlin. The paper presents first results of an evaluation of the robustness of positioning algorithms with respect to ionospheric anomalies, using the NTCM model and our calculated ionospheric parameters.
Self-triggering readout system for the neutron lifetime experiment PENeLOPE
NASA Astrophysics Data System (ADS)
Gaisbauer, D.; Bai, Y.; Konorov, I.; Paul, S.; Steffen, D.
2016-02-01
PENeLOPE is a neutron lifetime experiment developed at the Technische Universität München and located at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), aiming to achieve a precision of 0.1 seconds. The detector for PENeLOPE consists of about 1250 Avalanche Photodiodes (APDs) with a total active area of 1225 cm2. The decay proton detector and its electronics, including the shaper, preamplifier, ADC, and FPGA cards, will be operated at a high electrostatic potential of -30 kV and in a magnetic field of 0.6 T. In addition, the APDs will be cooled to 77 K. The 1250 APDs are divided into 14 groups of 96 channels, including spares. A 12-bit ADC digitizes the detector signals at 1 MSps. Firmware was developed for the detector featuring a self-triggering readout with continuous pedestal calculation and configurable signal detection. Data transmission and configuration are handled via the Switched Enabling Protocol (SEP), a time-division multiplexing low-layer protocol which provides deterministic latency for time-critical messages, IPBus, and JTAG interfaces. The network has an n:1 topology, reducing the number of optical links.
Cross-scale MD simulations of dynamic strength of tantalum
NASA Astrophysics Data System (ADS)
Bulatov, Vasily
2017-06-01
Dislocations are ubiquitous in metals, where their motion presents the dominant and often the only mode of plastic response to straining. Over the last 25 years, computational prediction of plastic response in metals has relied on Discrete Dislocation Dynamics (DDD) as the most fundamental method to account for the collective dynamics of moving dislocations. Here we present the first direct atomistic MD simulations of dislocation-mediated plasticity that are sufficiently large and long to compute the plasticity response of single-crystal tantalum while tracing the underlying dynamics of dislocations in full atomistic detail. Where feasible, direct MD simulations sidestep DDD altogether, thus reducing uncertainties of strength predictions to those of the interatomic potential. In the specific context of shock-induced material dynamics, the same MD models predict when, under what conditions, and how dislocations interact and compete with other fundamental mechanisms of dynamic response, e.g. twinning, phase transformations, and fracture. In collaboration with: Luis Zepeda-Ruiz, Lawrence Livermore National Laboratory; Alexander Stukowski, Technische Universität Darmstadt; Tomas Oppelstrup, Lawrence Livermore National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Klötzsch, C; Sliwka, U; Berlit, P; Noth, J
1996-06-01
Alerted by the number of patients with transient global amnesia (TGA) in whom Valsalva-like activities immediately preceded the onset of TGA, we investigated the frequency of patent foramen ovale (PFO) as the prerequisite for paradoxical embolism. Case series with comparison to a control group. Hospitalized and ambulatory patients at the neurological departments of the Alfried Krupp Hospital, Essen, Germany, and the Rheinisch-Westfälische Technische Hochschule, Aachen, Germany. Fifty-three consecutive patients with TGA were evaluated by the two centers between 1988 and 1995. Using contrast transcranial Doppler sonography, we observed a PFO in 55% of the patients with TGA, compared with 27% of a control group of 100 patients. This difference was statistically significant (P < .01). Twenty-five patients with TGA (47%), 15 of them with a proven PFO, reported a precipitating activity, such as the lifting of heavy weights, immediately before the TGA occurred. In addition to other pathological mechanisms, paradoxical embolism with temporobasal ischemia could possibly play a role in the clinical syndrome of TGA. This hypothesis could explain the frequent observation of preceding Valsalva-like activities in patients with TGA.
Hill, Peter S
2002-01-01
As a major European donor, German government development assistance faces a series of challenges. Recent political changes have raised expectations for demonstrable health outcomes as a result of German development assistance; there has been a deepened commitment to collaboration with other bilateral and multilateral donors; and partner countries are increasingly open to new approaches to development. German development assistance also reflects a new ethos of partnership and the shift to programmatic and sector-based development approaches. At the same time, its particular organizational structure and administrative framework highlight the extent of structural and systems reforms required of donors by changing development relationships, and the tensions created in responding to these. This paper examines organizational changes within the German Agency for Technical Cooperation (Deutsche Gesellschaft für Technische Zusammenarbeit, GTZ) aimed at increasing its Regional, Sectoral, Managerial and Process competence as they affect health and related sectors. These include the decentralization of GTZ, the trend towards integration of projects, the increasing focus on policy and health systems reform, increased inter-sectoral collaboration, changes in recruitment and training, new perspectives in planning and evaluation, and the introduction of a quality management programme.
Alebić-Juretić, Ana
2017-12-01
From 1850 onward, the town of Pola (today Pula, Croatia) underwent major changes and growth owing to its transformation into the principal military port of the Austro-Hungarian Empire. Besides the Admiralty, which governed naval operations, the harbor was supported by various organizations needed for its normal functioning. One of these organizations was the Naval Technical Committee (Marine Technisches Komitee), founded in 1874 with the purpose of solving the technical and technological issues related to the navy. The outbreak of World War I (WWI) posed new challenges for Europe. Thus, on February 29th, 1916, the Hygienic Institute was founded in the harbor area, and Dr. Karl Cafasso was appointed its first director. The purpose of the Institute was to provide scientific and professional aid to the Head of the Medical Corps of the Port's Board (Kriegs-Hafenkommando) in the fields of epidemiology, microbiology, social medicine, and hygiene, the main fields of public health even today. By the end of the war the Institute had ceased its activity; a similar institute was founded only in 1938, under Italian rule, and has developed into the present Institute of Public Health.
NASA Astrophysics Data System (ADS)
Paul, Andrea; Meyer, Klas; Ruiken, Jan-Paul; Illner, Markus; Müller, David-Nicolas; Esche, Erik; Wozny, Günther; Westad, Frank; Maiwald, Michael
2017-03-01
A major industrial reaction based on homogeneous catalysis is hydroformylation for the production of aldehydes from alkenes and syngas. Hydroformylation in microemulsions, which is currently under investigation at Technische Universität Berlin on a mini-plant scale, has been identified as a cost-efficient approach which also enhances product selectivity. Herein, we present the application of online Raman spectroscopy to the reaction of 1-dodecene to 1-tridecanal within a microemulsion. To achieve a good representation of the operating range in the mini-plant with regard to the concentrations of the reactants, a design of experiments was used. Based on initial Raman spectra, partial least squares regression (PLSR) models were calibrated for the prediction of 1-dodecene and 1-tridecanal. Limits of prediction arise from nonlinear correlations between Raman intensity and the mass fractions of compounds in the microemulsion system. Furthermore, the prediction power of the PLSR models is limited by unexpected by-product formation. Application of the lab-scale calibration spectra and PLSR models to online spectra from a mini-plant operation yielded promising estimations of 1-tridecanal and acceptable predictions of 1-dodecene mass fractions, suggesting Raman spectroscopy as a suitable technique for process analytics in microemulsions.
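The PLSR calibration step can be sketched with a minimal single-response NIPALS implementation. This is a generic illustration on synthetic data, not the authors' calibration code; all variable names and the demo data are assumptions.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1: returns regression coefficients and the means
    needed for prediction. X is (n_samples, n_features), y is (n_samples,)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                    # weight vector from covariance
        w /= np.linalg.norm(w)
        t = Xc @ w                       # scores
        tt = t @ t
        p = Xc.T @ t / tt                # X loadings
        q = (yc @ t) / tt                # y loading (scalar)
        Xc = Xc - np.outer(t, p)         # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # coefficients in original X space
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (np.asarray(X, float) - x_mean) @ B + y_mean

# Demo: noiseless linear data; with n_components equal to the number of
# predictors, PLS1 reproduces the training responses exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, 2.0, 0.0, 0.0, -1.0]) + 0.5
B, x_mean, y_mean = pls1_fit(X, y, n_components=5)
y_hat = pls1_predict(X, B, x_mean, y_mean)
```

In a real calibration, X would hold Raman spectra and y a reference mass fraction, with the number of components chosen by cross-validation.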
ECG compression using non-recursive wavelet transform with quality control
NASA Astrophysics Data System (ADS)
Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching
2016-09-01
While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, such an SQ scheme must select a set of multilevel quantisers for each quantisation process. Because of the many-to-one nature of the mapping, this scheme is not conducive to reconstruction error control. To address this problem, this paper presents a single-variable-control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: the first stage uses a genetic algorithm (GA) to obtain a high compression ratio (CR), the second applies quadratic curve fitting for linear distortion control, and the third uses fuzzy decision-making to minimise the data-dependency effect and select the optimal SQ. Two databases, the Physikalisch-Technische Bundesanstalt (PTB) database and the Massachusetts Institute of Technology (MIT) arrhythmia database, are used to evaluate quality-control performance. Experimental results show that the design method guarantees an SQ scheme with high compression performance and statistically linear distortion. This property can be independent of the training data and can facilitate rapid error control.
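The core difficulty, controlling reconstruction error through the choice of quantiser, can be sketched with a uniform scalar quantiser and the percentage RMS difference (PRD) as the distortion measure. This is a generic illustration under assumed step sizes and a synthetic waveform, not the RRO-NRDPWT scheme itself.

```python
import numpy as np

def quantise(coeffs, step):
    """Uniform scalar quantisation: a many-to-one mapping onto a grid."""
    return np.round(coeffs / step)

def dequantise(indices, step):
    return indices * step

def prd(original, reconstructed):
    """Percentage RMS difference, a common distortion measure for ECG."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def coarsest_step_under_limit(coeffs, prd_limit, steps):
    """Single-variable control: pick the coarsest step (best compression)
    whose reconstruction still meets the distortion limit."""
    for step in sorted(steps, reverse=True):
        rec = dequantise(quantise(coeffs, step), step)
        if prd(coeffs, rec) <= prd_limit:
            return step, rec
    step = min(steps)                      # fall back to the finest step
    return step, dequantise(quantise(coeffs, step), step)

# Demo on a synthetic waveform standing in for wavelet coefficients.
signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 512))
step, rec = coarsest_step_under_limit(signal, prd_limit=5.0,
                                      steps=[0.5, 0.1, 0.02])
```

A coarser step means fewer symbols to code (higher CR) at the cost of distortion; the loop makes the trade-off explicit through one control variable.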
NASA Astrophysics Data System (ADS)
Hönicke, Philipp; Kolbe, Michael; Müller, Matthias; Mantler, Michael; Krämer, Markus; Beckhoff, Burkhard
2014-10-01
An experimental method for the verification of the individually different energy dependencies of the L1-, L2-, and L3-subshell photoionization cross sections is described. The results obtained for Pd and Mo are well in line with theory regarding both energy dependency and absolute values, and confirm the theoretically calculated cross sections by Scofield from the early 1970s and, partially, more recent data by Trzhaskovskaya, Nefedov, and Yarzhemsky. The data also demonstrate the questionability of quantitative X-ray spectroscopic results based on the widely used fixed-jump-ratio approximated cross sections with energy-independent ratios. The experiments were carried out employing the radiometrically calibrated instrumentation of the Physikalisch-Technische Bundesanstalt at the electron storage ring BESSY II in Berlin; the obtained fluorescence intensities are thereby calibrated at an absolute level with reference to the International System of Units. Experimentally determined fixed fluorescence line ratios for each subshell are used for a reliable deconvolution of overlapping fluorescence lines. The relevant fundamental parameters of Mo and Pd are also determined experimentally in order to calculate the subshell photoionization cross sections independently of any database.
Analysis of gamma radiation from a radon source: Indications of a solar influence
NASA Astrophysics Data System (ADS)
Sturrock, P. A.; Steinitz, G.; Fischbach, E.; Javorsek, D.; Jenkins, J. H.
2012-08-01
This article presents an analysis of about 29,000 measurements of gamma radiation associated with the decay of radon in a sealed container at the Geological Survey of Israel (GSI) Laboratory in Jerusalem between 28 January 2007 and 10 May 2010. These measurements exhibit strong variations with time of year and time of day, which may be due in part to environmental influences. However, time-series analysis reveals a number of periodicities, including two at approximately 11.2 year⁻¹ and 12.5 year⁻¹. We have previously found these oscillations in nuclear-decay data acquired at the Brookhaven National Laboratory and at the Physikalisch-Technische Bundesanstalt, and we have suggested that these oscillations are attributable to some form of solar radiation that has its origin in the deep solar interior. A curious property of the GSI data is that the annual oscillation is much stronger in daytime data than in nighttime data, but the opposite is true for all other oscillations. This may be a systematic effect but, if it is not, this property should help narrow the theoretical options for the mechanism responsible for decay-rate variability.
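The kind of time-series periodicity search described above can be sketched with a simple FFT periodogram on synthetic data. The injected annual oscillation, its amplitude, and the noise level below are invented for illustration; the real analysis used the GSI measurements.

```python
import numpy as np

# Synthetic daily measurements with a weak annual modulation, standing in
# for the gamma-radiation time series (all numbers here are invented).
rng = np.random.default_rng(42)
n_days = 1200
t_years = np.arange(n_days) / 365.25
rate = 1.0 + 0.002 * np.sin(2.0 * np.pi * 1.0 * t_years)  # 1 cycle/year
data = rate + 0.0005 * rng.normal(size=n_days)

# Simple FFT periodogram in cycles per year; a peak near 1 year^-1
# recovers the injected annual oscillation.
detrended = data - data.mean()
freqs = np.fft.rfftfreq(n_days, d=1.0 / 365.25)           # cycles per year
power = np.abs(np.fft.rfft(detrended)) ** 2
peak_freq = freqs[np.argmax(power)]
```

In practice such searches often involve unevenly sampled data, for which a Lomb-Scargle periodogram is the standard tool rather than a plain FFT.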
NASA Astrophysics Data System (ADS)
Hubert, S.; Boubault, F.
2018-03-01
In this article, we present the first X-ray calibration performed over the 0.1-1.5 keV spectral range by means of a soft X-ray Manson source and the monochromator SYMPAX. This monochromator, based on a classical Rowland geometry, has the novel capability of carrying two detectors simultaneously and moving them under vacuum in front of the exit slit of the monochromatizing stage. This offers the great advantage of performing radiometric measurements of the monochromatic X-ray photon flux with one reference detector while calibrating another X-ray detector. To achieve this, at least one secondary standard must be operated with SYMPAX. This paper therefore presents an efficiency-transfer experiment between a secondary-standard silicon drift detector (SDD), previously calibrated at the BESSY II synchrotron facility, and another ("unknown") SDD intended for permanent use with SYMPAX. The associated calibration process is described along with the corresponding results. Comparison with calibrated measurements performed at the Physikalisch-Technische Bundesanstalt (PTB) Radiometric Laboratory shows very good agreement between the secondary standard and the unknown SDD.
Dynamic system simulation of small satellite projects
NASA Astrophysics Data System (ADS)
Raif, Matthias; Walter, Ulrich; Bouwmeester, Jasper
2010-11-01
A prerequisite for a system simulation is a system model holding all necessary project information in a centralized repository that can be accessed and edited by all parties involved. At the Institute of Astronautics of the Technische Universitaet Muenchen, a modular approach for the modeling and dynamic simulation of satellite systems has been developed, called dynamic system simulation (DySyS). DySyS is based on the platform-independent description language SysML to model a small satellite project with respect to system composition and dynamic behavior. A library of specific building blocks and of the possible relations between these blocks has been developed; from this library, a system model of the satellite of interest can be created. A mapping into a C++ simulation allows the creation of an executable system model with which simulations are performed to observe the dynamic behavior of the satellite. In this paper, DySyS is used to model and simulate the dynamic behavior of small satellites, since small satellite projects, being less complex than large-scale satellite projects, can act as precursors to demonstrate the feasibility of a system model.
A multichannel decision-level fusion method for T wave alternans detection
NASA Astrophysics Data System (ADS)
Ye, Changrong; Zeng, Xiaoping; Li, Guojun; Shi, Chenyuan; Jian, Xin; Zhou, Xichuan
2017-09-01
Sudden cardiac death (SCD) is one of the most prominent causes of death among patients with cardiac diseases. Since ventricular arrhythmia is the main cause of SCD and can be predicted by T wave alternans (TWA), the detection of TWA in the body-surface electrocardiogram (ECG) plays an important role in the prevention of SCD. However, due to the multi-source nature of TWA, the nonlinear propagation through the thorax, and the effects of strong noise, the information from different channels is uncertain and mutually competitive. As a result, a single-channel decision is one-sided, while a consensus among multiple channels is difficult to reach. In this paper, a novel multichannel decision-level fusion method based on the Dezert-Smarandache Theory is proposed to address this issue. Thanks to its redistribution mechanism for highly competitive information, higher detection accuracy and robustness are achieved. The method also shows promise for low-cost instruments and portable applications by reducing the demands on synchronous sampling. Experiments on real records from the Physikalisch-Technische Bundesanstalt diagnostic ECG database indicate that the performance of the proposed method improves by 12%-20% compared with a one-dimensional decision method based on periodic component analysis.
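For illustration, decision-level fusion of per-channel evidence can be sketched with the classical Dempster combination rule over basic belief assignments. Note this is a simplified stand-in: the paper's Dezert-Smarandache approach redistributes conflicting mass differently, and the per-channel masses below are invented.

```python
import itertools

def dempster_combine(m1, m2):
    """Classical Dempster's rule for two basic belief assignments whose
    focal elements are frozensets of hypotheses. Conflicting mass is
    discarded and the rest renormalised (DSmT instead redistributes it)."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in itertools.product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    scale = 1.0 - conflict
    return {k: v / scale for k, v in combined.items()}

# Two ECG channels voting on {TWA present, TWA absent}; masses invented.
TWA, ABSENT = frozenset({"twa"}), frozenset({"absent"})
EITHER = TWA | ABSENT                      # ignorance: could be either
m_ch1 = {TWA: 0.7, EITHER: 0.3}
m_ch2 = {TWA: 0.6, ABSENT: 0.1, EITHER: 0.3}
fused = dempster_combine(m_ch1, m_ch2)
```

With these inputs, the fused mass on "TWA present" exceeds either channel's individual belief, which is the basic appeal of decision-level fusion.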
Concepts for VLBI Station Control as Part of NEXPReS
NASA Astrophysics Data System (ADS)
Ettl, M.; Neidhardt, A.; Schönberger, M.; Alef, W.; Himwich, E.; Beaudoin, C.; Plötz, C.; Lovell, J.; Hase, H.
2012-12-01
In the Novel EXploration Pushing Robust e-VLBI Services project (NEXPReS), the Technische Universität München (TUM) realizes concepts for continuous quality monitoring and station remote control in cooperation with the Max Planck Institute for Radio Astronomy, Bonn. NEXPReS is a three-year project funded within the European Seventh Framework Programme. It aims to develop e-VLBI services for the European VLBI Network (EVN) which can also support IVS observations (VLBI2010). Within this project, the TUM focuses on the development of an operational remote control system (e-RemoteCtrl) with authentication and authorization. It includes an appropriate role management with different remote-access states for future observation strategies. To allow flexible control of different systems in parallel, sophisticated graphical user interfaces are designed and realized. The software is currently under test in the new AuScope network, Australia/New Zealand. Additional system parameters and information are collected with a new system monitoring (SysMon) for a higher degree of automation, which is currently being prepared for standardization within the IVS Monitoring and Control Infrastructure (MCI) Collaboration Group. The whole system for monitoring and control is fully compatible with the NASA Field System and extends it.
ZERODUR TAILORED for cryogenic application
NASA Astrophysics Data System (ADS)
Jedamzik, R.; Westerhoff, T.
2014-07-01
ZERODUR® glass ceramic from SCHOTT is known for its very low coefficient of thermal expansion (CTE) at room temperature and its excellent CTE homogeneity. It is widely used for ground-based astronomical mirrors but also for satellite applications, and many reference applications demonstrate the excellent, long-lasting performance of ZERODUR® components in orbit. For space applications, a low CTE of the mirror material is required at cryogenic temperatures, together with a good match of its thermal expansion to that of the supporting structure material. It is possible to optimize the coefficient of thermal expansion of ZERODUR® for cryogenic applications. This paper reports on measurements of the thermal expansion of ZERODUR® down to cryogenic temperatures of 10 K, performed by the PTB (Physikalisch-Technische Bundesanstalt, Braunschweig, Germany, the national metrology institute). The ZERODUR® TAILORED CRYO presented in this paper has a very low coefficient of thermal expansion down to 70 K; the maximum absolute integrated thermal expansion down to 10 K is only about 20 ppm. Mirror blanks made from ZERODUR® TAILORED CRYO can be lightweighted by almost 90% with our modern processing technologies. With ZERODUR® TAILORED CRYO, SCHOTT offers the mirror blank material for the next generation of space telescope applications.
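The quoted integrated-expansion figure (about 20 ppm down to 10 K) is simply the integral of the CTE over temperature. A generic numeric sketch, with a made-up constant CTE curve rather than SCHOTT measurement data, looks like this:

```python
import numpy as np

def integrated_expansion(temps_K, cte_per_K, T_from, T_to):
    """Integrated relative length change dL/L between two temperatures,
    by trapezoidal integration of a tabulated CTE(T) curve."""
    T = np.linspace(T_from, T_to, 1000)
    alpha = np.interp(T, temps_K, cte_per_K)
    # trapezoidal rule, written out explicitly
    return float(np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(T)))

# Demo with a constant (invented) CTE of 1e-6 / K between 10 K and 300 K:
# dL/L = 1e-6 * 290 = 2.9e-4, i.e. 290 ppm.
dll = integrated_expansion(np.array([10.0, 300.0]),
                           np.array([1e-6, 1e-6]), 10.0, 300.0)
```

A real tailored-CTE material would supply a measured, temperature-dependent table of CTE values in place of the constant curve.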
NASA Astrophysics Data System (ADS)
Ola, Max; Thomas, Christiane; Hesse, Ullrich
2017-08-01
Compressor performance test procedures are defined by the standard DIN EN 13771, which suggests a variety of possible calorimeter and flow-rate measurement methods. One option is the selection of two independent measurement methods, in which case the accuracies of both selected methods are essential. The second option requires only one method; however, the measurement accuracy of the device used has to be verified, and the device recalibrated, on a regular basis. The compressor performance test facility at the Technische Universitaet Dresden uses a calibrated flow measurement sensor, a hot-gas bypass, and a mixed-flow heat exchanger. The test bench can easily be modified for tests of various compressor types at different operating ranges and with various refrigerants. In addition, the modified test setup enables the investigation of long-term liquid slugging and its effects on the compressor. The modification comprises observational components, adjustments to the control system, safety measures, and a customized oil recirculation system for compressors which do not contain an integrated oil sump or oil level regulation system. This paper describes the setup of the test bench, its functional principle, the key modifications, first test results, and an evaluation of the energy balance.
The Deutsche Statistische Gesellschaft in the Weimar Republic and under the Nazi Dictatorship
NASA Astrophysics Data System (ADS)
Wilke, Jürgen
After initial difficulties caused by the First World War, the Deutsche Statistische Gesellschaft (DStatG) gained high standing through a wide range of activities under its renowned chairman, the statistician Friedrich Zahn. There were efforts to integrate statisticians from all fields of the discipline into the DStatG, although "mathematical statistics" was accepted only hesitantly (business-cycle research, time-series analysis). After Adolf Hitler's seizure of power in 1933, the DStatG drifted into the wake of National Socialist ideology and politics (the Führerprinzip, the forced conformity (Gleichschaltung) of associations). This brought a restructuring of the DStatG's membership: the politically undesirable and the racially persecuted had to leave the DStatG (Bernstein, Freudenberg, Gumbel, among others). Among statisticians, every gradation of behaviour towards the regime could be found, from rejection and coerced conformity through willing fellow-travelling to deliberate complicity. Population statistics in particular was discredited in the long term by Nazi racial policy. In the context of economic planning and rearmament, new and forward-looking statistical models were developed (Grünig, Bramstedt, Leisse).
The Concept of Mathematical Beauty in an Empirically Informed Aesthetics of Mathematics
NASA Astrophysics Data System (ADS)
Müller-Hill, Eva; Spies, Susanne
This quotation from the British mathematician G. H. Hardy pointedly expresses a view widely accepted among practising mathematicians as well as philosophers of mathematics: that mathematical beauty plays a far from negligible role in mathematical research practice and exhibits interesting aesthetic-theoretical, epistemic, and ontological aspects. On this view, the understanding of what mathematical beauty is also shapes the understanding of what mathematics is. "What are the bearers of mathematical beauty?" asks after the kind of objects whose beauty mathematicians are enthusiastic about and strive for. "What are the criteria for mathematical beauty?" asks after the categories under which mathematicians evaluate their work. Whether the phenomenon of mathematical beauty turns out to be an exceptional feature or a constant companion of mathematical activity, an adequate general understanding of mathematics should take this phenomenon into account and, at best, also be able to explain it.
Reliability of Digital Circuits under the Influence of Intrinsic Noise
NASA Astrophysics Data System (ADS)
Kleeberger, V. B.; Schlichtmann, U.
2011-08-01
The continuing miniaturisation of integrated circuits leads to an increase in intrinsic noise. To analyse the influence of intrinsic noise on the reliability of future digital circuits, methods are needed that build on CAD techniques such as analogue simulation rather than on rough estimating calculations. This contribution presents a new method that can analyse the influence of intrinsic noise in digital circuits for a given process technology. The amplitudes of thermal, 1/f, and shot noise are determined with the help of a SPICE simulator. The influence of the noise on circuit reliability is then analysed by simulation. Beyond the analysis, ways are shown in which noise-induced effects can be taken into account in circuit design. In contrast to the state of the art, the presented method can be applied to arbitrary logic implementations and process technologies. It is also shown that previous approaches overestimate the influence of noise by up to a factor of four.
[Medical end-of-life decisions and assisted suicide].
Bosshard, Georg
2008-07-01
Medical end-of-life decisions that potentially shorten life (Sterbehilfe) are normally divided into four categories: passive Sterbehilfe refers to withholding or withdrawing life-prolonging measures; indirect Sterbehilfe refers to the use of agents such as opioids or sedatives to alleviate the symptoms of a terminally ill patient; assisted suicide (Suizidbeihilfe or Beihilfe zum Suizid) refers to prescribing and/or supplying a lethal drug in order to help someone end his own life; and active euthanasia, which is illegal in all circumstances, means a doctor actively ending a patient's life. In passive and indirect euthanasia, the will of a competent patient, or the presumed will of an incompetent patient, is crucial. Assisted suicide is not illegal under the Swiss Penal Code as long as the individual assisting has no motives of self-interest and the individual assisted has decisional capacity. For doctors participating in assisted suicide, however, specific requirements of medical due care have to be met. What this means in the context of assisted suicide has recently been elaborated by the Swiss Federal Court of Justice.
Darwinian Cultural Theory: Evolutionist and "Evolutionist" Theories of Social Change
NASA Astrophysics Data System (ADS)
Antweiler, Christoph
Evolutionist lines of argument outside biology are widespread. Where they are advanced, this by no means implies that they are shaped by Darwinian arguments. Bringing evolution and culture together from an explicitly Darwinian perspective does not necessarily mean sociobiology, and it certainly does not mean social Darwinism. This contribution gives an overview of the so-called evolutionary or evolutionist approaches to human societies and cultures. It aims to show what must be separated analytically in these approaches and what belongs together synthetically. The contribution is not conceived as history of science but is systematic in orientation, with two focal points (Antweiler 2008; Antweiler 2009b). The first concerns causal connections between organic evolution and societal change. The second elucidates analogies between biotic and cultural evolution, understood as specific similarities between these two processes, which are regarded as fundamentally different. This raises the question of whether the evolution of organisms on the one hand and the transformation of societies and cultures on the other represent special cases of a general model of evolution.
The Influence of the Internet on Information, Shopping, and Travel Behaviour
NASA Astrophysics Data System (ADS)
Nerlich, Mark R.; Schiffner, Felix; Vogt, Walter
Data from our own surveys allow shopping-related information and purchasing behaviour to be described together with the associated travel aspects (distances, modes of transport, trip chaining). The differentiation into the three product categories of daily, medium-term, and long-term needs primarily reflects the value of a good, which directly determines how frequently it is purchased. The use of modern ICT such as the Internet opens up new possibilities for consumers in information gathering and purchasing. The relevance of online shopping for traffic becomes clear when one considers that, on average, around 17% of all online purchases made by the respondents replaced purchases in physical shops. This applies even more strongly to online information: about half of it would otherwise have taken place in stationary retail. Since goods for daily needs can often be bought locally and, to a relevant extent, without motorised transport, only small substitution effects are observed in this segment, in contrast to medium- and long-term needs.
NASA Astrophysics Data System (ADS)
Doleski, Oliver D.
The energy industry needs new, digital business models. Liberalization and the energy transition are currently being followed by the next stage of a far-reaching consolidation of the utility market. Digitalization and decentralization are on everyone's lips and call for new products and services. The immense challenges of a digital energy world act as accelerators for the transformation of the utility sector and thereby contribute to the broad establishment of Utilities 4.0. This development proceeds by means of various methods for realizing new business ideas. However, the common concepts for business model development sometimes fall short, particularly with regard to the complex, volatile framework conditions and specific requirements of the digital energy world. Against this background, the integrated business model iOcTen, based on the holistic St. Gallen management concept, is presented as a suitable instrument for business model development. In addition to the model description, an intuitively understandable guideline supports practitioners in the transformation from a classical utility into a digital energy services company.
PREFACE: The International Conference on Highly Frustrated Magnetism HFM2008
NASA Astrophysics Data System (ADS)
Eremin, Ilya; Brenig, Wolfram; Kremer, Reinhard; Litterst, Jochen
2009-01-01
The International Conference on Highly Frustrated Magnetism 2008 (HFM2008) took place on 7-12 September 2008 at the Technische Universität Carolo-Wilhelmina zu Braunschweig, Germany. This conference was the fourth event in a series of meetings that started in Waterloo, Canada (HFM 2000), followed by the second meeting in Grenoble, France (HFM 2003), and the third in Osaka, Japan (HFM 2006). HFM2008 attracted more than 220 participants from all over the world. The number of participants in the HFM conference series has been increasing steadily, from about 80 at HFM 2000, to 120 at HFM 2003, and 190 at HFM 2006, demonstrating that highly frustrated magnetism remains a rapidly growing area of research in condensed matter physics. At the end of HFM2008 it was decided that the next International Conference on Highly Frustrated Magnetism will be held in Baltimore, USA in 2010. HFM2008 saw four plenary talks by R Moessner, S Nakatsuji, S-W Cheong, and S Sachdev, 18 invited presentations, 30 contributed talks, and about 160 poster presentations from all areas of frustrated magnetism. The subjects covered by the conference included: kagome systems; itinerant frustrated systems; spinels and pyrochlore materials; triangular systems; unconventional order and spin liquids; chain systems; and novel frustrated systems. This volume of Journal of Physics: Conference Series contains the proceedings of HFM2008, with 83 papers that provide a record of the scientific topics covered by the conference. All articles have been refereed by experts in the field. It is our hope that the reader will enjoy and profit from the HFM2008 Proceedings.
Ilya Eremin, Proceedings Editor
Wolfram Brenig, Reinhard Kremer, and Jochen Litterst, Co-Editors
International Advisory Board: L Balents (USA), F Becca (Italy), S Bramwell (UK), P Fulde (Germany), B D Gaulin (Canada), J E Greedan (Canada), A Harrison (France), Z Hiroi (Japan), H Kawamura (Japan), A Keren (Israel), C Lacroix (France), C Lhuillier (France), A Loidl (Germany), G Misguich (France), J Richter (Germany), A M Olés (Poland), P Schiffer (USA), R Stern (Estonia), O Tchernyshyov (USA), M R Valenti (Germany), G Zwicknagl (Germany)
International Program Committee: W Brenig (Germany), C Broholm (USA), M Gingras (Canada), K Ueda (Japan), P Mendels (France), F Mila (Switzerland), R Moessner (Germany)
On behalf of the HFM2008 Organizing Committee, I wish to express my sincere thanks to everyone who supported us in organizing and setting up HFM2008. In particular, I would like to thank the European Science Foundation and the Max-Planck-Institut für Festkörperforschung for their generous financial support, the Technische Universität Braunschweig and the City of Braunschweig for hosting the conference, all colleagues who served on the Advisory Board and Program Committee, the referees, and, last but not least, the staff members from Stuttgart and Braunschweig, Mrs Gisela Siegle, Mrs Regine Noack and Mrs Katharina Schnettler, for their helpful and invaluable assistance.
Reinhard Kremer
NASA Astrophysics Data System (ADS)
Meyer, K.; Ruiken, J.-P.; Illner, M.; Paul, A.; Müller, D.; Esche, E.; Wozny, G.; Maiwald, M.
2017-03-01
Reaction monitoring in disperse systems, such as emulsions, is of significant technical importance in various disciplines like biotechnological engineering, the chemical industry, food science, and a growing number of other technical fields. These systems pose several challenges for process analytics, such as the heterogeneity of mixtures, changes in optical behavior, and low optical activity. Here, online nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for process monitoring in complex reaction mixtures, since it is inherently quantitative while at the same time being non-invasive and independent of the optical properties of the sample. In this study the applicability of online spectroscopic methods to the homogeneously catalyzed hydroformylation of 1-dodecene to tridecanal is investigated, a system operated on a mini-plant scale at Technische Universität Berlin. The design of a laboratory setup for process-like calibration experiments is presented, including a 500 MHz online NMR spectrometer, a benchtop NMR device with a 43 MHz proton frequency, two Raman probes, and a flow cell assembly for an ultraviolet/visible (UV/VIS) spectrometer. Results of high-resolution online NMR spectroscopy are shown, and technical as well as process-specific problems observed during the measurements are discussed.
NASA Astrophysics Data System (ADS)
Sarwo Wibowo, Arif
2018-03-01
Bandung is one of the most important colonial cities in Indonesia. In the early 20th century, the Dutch East Indies government planned to move its capital to Bandung. Critical infrastructure was built intensively during that period: streets and railways, houses, governmental buildings, train stations, hospitals, and educational facilities. Besides the famous campus of the Technische Hoogeschool te Bandoeng (now ITB), several schools were also constructed in the same period. One of the most important was the Hoogere Burgerschool in Bandung (HBS Bandung), now SMUN 3 and 5 Bandung, designed by Charles Prosper Wolff Schoemaker and constructed in 1915. HBS Bandung was the fourth HBS built by the Dutch East Indies government and therefore became a reference for later school buildings in Bandung. This study analyzes the architect's frame of mind in producing this design. Surveys and direct data collection were used to capture the exact embodiment of the building design. Usage and functional analyses were also used to match the spaces against the standards applied to school buildings at that time. The study provides an understanding of the building typology of schools during the Dutch colonial period in Indonesia.
Final report on the key comparison CCM.P-K4.2012 in absolute pressure from 1 Pa to 10 kPa
NASA Astrophysics Data System (ADS)
Ricker, Jacob; Hendricks, Jay; Bock, Thomas; Dominik, Pražák; Kobata, Tokihiko; Torres, Jorge; Sadkovskaya, Irina
2017-01-01
The report summarizes the Consultative Committee for Mass (CCM) key comparison CCM.P-K4.2012 for absolute pressure spanning the range 1 Pa to 10 000 Pa. The comparison was carried out at six National Metrology Institutes (NMIs): the National Institute of Standards and Technology (NIST), the Physikalisch-Technische Bundesanstalt (PTB), the Czech Metrology Institute (CMI), the National Metrology Institute of Japan (NMIJ), the Centro Nacional de Metrología (CENAM), and the D.I. Mendeleyev Institute for Metrology (VNIIM). The comparison was made via a calibrated transfer standard measured at each NMI's facility against its laboratory standard during the period May 2012 to September 2013. The transfer package constructed for this comparison performed as designed and provided a stable artifact for comparing laboratory standards. Overall, the participants were found to be statistically equivalent to the key comparison reference value. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
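The statistical-equivalence claim above can be illustrated with a toy calculation: a common choice of key comparison reference value (KCRV) is the inverse-variance weighted mean of the laboratory results, and each laboratory's degree of equivalence d_i = x_i − KCRV is checked against its expanded (k = 2) uncertainty. All numbers below are invented for illustration and are not data from this report.

```python
import numpy as np

# Invented example: results of four laboratories at one nominal pressure,
# with standard (k=1) uncertainties. Not data from the report.
x = np.array([100.002, 99.998, 100.005, 99.996])   # measured values (Pa)
u = np.array([0.004, 0.005, 0.006, 0.004])         # standard uncertainties (Pa)

# KCRV as the inverse-variance weighted mean of the laboratory results.
w = 1.0 / u**2
kcrv = np.sum(w * x) / np.sum(w)
u_kcrv = np.sqrt(1.0 / np.sum(w))

# Degree of equivalence d_i = x_i - KCRV. Because each lab contributes to
# the KCRV, the uncertainties are correlated and u(d_i) shrinks accordingly.
d = x - kcrv
u_d = np.sqrt(u**2 - u_kcrv**2)

for i, (di, udi) in enumerate(zip(d, u_d)):
    ok = abs(di) <= 2.0 * udi   # consistency at the k=2 (~95%) level
    print(f"lab {i}: d = {di:+.4f} Pa, U(d) = {2*udi:.4f} Pa, equivalent: {ok}")
```

A lab whose |d_i| exceeds its expanded uncertainty U(d_i) would be flagged as not equivalent to the KCRV at that point.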
NASA Astrophysics Data System (ADS)
Yoshida, Hajime; Arai, Kenta; Komatsu, Eiichi; Fujii, Kenichi; Bock, Thomas; Jousten, Karl
2015-01-01
A bilateral comparison of absolute gas pressure measurements from 3 × 10^-9 Pa to 9 × 10^-4 Pa was performed between the National Metrology Institute of Japan (NMIJ) and the Physikalisch-Technische Bundesanstalt (PTB). It served as the pilot study CCM.P-P1 for the next international comparison in this pressure range, testing the stability of ultrahigh-vacuum gauges (UHV gauges) as transfer standards. Two spinning rotor gauges (SRGs), an axial-symmetric transmission gauge (ATG), and an extractor gauge (EXG) were used as transfer standards. The calibration ratio of one SRG was sufficiently stable, but that of the other was not, indicating that improvements in the transport mechanism for SRGs are needed. The two ionization gauges, ATG and EXG, on the other hand, were sufficiently stable. Provisional equivalence of the pressures realized by the primary standards at NMIJ and PTB was found. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by CCM-WGS.
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined TU Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of his current research is concerned with guaranteeing quality-of-service and reliability properties in these systems, for example scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
NASA Astrophysics Data System (ADS)
Zboray, Robert; Dangendorf, Volker; Mor, Ilan; Bromberger, Benjamin; Tittelmeier, Kai
2015-07-01
In previous work, we demonstrated the feasibility of high-frame-rate, fast-neutron radiography of generic air-water two-phase flows in a 1.5 cm thick, rectangular flow channel. The experiments were carried out at the high-intensity, white-beam facility of the Physikalisch-Technische Bundesanstalt, Germany, using a multi-frame, time-resolved detector developed for fast neutron resonance radiography. As the results were not fully optimal, we modified the detector and optimized it for the given application, which is described in the present work. Furthermore, we improved the image post-processing methodology and the noise suppression. Using the tailored detector and the improved post-processing, a significant increase in image quality and an order-of-magnitude reduction in exposure time, down to 3.33 ms, have been achieved with minimized motion artifacts. As in the previous study, different two-phase flow regimes such as bubbly, slug, and churn flows were examined. The enhanced imaging quality enables improved prediction of two-phase flow parameters such as the instantaneous volumetric gas fraction, bubble size, and bubble velocity. Instantaneous velocity fields around the gas enclosures can also be predicted more robustly than before using optical flow methods.
Diffraction efficiency of radially-profiled off-plane reflection gratings
NASA Astrophysics Data System (ADS)
Miles, Drew M.; Tutt, James H.; DeRoo, Casey T.; Marlowe, Hannah; Peterson, Thomas J.; McEntaffer, Randall L.; Menz, Benedikt; Burwitz, Vadim; Hartner, Gisela; Laubis, Christian; Scholze, Frank
2015-09-01
Future X-ray missions will require gratings with high throughput and high spectral resolution. Blazed off-plane reflection gratings are capable of meeting these demands. A blazed grating profile optimizes grating efficiency, providing higher throughput to one side of zero order on the arc of diffraction. This paper presents efficiency measurements made in the 0.3-1.5 keV energy band at the Physikalisch-Technische Bundesanstalt (PTB) laboratory at the BESSY II facility for three holographically ruled gratings, two of which are blazed. Each blazed grating was tested in both the Littrow and anti-Littrow configurations in order to test the alignment sensitivity of these gratings with regard to throughput. This paper outlines the procedure of the grating experiment performed at BESSY II and discusses the resulting efficiency measurements across various energies. The experimental results are generally consistent with theory and demonstrate that the blaze does increase throughput to one side of zero order. However, the total efficiency of the non-blazed, sinusoidal grating is greater than that of the blazed gratings, which suggests that the method of manufacturing the blazed profiles fails to produce facets with the desired level of precision. Finally, evidence of a successful blaze implementation from first diffraction results of prototype blazed gratings produced via a new fabrication technique at the University of Iowa is presented.
NASA Astrophysics Data System (ADS)
Leiden, A.; Posselt, G.; Bhakar, V.; Singh, R.; Sangwan, K. S.; Herrmann, C.
2018-01-01
The Indian economy is one of the fastest growing in the world, and the demand for skilled engineers is rising; accordingly, the Indian education sector is growing to supply them. Current Indian engineering graduates have a broad theoretical background but lack methodological, soft, and practical skills. To bridge this gap, the experience-lab ideas from the engineering education at "Die Lernfabrik" (learning factory) of the Technische Universität Braunschweig (TU Braunschweig) are transferred to the Birla Institute of Technology and Science in Pilani (BITS Pilani), India. The Lernfabrik has successfully strengthened the methodological, soft, and practical skills of TU Braunschweig production-engineering graduates. The target area is discrete-manufacturing education, focusing on energy and resource efficiency as well as cyber-physical production systems. As the requirements of industry and academia in India differ from those in Germany, transferring the experience lab to the Indian education system needs special attention for the project to succeed. This publication provides a unique approach for systematically transferring the learning-factory educational concept from a specific university environment to a different environment in a newly industrialized country. A bilateral, university-driven practice partnership between the two universities creates a lighthouse project for the Indian university environment.
Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.
Singh, Anurag; Dandapat, Samarendra
2017-04-01
In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet-based data compression techniques, thanks to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained, wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals, and exploiting both is important for good performance in CS-based ECG telemonitoring systems. However, most existing CS-based works exploit only one of the two, which results in suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlation simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals, and block sparsity of the MECG signals in the discrete wavelet transform domain is exploited for simultaneous reconstruction of all channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of existing CS-based WBAN systems.
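As a rough illustration of block-sparsity-based CS recovery (a stand-in for the sparse Bayesian learning algorithm the authors actually use), the sketch below reconstructs a synthetic block-sparse signal from compressed measurements with Block Orthogonal Matching Pursuit; all sizes and signals are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic block-sparse ground truth: 2 of 16 length-8 blocks are nonzero.
n, blk, k_blocks, m = 128, 8, 2, 64
x = np.zeros(n)
for b in rng.choice(n // blk, k_blocks, replace=False):
    x[b*blk:(b+1)*blk] = rng.standard_normal(blk)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
y = A @ x                                      # compressed measurements

def block_omp(A, y, blk, k_blocks):
    """Greedy block recovery: pick the block whose columns correlate best
    with the residual, then re-fit all selected blocks by least squares."""
    n = A.shape[1]
    support, resid = [], y.copy()
    for _ in range(k_blocks):
        scores = [np.linalg.norm(A[:, b*blk:(b+1)*blk].T @ resid)
                  for b in range(n // blk)]
        support.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(b*blk, (b+1)*blk) for b in support])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        resid = y - A[:, cols] @ coef
    x_hat = np.zeros(n)
    x_hat[cols] = coef
    return x_hat

x_hat = block_omp(A, y, blk, k_blocks)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.2e}")
```

With 64 measurements for 16 nonzero samples arranged in 2 blocks, greedy block selection typically identifies the correct support, after which the least-squares refit is exact in this noiseless setting.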
Czejka, Martin; Schüller, Johannes; Kletzl, Heidemarie
2017-08-25
The cytoprotective agent amifostine (AMI) protects healthy cells, in contrast to tumor cells, owing to the higher activity of alkaline phosphatase at the membrane of normal cells. In seven clinical trials, the influence of AMI on the pharmacokinetics of different cytostatics was investigated. Preadministration of AMI increased the Cmax of doxorubicin (+44%, p < 0.06), epirubicin (+31%, p < 0.08), mitomycin C (+41%, p < 0.01), and docetaxel (+31% and +17%, not significant). In contrast, the peak concentration of pirarubicin, the tetrahydropyranyl prodrug of doxorubicin, was decreased (-50%, p < 0.03), leading to a correspondingly higher concentration of doxorubicin in the blood. In accordance with the peak concentrations, the AUClast was increased by chemoprotection: doxorubicin +53% (p < 0.01), epirubicin +23% (not significant), and docetaxel +25% and +31% (not significant). The AUClast of mitomycin C and paclitaxel seemed to be unaffected by preadministered AMI. Partial inhibition of protein binding by AMI has been identified as one reason for the higher serum concentrations of anthracycline drugs. After cytoprotection, a possible increase in the cytostatic's serum concentrations should be taken into account for optimal dosage schedules.
NASA Astrophysics Data System (ADS)
Ha, Suk-Woo; Wintermantel, Erich
The structure of the human body is so complex that a complete functional substitution of its structures with artificial materials and components is improbable. Most implants in clinical use today replace, as a rule, simple mechanical or other physical functions of the human body that must be substituted because of a localized tissue defect or as the result of a chronic disease. Joint prostheses, for example, transmit loads, an artificial intraocular lens enables light transmission, and an artificial artery maintains the blood supply. Beyond fulfilling their function, materials used in medicine must additionally satisfy the requirements of biocompatibility, the goal of which is the complete and permanent acceptance of the implant by the body. The findings of materials science and their translation into new products have decisively shaped developments and progress in medicine and surgery. In clinical use, materials serve as temporary implants (e.g., catheter systems) and as long-term implants (e.g., hip joint implants or cardiac pacemakers); they are in direct contact with the tissues of the body and must therefore be biocompatible.
Medical Technology in Tumor Orthopedics
NASA Astrophysics Data System (ADS)
Burgkart, Rainer; Gollwitzer, Hans; Holzapfel, Boris; Rudert, Maximilian; Rechl, Hans; Gradinger, Reiner
The treatment of bone tumors has undergone rapid and constant change over the past 20 years. This is due on the one hand to improved therapeutic outcomes through the use of neoadjuvant therapies; on the other hand, it has been accompanied by medical-technology developments in modern cross-sectional imaging, new 3D surgical planning methods such as rapid prototyping, and adaptive modular tumor endoprosthesis systems, among others. These technical developments in particular have made more radical interventions possible in the extremities and the spine, which has substantially improved local tumor control. Consequently, not only short-term successes but increasingly also medium- and long-term progress is being achieved in the treatment of malignant bone tumors, including the treatment of metastases. The primary basis of therapy is always securing the diagnosis by biopsy and staging the malignant tumor by imaging and histology. After tumor resection, reconstruction can be performed biologically or with endoprosthesis systems. The further-developed modular systems in particular lead to good functional results with long implant survival times and a reduced complication rate. Individually manufactured implants are of great clinical importance above all in the reconstruction of complex pelvic tumors.
The acceleration of the masculine in early-twentieth-century Berlin.
Prickett, David James
2012-01-01
In early-twentieth-century Berlin, agents of speed and industrialisation, such as the railway, contributed to the seemingly unbridled velocity of urban life. Doctors and cultural critics took an ambivalent stance toward the impact of speed and technology on the human body. Critics argued that these factors, in conjunction with sexual excess and prostitution, accelerated the sexual maturation of young men, thereby endangering 'healthy' male sexuality. This comparison of Hans Ostwald's socio-literary study Dunkle Winkel in Berlin (1904) with Georg Buschan's sexual education primer Vom Jüngling zum Mann (1911) queries the extent to which speed shaped the understanding of 'the masculine' in pre-World-War-I Germany. The essay thus examines Ostwald's and Buschan's arguments and postulates that speed in the city (Berlin) can be seen as a feminised, sexualised force that determined sex in the city. According to this reading, the homosexual urban dandy resisted the accelerated modernist urban tempo, whereas the heterosexual man and hegemonic, heteronormative masculinity yielded to speed. 'Das Verhältnis' became a fleeting, momentary alternative to stable marital relationships, which in turn contributed to the general 'crisis' of, and in, masculinity in early-twentieth-century Berlin.
Recent work of decay spectroscopy at RIBF
NASA Astrophysics Data System (ADS)
Söderström, Pär-Anders
2014-09-01
β-decay and isomer-decay spectroscopy are sensitive probes of nuclear structure, and are often the only techniques capable of providing data for exotic nuclei that are produced at very low rates. Decay properties of exotic nuclei are also essential to model the astrophysical events responsible for the evolution of the universe, such as the rp- and r-processes. The EURICA project (EUROBALL RIKEN Cluster Array) was launched in 2012 with the goal of performing spectroscopy of very exotic nuclei. Since 2012, four experimental campaigns have been successfully completed using fragmentation of a 124Xe beam and in-flight fission of a 238U beam, approaching for example the key nuclei 78Ni, 110Zr, 100Sn, 128Pd, and 138Sn. This contribution highlights the experiments performed and the results obtained, and discusses the future perspectives of the EURICA project. In collaboration with Shunji Nishimura, Hidetada Baba, RIKEN Nishina Center; Frank Browne, Brighton University; Pieter Doornenbal, RIKEN Nishina Center; Guillaume Gey, Universite Joseph Fourier Grenoble; Tadaaki Isobe and Giuseppe Lorusso, RIKEN Nishina Center; Daniel Lubos, Technische Universitat Munchen; Kevin Mochner, University of Cologne; Zena Patel and Simon Rice, University of Surrey; Hiroyoshi Sakurai, RIKEN Nishina Center; Laura Sinclair, University of York; Toshiyuki Sumikama, Tohoku University; Jan Taprogge, Universidad Autonoma de Madrid; Zsolt Vajta, MTA Atomki; Hiroshi Watanabe, Beihang University; Jin Wu, Peking University; and Zhengyu Xu, University of Tokyo.
NASA Astrophysics Data System (ADS)
Yue, J.; Yang, Y.; Sabuga, W.
2016-01-01
This report summarizes the results of the Asia-Pacific Metrology Programme (APMP) supplementary comparison APMP.M.P-S5 for hydraulic gauge pressure in the range of 1 MPa to 10 MPa, a bilateral comparison carried out at the National Institute of Metrology, China (NIM) and the Physikalisch-Technische Bundesanstalt, Germany (PTB) during the period June 2014 to June 2015. NIM piloted the comparison and provided the transfer standard, a piston-cylinder assembly (PCA) of 1 cm2 nominal effective area built into a hydraulic pressure balance manufactured by Fluke Corporation. The laboratory standards of NIM and PTB are both hydraulic pressure balances equipped with PCAs, with nominal effective areas of 1 cm2 at NIM and 5 cm2 at PTB. The results successfully demonstrated that the hydraulic gauge pressure standards of NIM and PTB in the range of 1 MPa to 10 MPa are equivalent within their claimed uncertainties. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Neutron induced fission cross section measurements of 240Pu and 242Pu
NASA Astrophysics Data System (ADS)
Belloni, F.; Eykens, R.; Heyse, J.; Matei, C.; Moens, A.; Nolte, R.; Plompen, A. J. M.; Richter, S.; Sibbens, G.; Vanleeuw, D.; Wynants, R.
2017-09-01
Accurate neutron-induced fission cross sections of 240Pu and 242Pu are required to make nuclear technology safer and more efficient and to meet the upcoming needs of the future generation of nuclear power plants (GEN-IV); these reactions figure in the NEA Nuclear Data High Priority Request List [1]. A measurement campaign to determine the neutron-induced fission cross sections of 240Pu and 242Pu at 2.51 MeV and 14.83 MeV has been carried out at the 3.7 MV Van de Graaff accelerator of the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig. Two identical Frisch-grid fission chambers, housing back to back a 238U target and a APu target (A = 240 or A = 242), were employed to detect the total fission yield. The targets were molecular-plated on 0.25 mm aluminium foils kept at ground potential, and the counting gas was P10. The neutron fluence was measured with the proton recoil telescope T1, which is the German primary standard for neutron fluence measurements. The two measurements were related using a De Pangher long counter and the beam charge as monitors. The experimental results have an average uncertainty of 3-4% at 2.51 MeV and 6-8% at 14.83 MeV, and have been compared to the data available in the literature.
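The back-to-back fission-chamber arrangement described above lends itself to a ratio measurement: with the 238U deposit as reference, the unknown cross section follows from the ratio of background-corrected fission counts, scaled by the numbers of target atoms. The sketch below uses invented counts and atom numbers, not the actual PTB data, and only an approximate reference cross section.

```python
# Invented illustration of the fission-chamber ratio method. None of these
# values are from the measurement campaign described in the abstract.
sigma_u238 = 1.14                    # approximate 238U(n,f) reference cross section (barn)
counts_pu, counts_u = 52340, 60120   # fission counts, Pu and U chambers (made up)
atoms_pu, atoms_u = 4.1e17, 5.3e17   # target atoms from the plating assay (made up)

# sigma_Pu = sigma_U * (C_Pu / C_U) * (N_U / N_Pu): the common neutron
# fluence cancels because both deposits see the same beam.
sigma_pu = sigma_u238 * (counts_pu / counts_u) * (atoms_u / atoms_pu)
print(f"sigma(n,f) of the Pu target: {sigma_pu:.3f} b")
```

The appeal of the ratio method is that the absolute neutron fluence drops out; it re-enters only through the monitors used to relate runs at different energies.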
Results for the response function determination of the Compact Neutron Spectrometer
NASA Astrophysics Data System (ADS)
Gagnon-Moisan, F.; Reginatto, M.; Zimbal, A.
2012-03-01
The Compact Neutron Spectrometer (CNS) is a Joint European Torus (JET) Enhancement Project, designed for fusion diagnostics in different plasma scenarios. The CNS is based on a liquid scintillator (BC501A), which allows good discrimination between neutron and gamma radiation. Neutron spectrometry with a BC501A spectrometer requires a reliable, fully characterized detector. The determination of the response matrix was carried out at the Ion Accelerator Facility (PIAF) of the Physikalisch-Technische Bundesanstalt (PTB). This facility provides several monoenergetic beams (2.5, 8, 10, 12 and 14 MeV) and a white field (Emax ~ 17 MeV), which allows for a full characterization of the spectrometer in the region of interest (from ~1.5 MeV to ~17 MeV). The energy of the incoming neutrons was determined by the time-of-flight (TOF) method, with a time resolution on the order of 1 ns. To check the response matrix, the measured pulse-height spectra were unfolded with the code MAXED and the resulting energy distributions were compared with those obtained from TOF. The CNS project required a modification of the PTB BC501A spectrometer design to replace the analog data acquisition system (NIM modules) with a digital system developed by the Ente per le Nuove tecnologie, l'Energia e l'Ambiente (ENEA). Results for the new digital system were evaluated using new software developed specifically for this project.
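The TOF energy tagging mentioned above amounts to converting a flight time over a known path into a neutron kinetic energy. A minimal sketch, with an illustrative flight path and flight time rather than the actual PIAF geometry:

```python
import math

# Neutron kinetic energy from time of flight (TOF). The flight path and
# flight time below are illustrative, not the actual PIAF geometry.
M_N_MEV = 939.565          # neutron rest-mass energy (MeV)
C_M_PER_NS = 0.299792458   # speed of light (m/ns)

def tof_energy_mev(flight_path_m, tof_ns):
    """Relativistic kinetic energy of a neutron covering flight_path_m
    in tof_ns: E = m_n c^2 (gamma - 1)."""
    beta = flight_path_m / (tof_ns * C_M_PER_NS)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return M_N_MEV * (gamma - 1.0)

e_mev = tof_energy_mev(12.0, 250.0)   # e.g. a 12 m path, 250 ns flight time
print(f"tagged neutron energy: {e_mev:.2f} MeV")
```

Propagating the quoted ~1 ns timing resolution through this formula gives the energy resolution of the tagging, which worsens at higher energies where the flight time is shorter.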
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosch, R.; Trosseille, C.; Caillaud, T.
The Laser Megajoule (LMJ) facility located at CEA/CESTA started to operate in early 2014 with two quadruplets (20 kJ at 351 nm) focused on target for the first experimental campaign. We present here the first set of gated x-ray imaging (GXI) diagnostics implemented on LMJ since mid-2014. This set consists of two imaging diagnostics with spatial, temporal, and broadband spectral resolution. These diagnostics will provide basic measurements during the entire life of the facility, such as the position, structure, and balance of beams, but they will also be used to characterize gas-filled target implosion symmetry and timing, and to study x-ray radiography and hydrodynamic instabilities. The design requires a vulnerability approach, because components will operate in a harsh environment induced by neutron fluxes, gamma rays, debris, and shrapnel. Grazing-incidence x-ray microscopes are fielded as far away from the target as possible to minimize potential damage and signal noise due to these sources. These imaging diagnostics incorporate microscopes with a large source-to-optic distance and large, gated microchannel-plate detectors. The microscopes include optics with grazing-incidence mirrors, pinholes, and refractive lenses. Spatial, temporal, and spectral performance was measured on x-ray tubes and UV lasers at CEA-DIF and at the Physikalisch-Technische Bundesanstalt laboratory at the BESSY II synchrotron prior to installation on LMJ. The GXI-1 and GXI-2 designs, metrology, and first experiments on LMJ are presented here.
NASA Astrophysics Data System (ADS)
Fedchak, J. A.; Bock, Th; Jousten, K.
2014-01-01
This report describes the bilateral key comparison CCM.P-K3.1 between the National Institute of Standards and Technology (NIST) and the Physikalisch-Technische Bundesanstalt (PTB) for absolute pressure in the range from 3 × 10^-6 Pa to 9 × 10^-4 Pa. This comparison was a follow-up to the comparison CCM.P-K3. Two ionization gauges and two spinning rotor gauges (SRGs) were used as the transfer standards. The SRGs were used to compare the standards at a pressure of 9 × 10^-4 Pa and to normalize the ionization gauge readings. The two ionization gauges were used to compare the standards in the pressure range from 3 × 10^-6 Pa to 3 × 10^-4 Pa. Both laboratories used dynamic expansion chambers as standards in the comparison. The two laboratories showed excellent agreement with each other and with the CCM.P-K3 key comparison reference value (KCRV) over the entire range. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Cagnazzo, M; Borio di Tigliole, A; Böck, H; Villa, M
2018-05-01
The aim of this work was the measurement of the fission-product activity distribution along the axial dimension of irradiated fuel elements (FEs) at the TRIGA Mark II research reactor of the Technische Universität (TU) Wien. The activity distribution was measured by means of a customized fuel gamma scanning device, which includes a vertical lifting system to move the fuel rod along its vertical axis. For each investigated FE, a gamma spectrum was measured along the vertical axis in steps of 1 cm in order to determine the axial distribution of the fission products. Since the fuel elements had undergone a relatively short cooling-down period, several fission products were detected. The activity concentration was determined by calibrating the gamma detector with a standard calibration source of known activity and by MCNP6 simulations for the evaluation of self-absorption and geometric effects. Given the specific TRIGA fuel composition, a correction procedure was developed and used in this work for the measurement of the fission product 95Zr. This measurement campaign is part of a more extended project aiming at the modelling of the TU Wien TRIGA reactor by means of different calculation codes (MCNP6, Serpent): the experimental results presented in this paper will subsequently be used to benchmark the models developed with these calculation codes. Copyright © 2018 Elsevier Ltd. All rights reserved.
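The calibration step described above amounts to converting a net full-energy peak area into an activity. A minimal sketch of that conversion, with all numerical values invented for illustration (they are not from the paper):

```python
def activity_bq(net_counts, live_time_s, efficiency, emission_prob, self_abs_corr):
    """Activity from a net full-energy peak area:
    A = N / (t * eps * p_gamma * C_sa), where eps is the calibrated
    detector efficiency, p_gamma the gamma emission probability, and
    C_sa a self-absorption/geometry correction (e.g. from MCNP6)."""
    return net_counts / (live_time_s * efficiency * emission_prob * self_abs_corr)

# Hypothetical values for one 1-cm axial measurement step:
a = activity_bq(net_counts=5.0e4, live_time_s=600.0,
                efficiency=1.2e-3,
                emission_prob=0.55,      # illustrative emission probability
                self_abs_corr=0.85)      # illustrative MCNP6-derived factor
```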
Quantum efficiency measurements of eROSITA pnCCDs
NASA Astrophysics Data System (ADS)
Ebermayer, Stefanie; Andritschke, Robert; Elbs, Johannes; Meidinger, Norbert; Strüder, Lothar; Hartmann, Robert; Gottwald, Alexander; Krumrey, Michael; Scholze, Frank
2010-07-01
For the eROSITA X-ray telescope, which is planned to be launched in 2012, detectors were developed and fabricated at the MPI Semiconductor Laboratory. The fully depleted, back-illuminated pnCCDs have an ultrathin pn-junction to improve the low-energy X-ray response function and quantum efficiency. The full device thickness of 450 μm is sensitive to X-ray photons, yielding a high quantum efficiency of more than 90% at photon energies of 10 keV. An on-chip filter is deposited on top of the entrance window to suppress visible and UV light, which would interfere with the X-ray observations. The pnCCD type developed for the eROSITA telescope was characterized in terms of quantum efficiency and spectral response function. The described measurements were performed in 2009 at the synchrotron radiation sources BESSY II and MLS as a cooperation between the MPI Semiconductor Laboratory and the Physikalisch-Technische Bundesanstalt (PTB). Quantum efficiency measurements over a wide range of photon energies from 3 eV to 11 keV as well as spectral response measurements are presented. For X-ray energies from 3 keV to 10 keV, the quantum efficiency of the CCD including the on-chip filter is shown to be above 90%, with an attenuation of visible light of more than five orders of magnitude. A detector response model is described and compared to the measurements.
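The two figures of merit quoted above (quantum efficiency and optical-light attenuation) are simple ratios; a sketch with invented count rates only:

```python
import math

def quantum_efficiency(detected_rate, incident_rate):
    """QE as the ratio of detected to incident photons at a given energy."""
    return detected_rate / incident_rate

def attenuation_orders(transmitted, incident):
    """Suppression of visible light by the on-chip filter,
    expressed in orders of magnitude."""
    return -math.log10(transmitted / incident)

qe = quantum_efficiency(9.2e4, 1.0e5)        # illustrative: 92% at a few keV
orders = attenuation_orders(8.0e-2, 1.0e4)   # illustrative: > 5 orders
```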
Progress on glass ceramic ZERODUR enabling nanometer precision
NASA Astrophysics Data System (ADS)
Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Weber, Peter; Westerhoff, Thomas
2016-03-01
The semiconductor industry is making continuous progress in shrinking feature sizes, developing technologies and processes to achieve feature sizes below 10 nm. The overlay specification required for successful production is in the range of one nanometer or even smaller. Consequently, materials designed into the metrology systems of exposure or inspection tools need to fulfill ever tighter specifications on the coefficient of thermal expansion (CTE). The glass ceramic ZERODUR® is a well-established material for critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion at the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR®. This paper focuses on the "Advanced Dilatometer" for determination of the CTE, developed at SCHOTT in recent years and introduced into production in Q1 2015. The achievements in improving the absolute CTE measurement accuracy and reproducibility are described in detail and compared to the CTE measurement accuracy reported by the Physikalisch-Technische Bundesanstalt (PTB), the National Metrology Institute of Germany. CTE homogeneity is of the highest importance for achieving nanometer precision on larger scales. Additionally, the paper presents data on the short-scale CTE homogeneity and its improvement over the last two years. The data presented in this paper demonstrate the capability of ZERODUR® to enable the extreme precision required for future generations of lithography equipment and processes.
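The quantity a dilatometer determines is the mean CTE, alpha = (dL/L)/dT. A minimal worked example with invented readings, to fix units and orders of magnitude:

```python
def mean_cte(delta_length_m, length_m, delta_temp_k):
    """Mean coefficient of thermal expansion alpha = (dL/L) / dT, in 1/K."""
    return delta_length_m / (length_m * delta_temp_k)

# Illustrative dilatometer reading: a 100 mm sample elongating by
# 10 nm over a 5 K temperature step.
alpha = mean_cte(10e-9, 0.100, 5.0)   # 2e-8 per K, i.e. 0.02 ppm/K
```

Resolving CTE at this level is what drives the nanometer-scale length metrology discussed in the paper.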
Toward a standard reference database for computer-aided mammography
NASA Astrophysics Data System (ADS)
Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.
2008-03-01
Because of the lack of mammography databases with a large number of codified images and identified characteristics such as pathology, type of breast tissue, and abnormality, the development of robust systems for computer-aided diagnosis is hampered. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: the Mammographic Image Analysis Society Digital Mammogram Database (MIAS), the Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL) database, and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR Breast Imaging Reporting and Data System (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file naming, and web page and information file browsing. Disregarding resolution, this resulted in a total of 10,509 reference images, of which 6,767 images are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
3D augmented reality visualization for navigated osteosynthesis of pelvic fractures
Befrui, N.; Fischer, M.; Fuerst, B.; Lee, S.-C.; Fotouhi, J.; Weidert, S.; Johnson, A.; Euler, E.; Osgood, G.; Navab, N.; Böcker, W.
2018-01-01
Abstract. Background: Despite great advances in the hardware and software of navigation systems, they have so far seen little use in today's operating rooms because of their perceived complexity, cumbersome integration into clinical workflows, and questionable advantages over conventional imaging techniques. Objective: Development of an augmented reality (AR) visualization for surgical navigation without infrared (IR) tracking markers, and comparison with conventional X-ray imaging in a simulated procedure. Materials and methods: The navigation system consists of a cone-beam CT (CBCT)-capable C-arm and a red-green-blue-depth (RGBD) camera. It was tested by Kirschner (K-)wire placement in models, recording the time required, the radiation dose, and the usability of the systems. Results: AR navigation achieved a significant reduction in the time required, the number of X-ray images, and the total radiation dose compared with conventional fluoroscopy, with unchanged precision. Conclusion: AR navigation using the RGBD camera offers flexible and intuitive visualization of the surgical site for navigated osteosynthesis without tracking markers. It makes it possible to perform operations faster, more easily, and with lower radiation exposure for patient and operating room staff. PMID:29500506
Holistic approaches to digitalization in municipal utilities: from strategy to implementation
NASA Astrophysics Data System (ADS)
Dudenhausen, Roman; Hahn, Heike
For a municipal utility, digitalization must mean meeting customer expectations, which today are often already shaped by digital know-how and experience, in a distinctive way: through digital customer touchpoints, automated processes, or platform-based business models. Company-wide usable information that enables a 360-degree view of the customer plays a major role here. Only this combination will generate sustainable competitive advantages. Some customers will lose interest in completing a process before they finish it if they cannot reach their goal immediately and without leaving the digital world. A merely "half-digital" customer experience will lead neither to new business nor to a positive emotional bond between customer and utility. Expectations regarding future business models, from which disruptive threats to conventional electricity and gas offerings will arise, should also not be underestimated. The first innovative approaches are already appearing in the market, suggesting that much-discussed technologies such as blockchain are no longer merely hypothetical. Engagement with digitalization is therefore best carried out within a company-wide, coordinated framework that enables a targeted and holistic approach.
A Novel Two-Step Hierarchical Quantitative Structure-Activity ...
Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public-private consortia have been formed with the goal of improving chemical safety assessment by means of high-throughput screening. Methods and results: A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by the Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure-activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: one group comprises compounds with linear IC50 versus LD50 relationships, and the other group comprises the remaining compounds. Second, we built conventional binary classification QSAR models t
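The two-step idea (partition by IC50-LD50 linearity, then model within the linear group) can be sketched on synthetic data. This is a minimal interpretation: the partition rule (residual from a global linear fit) and all data below are invented for illustration, not the paper's method details:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for ZEBET-style data: log IC50 vs. log LD50.
# Half the compounds follow a linear IC50-LD50 relationship, half do not.
ic50 = rng.uniform(-3, 3, size=200)
linear_mask = np.arange(200) < 100
ld50 = np.where(linear_mask,
                0.8 * ic50 + 0.5 + rng.normal(0, 0.1, 200),   # linear group
                rng.uniform(-3, 3, size=200))                 # remaining group

# Step 1: partition compounds by their residual from a global linear fit.
slope, intercept = np.polyfit(ic50, ld50, 1)
residuals = np.abs(ld50 - (slope * ic50 + intercept))
in_linear_group = residuals < np.median(residuals)

# Step 2: within the "linear" group only, refit and predict LD50 from IC50.
s2, i2 = np.polyfit(ic50[in_linear_group], ld50[in_linear_group], 1)
pred = s2 * ic50[in_linear_group] + i2
rmse = float(np.sqrt(np.mean((pred - ld50[in_linear_group]) ** 2)))
```

In the paper the partition step is itself learned by a binary classification QSAR model; the median-residual split here only stands in for that classifier.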
Werner syndrome: a prototypical form of segmental progeria
Lessel, D.; Oshima, J.; Kubisch, C.
2013-01-01
Werner syndrome is a segmental progeroid disorder with onset in adolescence or early adulthood. Typical symptoms contributing to the prematurely aged phenotype are post-pubertal short stature, cataracts, premature graying and thinning of the scalp hair, scleroderma-like skin changes, and regional atrophy of the subcutaneous fat tissue. In addition, "diseases of old age" such as type 2 diabetes mellitus, osteoporosis, atherosclerosis, and various malignant tumors occur early and with increased frequency. Werner syndrome is inherited in an autosomal recessive manner and is caused by mutations in the Werner gene (WRN). To date, more than 70 mutations distributed across the entire gene have been identified, typically leading to a loss of gene function. WRN encodes a RecQ-type helicase that is involved, among other things, in DNA repair and the maintenance of DNA integrity, which is reflected in increased genetic instability in patient cells. Despite its relative rarity, the analysis of Werner syndrome is of general importance for better understanding the role of DNA stability and integrity in aging and in the development of age-associated diseases. PMID:25309043
Review of access, licenses and understandability of open datasets used in hydrology research
NASA Astrophysics Data System (ADS)
Falkenroth, Esa; Arheimer, Berit; Lagerbäck Adolphi, Emma
2015-04-01
The amount of open data available for hydrology research is continually growing. In the EU-funded project SWITCH-ON (Sharing Water-related Information to Tackle Changes in the Hydrosphere - for Operational Needs), we are addressing water concerns by exploring and exploiting the untapped potential of these new open data. This work is enabled by many ongoing efforts to facilitate the use of open data. For instance, a number of portals (such as the GEOSS Portal and the INSPIRE community geoportal) provide the means to search for open data sets and open spatial data services. In general, however, the systematic use of available open data is still fairly uncommon in hydrology research. Factors that limit the (re)usability of a data set include: (1) accessibility, (2) understandability, and (3) licence conditions. If you cannot access the data set, you cannot use it for research. If you cannot understand the data set, you cannot use it for research. Finally, if you are not permitted to use the data, you cannot use it for research. Early in the project, we sent a questionnaire to our research partners (SMHI, Universita di Bologna, University of Bristol, Technische Universiteit Delft, and Technische Universitaet Wien) to find out which data sets they were planning to use in their experiments. The result was a comprehensive list of useful open data sets. Later, this list was extended with additional information on data sets for planned commercial water-information products and services. With the resulting list of 50 common data sets as a starting point, we reviewed issues related to access, understandability, and licence conditions. Regarding access, a majority of the data sets were available through direct internet download via a well-known transfer protocol such as FTP or HTTP. However, several data sets were found to be inaccessible due to server downtime, incorrect links, or problems with the host database management system.
One possible explanation could be that many data sets were assembled by research projects that are no longer funded; hence, their server infrastructure is less well maintained than that of large-scale operational services. Regarding understandability, the issues encountered were mainly due to incomplete documentation or metadata and to problems with decoding binary formats. Ideally, open data sets should be represented in well-known formats and accompanied by sufficient documentation so that the data set can be understood; furthermore, a machine-readable format would be preferable. Development efforts on WaterML, NetCDF, and other standards should improve the understandability of data sets over time, but in this review only a few data sets were provided in these well-known formats. Instead, the majority of data sets were stored in various text-based or binary formats, or even in document-oriented formats such as PDF. For some binary formats, we could not find information on which software is needed to decipher the files. Other domains, such as meteorology, have long-standing traditions of operational data exchange formats, whereas hydrology research is still quite fragmented and data exchange is usually done on a case-by-case basis. With the increased sharing of open data, there is a good chance the situation will improve for data sets used in hydrology research. Finally, regarding licence issues, a large number of data sets had no clear statement of the terms of use and limitations on access. In most cases, the provider could be contacted regarding licensing issues.
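The access review described above (dead links, server downtime) is essentially a link-availability check. A minimal sketch of such a check, assuming nothing beyond the Python standard library; the URLs passed in would be the reviewed data-set links:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_access(url, timeout=10):
    """Classify an open-data link as accessible or not, recording the reason."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return ("ok", resp.status)
    except HTTPError as e:
        return ("http_error", e.code)          # e.g. 404: incorrect link
    except URLError as e:
        return ("unreachable", str(e.reason))  # e.g. server downtime

def summarize(urls):
    """Review a list of candidate data-set URLs and collect the broken ones."""
    results = {u: check_access(u) for u in urls}
    broken = [u for u, (status, _) in results.items() if status != "ok"]
    return results, broken
```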
Aperture alignment in autocollimator-based deflectometric profilometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geckeler, R. D., E-mail: Ralf.Geckeler@ptb.de; Just, A.; Kranz, O.
2016-05-15
During the last ten years, deflectometric profilometers have become indispensable tools for the precision form measurement of optical surfaces. They have proven especially suitable for characterizing beam-shaping optical surfaces for x-ray beamline applications at synchrotrons and free electron lasers. Deflectometric profilometers use surface slope (angle) to assess topography and utilize commercial autocollimators for the contactless slope measurement. For this purpose, the autocollimator beam is deflected by a movable optical square (or pentaprism) towards the surface, where a co-moving aperture limits and defines the beam footprint. In this paper, we focus on the precise and reproducible alignment of the aperture relative to the autocollimator's optical axis. Its alignment needs to be maintained while it is scanned across the surface under test. The reproducibility of the autocollimator's measuring conditions during calibration and during its use in the profilometer is of crucial importance to providing precise and traceable angle metrology. In the first part of the paper, we present the aperture alignment procedure developed at the Advanced Light Source, Lawrence Berkeley National Laboratory, USA, for use with their deflectometric profilometers. In the second part, we investigate the topic further by providing extensive ray tracing simulations and calibrations of a commercial autocollimator, performed at the Physikalisch-Technische Bundesanstalt, Germany, to evaluate the effects of the positioning of the aperture on the autocollimator's angle response. These investigations are crucial for reaching fundamental metrological limits in deflectometric profilometry.
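Deflectometric profilometry measures slope, so topography is recovered by integrating the slope trace. A minimal numerical sketch with a synthetic 50 nm figure (all numbers illustrative):

```python
import numpy as np

x = np.linspace(0.0, 0.5, 501)                       # 0.5 m scan, 1 mm steps
height_true = 50e-9 * np.sin(2 * np.pi * x / 0.5)    # 50 nm sinusoidal figure
slope = np.gradient(height_true, x)                  # what the autocollimator sees (rad)

# Cumulative trapezoidal integration of the measured slope gives height:
dx = np.diff(x)
height = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dx)))
height -= height.mean() - height_true.mean()         # remove integration constant

rms_error_nm = float(np.sqrt(np.mean((height - height_true) ** 2)) * 1e9)
```

Any systematic angle error of the autocollimator (the subject of the calibrations above) would accumulate through this integration, which is why the aperture alignment matters.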
On the Claim of Modulations in 36Cl Beta Decay and Their Association with Solar Rotation
NASA Astrophysics Data System (ADS)
Pommé, S.; Kossert, K.; Nähle, O.
2017-11-01
Recently, claims were made by Sturrock et al. (Astropart. Phys. 42, 62, 2013), Sturrock, Fischbach, and Scargle (Solar Phys. 291, 3467, 2016; arXiv http://arxiv.org/abs/arXiv:1705.03010, 2017) that beta decay can be induced by interaction of the nucleus with solar neutrinos and that cyclic modulations in decay rates are indicative of the dynamics of the solar interior. Transient modulations in residuals from a purely exponential decay curve were observed at frequencies near 11 a^{-1} and 12.7 a^{-1} in repeated activity measurements of a 36Cl source by Alburger, Harbottle, and Norton (Earth Planet. Sci. Lett. 78, 168, 1986) at Brookhaven National Laboratory in a period from 1984 to 1985. Sturrock et al. have speculatively associated them with rotational influence on the solar neutrino flux. In this work, more accurate 36Cl decay-rate measurements - performed at the Physikalisch-Technische Bundesanstalt Braunschweig in the period 2010 - 2013 by means of the triple-to-double coincidence ratio measurement technique - are scrutinised. The residuals from an exponential decay curve were analysed by a weighted Lomb-Scargle periodogram. The existence of modulations in the frequency range between 0.2 a^{-1} and 20 a^{-1} could be excluded down to an amplitude of about 0.0016%. The invariability of the 36Cl decay constant contradicts the speculations made about the deep solar interior on the basis of instabilities in former activity measurements.
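The analysis method named above can be illustrated with the classical (unweighted) Lomb-Scargle periodogram applied to synthetic decay-fit residuals; the paper itself used a weighted variant, and all data below are invented:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for unevenly sampled data."""
    y = y - y.mean()
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        tau = np.arctan2(np.sin(2 * w * t).sum(), np.cos(2 * w * t).sum()) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# Synthetic residuals from an exponential decay fit: noise plus a small
# modulation at 11 cycles/year (the kind of signal the claims concerned).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3.0, 400))           # measurement times in years
resid = 1e-4 * np.sin(2 * np.pi * 11.0 * t) + rng.normal(0, 2e-5, t.size)

freqs = np.linspace(0.2, 20.0, 400)               # cycles per year, as in the scan
p = lomb_scargle(t, resid, freqs)
peak_freq = float(freqs[np.argmax(p)])
```

The absence of any such peak above the noise floor in the PTB data is what sets the 0.0016% amplitude limit quoted above.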
Thermographic techniques and adapted algorithms for automatic detection of foreign bodies in food
NASA Astrophysics Data System (ADS)
Meinlschmidt, Peter; Maergner, Volker
2003-04-01
At the moment, foreign substances in food are detected mainly by mechanical and optical methods as well as ultrasonic techniques, and are then removed from the further process. These techniques detect a large portion of the foreign substances due to their different mass (mechanical sieving), their different colour (optical methods), and their different surface density (ultrasonic detection). Despite these numerous methods, a considerable portion of the foreign substances remains undetected. In order to recognise materials still undetected, a complementary detection method would be desirable that removes foreign substances not registered by the aforementioned methods from the production process. In a project with 13 partners from the food industry, the Fraunhofer-Institut für Holzforschung (WKI) and the Technische Universität are trying to adapt thermography to the detection of foreign bodies in the food industry. After the initial tests turned out to be very promising for the differentiation of food stuffs and foreign substances, more detailed investigations were carried out to develop suitable algorithms for the automatic detection of foreign bodies. In order to achieve, besides the mere visual detection of foreign substances, an automatic detection under production conditions, extensive experience in image processing and pattern recognition is exploited. Results for the detection of foreign bodies will be presented at the conference, showing the advantages and disadvantages of grey-level, statistical, and morphological image processing techniques.
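One of the technique families mentioned (grey-level thresholding followed by morphological filtering) can be sketched on a synthetic thermogram. This is an invented toy example, not the project's algorithm:

```python
import numpy as np

def detect_hotspots(img, thresh):
    """Grey-level threshold, then a 3x3 morphological opening
    (erosion followed by dilation) to suppress single-pixel noise."""
    mask = img > thresh
    pad = np.pad(mask, 1)
    shifts = [pad[i:i + mask.shape[0], j:j + mask.shape[1]]
              for i in range(3) for j in range(3)]
    eroded = np.logical_and.reduce(shifts)
    pad2 = np.pad(eroded, 1)
    shifts2 = [pad2[i:i + mask.shape[0], j:j + mask.shape[1]]
               for i in range(3) for j in range(3)]
    return np.logical_or.reduce(shifts2)

# Synthetic thermogram: uniform food surface with one warm inclusion.
img = np.full((64, 64), 20.0)
img[30:36, 40:46] = 23.0                 # foreign body with 3 K contrast
mask = detect_hotspots(img, thresh=21.5)
```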
Technical systems for heart replacement and heart assistance
NASA Astrophysics Data System (ADS)
Schöb, Reto; Loree, Howard M.
Heart disease causes more than 700,000 deaths per year in the United States alone. According to the American Heart Association (AHA) and the National Heart, Lung, and Blood Institute (NHLBI), approximately 3 million patients in the U.S.A. suffer from congestive heart failure (CHF), a chronic, severely debilitating, and degenerative disease in which the heart is unable to pump sufficient blood to the organs of the body. Over 400,000 cases of CHF are diagnosed each year, and similar numbers are estimated for Europe and Japan combined. Based on data from the AHA and NHLBI, the five-year survival rate for CHF patients is only about 50% [1]. 70,000-120,000 of these patients could benefit from a heart transplant, yet in 1999 only 2,185 heart transplantations were performed in the USA, while the waiting list contained more than 4,000 patients [2]. An acute shortage of donor hearts and the enormous costs (250,000-400,000 USD per patient) are the limiting factors for heart transplantation [3]. This means that a vast number of patients could be saved by a reliable, wear-free, non-thrombotic, fully implantable artificial heart. To date, however, no such implant is commercially available.
Development and evaluation of thermal model reduction algorithms for spacecraft
NASA Astrophysics Data System (ADS)
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the reduction of thermal models of spacecraft. The work presented here was conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern here and restricts the useful application of these methods. Additional model reduction methods have been developed which take these constraints into account. The matrix reduction method allows the differential equation to be approximated to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work, a framework for the reduction of thermal models has been created, which can be used in industry together with a newly developed graphical user interface.
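The summation method is only named at a high level above; a minimal interpretation for a lumped-parameter thermal network is that merged nodes receive the summed capacitance, and the conductance between two reduced nodes is the sum of all conductances linking their members. A sketch under that assumption (the grouping and values are invented):

```python
import numpy as np

def reduce_network(C, G, groups):
    """Summation-style reduction of a lumped thermal network.
    C: (n,) nodal heat capacitances; G: (n,n) symmetric conductance
    matrix with zero diagonal; groups: index lists partitioning the nodes."""
    m = len(groups)
    Cr = np.array([C[g].sum() for g in groups])
    Gr = np.zeros((m, m))
    for a in range(m):
        for b in range(a + 1, m):
            Gr[a, b] = Gr[b, a] = G[np.ix_(groups[a], groups[b])].sum()
    return Cr, Gr

# Toy 4-node model reduced to 2 nodes.
C = np.array([1.0, 2.0, 3.0, 4.0])
G = np.zeros((4, 4))
for i, j, g in [(0, 1, 0.5), (1, 2, 0.2), (2, 3, 0.8), (0, 2, 0.1)]:
    G[i, j] = G[j, i] = g
Cr, Gr = reduce_network(C, G, [[0, 1], [2, 3]])
```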
Limitations of Dower's inverse transform for the study of atrial loops during atrial fibrillation.
Guillem, María S; Climent, Andreu M; Bollmann, Andreas; Husser, Daniela; Millet, José; Castells, Francisco
2009-08-01
Spatial characteristics of atrial fibrillatory waves have been extracted by using the vectorcardiogram (VCG) during atrial fibrillation (AF). However, the VCG is usually not recorded in clinical practice, and atrial loops are instead derived from the 12-lead electrocardiogram (ECG). We evaluated the suitability of the reconstruction of orthogonal leads from the 12-lead ECG for fibrillatory waves in AF. We used the Physikalisch-Technische Bundesanstalt diagnostic ECG database, which contains 15 simultaneously recorded signals (12-lead ECG and three Frank orthogonal leads) of 13 patients during AF. Frank leads were derived from the 12-lead ECG by using Dower's inverse transform. Derived leads were then compared to true Frank leads in terms of the relative error achieved. We calculated the orientation of AF loops of both recorded orthogonal leads and derived leads and measured the difference in estimated orientation. Also, we investigated the relationship of errors in derivation with fibrillatory wave amplitude, frequency, wave residuum, and fit to a plane of the AF loops. Errors in the derivation of AF loops were 68 +/- 31%, and errors in the estimation of orientation were 35.85 +/- 20.43 degrees. We did not find any correlation between these errors and amplitude, frequency, or the other parameters. In conclusion, Dower's inverse transform should not be used for the derivation of orthogonal leads from the 12-lead ECG for the analysis of fibrillatory wave loops in AF. Spatial parameters obtained after this derivation may differ from those obtained from recorded orthogonal leads.
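Structurally, Dower's inverse transform is a fixed 3x8 linear map from the eight independent ECG leads (I, II, V1-V6) to the Frank X, Y, Z leads, and the comparison above reduces to a relative-error norm. The matrix below is a PLACEHOLDER with invented coefficients, not the published inverse Dower matrix; the data are synthetic:

```python
import numpy as np

# PLACEHOLDER coefficients (NOT the published inverse Dower matrix).
T = np.array([[ 0.5, -0.1, -0.1,  0.0,  0.1, 0.2, 0.3, 0.4],
              [-0.2,  0.9,  0.0,  0.1,  0.0, 0.1, 0.1, 0.0],
              [ 0.1,  0.1, -0.8, -0.3, -0.2, 0.0, 0.1, 0.2]])

def derive_xyz(ecg8):
    """ecg8: (8, n) samples of leads I, II, V1..V6 -> (3, n) derived X, Y, Z."""
    return T @ ecg8

def relative_error(derived, recorded):
    """Percent error of derived vs. recorded orthogonal leads."""
    return 100.0 * np.linalg.norm(derived - recorded) / np.linalg.norm(recorded)

# Illustration on synthetic data: corrupt the derived leads with noise
# and quantify the resulting relative error.
rng = np.random.default_rng(2)
ecg8 = rng.normal(size=(8, 1000))
xyz = derive_xyz(ecg8)
err = relative_error(xyz + 0.1 * rng.normal(size=xyz.shape), xyz)
```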
Frequency Comparison of (171)Yb(+) Ion Optical Clocks at PTB and NPL via GPS PPP.
Leute, J; Huntemann, N; Lipphardt, B; Tamm, Christian; Nisbet-Jones, P B R; King, S A; Godun, R M; Jones, J M; Margolis, H S; Whibberley, P B; Wallin, A; Merimaa, M; Gill, P; Peik, E
2016-07-01
We used precise point positioning, a well-established GPS carrier-phase frequency transfer method, to perform a direct remote comparison of two optical frequency standards based on single laser-cooled (171)Yb(+) ions operated at the National Physical Laboratory (NPL), U.K., and the Physikalisch-Technische Bundesanstalt (PTB), Germany. At both institutes, an active hydrogen maser serves as a flywheel oscillator which is connected to a GPS receiver as an external frequency reference and is compared simultaneously to a realization of the unperturbed frequency of the (2)S1/2(F=0)-(2)D3/2(F=2) electric quadrupole transition in (171)Yb(+) via an optical femtosecond frequency comb. To profit from long coherent GPS-link measurements, we extrapolate the fractional frequency difference over the various data gaps in the optical clock to maser comparisons, which introduces maser noise into the frequency comparison but improves the uncertainty from the GPS-link instability. We determined the total statistical uncertainty, consisting of the GPS-link uncertainty and the extrapolation uncertainties, for several extrapolation schemes. Using the extrapolation scheme with the smallest combined uncertainty, we find a fractional frequency difference of -1.3 × 10^-15 with a combined uncertainty of 1.2 × 10^-15 for a total measurement time of 67 h. This result is consistent with agreement of the frequencies realized by both optical clocks and with recent absolute frequency measurements against caesium fountain clocks within the corresponding uncertainties.
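The uncertainty combination described above (GPS-link uncertainty plus gap-extrapolation uncertainties, in quadrature) can be written as a one-line budget. The input values below are illustrative, not the paper's budget; only the final -1.3(1.2) × 10^-15 result is taken from the abstract:

```python
import math

def combined_uncertainty(u_link, u_extrap_list):
    """Total statistical uncertainty of the remote comparison: GPS-link
    uncertainty combined in quadrature with the per-gap extrapolation
    uncertainties (a simplified sketch of the scheme described above)."""
    return math.sqrt(u_link**2 + sum(u**2 for u in u_extrap_list))

# Illustrative fractional-frequency uncertainties:
u = combined_uncertainty(8e-16, [6e-16, 5e-16])

# The abstract's result: difference -1.3e-15, combined uncertainty 1.2e-15,
# i.e. consistent with zero within an expanded (k = 2) interval.
agreement = abs(-1.3e-15) <= 2 * 1.2e-15
```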
Kühbeck, Felizian; Engelhardt, Stefan; Sarikas, Antonio
2014-01-01
Audience response (AR) systems are increasingly used in undergraduate medical education. However, the high costs and complexity of conventional AR systems often limit their use. Here we present a novel AR system that is platform independent and does not require hardware clickers or additional software to be installed. "OnlineTED" was developed at Technische Universität München (TUM) based on Hypertext Preprocessor (PHP) with a My Structured Query Language (MySQL) database as the server-side and JavaScript as the client-side programming language. "OnlineTED" enables lecturers to create and manage question sets online and to start polls in class via a web browser. Students can participate in the polls with any internet-enabled device (smartphone, tablet PC, or laptop). A paper-based survey was conducted with undergraduate medical students and lecturers at TUM to compare "OnlineTED" with conventional clicker-based AR systems. "OnlineTED" received above-average evaluation results from both students and lecturers at TUM and was seen as on par with or superior to conventional AR systems. The survey results indicated that up to 80% of students at TUM own an internet-enabled device (smartphone or tablet PC) for participation in web-based AR technologies. "OnlineTED" is a novel web-based and platform-independent AR system for higher education that was well received by students and lecturers. As a non-commercial alternative to conventional AR systems it may foster interactive teaching in undergraduate education, in particular with large audiences.
A New Solar Spectrum from 656 to 3088 nm
NASA Astrophysics Data System (ADS)
Meftah, M.; Damé, L.; Bolsée, D.; Pereira, N.; Sluse, D.; Cessateur, G.; Irbah, A.; Sarkissian, A.; Djafer, D.; Hauchecorne, A.; Bekki, S.
2017-08-01
The solar spectrum is a key parameter for different scientific disciplines such as solar physics, climate research, and atmospheric physics. The SOLar SPECtrometer (SOLSPEC) instrument of the Solar Monitoring Observatory (SOLAR) payload onboard the International Space Station (ISS) has been built to measure the solar spectral irradiance (SSI) from 165 to 3088 nm with high accuracy. To cover the full wavelength range, three double-monochromators with concave gratings are used. We present here a thorough analysis of the data from the third channel/double-monochromator, which covers the spectral range between 656 and 3088 nm. A new reference solar spectrum is therefore obtained in this mainly infrared wavelength range (656 to 3088 nm); it uses an absolute preflight calibration performed with the blackbody of the Physikalisch-Technische Bundesanstalt (PTB). An improved correction of temperature effects is also applied to the measurements using in-flight housekeeping temperature data of the instrument. The new solar spectrum (SOLAR-IR) is in good agreement with the ATmospheric Laboratory for Applications and Science (ATLAS 3) reference solar spectrum from 656 nm to about 1600 nm. However, above 1600 nm, it agrees better with solar reconstruction models than with spacecraft measurements. The new SOLAR/SOLSPEC measurement of solar spectral irradiance at about 1600 nm, corresponding to the minimum opacity of the solar photosphere, is 248.08 ± 4.98 mW m-2 nm-1 (1 σ), which is higher than recent ground-based evaluations.
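An absolute blackbody calibration of the kind performed at PTB ties the instrument's signal to Planck's law. As a worked reference point (the 3000 K temperature is illustrative, not the PTB blackbody's operating point), the spectral radiance at the 1600 nm opacity minimum discussed above is:

```python
import math

# CODATA-defined constants
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) = 2hc^2 / lambda^5
    / (exp(hc / (lambda k T)) - 1), in W m^-2 sr^-1 m^-1."""
    x = h * c / (wavelength_m * k * temp_k)
    return 2.0 * h * c**2 / wavelength_m**5 / math.expm1(x)

# Example: radiance at 1600 nm for an illustrative 3000 K blackbody.
b = planck_radiance(1600e-9, 3000.0)
```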
[Recalled parental rearing and the wish to have a child - are there associations?].
Schumacher, Jörg; Stöbel-Richter, Yve; Brähler, Elmar
2002-07-01
The present study concerns the impact of recalled parental rearing behaviour on both the intensity of the wish to have a child and on different motives for having a child. To date, there have been no empirical studies on this question. Our study is based on a representative sample of 1509 persons aged 18 to 50 years. The statistical analyses were restricted to those subjects who lived in a partnership and reported an actual wish to have a child (n = 331). The data were assessed by self-report scales: the Questionnaire of Recalled Parental Rearing Behaviour ("Fragebogen zum erinnerten elterlichen Erziehungsverhalten", FEE), the Partnership Questionnaire ("Partnerschaftsfragebogen", PFB), and the Leipzig Questionnaire of Motives to Have a Child ("Leipziger Fragebogen zu Kinderwunschmotiven", LKM). Recalled parental rearing behaviour characterized as rejecting, overprotective, and lacking in emotional warmth was associated with motives that do not promote the wish to have children of one's own (fear of personal restrictions and a low degree of social support). At the same time, negative parental rearing behaviour was correlated with a stronger desire for social recognition through a child of one's own. Overall, recalled maternal rearing behaviour was more strongly associated with motives to have a child than paternal rearing behaviour. On the other hand, no relevant associations were found between recalled parental rearing behaviour and the intensity of the wish to have a child.
Lee, Sunhee; Kim, Dong Hee
2017-01-01
Return and adjustment to school in adolescents who have survived cancer have become of increasing interest as the number of childhood cancer survivors has grown due to advances in treatments. Perceived parental rearing behavior is an important factor related to school adjustment. This study examined the relationships between maternal and paternal rearing practices, general characteristics, and school adjustment in adolescent cancer survivors in Korea. We conducted a descriptive, exploratory study of 84 adolescents with cancer using the Korean version of the Fragebogen zum erinnerten elterlichen Erziehungsverhalten: FEE (Recalled Parental Rearing Behavior) and a school adjustment measurement. Descriptive, Pearson correlation, and multiple regression analyses were used to investigate the data. In bivariate analysis, age (r = −0.358, P < .05), mother's emotional warmth (r = 0.549, P < .01), and father's emotional warmth (r = 0.391, P < .05) were significantly associated with school adjustment. However, the results of multiple regression analysis showed that only mother's emotional warmth (β = .720, P < .05) was significantly associated with school adjustment. Adolescent cancer survivors who reported higher mother's emotional warmth exhibited better school adjustment. This finding indicates that it is important to help parents of adolescent cancer survivors enhance their parental rearing behaviors, such as emotional warmth, to help adolescents adjust to school. PMID:28796068
Update on the clinical use of inhibitors of mutated phosphokinases in melanoma.
Cosgarea, Ioana; Ritter, Cathrin; Becker, Jürgen C; Schadendorf, Dirk; Ugurel, Selma
2017-09-01
The treatment strategy for metastatic melanoma has changed radically with the identification of therapeutically targetable molecular structures within cellular signaling pathways. With the approval of agents that act specifically on the central switching molecules, the phosphokinases, these signaling pathways can be switched off selectively. This is of particular interest for those tumors whose signaling pathways are constitutively activated by activating mutations in the genes encoding these switching molecules. At present, this therapeutic strategy is particularly relevant for patients whose melanomas carry a mutation in the BRAF gene. These patients can be treated with a combination of inhibitors of the phosphokinases BRAF and MEK, achieving long-lasting and very good disease control. Under this combination therapy, a progression-free survival of more than ten months and an overall survival of more than two years are currently achieved, with good quality of life. However, since resistance develops in the majority of patients under long-term therapy with kinase inhibitors, current clinical trials are directed at finding suitable combination partners that block other signaling pathways or activate the T-cell-mediated immune response. This review presents both the currently available options for targeted therapy of melanoma and future options under clinical investigation. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Wielen, Roland; Wielen, Ute
In the archives of the Astronomisches Rechen-Institut at Heidelberg, there is an old set of 31 documents related to the calendar used in Prussia, originating in the period from 1700 to 1854. The oldest document is an original print of the 'Calendar Edict' issued on 10 May 1700. In this edict, Friedrich III., Elector of Brandenburg, granted a monopoly on issuing calendars in his country to an academy founded slightly later. At the same time he founded an observatory in Berlin. The main task of the employed astronomers was to edit the 'Improved Calendar' newly introduced in his Protestant country. The Astronomisches Rechen-Institut, which was founded in Berlin and moved to Heidelberg in 1945, considers this Calendar Edict as its founding document too. All the other documents are handwritten, mainly letters, but also a detailed exposé 'On the Calendar Issues in the Prussian State' from 1843. Two of the manuscripts stem from the 18th century. The remaining documents are related to the work of the Royal Prussian Calendar Deputation and were written between 1816 and 1854. In this paper we describe, comment on, and transliterate all the documents of this 'Kalender-Konvolut'.
NASA Astrophysics Data System (ADS)
Meyer-Aurich, Andreas
1999-11-01
This study uses examples to demonstrate the opportunities and limits of integrating environmental and nature conservation into arable land use practices. Translating the goals of environmental and nature conservation into land use practices faces several difficulties. These lie, on the one hand, in making the goals concrete enough to be implemented and, on the other, in the often insufficient knowledge of the relationship between different forms of land use and, in particular, biotic nature conservation goals. First, the problem of defining and operationalizing goals is discussed. The environmental quality target concept of Fürst et al. (1992) represents an attempt to make environmental and nature conservation goals concrete. Heidt et al. (1997) applied this concept to a landscape section of about 6,000 ha in the Schorfheide-Chorin biosphere reserve in north-eastern Brandenburg. A selection of the environmental quality targets formulated by Heidt et al. (1997) forms the basis of this study. For the selected environmental quality targets, the main land use factors were identified and an evaluation system was developed with which the effects of agricultural cropping practices on these targets can be mapped. The land use practiced by 20 farms in the Schorfheide-Chorin biosphere reserve was analyzed from 1994 to 1997 with respect to its effects on the environmental quality targets. The analysis yielded a highly differentiated picture, showing in part differences in the effects on the environmental quality targets for the cultivation of individual crops or for certain farm types. It also showed, however, that there were large differences in how individual crops were cultivated, and these differences are relevant for the environmental quality targets.
In addition to the analysis of land use in the Schorfheide-Chorin biosphere reserve, a system was developed that allows land use practices to be represented as models. The model practices were embedded in an extensive database and evaluated with respect to their effects on the environmental quality targets using a fuzzy rule system. The systematically evaluated practices were integrated into a farm model, which enabled a more extensive analysis of the relationships between targets and the calculation of scenarios under different framework conditions. The analysis of the relationships between different targets (goal divergence, goal convergence) showed that pursuing many environmental quality targets also produced positive effects for other environmental quality targets. In some cases, however, goal divergence was found, pointing to potential conflicts between targets. The analysis of the scenario results showed that the proposed changes in framework conditions frequently worsen the situation for several environmental quality targets. One reason for this is that the importance of set-aside land was underestimated when the scenarios were defined. The objective of this research was to show the opportunities and limitations associated with integrating environmental goals into agricultural land use management. For this purpose, the impact of agricultural land uses on six environmental quality goals was analysed for an approximately 16,000 ha study region within the biosphere reserve Schorfheide-Chorin in north-eastern Brandenburg (Germany). The environmental quality goals considered were protection of ground water, preservation of groundwater recharge, protection of the soil against wind and water erosion, and preservation of animal species typical of the agricultural landscape, in particular partridge, amphibians and cranes.
For each environmental quality goal, an evaluation framework is presented which enables an assessment of the impact of agricultural land uses on the environmental quality goals. The evaluation framework was applied to assess the impact of land uses of twenty farms in the study area from 1994 to 1997. It was demonstrated that the impact of agricultural land uses on the environmental quality of the region is very complex, and - although some crops did significantly impact certain aspects, one factor did not overwhelmingly contribute to the overall environmental quality. The evaluation framework was integrated further into a system of modelled cropping practices for optimisation calculations. For this purpose, a model framework was established based on a MS Access database. In the database, cropping practices for 17 crops are stored. Inputs and yields are adapted to the site-specific yield potential. In addition to the cropping practices, the model framework is comprised of an evaluation module and optimisation module. Hence, the impact of the cropping practices can be assessed for each site. This model framework provides the basis for optimisation calculations to forecast various land uses under different conditions. A step-wise integration of the environmental quality goals into the optimisation algorithm of a model farm allows for showing trade-offs between economic and ecological goals. The results of the trade-offs show that an improvement of ecological goal achievement is possible with little impact on the gross margin, as long as the improvement is less than 30% of the starting situation. If improvements greater than 30% are desired, very high losses of gross margin must be taken into consideration. Another result of the calculations shows that high achievements of environmental quality goals often correlate with the percentage of set-aside land within the total area.
Seismic Investigation of the Glacier de la Plaine Morte, Switzerland
NASA Astrophysics Data System (ADS)
Laske, Gabi; Lindner, Fabian; Walter, Fabian; Krage, Manuel
2017-04-01
Glacier de la Plaine Morte is a plateau glacier on the border between the Valais and Berne cantons. It covers a narrow elevation range and is extremely vulnerable to climate change. During snow melt, it feeds three marginal lakes that have experienced sudden subglacial drainage in recent years, thereby causing flooding in the Simme Valley below. Of greatest concern is Lac des Faverges at the southeastern end of the glacier, which in recent years has drained near the end of July, with flood levels reaching the capacity of flood-control systems downstream. The lake levels are carefully monitored, but precise prediction has not yet been achieved. In the search for ice-fracturing precursors to the lake drainage that could improve forecasting, four seismic arrays, comprising five short-period borehole seismometers provided by the Eidgenössische Technische Hochschule (ETH) Zürich as well as fifteen 3-component geophones from the Geophysical Instrument Pool Potsdam (GIPP), collected continuous seismic data for about seven weeks during the summer of 2016. We present initial results on discharge dynamics as well as changing noise levels and seismicity before, during, and after the drainage of Lac des Faverges. Compared with previous years, the 2016 drainage of Lac des Faverges occurred unusually late, on August 28. With apertures between 100 and 200 m, the small arrays recorded many hundreds of icequakes per day. A majority of the events exhibit clearly dispersed, high-frequency Rayleigh waves at about 10 Hz and higher. A wide distribution of events allows us to study azimuthal anisotropy and its relationship with the orientation of glacial crevasses.
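Detecting such icequakes in continuous records is commonly done with an energy-ratio trigger. As a minimal sketch (not the processing of this study; window lengths, threshold, and data are illustrative assumptions), the classic STA/LTA detector can be written as:

```python
# A minimal sketch of the classic STA/LTA trigger widely used to detect
# icequakes in continuous seismic data: the ratio of a short-term to a
# long-term average of signal energy spikes at an impulsive event.
# Window lengths, threshold, and the synthetic trace are assumptions.
import random

def sta_lta(trace, nsta, nlta):
    """Return the STA/LTA ratio per sample (0 until the LTA window fills)."""
    energy = [x * x for x in trace]
    ratios = [0.0] * len(trace)
    for i in range(nlta, len(trace)):
        sta = sum(energy[i - nsta:i]) / nsta
        lta = sum(energy[i - nlta:i]) / nlta
        ratios[i] = sta / lta if lta > 0 else 0.0
    return ratios

# Synthetic trace: unit-variance noise with one impulsive "event" at sample 800.
random.seed(1)
trace = [random.gauss(0.0, 1.0) for _ in range(1000)]
for i in range(800, 820):
    trace[i] += 20.0
ratios = sta_lta(trace, nsta=10, nlta=200)
triggered = [i for i, r in enumerate(ratios) if r > 5.0]
print("first trigger at sample:", triggered[0] if triggered else None)
```

In practice the threshold is tuned so that hundreds of events per day are picked without triggering on ambient noise.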
NASA Astrophysics Data System (ADS)
Kalvius, G. M.; Kienle, P.
Mössbauer and one of the authors (PK) started studying physics in 1949 at the Technische Hochschule München (THM), which was still being rebuilt after the war damage. It offered two directions for studying physics: "Physik A" and "Physik B." I took courses in "Physik A," which meant Technical Physics; Mössbauer studied "Physik B," which was General Physics. Actually, the lectures of both directions were not too different up to the fourth semester, which was followed by a "pre-diploma" examination that Mössbauer passed in 1952. As a "Physik A" student I had, besides the various physics, chemistry, and mathematics courses, additional lectures in Technical Electricity, Technical Mechanics, Technical Thermodynamics, and later Measurement Engineering, offered by very famous professors such as W.O. Schumann, L. Föppl, W. Nußelt, and H. Piloty. Our physics teachers were G. Joos (Experimental Physics), G. Hettner (Theoretical Physics), and W. Meissner (Technical Physics); in mathematics we enjoyed lectures by J. Lense and R. Sauer, and interesting chemistry lectures by W. Hieber. Thus we received a high-class classical education, but quantum mechanics was not a compulsory subject. Mössbauer complained about this deficiency when he realized that the effect he had found was a quantum mechanical phenomenon. Quantum mechanics was offered as an optional subject by Prof. Fick and Prof. Haug. Mössbauer just missed taking these advanced lectures, although he was highly talented in mathematics and even received a tutoring position in the mathematics institute of Prof. R. Sauer, while I worked on engineering projects and had extensive industrial training.
NASA Astrophysics Data System (ADS)
Uecker, J.; Hanke, M.; Kamm, S.; Umann, B.; Arnold, F.; Poeschl, U.; Niessner, R.
Gas-phase sulfuric acid and OH have been measured by the novel MPI-K ULTRA-CIMS (ultra-trace gas detection by CIMS technique) at the Schneefernerhaus (2750 m asl, below the summit of Mount Zugspitze, Germany) in October 2001. These measurements were accompanied by measurements of SO2 with another MPI-K CIMS instrument and by aerosol size distribution measurements with a DMPS (differential mobility particle sizer) operated by the Institut fuer Wasserchemie (Technische Universitaet Muenchen). In this way a data set was obtained which allows the major sources and sinks of sulfuric acid to be investigated under relatively clean conditions. H2SO4 and especially OH concentrations are relatively well correlated with solar flux. Noon maximum concentrations of OH and H2SO4 of 6.5·10⁶ and 2·10⁶ cm⁻³, respectively, were observed. The average SO2 concentrations were below 20 ppt. The aerosol size distribution was obtained in 39 size ranges from 10 to 1056 nm. Typical aerosol concentrations were in the range of 400 to 1800 cm⁻³ during the discussed period. The production rate of H2SO4 was estimated from the reaction of SO2 and OH, while the loss rate was calculated by considering the condensation of H2SO4 on aerosol particles (Fuchs and Sutugin approach). Results of the measurements and calculations will be discussed.
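The source and sink terms described above combine into a simple steady-state estimate. The sketch below is illustrative only: the rate constant, air density, H2SO4 diffusion coefficient, mean free path, and the three-bin size distribution are assumed values, not the campaign data; the Fuchs-Sutugin factor is the standard transition-regime correction:

```python
# An illustrative steady-state estimate of gas-phase H2SO4: production by
# SO2 + OH, loss by condensation onto aerosol with the Fuchs-Sutugin
# transition-regime correction. All numerical inputs are assumptions.
import math

def fuchs_sutugin(kn, alpha=1.0):
    """Transition-regime correction factor versus Knudsen number kn."""
    return (0.75 * alpha * (1 + kn)) / (kn ** 2 + kn + 0.283 * kn * alpha + 0.75 * alpha)

def condensation_sink(diameters_nm, numbers_cm3, diff_cm2s=0.08, mfp_nm=100.0):
    """Condensation sink (s^-1) of H2SO4 onto a discretized size distribution."""
    cs = 0.0
    for d_nm, n in zip(diameters_nm, numbers_cm3):
        kn = 2.0 * mfp_nm / d_nm
        cs += 2.0 * math.pi * diff_cm2s * (d_nm * 1e-7) * fuchs_sutugin(kn) * n
    return cs

k_so2_oh = 9.0e-13               # cm^3 s^-1, assumed effective rate constant
oh = 6.5e6                       # cm^-3, noon maximum reported above
so2 = 20e-12 * 2.5e19            # 20 ppt at an assumed air density of 2.5e19 cm^-3

production = k_so2_oh * oh * so2                        # cm^-3 s^-1
cs = condensation_sink([50, 150, 400], [800, 500, 100]) # three assumed bins
h2so4_ss = production / cs                              # steady state, cm^-3
print(f"P = {production:.2e} cm^-3 s^-1, CS = {cs:.2e} s^-1, "
      f"[H2SO4]ss = {h2so4_ss:.2e} cm^-3")
```

With these assumed inputs the steady-state concentration comes out in the 10⁵-10⁶ cm⁻³ range, the same order as the observations quoted above.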
TUM Critical Zone Observatory, Germany
NASA Astrophysics Data System (ADS)
Völkel, Jörg; Eden, Marie
2014-05-01
Founded in 2011, the TUM Critical Zone Observatory, run by the Technische Universität München and partners abroad, is the first CZO within Germany. TUM CZO is both a scientific and an educational project. It is a watershed-based observatory, but it moves beyond this focus. Two mountainous areas are integrated: (1) the Ammer catchment, an alpine and pre-alpine research area in the northern limestone Alps and their forelands south of Munich; and (2) the Otter Creek catchment in the Bavarian Forest, a mid-mountainous area near Regensburg with a crystalline setting (granite, gneiss), partly within the mountainous Bavarian Forest National Park. The Ammer catchment is a high-energy system as well as a climate-sensitive system with past glacial elements. The lithology shows mostly carbonates from Tertiary and Mesozoic times (e.g. Flysch). Source-to-sink processes are characteristic of the Ammer catchment, down to the last-glacial Ammer Lake as the regional erosion and deposition base. The consideration of distal depositional environments and the integration of upstream and downstream landscape effects are likewise characteristic of the Ammer catchment. Long-term datasets exist in many regards. The Otter Creek catchment is developed in a granitic environment rich in saprolites. As a mid-mountainous catchment, its energy system is at a lower stage; hence the two catchments are ideal for comparison. Both TUM CZO catchments capture the depositional environment, and both include historical impacts and rapid land use change. Crosscutting themes across both sites are built in. Questions of the ability to capture such gradients along climosequences, chronosequences, and anthroposequences are essential.
Taris, F; Uhrich, P; Petit, G; Jiang, Z; Barillet, R; Hamouda, F
2000-01-01
This paper describes the software and equipment used at the Laboratoire Primaire du Temps et des Frequences du Bureau National de Metrologie (BNM-LPTF), Paris, France. Two H-masers in short baseline, one located at the BNM-LPTF and the other at the Laboratoire de l'Horloge Atomique du Centre National de la Recherche Scientifique (CNRS-LHA), Orsay, France, were computed in parallel with the BNM-LPTF software and with the BERNESE V 4.1 software. The comparison of the results issued from both computations shows an agreement within 100 ps (1σ). In addition, comparisons with the BNM-LPTF software were made over 10 days with the H-masers located at the Physikalisch-Technische Bundesanstalt (PTB), Braunschweig, Germany, and another at the National Physical Laboratory (NPL), Teddington, United Kingdom. The data collected show that a modulation with an amplitude of 50 ps and a period of 700-800 ps affects the equipment of the NPL. In addition, these comparisons show that the noise of the instruments together with the environmental conditions at the PTB was higher than that of the NPL and the BNM-LPTF during the observation period. The best relative frequency stability obtained, in the BNM-LPTF/NPL comparison, is about 3×10⁻¹⁵ for averaging periods between 6×10⁴ s and 3×10⁵ s. This result is in good agreement with the expected stability of H-masers. It demonstrates that the noise brought by the GPS carrier phase measurements can be averaged out at this level.
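Stability figures such as the ~3×10⁻¹⁵ quoted above are conventionally computed as an Allan deviation of fractional-frequency data. A minimal sketch (not the BNM-LPTF software; run on synthetic white-frequency noise) of the overlapping Allan deviation:

```python
# A minimal sketch of the overlapping Allan deviation, the standard measure
# of relative frequency stability versus averaging time. Illustrative only;
# the input here is synthetic white frequency noise, not H-maser data.
import math
import random

def overlapping_allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y at
    averaging factor m (tau = m * tau0 for sample interval tau0)."""
    n = len(y)
    # m-sample averages starting at every index (overlapping estimates)
    avg = [sum(y[i:i + m]) / m for i in range(n - m + 1)]
    diffs = [(avg[i + m] - avg[i]) ** 2 for i in range(len(avg) - m)]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# White frequency noise: the Allan deviation should fall roughly as 1/sqrt(m).
random.seed(0)
y = [random.gauss(0.0, 1e-13) for _ in range(10000)]
for m in (1, 10, 100):
    print(m, overlapping_allan_deviation(y, m))
```

For GPS carrier-phase comparisons, the averaging-out of measurement noise shows up as exactly this kind of decrease of the deviation with increasing averaging period.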
Kühbeck, Felizian; Engelhardt, Stefan; Sarikas, Antonio
2014-01-01
Background and aim: Audience response (AR) systems are increasingly used in undergraduate medical education. However, high costs and complexity of conventional AR systems often limit their use. Here we present a novel AR system that is platform independent and does not require hardware clickers or additional software to be installed. Methods and results: "OnlineTED" was developed at Technische Universität München (TUM) based on Hypertext Preprocessor (PHP) with a My Structured Query Language (MySQL)-database as server- and Javascript as client-side programming languages. "OnlineTED" enables lecturers to create and manage question sets online and start polls in-class via a web-browser. Students can participate in the polls with any internet-enabled device (smartphones, tablet-PCs or laptops). A paper-based survey was conducted with undergraduate medical students and lecturers at TUM to compare "OnlineTED" with conventional AR systems using clickers. "OnlineTED" received above-average evaluation results by both students and lecturers at TUM and was seen on par or superior to conventional AR systems. The survey results indicated that up to 80% of students at TUM own an internet-enabled device (smartphone or tablet-PC) for participation in web-based AR technologies. Summary and conclusion: "OnlineTED" is a novel web-based and platform-independent AR system for higher education that was well received by students and lecturers. As a non-commercial alternative to conventional AR systems it may foster interactive teaching in undergraduate education, in particular with large audiences. PMID:24575156
NASA Astrophysics Data System (ADS)
Ziolkowski, Pawel; Stiewe, Christian; de Boor, Johannes; Druschke, Ines; Zabrocki, Knud; Edler, Frank; Haupt, Sebastian; König, Jan; Mueller, Eckhard
2017-01-01
Thermoelectric generators (TEGs) convert heat to electrical energy by means of the Seebeck effect. The Seebeck coefficient is a central thermoelectric material property, measuring the magnitude of the thermovoltage generated in response to a temperature difference across a thermoelectric material. Precise determination of the Seebeck coefficient provides the basis for reliable performance assessment in materials development in the field of thermoelectrics. For several reasons, measurement uncertainties of up to 14% can often be observed in interlaboratory comparisons of the temperature-dependent Seebeck coefficient or in error analyses of currently employed instruments. This is still too high for an industrial benchmark and insufficient for many scientific investigations and technological developments. The TESt (thermoelectric standardization) project was launched in 2011, funded by the German Federal Ministry of Education and Research (BMBF), to reduce measurement uncertainties, engineer traceable and precise thermoelectric measurement techniques for materials and TEGs, and develop reference materials (RMs) for temperature-dependent determination of the Seebeck coefficient. We report herein the successful development and qualification of cobalt-doped β-iron disilicide (β-Fe0.95Co0.05Si2) as a RM for high-temperature thermoelectric metrology. A brief survey of technological processes for manufacturing and machining of samples is presented. Focus is placed on metrological qualification of the iron disilicide, results of an international round-robin test, and final certification as a reference material in accordance with ISO Guide 35 and the "Guide to the expression of uncertainty in measurement" by the Physikalisch-Technische Bundesanstalt, the national metrology institute of Germany.
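As a numerical illustration of the quantity under discussion (with assumed readings, not TESt project data): the Seebeck coefficient is S = -ΔV/ΔT, and a first-order uncertainty propagation for the quotient shows how voltage and temperature uncertainties enter the result:

```python
# A minimal illustration of estimating a Seebeck coefficient S = -dV/dT from
# a thermovoltage across a small temperature difference, with first-order
# uncertainty propagation. All input values are assumed, for illustration.
import math

def seebeck(delta_v, delta_t, u_v, u_t):
    """Return (S, u_S) in V/K for thermovoltage delta_v (V) over delta_t (K)."""
    s = -delta_v / delta_t
    # relative uncertainties of numerator and denominator add in quadrature
    u_s = abs(s) * math.sqrt((u_v / delta_v) ** 2 + (u_t / delta_t) ** 2)
    return s, u_s

# Assumed readings: -1.2 mV across 5 K, with 2 µV and 50 mK uncertainties.
s, u_s = seebeck(delta_v=-1.2e-3, delta_t=5.0, u_v=2e-6, u_t=0.05)
print(f"S = {s * 1e6:.1f} ± {u_s * 1e6:.1f} µV/K")
```

The same quadrature rule underlies the GUM-style uncertainty budgets mentioned in the certification work above.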
SUMIRAD: a near real-time MMW radiometer imaging system for threat detection in an urban environment
NASA Astrophysics Data System (ADS)
Dill, Stephan; Peichl, Markus; Rudolf, Daniel
2012-10-01
The armed forces are nowadays confronted with a wide variety of types of operations. During peace-keeping missions in an urban environment, where small units patrol the streets with armored vehicles, the team leader is confronted with a very complex threat situation. The asymmetric threat arises in most cases from so-called IEDs (improvised explosive devices), which are found in a multitude of versions. In order to avoid risky situations, the early detection of possible threats by advanced reconnaissance and surveillance sensors provides an important advantage. A European consortium consisting of GMV S.A. (Spain, "Grupo Tecnològico e Industrial"), RMA (Belgium, "Royal Military Academy"), TUM ("Technische Universität München") and DLR (Germany, "Deutsches Zentrum für Luft- und Raumfahrt") developed in the SUM project (Surveillance in an urban environment using mobile sensors) a low-cost multi-sensor vehicle-based surveillance system in order to enhance situational awareness for moving security and military patrols as well as for static checkpoints. The project was funded by the European Defence Agency (EDA) in the Joint Investment Programme on Force Protection (JIP-FP). The SUMIRAD (SUM imaging radiometer) system, developed by DLR, is a fast radiometric imager and part of the SUM sensor suite. This paper presents the principle of the SUMIRAD system and its key components. Furthermore, the image processing is described, and imaging results from several measurement campaigns are presented. The overall SUM system and the individual subsystems are presented in more detail in separate papers during this conference.
The MICROSCOPE inertial sensor: qualification status
NASA Astrophysics Data System (ADS)
Santos Rodrigues, Manuel; Touboul, Pierre; Liorzou, Francoise; Bodoville, Guillaume
The payload of the MICROSCOPE space mission carries two pairs of test-masses, made of a platinum-rhodium alloy and a titanium alloy, that are used to perform the test of the Universality of free fall, i.e. of the Equivalence Principle (EP). These cylindrical test-masses are at the core of the inertial sensors used to perform the full drag-free and attitude control of the satellite. Based on electrostatic space accelerometers developed at ONERA, the payload has been designed with challenging technologies for the electronics and for the sensor core. Following a very specific development plan, the payload is currently in the qualification phase, being integrated after a long period of challenging high-accuracy production and metrology. The results obtained for the driving components of the expected performance will be addressed. In particular, the micrometric metrology of the instrument core, made of gold-coated silica, will be presented: the specific ultrasonic machining processes, optimized for this production, indeed achieve an accuracy of a few micrometers. Similar accuracy is obtained for the geometry of the test-masses, produced and controlled in collaboration with the PTB, the Physikalisch-Technische Bundesanstalt. This accurate geometry and the specifically selected shape are mandatory to balance the mass moments of inertia for gravity gradient rejection and to highly decouple the instrument measurement axes. The first results of the flight-model electronics will also be presented, demonstrating microvolt-level low noise and weak thermal sensitivity in good agreement with the requirements. Finally, the development status of the payload will be discussed, with emphasis on the coming milestones.
Goniochromatic and sparkle properties of effect pigmented samples in multidimensional configuration
NASA Astrophysics Data System (ADS)
Höpe, Andreas; Hauer, Kai-Olaf; Teichert, Sven; Hünerhoff, Dirk; Strothkämper, Christian
2015-03-01
The effects of goniochromatism and sparkle are gaining more and more interest for surface-refinement applications, driven by demanding requirements from branches as different as the automotive, cosmetics, printing, and packaging industries. The common background and intention in all of these implementations is improvement of the visual appearance of the related commercial products. Goniochromatic materials show strongly angular-dependent reflection characteristics and hence a color impression that depends on the effective spatial arrangement of illumination and observation relative to the surface of the artifact. Sparkle is a texture-related effect that gives a surface irradiated directionally, as by direct sunlight, a bright glittering appearance, similar to twinkling stars in the night sky. The prototype for this new effect is the Xirallic® pigment of MERCK KGaA, Germany. Under diffuse irradiation, as on a cloudy day, the same pigment shows a different visual effect called graininess (coarseness), which appears as a granular structure of the surface. Both effects were studied on specially manufactured samples of a dilution series in pigment concentration and a tonality series with carbon black. The experiments were carried out with the robot-based gonioreflectometer and integrating-sphere facilities at the Physikalisch-Technische Bundesanstalt (PTB) in multidimensional configurations of directional and diffuse irradiation. The research is part of the European Metrology Research Program (EMRP), a metrology-focused program of coordinated Research & Development (R&D) funded by the European Commission and participating countries within the European Association of National Metrology Institutes (EURAMET). More information and updated news concerning the project can be found on the xD-Reflect website http://www.xdreflect.eu/.
NASA Astrophysics Data System (ADS)
Cessateur, G.; Bolsée, D.; Pereira, N.; Sperfeld, P.; Pape, S.
2017-12-01
The availability of reference spectra for the Solar Spectral Irradiance (SSI) is important for solar physics, the study of planetary atmospheres, and climatology. The near-infrared (NIR) part of these spectra is of great interest owing to its major role in, for example, the Earth's radiative budget. Until recently, large unresolved discrepancies (up to 10%) were observed in the 1.6 μm region between space instruments, models, and ground-based measurements. We designed ground-based instrumentation for SSI measurements at the Top Of Atmosphere (TOA) through atmospheric NIR windows using the Bouguer-Langley technique. The main instrument is a double NIR spectroradiometer built by Bentham (UK) and radiometrically characterized at the Royal Belgian Institute for Space Aeronomy. It was absolutely calibrated against a high-temperature blackbody as primary standard for spectral irradiance at the Physikalisch-Technische Bundesanstalt (Germany). The PYR-ILIOS campaign, carried out from June to July 2016 at the Mauna Loa Observatory (Hawaii, USA, 3396 m a.s.l.), follows the four-month IRESPERAD campaign carried out in the summer of 2011 at the Izaña Atmospheric Observatory (Canary Islands, 2367 m a.s.l.). We present here the results of the 3-week PYR-ILIOS campaign and compare them with the ATLAS 3 spectrum as well as with recently reprocessed NIR solar spectra obtained with SOLAR/SOLSPEC on the ISS and SCIAMACHY on ENVISAT. The uncertainty budget of the PYR-ILIOS results will be discussed.
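The Bouguer-Langley technique named above rests on a simple relation: at constant atmospheric conditions, ln(I) measured at several solar air masses m follows ln(I) = ln(I0) - m·τ, so a straight-line fit extrapolated to m = 0 recovers the TOA irradiance I0. A minimal sketch with synthetic data (not PYR-ILIOS measurements):

```python
# A minimal sketch of the Bouguer-Langley extrapolation: a least-squares fit
# of ln(I) versus air mass, extrapolated to air mass zero, yields the
# top-of-atmosphere irradiance I0. Synthetic data, illustrative only.
import math

def langley_extrapolation(air_masses, irradiances):
    """Least-squares fit of ln(I) versus air mass; returns (I0_toa, tau)."""
    xs = air_masses
    ys = [math.log(i) for i in irradiances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope

# Synthetic clear-sky series with assumed I0 = 1.50 and optical depth 0.12.
i0_true, tau_true = 1.50, 0.12
ms = [1.5, 2.0, 2.5, 3.0, 4.0, 5.0]
irr = [i0_true * math.exp(-tau_true * m) for m in ms]
i0_est, tau_est = langley_extrapolation(ms, irr)
print(f"I0(TOA) = {i0_est:.3f}, tau = {tau_est:.3f}")
```

High-altitude sites such as Mauna Loa or Izaña are chosen precisely because the constant-τ assumption behind this fit holds well there.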
NASA Astrophysics Data System (ADS)
Mauerhofer, E.; Havenith, A.; Carasco, C.; Payan, E.; Kettler, J.; Ma, J. L.; Perot, B.
2013-04-01
The Forschungszentrum Jülich GmbH (FZJ), together with the Aachen University Rheinisch-Westfaelische Technische Hochschule (RWTH) and the French Alternative Energies and Atomic Energy Commission (CEA Cadarache), are involved in a cooperation aiming at characterizing toxic and reactive elements in radioactive waste packages by means of Prompt Gamma Neutron Activation Analysis (PGNAA) [1]. The French and German waste management agencies have indeed defined acceptability limits concerning these elements in view of their projected geological repositories. A first measurement campaign was performed in the new PGNAA facility MEDINA at FZJ to assess the capture gamma-ray signatures of some elements of interest in large samples, up to waste drums with a volume of 200 liters. MEDINA is the acronym for Multi Element Detection based on Instrumental Neutron Activation. This paper presents MCNP calculations of the MEDINA facility and a quantitative comparison between measurement and simulation. Passive gamma-ray spectra acquired with a high-purity germanium detector and calibration sources are used to qualify the numerical model of the crystal. Active PGNAA spectra of a sodium chloride sample measured with MEDINA then allow the global numerical model of the measurement cell to be qualified; chlorine indeed constitutes a usual reference with reliable capture gamma-ray production data. The goal is to characterize the entire simulation protocol (geometrical model, nuclear data, and postprocessing tools), which will be used for the interpretation of current measurements, the extrapolation of performance to other types of waste packages or other applications, and the study of future PGNAA facilities.
NASA Astrophysics Data System (ADS)
Guéna, J.; Weyers, S.; Abgrall, M.; Grebing, C.; Gerginov, V.; Rosenbusch, P.; Bize, S.; Lipphardt, B.; Denker, H.; Quintin, N.; Raupach, S. M. F.; Nicolodi, D.; Stefani, F.; Chiodo, N.; Koke, S.; Kuhl, A.; Wiotte, F.; Meynadier, F.; Camisard, E.; Chardonnet, C.; Le Coq, Y.; Lours, M.; Santarelli, G.; Amy-Klein, A.; Le Targat, R.; Lopez, O.; Pottie, P. E.; Grosche, G.
2017-06-01
We report on the first comparison of distant caesium fountain primary frequency standards (PFSs) via an optical fiber link. The 1415 km long optical link connects two PFSs at LNE-SYRTE (Laboratoire National de métrologie et d'Essais - SYstème de Références Temps-Espace) in Paris (France) with two at PTB (Physikalisch-Technische Bundesanstalt) in Braunschweig (Germany). For a long time, these PFSs have been major contributors to the accuracy of International Atomic Time (TAI), with stated accuracies of around 3 × 10^-16. They have also been the references for a number of absolute measurements of clock transition frequencies in various optical frequency standards in view of a future redefinition of the second. The phase-coherent optical frequency transfer via a stabilized telecom fiber link enables far better resolution than any other means of frequency transfer based on satellite links. The agreement for each pair of distant fountains compared is well within the combined uncertainty of a few parts in 10^16 for all the comparisons, which fully supports the stated PFS uncertainties. The comparison also includes a rubidium fountain frequency standard participating in the steering of TAI and enables a new absolute determination of the 87Rb ground-state hyperfine transition frequency with an uncertainty of 3.1 × 10^-16. This paper is dedicated to the memory of André Clairon, who passed away on 24 December 2015, for his pioneering and long-lasting efforts in atomic fountains. He also pioneered optical links from as early as 1997.
Absolute Radiometric Calibration of EUNIS-06
NASA Technical Reports Server (NTRS)
Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.
2007-01-01
The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a sounding-rocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's absolute radiometric response over its longwave bandpass of 300-370 Å. The measurements were done at the Rutherford Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding-rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an absolute accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's absolute spectral sensitivity to ±25%, considering all sources of error, and demonstrate that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations which relied on combining measurements of individual optical components, and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations were made during the EUNIS-06 flight by SOHO/CDS and EIT that will allow re-calibrations of those instruments as well. In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.
Argan woodlands in South Morocco as an area of conflict between degradation and sustainable land use
NASA Astrophysics Data System (ADS)
Kirchhoff, Mario; Kagermeier, Andreas; Ries, Johannes B.
2016-04-01
The argan woodlands are endemic to South Morocco and prone to degradation through expanding and intensifying agriculture and overgrazing. Unvegetated areas extend further as soil and vegetation degrade; there, infiltration is lower than in vegetated areas, while runoff and soil erosion increase. The sale of the highly valuable oil, obtained from the seeds of the argan tree, can be seen as an economic alternative for the region and a chance of survival for the argan woodlands. With the introduction of women's cooperatives for the production and sale of the oil, the Gesellschaft für Technische Zusammenarbeit (GTZ, German Agency for Technical Cooperation) hoped to halt argan degradation between 1995 and 2002. The effects of this approach are to be studied in a proposed DFG project. The erosion gradient between soils under canopy cover and inter-tree areas at varying stages of degradation will be at the center of the analysis. Insight into on-site and off-site degradation is to be gained through the measurement of runoff and erosion rates, which lead to rill and gully erosion downslope. Measurements of soil chemical and physical properties might also help indicate when an argan woodland can be classified as natural. The effects of the newfound value of the argan woodlands among the local population will also be studied, with a focus on regional tourism and a possible reduction of grazing pressure. Sustainable soil management in combination with the needs of the local population is essential for sustainable land use in the region.
[Helsinki declaration on patient safety in anaesthesiology - part 10: infection control/hygiene].
Kerwat, Klaus; Wulf, Hinnerk
2013-11-01
There is a plethora of laws, regulations, guidelines and recommendations relating to infection control and hygiene. Major issues are the prevention of nosocomial infections, staff protection and environmental protection. Of the highest relevance are the infection control law [Infektionsschutzgesetz (IfSG)], the hygiene regulations of the German federal states [Hygieneverordnungen der Bundesländer], the German technical rules for biological materials [Technische Regel Biologische Arbeitsstoffe 250 (TRBA 250)] on biological materials in health-care and welfare work [Biologische Arbeitsstoffe im Gesundheitswesen und in der Wohlfahrtspflege], the guidelines for hospital hygiene and infection prevention of the commission for hospital hygiene and infection prevention at the Robert Koch Institute [Richtlinie für Krankenhaushygiene und Infektionsprävention von der Kommission für Krankenhaushygiene und Infektionsprävention (KRINKO) beim Robert Koch-Institut], and the recommendations of the commission on anti-infectives, resistance and therapy at the Robert Koch Institute [Empfehlungen der Kommission Antiinfektiva, Resistenz und Therapie (ART) beim Robert Koch-Institut]. Of subordinate importance are, e.g., the recommendations of the German Society of Anaesthesiology and Intensive Care Medicine (DGAI). It is practically impossible for an anesthesiologist working in a hospital to know all of these laws, regulations, guidelines and recommendations, nor would this be reasonable. It is therefore necessary to distinguish the relevant from the irrelevant; checklists can be useful here. The most important and effective individual measure in hospital hygiene is and remains hand hygiene, as propagated in the "clean hands" campaign, irrespective of all laws, regulations, guidelines and recommendations. © Georg Thieme Verlag Stuttgart · New York.
Analysis of Radon Decay Data and its Implications for Physics, Geophysics, and Solar Physics.
NASA Astrophysics Data System (ADS)
Sturrock, Peter A.; Fischbach, E.; Jenkins, J. H.; Steinitz, G.
2012-05-01
We present an analysis of about 29,000 measurements of gamma radiation associated with the decay of radon in a sealed container at the Geological Survey of Israel (GSI) Laboratory in Jerusalem between January 28, 2007 and May 10, 2010. These measurements exhibit strong variations with time of year and time of day, which may be due in part to environmental influences. However, time-series analysis also reveals a number of periodicities, notably at 11.2 year^-1 and 12.5 year^-1, which we have found in other nuclear-decay data, including data acquired at Brookhaven National Laboratory and the Physikalisch-Technische Bundesanstalt, and which we attribute to a solar influence. A distinct property of the GSI results is that the annual oscillation is much stronger in daytime data than in nighttime data, while the opposite is true for all other oscillations. We speculate on possible interpretations of this curious result. Solar neutrinos remain our prime suspect as the agent responsible for beta-decay anomalies. These results have implications for physics (that nuclear decay rates are not constant and may be stimulated); for geophysics (that the variability of radon measurements cannot be ascribed entirely to atmospheric and solid-earth processes); and for solar physics (that the Sun contains an inner tachocline, separating a slowly rotating core from the radiative zone, with properties similar to those of the outer tachocline separating the radiative zone from the convection zone). This work was supported by DOE grant DE-AC-02-76ER071428.
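The periodicity search described above can be illustrated with a simple discrete power spectrum over trial frequencies in cycles per year; a peak near 11.2 yr^-1 would flag the kind of oscillation reported. All data below are synthetic, not the GSI measurements.

```python
import numpy as np

# Synthetic decay-rate series: three years of daily samples with a small
# oscillation at 11.2 cycles/year plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 3, 1 / 365)                     # time in years
signal = 1.0 + 0.01 * np.sin(2 * np.pi * 11.2 * t) \
             + 0.005 * rng.standard_normal(t.size)

# Discrete power spectrum on a fine frequency grid (yr^-1).
freqs = np.linspace(1, 20, 2000)
detrended = signal - signal.mean()
power = np.array([
    np.abs(np.sum(detrended * np.exp(-2j * np.pi * f * t))) ** 2
    for f in freqs
]) / t.size

peak = freqs[np.argmax(power)]
print(f"strongest periodicity near {peak:.1f} yr^-1")
```

For unevenly sampled data a Lomb-Scargle periodogram would be the standard refinement of this sketch.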
RECOGNIZING INFANTS' EMOTIONAL EXPRESSIONS: ARE ADOLESCENTS LESS SENSITIVE TO INFANTS' CUES?
Niessen, Anke; Konrad, Kerstin; Dahmen, Brigitte; Herpertz-Dahlmann, Beate; Firk, Christine
2017-07-01
Previous studies have shown that adolescent mothers interact less sensitively with their infants than do adult mothers. This difference might be due to developmental difficulties in the recognition of infants' emotional states in adolescents. Therefore, the aim of the current study was to explore differences in the recognition of infant signals between nonparous adolescent girls and boys as compared to female and male adults. To this end, we examined 54 childless adolescents and 54 childless adults (50% female). Participants were shown a series of 20 short videos of infants aged 3 to 6 months presenting different emotional states ranging from very distressed to very happy. In addition, participants were asked to report their own parental experiences using the German version, Fragebogen zum erinnerten elterlichen Erziehungsverhalten (J. Schumacher, M. Eisemann, & E. Brähler, ), of the Egna Minnen Befräffande Uppfostran (Own Memories of Parental Rearing Experiences in Childhood; C. Perris, L. Jacobsson, H. Lindstrom, L. von Knorring, & H. Perris, ). Adolescents rated distressed infants as more distressed than did the adults. Furthermore, female participants rated the very distressed infants as more distressed than did male participants. These data suggest that adolescents, in general, are not impaired in recognizing infant emotional states, as compared to adults. Thus, we suggest that more extreme ratings of infant signals of discomfort together with immature sociocognitive regulation processes during adolescence might contribute to reduced sensitivity observed in adolescent mothers. © 2017 Michigan Association for Infant Mental Health.
2009-01-01
Background Parental rearing behavior is a significant etiological factor in vulnerability to psychopathology and has long been a subject of clinical research. For this purpose, instruments that economically assess recalled parental rearing behavior in clinical practice are needed. Therefore, a short German instrument for the assessment of recalled parental rearing behavior, the Fragebogen zum erinnerten elterlichen Erziehungsverhalten (FEE) [Recalled Parental Rearing Behavior questionnaire], was psychometrically evaluated. Methods The questionnaire was evaluated in a representative population sample (N = 2,948) in Germany comprising 44.2% male and 55.8% female persons with a mean age of M = 47.35 (SD = 17.10, range = 18-92). For the content validation of the FEE, the Life Satisfaction Questionnaire (FLZ) and the Inventory of Interpersonal Problems (IIP) were also completed by the participants. Results The FEE scales yielded good to satisfactory internal consistency and split-half reliability. Its three factors (rejection/punishment, emotional warmth, control/overprotection) correlated positively with most areas of life satisfaction. Furthermore, positive associations between interpersonal problems and parental rejection and control could be identified. Conclusion The FEE is a short, reliable and valid instrument that can be applied in clinical practice. In addition, the data demonstrated an association between recalled parental rearing behavior, life satisfaction and interpersonal problems consistent with the literature. Finally, specific problems with the retrospective assessment of parental rearing behavior are also addressed. PMID:19267894
Jahn-Bassler, Karin; Bauer, Wolfgang Michael; Karlhofer, Franz; Vossen, Matthias G; Stingl, Georg
2017-01-01
Severe forms of alopecia areata (AA) in childhood are therapeutically challenging owing to the limited treatment options. Systemic high-dose glucocorticoids show the fastest response rates, but relapse occurs after discontinuation. Long-term high-dose use is not advisable because of the expected side effects. Continuous steroid maintenance therapy below the Cushing-threshold dose following pulse therapy might suppress disease activity over the longer term without side effects. In an open observational study, 13 children with severe forms of AA were enrolled. Seven children had AA totalis/universalis; six had multifocal AA involving more than 50% of the scalp. The treatment regimen started with prednisolone at 2 mg/kg body weight and was tapered within nine weeks to a maintenance dose below the individual Cushing threshold. Follow-up ranged from one to three years. We observed complete hair regrowth in 62% of all cases. The mean time to response was 6.6 weeks, and the response was maintained under maintenance therapy throughout the entire observation period. The only side effects observed were weight gain (1-3 kg) in all treated patients and mild steroid acne in 23% of cases. Combined high-/low-dose systemic glucocorticoid therapy with prednisolone showed a high, durable response rate without significant side effects. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Märtens, Diane; Range, Natasha; Günnewich, Nils; Gruber, Nicola; Schmidt, Stefan
Background: This observational study is the first to describe treatment of irritable bowel syndrome (IBS) with a combined homeopathic-phytotherapeutic preparation. Methods: The goal of the six-week therapy with Magen-Darm-Entoxin N® was to reduce IBS symptoms and improve quality of life. Outcome criteria were changes in the Irritable Bowel Syndrome - Severity Scoring System (IBS-SSS) and the Irritable Bowel Syndrome - Quality-of-Life Scale (IBS-QoL). Patients (N = 41; age 44.0 ± 15.74 years) were recruited in equal parts from a general practice (N = 20) and a naturopathic practice (N = 21). Results: The IBS-QoL score decreased significantly (pre: 35.9 ± 16.3; post: 20.1 ± 13.4; t = 8.504; p < 0.001), with an effect size of 1.34 (Cohen's d). The IBS-SSS score also decreased significantly (pre: 239.4 ± 83.4; post: 123.7 ± 80.9; t = 7.825; p < 0.001), with an effect size of d = 1.24. Side effects and interactions were minimal, and no significant differences between the two practices were found. Conclusions: Magen-Darm-Entoxin N® is a safe and reasonable treatment option for IBS. However, randomized controlled trials should follow to support the specificity of the results of this observational study. © 2017 S. Karger GmbH, Freiburg.
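The paired statistics quoted in the abstract above (a paired t value and Cohen's d) can be sketched as follows, on synthetic pre/post scores. Here d is taken as the mean pre-post difference divided by the standard deviation of the differences; the study may have used a different d variant, so the numbers are purely illustrative.

```python
import numpy as np
from math import sqrt

# Synthetic pre/post scores for n = 41 patients (placeholder values,
# loosely shaped like the reported means; not the study data).
rng = np.random.default_rng(1)
pre = rng.normal(35.9, 16.3, 41)
post = pre - rng.normal(15.8, 10.0, 41)   # synthetic improvement

diff = pre - post
t_stat = diff.mean() / (diff.std(ddof=1) / sqrt(diff.size))   # paired t
cohens_d = diff.mean() / diff.std(ddof=1)                     # d for paired data

print(f"t = {t_stat:.2f}, d = {cohens_d:.2f}")
```

Note the identity t = d · √n for this d variant, which is a quick consistency check on reported paired statistics.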
Radiosensitization by BRAF inhibitors.
Strobel, Sophia Boyoung; Pätzold, Sylvie; Zimmer, Lisa; Jensen, Alexandra; Enk, Alexander; Hassel, Jessica Cecile
2017-07-01
Increased skin toxicity during combination therapy with BRAF inhibitors and radiotherapy has recently been reported with growing frequency in the literature. We report on seven melanoma patients with unresectable stage III or IV disease who received combined treatment with radiotherapy and a BRAF inhibitor. In all patients, the combination therapy achieved a good local response. Severe radiodermatitis (CTCAE grade 3 or 4) was observed in only two patients. In these patients, both of whom received vemurafenib, radiodermatitis appeared after one and two weeks, respectively, and led to interruption of the BRAF-inhibitor treatment. The cumulative dose at the onset of radiodermatitis was 10 Gy and 35 Gy, respectively. In all other vemurafenib patients, only mild reactions (radiodermatitis CTCAE grade 2) were diagnosed; in the dabrafenib patient, CTCAE grade 1. In one patient, recall dermatitis was diagnosed 14 days after completion of radiotherapy with a cumulative dose of 30 Gy. Severe skin toxicity under BRAF inhibition is infrequent and usually well manageable. The combination therapy should therefore remain a treatment option for aggressively growing melanomas. Although there is an increased risk of skin toxicity under combined radiotherapy and BRAF inhibitors, it is well tolerated by most patients. Sequential rather than concurrent treatment does not appear to prevent toxicity reactions. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Towards a content-oriented theory of learning and teaching biological evolution
NASA Astrophysics Data System (ADS)
Wallin, Anita
The purpose of this study (for an overview, see Fig. 9.1) was to investigate how upper-secondary students develop an understanding of the theory of biological evolution. Starting from students' preconceptions, teaching sequences were developed and three different teaching experiments were carried out in a cyclical process. Students' knowledge was probed before, during, and after the teaching sequences by means of written tests, interviews, and small-group discussions. About 80% of the students held alternative conceptions of evolution before instruction, and in the post-test about 75% reached a scientific level. The students' reasoning in the various tests was carefully analyzed with respect to their preconceptions, the conceptual structure of evolutionary theory, and the goals of instruction. This yielded insights into the demands of teaching and learning that challenge students and teachers when they begin to learn or teach evolutionary biology. An important result was that understanding the existing variation within a population is the key to understanding natural selection. The results are summarized in a content-oriented theory consisting of three different aspects: 1) content-specific aspects unique to each scientific field; 2) aspects concerning the nature of science; and 3) general aspects. This theory can be tested and further developed in new teaching experiments.
Hoffmann, Julia; Wölfle, Ute; Schempp, Christoph M; Casetti, Federica
2016-09-01
The rhizome of Potentilla officinalis (PO) is rich in tannins and is traditionally used for the topical treatment of inflammation of the skin and mucous membranes. The aim of the present work was to confirm the anti-inflammatory properties of PO by means of a UV erythema test and a clinical application study in atopic skin. The anti-inflammatory effect of a PO extract (standardized to 2% dry matter) was investigated in a prospective, randomized, placebo-controlled, double-blind UV erythema study with 40 healthy adults, in comparison with 1% hydrocortisone acetate. In a prospective, uncontrolled study, the efficacy and tolerability of the 2% PO cream were assessed in twelve adults and twelve children with atopic skin after two weeks of application in a defined test area, using a partial SCORAD. In addition, skin redness in the test area was measured photometrically. In the UV erythema test, the PO cream showed a significant reduction of the erythema index compared with the vehicle. The anti-inflammatory effect of the verum was equivalent to that of the 1% hydrocortisone acetate cream. The clinical study in atopic patients showed a significant decrease of the partial SCORAD and of erythema in the test area. No intolerance reactions were observed. PO as a 2% preparation has anti-inflammatory properties and is effective and well tolerated on atopic skin. © 2016 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Krauss, A; Kapsch, R-P
2018-02-06
For the ionometric determination of the absorbed dose to water, D_w, in high-energy electron beams from a clinical accelerator, beam quality dependent correction factors, k_Q, are required. By using a water calorimeter, these factors can be determined experimentally and potentially with lower standard uncertainties than those of the calculated k_Q factors, which are tabulated in various dosimetry protocols. However, one of the challenges of water calorimetry in electron beams is the small measurement depths in water, together with the steep dose gradients present especially at lower energies. In this investigation, water calorimetry was implemented in electron beams to determine k_Q factors for different types of cylindrical and plane-parallel ionization chambers (NE2561, NE2571, FC65-G, TM34001) in 10 cm × 10 cm electron beams from 6 MeV to 20 MeV (corresponding beam quality index R_50 ranging from 1.9 cm to 7.5 cm). The measurements were carried out using the linear accelerator facility of the Physikalisch-Technische Bundesanstalt. Relative standard uncertainties for the k_Q factors between 0.50% for the 20 MeV beam and 0.75% for the 6 MeV beam were achieved. For electron energies above 8 MeV, general agreement was found between the relative electron energy dependencies of the measured k_Q factors and those derived from the AAPM TG-51 protocol and recent Monte Carlo-based studies, as well as those from other experimental investigations. However, towards lower energies, discrepancies of up to 2.0% occurred for the k_Q factors of the TM34001 and the NE2571 chambers.
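The calorimetric route to k_Q sketched above rests on the standard ionometric formalism D_w = M · N_D,w · k_Q: since the water calorimeter supplies D_w independently, k_Q follows as the ratio D_w / (M · N_D,w). The numbers below are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of a calorimetric k_Q determination (illustrative only).
# M is the fully corrected chamber reading, n_dw the calibration coefficient
# obtained in the Co-60 reference beam.

def k_q(d_w_calorimeter: float, charge_reading: float, n_dw: float) -> float:
    """k_Q = D_w(calorimeter) / (M * N_D,w)."""
    return d_w_calorimeter / (charge_reading * n_dw)

# Hypothetical example: 1.000 Gy calorimetric dose, 18.50 nC corrected
# reading, N_D,w = 53.7 mGy/nC.
print(f"k_Q = {k_q(1.000, 18.50e-9, 53.7e6):.4f}")
```

The quoted relative standard uncertainties (0.50% to 0.75%) would then combine the calorimetric dose uncertainty with that of the chamber reading and calibration coefficient.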
NASA Astrophysics Data System (ADS)
Wagner, Daniela M.; Hüttenrauch, Petra; Anton, Mathias; von Voigts-Rhetz, Philip; Zink, Klemens; Wolff, Hendrik A.
2017-07-01
The Physikalisch-Technische Bundesanstalt has established a secondary standard measurement system for the dose to water, D_W, based on alanine/ESR (Anton et al 2013 Phys. Med. Biol. 58 3259-82). The aim of this study was to test the established measurement system for out-of-field measurements in patients with breast cancer. A set of five alanine pellets was affixed to the skin of each patient at the contralateral breast, beginning at the sternum and extending over the mammilla to the distal surface. Over 28 fractions of 2.2 Gy per fraction, the accumulated dose was measured in four patients. A cone beam computed tomography (CBCT) scan was acquired for setup purposes before every treatment. The reference CT dataset was registered rigidly and deformably to the CBCT datasets of the 28 fractions. To take the actual alanine pellet position into account, the dose distribution was calculated for every fraction using the Acuros XB algorithm. The results of the ESR measurements were compared to the calculated doses. The maximum dose, measured at the sternum, was 19.9 Gy ± 0.4 Gy, decreasing to 6.8 Gy ± 0.2 Gy at the mammilla and 4.5 Gy ± 0.1 Gy at the distal surface of the contralateral breast. The absolute differences between the calculated and measured doses ranged from -1.9 Gy to 0.9 Gy. No systematic error could be seen. It was possible to achieve a combined standard uncertainty of 1.63% for D_W = 5 Gy for the measured dose. The alanine/ESR method is feasible for in vivo measurements.
NASA Astrophysics Data System (ADS)
Taibi, S.; Meddi, M.; Mahé, G.; Assani, A.
2017-01-01
This work aims, as a first step, to analyze rainfall variability in Northern Algeria, in particular extreme events, during the period from 1940 to 2010. Analysis of annual rainfall shows that stations in the northwest record a significant decrease in rainfall since the 1970s. Frequencies of rainy days for each percentile (5th, 10th, 25th, 50th, 75th, 90th, 95th, and 99th) and each rainfall interval class (1-5, 5-10, 10-20, 20-50, and ≥50 mm) do not show a significant change in the evolution of daily rainfall. The Tenes station is the only one to show a significant decrease in the frequency of rainy days up to the 75th percentile and for the 10-20-mm interval class. There is no significant change in the temporal evolution of extreme events in the 90th, 95th, and 99th percentiles. The relationships between rainfall variability and general atmospheric circulation indices for interannual and extreme event variability are moderately influenced by the El Niño-Southern Oscillation and Mediterranean Oscillation. Significant correlations are observed between the Southern Oscillation Index and annual rainfall in the northwestern part of the study area, which is likely linked with the decrease in rainfall in this region. Seasonal rainfall in Northern Algeria is affected by the Mediterranean Oscillation and North Atlantic Oscillation in the west. The ENSEMBLES regional climate models (RCMs) are assessed using the bias method to test their ability to reproduce rainfall variability at different time scales. The Centre National de Recherches Météorologiques (CNRM), Czech Hydrometeorological Institute (CHMI), Eidgenössische Technische Hochschule Zürich (ETHZ), and Forschungszentrum Geesthacht (GKSS) models yield the least biased results.
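The bias assessment of the ENSEMBLES regional climate models mentioned above can be sketched as a simple comparison of simulated against observed rainfall. The metric shown is the relative bias of total rainfall; the station values and model outputs below are synthetic placeholders, not the study's data.

```python
import numpy as np

# Hypothetical observed monthly rainfall at one station (mm).
observed = np.array([45.0, 60.0, 30.0, 12.0, 5.0, 80.0])

# Hypothetical RCM outputs for the same months (mm); names only echo
# two of the models cited in the abstract.
models = {
    "CNRM": np.array([43.0, 58.0, 33.0, 13.0, 6.0, 76.0]),
    "ETHZ": np.array([50.0, 70.0, 25.0, 10.0, 4.0, 95.0]),
}

# Relative bias: (simulated total - observed total) / observed total.
for name, sim in models.items():
    rel_bias = (sim.sum() - observed.sum()) / observed.sum()
    print(f"{name}: relative bias = {rel_bias:+.1%}")
```

The least-biased model under such a metric is simply the one whose |relative bias| is smallest, evaluated per station and time scale.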
Renner, Franziska
2016-09-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
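The agreement criterion used above, that experiment and simulation "coincide" within their uncertainties, can be made explicit: the difference is compared against the combined standard uncertainty of the two results. The dose values below are placeholders; only the quoted 0.7% and 1.0% uncertainties are taken from the abstract.

```python
from math import sqrt

# Illustrative agreement check between a benchmark experiment and a
# Monte Carlo simulation (doses are hypothetical, in Gy).
d_exp, u_exp = 1.000, 0.007    # experimental dose, ~0.7 % uncertainty
d_mc, u_mc = 1.005, 0.010      # simulated dose, ~1.0 % uncertainty

u_comb = sqrt(u_exp**2 + u_mc**2)      # combined standard uncertainty
agree = abs(d_mc - d_exp) <= u_comb    # coincide within 1 combined sigma?

print(f"combined u = {u_comb:.4f} Gy, agreement: {agree}")
```

A stricter test would use an expanded uncertainty (coverage factor k = 2), but the one-sigma comparison matches the wording of the abstract.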
Troussel, Ph; Villette, B; Emprin, B; Oudot, G; Tassin, V; Bridou, F; Delmotte, F; Krumrey, M
2014-01-01
CEA implemented an absolutely calibrated broadband soft X-ray spectrometer called DMX on the OMEGA laser facility at the Laboratory for Laser Energetics (LLE) in 1999 to measure the radiant power and spectral distribution of the radiation of the Au plasma. The DMX spectrometer is composed of 20 channels covering the spectral range from 50 eV to 20 keV. The channels for energies below 1.5 keV combine a mirror and a filter with a coaxial photo-emissive detector. For the channels above 5 keV, the photo-emissive detector is replaced by a conductive detector. The intermediate energy channels (1.5 keV < photon energy < 5 keV) use only a filter and a coaxial detector. A further improvement of DMX consists in flat-response X-ray channels for a precise absolute measurement of the photon flux in the photon energy range from 0.1 keV to 6 keV. Such channels are equipped with a filter, a multilayer mirror (MLM), and a coaxial detector. As an example, we present the development of a channel for the gold M emission lines in the photon energy range from 2 keV to 4 keV, which has been successfully used on the OMEGA laser facility. The results of the radiant power measurements with the new MLM channel and with the usual channel composed of a thin titanium filter and a coaxial detector (without mirror) are compared. All elements of the channel were calibrated in the laboratory of the Physikalisch-Technische Bundesanstalt, Germany's national metrology institute, at the synchrotron radiation facility BESSY II in Berlin, using dedicated, well-established and validated methods.
NASA Astrophysics Data System (ADS)
Cessateur, G.; Bolsée, D.; Pereira, N.; Sperfeld, P.; Pape, S.
2016-12-01
The availability of reference spectra for the Solar Spectral Irradiance (SSI) is of utmost importance for solar physics, the study of planetary atmospheres, and climatology. The near-infrared (NIR) part of these spectra is of great interest because of its major role in, for example, the Earth's radiative budget. However, some large and unresolved discrepancies (up to 10 %) are observed in the 1.6 μm region between recent measurements from space instruments and modelling. We developed ground-based instrumentation dedicated to top-of-atmosphere (TOA) SSI measurements, obtained through atmospheric NIR windows using the Bouguer-Langley technique. The instruments are a double spectroradiometer designed by Bentham (UK) and a 6-channel NIR filter radiometer. Both were radiometrically characterized at the Royal Belgian Institute for Space Aeronomy. Subsequently, they were calibrated against a high-temperature blackbody as primary standard for spectral irradiance at the Physikalisch-Technische Bundesanstalt (Germany). The PYR-ILIOS campaign, carried out in June and July 2016 at the Mauna Loa Observatory (Hawaii, USA, 3396 m a.s.l.), is a follow-up to the four-month IRESPERAD campaign carried out in 2011 at the Izaña Atmospheric Observatory (Canary Islands, 2367 m a.s.l.). We present here the results of the three-week PYR-ILIOS campaign and compare them with the outcome of IRESPERAD as well as with other ground-based, airborne and space experiments. The standard uncertainty of the PYR-ILIOS results is also discussed.
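The Bouguer-Langley technique mentioned above extrapolates ground-based measurements to zero airmass: the logarithm of the signal is fitted linearly against airmass, and the intercept gives the TOA signal. A minimal sketch with synthetic, made-up values:

```python
import numpy as np

def langley_extrapolation(airmass, signal):
    """Fit ln(signal) vs. airmass; the intercept at airmass 0 gives the
    top-of-atmosphere (TOA) signal, and the negative slope the optical depth."""
    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    return np.exp(intercept), -slope  # TOA signal, optical depth tau

# Synthetic morning run (illustrative numbers): S = S0 * exp(-tau * m)
m = np.linspace(1.5, 5.0, 20)
s0_true, tau_true = 100.0, 0.05
toa, tau = langley_extrapolation(m, s0_true * np.exp(-tau_true * m))
```

With noise-free synthetic data the fit recovers the assumed TOA value exactly; with real data, the quality of the linear fit over the airmass range limits the achievable uncertainty.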
NASA Astrophysics Data System (ADS)
Krauss, A.; Kapsch, R.-P.
2018-02-01
For the ionometric determination of the absorbed dose to water, D w, in high-energy electron beams from a clinical accelerator, beam quality dependent correction factors, k Q, are required. By using a water calorimeter, these factors can be determined experimentally and potentially with lower standard uncertainties than those of the calculated k Q factors, which are tabulated in various dosimetry protocols. However, one of the challenges of water calorimetry in electron beams is the small measurement depths in water, together with the steep dose gradients present especially at lower energies. In this investigation, water calorimetry was implemented in electron beams to determine k Q factors for different types of cylindrical and plane-parallel ionization chambers (NE2561, NE2571, FC65-G, TM34001) in 10 cm × 10 cm electron beams from 6 MeV to 20 MeV (corresponding beam quality index R 50 ranging from 1.9 cm to 7.5 cm). The measurements were carried out using the linear accelerator facility of the Physikalisch-Technische Bundesanstalt. Relative standard uncertainties for the k Q factors between 0.50% for the 20 MeV beam and 0.75% for the 6 MeV beam were achieved. For electron energies above 8 MeV, general agreement was found between the relative electron energy dependencies of the k Q factors measured and those derived from the AAPM TG-51 protocol and recent Monte Carlo-based studies, as well as those from other experimental investigations. However, towards lower energies, discrepancies of up to 2.0% occurred for the k Q factors of the TM34001 and the NE2571 chamber.
Can near-peer medical students effectively teach a new curriculum in physical examination?
2013-01-01
Background Students in German medical schools frequently complain that the subject ‘clinical examination’ is not taught in a satisfying manner due to time constraints and lack of personnel resources. While the effectiveness and efficiency of practice-oriented teaching in small groups using near-peer teaching has been shown, it is rarely used in German medical schools. We investigated whether adding a new near-peer teaching course developed with student input plus patient examination under supervision in small groups improves basic clinical examination skills in third year medical students compared to a traditional clinical examination course alone. Methods Third year medical students registered for the mandatory curricular clinical examination course at the medical faculty of the Technische Universität München were invited to participate in a randomised trial with blinded outcome assessment. Students were randomised to the control group participating in the established curricular physical examination course or to the intervention group, which received additional near-peer teaching for the same content. The learning success was verified by a voluntary objective structured clinical examination (OSCE). Results A total of 84 students were randomised and 53 (63%) participated in the final OSCE. Students in the control group scored a median of 57% (25th percentile 47%, 75th percentile 61%) of the maximum possible total points of the OSCE compared to 77% (73%, 80%; p < 0.001) for students in the intervention group. Only two students in the intervention group received a lower score than the best student in the control group. Conclusion Adding a near-peer teaching course to the routine course significantly improved the clinical examination skills of medical students in an efficient manner in the context of a resource-constrained setting. PMID:24325639
NASA Astrophysics Data System (ADS)
Sanponpute, Tassanai; Meesaplak, Apichaya; Herrmann, Konrad; Menelao, Febo
2009-01-01
The bilateral comparison APMP.M.H-S2 of hardness measurement for Rockwell scales A and B was arranged by the National Institute of Metrology of Thailand, NIMT, as the pilot laboratory, comparing with Physikalisch-Technische Bundesanstalt of Germany, PTB. The objective of this comparison was to confirm the calibration and measurement capabilities of NIMT in hardness measurement. The period of measurement covered March to August 2009. There were two sets of artefacts: scale A artefact set and scale B artefact set. The scale A artefact set consisted of seven hardness blocks: 35 HRA, 40 HRA, 55 HRA, 60 HRA, 70 HRA, 80 HRA, 85 HRA. The artefact set for scale B consisted of nine hardness blocks: 25 HRB, 30 HRB, 40 HRB, 50 HRB, 60 HRB, 70 HRB, 80 HRB, 90 HRB, 100 HRB. Laboratories had to ensure that the primary Rockwell hardness machines passed the verification process according to ISO 6508-3. Then participants measured the hardness value by making ten indentations in a designated area of each artefact block. Hardness measurement results and uncertainty budget were then reported to the pilot laboratory and were used to compute the degrees of equivalence in terms of the Comparison Reference Value (CRV) and En ratio. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by APMP, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
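The degrees of equivalence in such comparisons are commonly expressed through the En number, the difference between a laboratory's value and the reference value divided by the root-sum-square of the expanded (k=2) uncertainties. A minimal illustration with hypothetical values, not taken from the report:

```python
import math

def en_number(x_lab: float, U_lab: float, x_ref: float, U_ref: float) -> float:
    """Degree-of-equivalence E_n number: laboratory value vs. comparison
    reference value, with expanded (k=2) uncertainties U.
    |E_n| <= 1 indicates agreement within the stated uncertainties."""
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Hypothetical HRB block (numbers are illustrative only)
en = en_number(x_lab=60.3, U_lab=0.4, x_ref=60.1, U_ref=0.3)
```

Here a 0.2 HRB difference against a combined expanded uncertainty of 0.5 HRB yields En = 0.4, i.e. agreement.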
NASA Astrophysics Data System (ADS)
Viertler, Franz; Hajek, Manfred
2015-05-01
To overcome the challenge of helicopter flight in degraded visual environments, current research considers head-mounted displays (HMDs) with 3D-conformal (scene-linked) visual cues as the most promising display technology. For pilot-in-the-loop simulations with HMDs, a highly accurate registration of the augmented visual system is required. In rotorcraft flight simulators the outside visual cues are usually provided by a dome projection system, since a wide field-of-view (e.g. horizontally > 200° and vertically > 80°) is required, which can hardly be achieved with collimated viewing systems. However, the focus of optical see-through HMDs mostly does not match the distance from the pilot's eye-point position to the curved screen, which also depends on head motion. Hence, a dynamic vergence correction has been implemented to avoid binocular disparity. In addition, the parallax error induced by even small translational head motions, measured with a head-tracking system, has to be corrected with respect to the projection screen. For this purpose, two options are presented. The correction can be achieved by rendering the view with yaw and pitch offset angles dependent on the deviation of the head position from the design eye-point of the spherical projection system. Alternatively, it can be solved by implementing a dynamic eye-point in the multi-channel projection system for the outside visual cues. Both options have been investigated for the integration of a binocular HMD into the Rotorcraft Simulation Environment (ROSIE) at the Technische Universitaet Muenchen. Pros and cons of both options with regard to integration issues and usability in flight simulations are discussed.
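The first correction option, rendering with yaw and pitch offsets derived from the tracked head position relative to a point on the dome, can be sketched as follows. The coordinate convention (x forward, y left, z up) and all numbers are assumptions for illustration, not taken from the paper:

```python
import math

def view_offset_angles(head_pos, screen_point):
    """Yaw/pitch angles (radians) under which a point on the spherical screen
    is seen from a head position displaced from the design eye-point at the
    dome centre. Axes: x forward, y left, z up (assumed convention)."""
    hx, hy, hz = head_pos
    px, py, pz = screen_point
    dx, dy, dz = px - hx, py - hy, pz - hz   # line of sight from actual eye-point
    yaw = math.atan2(dy, dx)
    pitch = math.atan2(dz, math.hypot(dx, dy))
    return yaw, pitch

# Point straight ahead on a 3 m dome, head shifted 0.1 m to the left:
yaw, pitch = view_offset_angles((0.0, 0.1, 0.0), (3.0, 0.0, 0.0))
```

Even a 10 cm head translation produces an offset of roughly 2°, which is why the parallax correction matters for conformal HMD symbology.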
NASA Astrophysics Data System (ADS)
Chu, Wei-Han; Yuan, Ming-Chen; Lee, Jeng-Hung; Lin, Yi-Chun
2017-11-01
Ir-192 sources are widely used in brachytherapy; in Taiwan, around seven thousand treatments per year are performed with high dose rate (HDR) Ir-192 brachytherapy sources. Due to its physical half-life of 73.8 days, the source must be replaced four times per year to maintain the HDR treatment mode (DDEP, 2005; Coursey et al., 1992). Each replacement requires a source dose trace to assure dose accuracy. To establish the primary measurement standard of reference air kerma rate (RAKR) for HDR Ir-192 brachytherapy sources in Taiwan, the Institute of Nuclear Energy Research (INER) fabricated a dual spherical graphite-walled cavity ionization chamber system to measure the RAKR of the Ir-192 brachytherapy source directly. In this system, the ion charge accumulated by the two ionization chambers was corrected for ion recombination, temperature, atmospheric pressure, room scattering, graphite-wall attenuation, air attenuation, source decay, stem effect, and so on. The RAKR of the Ir-192 source was obtained at ambient conditions of 22 °C and one atmosphere. The measurement uncertainty of the system was around 0.92% at the 95% confidence level (k = 2.0). To verify the accuracy of the result, a source calibration comparison was made between the National Radiation Standard Laboratory (NRSL) of INER and the Physikalisch-Technische Bundesanstalt (PTB, Germany) in 2015. The ratio of the measurement results, INER/PTB, was 0.998 ± 0.027 (k = 2), which shows good consistency and verifies the performance of the system.
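The chain of multiplicative corrections applied to the accumulated ionization signal can be sketched generically. All factor names and values below are illustrative placeholders, not INER's actual uncertainty budget or calibration model:

```python
import math

def corrected_rakr(raw_signal: float, n_k: float, corrections: dict) -> float:
    """Apply a chain of multiplicative correction factors (recombination,
    temperature/pressure, scatter, attenuation, decay, ...) to a raw
    ionization signal; n_k stands in for an air-kerma calibration
    coefficient. Structure and numbers are illustrative only."""
    k_total = math.prod(corrections.values())
    return raw_signal * n_k * k_total

corrections = {                   # hypothetical factors near unity
    "ion_recombination": 1.002,
    "temperature_pressure": 0.998,
    "room_scatter": 0.995,
    "wall_attenuation": 1.010,
}
rakr = corrected_rakr(raw_signal=1.0, n_k=1.0, corrections=corrections)
```

Because the corrections are multiplicative and close to unity, their relative standard uncertainties add in quadrature in the overall budget, which is how a combined value like 0.46% (0.92% at k = 2) arises.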
Determination of line profiles on nano-structured surfaces using EUV and x-ray scattering
NASA Astrophysics Data System (ADS)
Soltwisch, Victor; Wernecke, Jan; Haase, Anton; Probst, Jürgen; Schoengen, Max; Krumrey, Michael; Scholze, Frank; Pomplun, Jan; Burger, Sven
2014-09-01
Non-imaging techniques like X-ray scattering are expected to play an important role in the further development of CD metrology for the semiconductor industry. Grazing Incidence Small Angle X-ray Scattering (GISAXS) provides directly assessable information on structure roughness and long-range periodic perturbations. The disadvantage of the method is the large footprint of the X-ray beam on the sample due to the extremely shallow angle of incidence. This can be overcome by using wavelengths in the extreme ultraviolet (EUV) spectral range, EUV small angle scattering (EUV-SAS), which allows for much steeper angles of incidence but preserves the range of momentum transfer that can be observed. Generally, the potentially higher momentum transfer at shorter wavelengths is counterbalanced by decreasing diffraction efficiency. This results in a practical limit of about 10 nm pitch for which it is possible to observe at least the ±1st diffraction orders with reasonable efficiency. At the Physikalisch-Technische Bundesanstalt (PTB), the available photon energy range extends from 50 eV up to 10 keV at two adjacent beamlines. PTB commissioned a new versatile Ellipso-Scatterometer which is capable of measuring 6" square substrates in a clean, hydrocarbon-free environment with full flexibility regarding the direction of the incident light polarization. The reconstruction of line profiles using a geometrical model with six free parameters, based on a finite element method (FEM) Maxwell solver and a particle-swarm-based least-squares optimization, yielded consistent results for EUV-SAS and GISAXS. In this contribution we present scatterometry data for line gratings and consistent reconstruction results of the line geometry for EUV-SAS and GISAXS.
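The claim that EUV-SAS preserves the observable momentum-transfer range at much steeper angles follows from q = (4π/λ)·sin θ: the ~100-fold longer EUV wavelength is compensated by a much larger diffraction angle. A small numeric check (the geometry values are illustrative, not PTB's actual beamline settings):

```python
import math

HC = 1239.842  # eV*nm, photon energy to wavelength conversion

def momentum_transfer(energy_ev: float, angle_deg: float) -> float:
    """Momentum transfer q = (4*pi/lambda) * sin(theta) in nm^-1 for photon
    energy in eV and diffraction angle in degrees (simplified geometry)."""
    lam = HC / energy_ev                      # wavelength in nm
    return 4.0 * math.pi / lam * math.sin(math.radians(angle_deg))

# The same order of q is reached at a far steeper angle with EUV than with X-rays:
q_xray = momentum_transfer(8000.0, 0.5)   # grazing incidence, lambda ~ 0.155 nm
q_euv = momentum_transfer(92.0, 30.0)     # EUV, lambda ~ 13.5 nm
```

Both settings yield q of order 1 nm⁻¹, illustrating why EUV-SAS can probe the same grating periods with a far smaller beam footprint.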
NASA Astrophysics Data System (ADS)
Schödel, R.
2015-08-01
Traceability of length measurements to the international system of units (SI) can be realized by using optical interferometry making use of well-known frequencies of monochromatic light sources mentioned in the Mise en Pratique for the realization of the metre. At some national metrology institutes, such as Physikalisch-Technische Bundesanstalt (PTB) in Germany, the absolute length of prismatic bodies (e.g. gauge blocks) is realized by so-called gauge-block interference comparators. At PTB, a number of such imaging phase-stepping interference comparators exist, including specialized vacuum interference comparators, each equipped with three highly stabilized laser light sources. The length of a material measure is expressed as a multiple of each wavelength. The large number of integer interference orders can be extracted by the method of exact fractions in which the coincidence of the lengths resulting from the different wavelengths is utilized as a criterion. The unambiguous extraction of the integer interference orders is an essential prerequisite for correct length measurements. This paper critically discusses coincidence criteria and their validity for three modes of absolute length measurements: 1) measurements under vacuum in which the wavelengths can be identified with the vacuum wavelengths, 2) measurements under air in which the air refractive index is obtained from environmental parameters using an empirical equation, and 3) measurements under air in which the air refractive index is obtained interferometrically by utilizing a vacuum cell placed along the measurement pathway. For case 3), which corresponds to PTB’s Kösters-Comparator for long gauge blocks, the unambiguous determination of integer interference orders related to the air refractive index could be improved by about a factor of ten when an ‘overall dispersion value,’ suggested in this paper, is used as coincidence criterion.
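The method of exact fractions described above can be sketched as a search over candidate integer interference orders at the first wavelength, keeping only candidates whose implied fractional orders at the other wavelengths coincide with the measured ones. Wavelengths, tolerances and the block length below are illustrative, not PTB's actual procedure:

```python
def exact_fractions(fractions, wavelengths, L_min, L_max, tol=0.02):
    """Recover the absolute length from measured fractional interference
    orders at several wavelengths (method of exact fractions). Each candidate
    integer order of the first wavelength implies a length L = (N + f) * lam / 2;
    a candidate is kept if the other wavelengths' fractions coincide within tol
    (a simplified coincidence criterion)."""
    lam0, f0 = wavelengths[0], fractions[0]
    n_lo = int(2 * L_min / lam0)
    n_hi = int(2 * L_max / lam0) + 1
    hits = []
    for n0 in range(n_lo, n_hi):
        L = (n0 + f0) * lam0 / 2
        if all(
            (r := ((2 * L / lam) - f) % 1.0) < tol or r > 1 - tol
            for lam, f in zip(wavelengths[1:], fractions[1:])
        ):
            hits.append(L)
    return hits

# Synthetic check: three stabilized wavelengths (in um, illustrative values)
lams = [0.6329913, 0.6119712, 0.5435165]
L_true = 25000.1237  # um, i.e. roughly a 25 mm gauge block
fracs = [(2 * L_true / lam) % 1.0 for lam in lams]
candidates = exact_fractions(fracs, lams, 24999.0, 25001.0)
```

Only the true integer order makes all three wavelengths coincide, which is exactly the unambiguity requirement the paper's coincidence criteria formalize.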
NASA Astrophysics Data System (ADS)
Bombaci, I.; Covello, A.; Marcucci, L. E.; Rosati, S.
2009-07-01
Armani Paolo (Università di Trento) Benhar Omar (INFN Roma) Bombaci Ignazio (Università di Pisa) Bonanno Luca (Università di Ferrara) Catara Francesco (Università di Catania) Cò Giampaolo (Università di Lecce) Colonna Maria (Laboratori Nazionali del Sud, INFN Catania) Colonna Nicola (INFN Bari) Conti Francesco (Università di Pavia) Coraggio Luigi (INFN Napoli) Covello Aldo (Università di Napoli) Cristoforetti Marco (Technische Universität München, Germania) Cuofano Carmine (Università di Ferrara) Di Toro Massimo (Università di Catania) Drago Alessandro (Università di Ferrara) Faccioli Pietro (Università di Trento) Farina Nicola (INFN Roma) Finelli Paolo (Università di Bologna) Fiorentini Giovanni (Università di Ferrara) Fortunato Lorenzo (Università di Padova) Gambacurta Danilo (Università di Catania) Gandolfi Stefano (Università di Trento) Gargano Angela (INFN Napoli) Giannini Mauro (Università di Genova) Girlanda Luca (INFN Pisa) Giusti Carlotta (INFN Pavia) Illarionov Alexei (SISSA Trieste) Itaco Nunzio (Università di Napoli) Kievsky Alejandro (INFN Pisa) Lanza Edoardo (INFN Catania) Leidemann Winfried (Università di Trento) Lenzi Silvia (Università di Padova) Lipparini Enrico (Università di Trento) Lissia Marcello (Università di Cagliari) Lo Iudice Nicola (Università di Napoli) Maieron Chiara (Università di Lecce) Marcucci Laura Elisa (Università di Pisa) Matera Francesco (Università di Firenze) Millo Raffaele (Università di Trento) Orlandini Giuseppina (Università di Trento) Pacati Franco (Università di Pavia) Pastore Alessandro (University of Jyväskylä, Finlandia) Pederiva Francesco (Università di Trento) Pisent Gualtiero (Università di Padova) Prete Gianfranco (INFN Laboratori Nazionali di Legnaro) Quarati Piero (Politecnico di Torino) Rosati Sergio (Università di Pisa) Salmè Giovanni (INFN Roma) Santopinto Elena (INFN Genova) Traini Marco (Università di Trento) Vigezzi Enrico (INFN Milano) Vitturi Andrea (Università di Padova) Viviani Michele (INFN Pisa)
Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B
2016-05-01
The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by the Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from the determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources, such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.
TIMED solar EUV experiment: preflight calibration results for the XUV photometer system
NASA Astrophysics Data System (ADS)
Woods, Thomas N.; Rodgers, Erica M.; Bailey, Scott M.; Eparvier, Francis G.; Ucker, Gregory J.
1999-10-01
The Solar EUV Experiment (SEE) on the NASA Thermosphere, Ionosphere, and Mesosphere Energetics and Dynamics (TIMED) mission will measure the solar vacuum ultraviolet (VUV) spectral irradiance from 0.1 to 200 nm. To cover this wide spectral range two different types of instruments are used: a grating spectrograph for spectra between 25 and 200 nm with a spectral resolution of 0.4 nm and a set of silicon soft x-ray (XUV) photodiodes with thin film filters as broadband photometers between 0.1 and 35 nm with individual bandpasses of about 5 nm. The grating spectrograph is called the EUV Grating Spectrograph (EGS), and it consists of a normal-incidence, concave diffraction grating used in a Rowland spectrograph configuration with a 64 × 1024 array CODACON detector. The primary calibrations for the EGS are done using the National Institute of Standards and Technology (NIST) Synchrotron Ultraviolet Radiation Facility (SURF-III) in Gaithersburg, Maryland. In addition, detector sensitivity and image quality, the grating scattered light, the grating higher order contributions, and the sun sensor field of view are characterized in the LASP calibration laboratory. The XUV photodiodes are called the XUV Photometer System (XPS), and the XPS includes 12 photodiodes with thin film filters deposited directly on the silicon photodiodes' top surface. The sensitivities of the XUV photodiodes are calibrated at both the NIST SURF-III and the Physikalisch-Technische Bundesanstalt (PTB) electron storage ring called BESSY. The other XPS calibrations, namely the electronics linearity and field of view maps, are performed in the LASP calibration laboratory. The XPS and solar sensor pre-flight calibration results are primarily discussed, as the EGS calibrations at SURF-III have not yet been performed.
Bächi, Beat
2010-12-01
The paper tackles the changes that occurred in the political culture and the episteme of risk in the Federal Republic of Germany in the 1970s. The objects of observation are limit values for hazardous industrial materials, especially for carcinogens. At the forefront of the production of such values in Germany was the German Research Society's Senate Commission for the Examination of Hazardous Industrial Materials. Limit values bring economy, politics, and science together, and they mediate different interests. This makes limit values an ideal object of study for bringing together changes in different parts of society. In 1972, a new category of limit values for carcinogenic substances was introduced, the so-called "Technische Richtkonzentration" (TRK). Unlike the established limit values for hazardous industrial materials, the so-called "Maximale Arbeitsplatzkonzentrationen" (MAK), this category of values does not assume that complete safety can be reached. This marks an important rupture in toxicological thinking: until the 1970s, Paracelsus' dictum about dosage and poison still served as the starting point for toxicologists. The innovation of the TRK marks an important rupture in the episteme of regulating dangerous matters. Whereas until the 1970s there existed, at least as an ideal, the myth of "no risk" or "zero tolerance" even in the case of carcinogens, since the beginning of the 1970s certainty has been guaranteed not by epistemically robust but by socially robust knowledge. This also means the return of the risk society at the beginning of the 1970s, whereby cancer at the workplace turns, in the view of the regulatory bodies, from a medical problem into a socioeconomic illness. The paper argues that these changes are connected to a general feeling of disorientation.
Evidence for a Solar Influence on Gamma Radiation from Radon
NASA Astrophysics Data System (ADS)
Sturrock, P. A.; Steinitz, G.; Fischbach, E.; Javorsek, D.; Jenkins, J.
2012-12-01
We have analyzed 29,000 measurements of gamma radiation associated with the decay of radon confined to an airtight vessel at the Geological Survey of Israel (GSI) Laboratory in Jerusalem between January 28, 2007 and May 10, 2010. These measurements exhibit strong variations with time of year and time of day, which may be due in part to environmental influences. However, time-series analysis reveals a number of strong periodicities, including two at approximately 11.2 year⁻¹ and 12.5 year⁻¹. We consider it significant that these same oscillations have previously been detected in nuclear-decay data acquired at the Brookhaven National Laboratory and at the Physikalisch-Technische Bundesanstalt. We have suggested that these oscillations are due to some form of solar radiation (possibly neutrinos) that has its origin in the deep solar interior. A curious property of the GSI data is that the annual oscillation is much stronger in daytime data than in nighttime data, but the opposite is true for all other oscillations. Time-frequency analysis also yields quite different results from daytime and nighttime data. These procedures have also been applied to data collected from subsurface geological sites in Israel, Tenerife, and Italy, which have a variety of geological and geophysical scenarios, different elevations, and depths below the surface ranging from several meters to 1000 meters. In view of these results, and in view of the fact that there is at present no clear understanding of the behavior of radon in its natural environment, there would appear to be a need for multi-disciplinary research. Investigations that clarify the nature and mechanisms of solar influences may help clarify the nature and mechanisms of geological influences.
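A generic least-squares periodogram, a simple stand-in for the time-series analysis described (not the authors' actual code), can recover such a periodicity from unevenly sampled synthetic data:

```python
import numpy as np

def ls_power(t, y, freqs):
    """Least-squares sine/cosine fit at each trial frequency; returns the
    fraction of variance explained at each frequency. Works on unevenly
    sampled data, unlike a plain FFT."""
    y = y - y.mean()
    power = []
    for f in freqs:
        X = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        power.append(np.var(X @ coef) / np.var(y))
    return np.array(power)

# Synthetic series with an 11.2 yr^-1 oscillation plus noise (illustrative):
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3.0, 800))                  # 3 years of samples
y = 0.1 * np.sin(2 * np.pi * 11.2 * t) + rng.normal(0, 0.05, t.size)
freqs = np.arange(1.0, 20.0, 0.1)
peak = freqs[np.argmax(ls_power(t, y, freqs))]
```

The recovered peak frequency sits at the injected 11.2 yr⁻¹, illustrating how annual and ~monthly periodicities can be isolated even with irregular sampling.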
Kundenfokus: Startpunkt für die digitale Transformation bei Stadtwerken
NASA Astrophysics Data System (ADS)
Fett, Perry; Küller, Philipp
Big data, the Internet of Things, mobile computing and social media: modern information technologies pervade the everyday life of most people and thereby trigger a digital transformation. In the corporate context, digitalization manifests itself in a new quality of knowledge-based decision support and in the automation or autonomization of business processes. For municipal utilities (Stadtwerke), the task now is to exploit the opportunities of digitalization to their advantage. One starting point could be how utilities will interact with their customers in the future. Triggered by the liberalization of the markets, the customer has moved more strongly into focus; the energy industry now faces the challenge of staying one step ahead of the competition and offering customers a thoroughly positive customer experience, both as a measure of customer retention and of customer acquisition. This chapter presents the success criteria for establishing customer focus in one's own company. With Fujitsu's Customer Focus Cycle model, modelled on the Deming cycle, a general approach for building customer focus is presented. Its six phases are explained using practical examples, together with pointers to methods and tools. From the presented "toolbox", the customer journey method is explained in detail. Finally, the presented maturity model is intended to help companies determine their own status quo and define their goals on the way to a customer-centric organization.
Elevated Social Stress Levels and Depressive Symptoms in Primary Hyperhidrosis
Gross, Katharina M.; Schote, Andrea B.; Schneider, Katja Kerstin; Schulz, André; Meyer, Jobst
2014-01-01
Primary hyperhidrosis is defined as excessive sweating of certain body areas without physiological reasons. Hyperhidrotic individuals report a high psychological strain and an impairment of their quality of life. Thus, the aim of the study was to investigate the relation between hyperhidrosis and different psychological as well as physiological aspects of chronic stress as a co-factor for the etiology of depression. In this study, forty hyperhidrotic subjects were compared to forty age- and sex-matched healthy control subjects. The Trier Inventory of Chronic Stress ('Trierer Inventar zum chronischen Stress': TICS), the Beck Depression Inventory (BDI-II) and the Screening for Somatoform Disorders (SOMS-2) were used to examine the correlation between primary hyperhidrosis and stress as well as accompanying depressive and somatic symptoms. The cortisol awakening response of each subject was analyzed as a physiological stress correlate. In hyperhidrotics, we found a significant lack of social recognition as well as significantly more depressive symptoms compared to the control subjects. A subgroup of patients with axillary hyperhidrosis had the highest impact on these increased measures of chronic stress, pointing to a higher embarrassment in these subjects. Especially in social situations, hyperhidrotics showed higher stress levels, triggering a vicious circle of stress and sweating. However, the cortisol awakening response did not significantly differ between hyperhidrotics and controls. Moreover, affected persons suffer from more depressive symptoms, which may be caused by feelings of shame and a lack of self-confidence. This initial study provides an impetus for further investigation to reveal a causative relationship between hyperhidrosis and its psychological concomitants. PMID:24647796
Huebner, Jutta; Mohr, Peter; Simon, Jan-Christoph; Fluck, Michael; Berking, Carola; Zimmer, Lisa; Loquai, Carmen
2016-05-01
In Germany, 40-90% of all cancer patients use methods of complementary and alternative medicine (CAM). To date, there are no data on the use of CAM by melanoma patients. The aim of our study was to collect data on the use, the sources of information and the goals of patients with metastatic melanoma. One hundred fifty-six patients from 25 study centers took part in the DecOG-MM-PAL multi-basket study. Participants were also asked to take part in a sub-study recording their use of CAM; for this purpose, a standardized questionnaire was distributed at precisely defined time points during treatment. In total, 55 questionnaires from 32 (21%) melanoma patients were returned. Of these, 17 (53%) reported an interest in CAM, and seven (22%) used CAM. The main sources of information (31%) were family members and friends, followed by physicians (19%). The main reasons for using CAM were strengthening the immune system (41%) and the body (34%). Dietary supplements (vitamins and trace elements) were used most frequently (28%). A relatively high number of patients with metastatic melanoma used CAM despite taking part in a clinical trial. Interactions could occur with biologically based CAM, particularly with immunomodulatory CAM strategies. To avoid risks, communication between physicians and patients should be improved. © 2016 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Sunderkötter, Cord; Becker, Karsten; Kutzner, Heinz; Meyer, Thomas; Blödorn-Schlicht, Norbert; Reischl, Udo; Nenoff, Pietro; Geißdörfer, Walter; Gräser, Yvonne; Herrmann, Mathias; Kühn, Joachim; Bogdan, Christian
2018-02-01
Nucleic acid amplification techniques (NAT), such as PCR, are highly sensitive and selective and represent valuable additions to culture and serology in microbiological diagnostics. However, particularly with formalin-fixed, paraffin-embedded tissue, they carry a risk of both false negative and false positive results that is not always correctly assessed. Representatives of the German Society for Hygiene and Microbiology (DGHM) and the German Dermatological Society (DDG) have therefore developed a consensus, in the form of a review, on when NAT on paraffin sections is indicated and useful, and which points must be observed in the pre-analytics and in the interpretation of results. Since native tissue should in principle be used when an infection is suspected, PCR on paraffin sections is a special case, for example when suspicion of an infection arises only retrospectively and no native material is available or can still be obtained. Possible indications are histologically based suspicion of leishmaniasis, of an infection by Bartonella or Rickettsia, or of ecthyma contagiosum. NAT on paraffin sections is not useful, or is viewed critically, for example for infections with mycobacteria or RNA viruses. The rationale for NAT on paraffin-embedded tissue should be stated in each case, and the required pre-analytics, the respective limits of the method and the diagnostic alternatives should be known. The PCR result should be commented on accordingly in order to avoid misinterpretation. © 2018 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Zum Auf und Ab des Meeresspiegels in Skandinavien: Langer Streit um Eustasie oder Isostasie
NASA Astrophysics Data System (ADS)
Seibold, Eugen; Seibold, Ilse
2012-03-01
The phenomenon of the rise of the Scandinavian shield during the Holocene and the concomitant fall in the level of the Baltic Sea has been investigated for centuries. Already in medieval times, there were reports about the coastlines of the Gulf of Bothnia that are full of relevant observations. During the eighteenth century, scientists such as Celsius and Linnaeus collected such observations. As a result, the search for possible explanations of this rise-and-fall phenomenon intensified. The generally favoured explanation was an active sinking of the sea level in the Baltic rather than an active rising of the land surface in Fennoscandia, because water was seen as mobile, in contrast to a "terra firma". The relevant discussion was often emotional, and here we try to illustrate it using material from the Geologenarchiv Freiburg (von Hoff, von Buch and Goethe). No more than a few decades later, with the theory of the Ice Age, it became obvious that both the sea level and the land could be mobile (eustatic sea-level changes, glacial isostasy). Additionally, of course, plate tectonics had some influence: Norway is situated at the western end of the Eurasian plate and is part of a passive continental margin. There are still open research problems, many of which can be addressed using modern methods of satellite-based geophysics and geodesy. Some other aspects, such as the permanent uplift trend of Scandinavia since the Cambrian or the rhythmic to and fro of magma in the upper mantle during the Pleistocene, are also mentioned.
Krug, Susann; Wittchen, Hans-Ulrich; Lieb, Roselind; Beesdo-Baum, Katja; Knappe, Susanne
2016-10-01
The negative impact of parental depression on offspring's development has been repeatedly documented. There is, however, little research on the potential pathways contributing to this association. The present study examined the relationship between parental depressive disorders, family functioning and adolescents' self-esteem. A community-based sample of 1040 participants aged 14-17 years and their parents was assessed, including direct and indirect information on parental psychopathology based on the Munich-Composite International Diagnostic Interview (M-CIDI). Family functioning and youth self-esteem were assessed by self-report questionnaires, using the McMaster Family Assessment Device (FAD) in parents and the "Aussagen-Liste zum Selbstwertgefühl" in adolescents. Findings from multiple regression analyses indicated positive associations between parental depressive disorders and dimensions of dysfunctional family functioning, as well as between dysfunctional familial affective involvement and youth's positive self-esteem. The relationship between parental depression and self-esteem was partly mediated by familial affective involvement. Associations may be underestimated, since the incidence of depressive disorders extends into the third decade of life. Consensus diagnoses for parental depressive disorders were based on direct and indirect information for maximum use of available data, neglecting familial load, chronicity of parental depressive disorders and comorbid conditions. Thus, the specificity of the findings for the familial transmission of depressive disorders remains to be determined. The findings contribute to an understanding of the pathways by which parental depression impairs offspring's view of themselves, and suggest family functioning as a possible target for preventive interventions. Copyright © 2016 Elsevier B.V. All rights reserved.
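The partial mediation reported here (parental depression acting on self-esteem partly via familial affective involvement) follows the standard regression decomposition of a total effect into direct and indirect parts. A minimal numpy-only sketch on synthetic data illustrates that decomposition; all variable names and effect sizes below are hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1040  # sample size borrowed from the study; everything else is made up

# X: parental depressive disorder (0/1), M: dysfunctional affective involvement,
# Y: adolescent self-esteem (higher = better)
X = rng.integers(0, 2, n).astype(float)
M = 0.5 * X + rng.normal(0, 1, n)              # path a: X -> mediator
Y = -0.2 * X - 0.4 * M + rng.normal(0, 1, n)   # paths c' (direct) and b

def ols(y, *cols):
    """OLS slopes with intercept; returns the coefficients for cols."""
    A = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

(c,) = ols(Y, X)           # total effect of X on Y
(a,) = ols(M, X)           # effect of X on the mediator
c_prime, b = ols(Y, X, M)  # direct effect and mediator effect
indirect = a * b

# For linear OLS with a single mediator, total = direct + indirect exactly.
assert abs(c - (c_prime + indirect)) < 1e-8
print(f"total={c:.3f}  direct={c_prime:.3f}  indirect={indirect:.3f}")
```

The exact identity total = direct + indirect is a property of linear least squares; with other link functions or estimators the decomposition only holds approximately.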
Effect of Weight Loss on Improving the Clinical Status of Patients With Knee Osteoarthritis.
Sadeghi, Alireza; Rad, Zahra Abbaspour; Sajedi, Behnam; Heydari, Amir Hossein; Akbarieh, Samira; Jafari, Behzad
2017-11-01
Osteoarthritis causes severe pain and disability; the knee is one of the most frequently affected joints. There are several therapeutic approaches to control pain and disability, but almost none of them is a definitive treatment. In this article, we tried to show the effect of weight loss on improving the symptoms of knee osteoarthritis as an effective and lasting therapeutic approach. We enrolled 62 patients with grade 1-2 (mild to moderate) knee osteoarthritis and divided them equally into case and control groups. Patients had not used NSAIDs for at least 6 months before study initiation. Symptom severity was measured with the WOMAC and VAS questionnaires at baseline and after 3 months of follow-up; weight and BMI were also recorded. The case group was advised to follow a weight-loss diet low in fat and carbohydrates, while the control group had no dietary restrictions. Differences in group means between the case and control groups were not statistically significant either at baseline or at the end of the study. There was, however, a meaningful correlation between changes in the variables and lifestyle change in both groups, especially in the WOMAC and VAS scores. Unlike in the control group, all variables in the case group showed statistically significant differences between baseline and end-of-study values. Comparing our study with similar studies worldwide, we conclude that weight loss can improve the symptoms of knee osteoarthritis even with a short-term (3-month) weight-loss diet. ZUMS.REC.1394.94. Copyright © 2017. Publicado por Elsevier España, S.L.U.
EDITORIAL: Greetings from the new Editor-in-Chief Greetings from the new Editor-in-Chief
NASA Astrophysics Data System (ADS)
Nielsch, Kornelius
2012-01-01
On 1 January 2012 I will be assuming the position of Editor-in-Chief of the journal Semiconductor Science and Technology (SST). I am flattered by the confidence expressed in my ability to carry out this challenging job and I will try hard to justify this confidence. The previous Editor-in-Chief, Laurens Molenkamp, University of Würzburg, Germany, has worked tirelessly for the last ten years and has done an excellent job for the journal. Everyone at the journal is profoundly grateful for his leadership and for his achievements. In 2012, several new members will join the Editorial Board: Professor Deli Wang (University of California, San Diego) with considerable expertise in semiconductor nanowires, Professor Saskia Fischer (Humboldt University, Berlin, Germany) with a background in semiconductor quantum devices, and Professor Erwin Kessels (Eindhoven University of Technology, Netherlands) with extensive experience in plasma processing of thin films and gate oxides. In particular, I want to express my gratitude to Professor Israel Bar-Joseph (Weizmann Institute of Science, Israel) and Professor Maria Tamargo (The City College of New York, USA), who will step down next year and who have served the Editorial Board vigorously for years. The journal has recently introduced a fast-track option for manuscripts. This option is a high-quality, high-profile outlet for new and important research across all areas of semiconductor research. Authors can expect to receive referee reports in less than 20 days from submission. Once accepted, articles can be expected to be online within two or three weeks and to be published in print in less than a month. Furthermore, all fast-track communications published in 2011 will be free to read for ten years.
More detailed information on fast-track publication can be found on the following webpage: http://iopscience.iop.org/0268-1242/page/Fast track communications. It is encouraging to see that, three years after the journal introduced pre-review with the aim of raising the quality of our content, the number of published articles has remained stable at around 220 per year, whilst the number of downloads and citations to the journal has grown. In 2011, three topical issues were published, on: (Nano)characterization of semiconductor materials and structures (Guest Editor: Alberta Bonanni, University of Linz, Austria); Flexible OLEDs and organic electronics (Guest Editors: Jang-Joo Kim, Min-Koo Han, Cambridge University, UK, and Yong-Young Noh, Seoul National University, Korea); and From heterostructures to nanostructures: an 80th birthday tribute to Zhores Alferov (Guest Editor: Dieter Bimberg, Technische Universität Berlin, Germany). For the coming years, I will strongly support keeping the number of published topical issues at the same level, or slightly increasing it.
SST has planned the publication of the following topical issues for 2012: Non-polar and semipolar nitride semiconductors (Guest Editors: Jung Han, Yale University, USA, and Michael Kneissl, Technische Universität Berlin, Germany); Topological insulators (Guest Editors: Alberto Morpurgo, Université de Genève, Switzerland, and Björn Trauzettel, Universität Basel, Switzerland); Atomic layer deposition (Guest Editor: Marek Godlewski, Polish Academy of Sciences, Poland); and 50th Anniversary of the laser diode (Guest Editors: Mike Adams, University of Essex, UK, and Stephane Calvez, University of Strathclyde, UK). In addition to the traditional topics of SST, I, as Editor-in-Chief, strongly support and welcome the submission of manuscripts on organic semiconductors, topological insulators, semiconductor nanostructures for photovoltaics, solid-state lighting and energy harvesting, IC applications beyond Moore's law, and fundamental work on semiconductors based on abundant materials. I am extremely optimistic about the future of SST. I believe that we will raise the standards of acceptance while maintaining the short time from submission to first decision. I am confident that we will continue to improve the quality of the papers published in this already first-class journal. I look forward to working with the journal's excellent staff and Editorial Board Members.
PREFACE: 31st European Physical Society Conference on Plasma Physics
NASA Astrophysics Data System (ADS)
Dendy, Richard
2004-12-01
This special issue of Plasma Physics and Controlled Fusion comprises refereed papers contributed by invited speakers at the 31st European Physical Society Conference on Plasma Physics. The conference was jointly hosted by the Rutherford Appleton Laboratory, by the EURATOM/UKAEA Fusion Association and by Imperial College London, where it took place from 28 June to 2 July 2004. The overall agenda for this conference was set by the Board of the Plasma Physics Division of the European Physical Society, chaired by Friedrich Wagner (MPIPP, Garching) and his successor Jo Lister (CRPP, Lausanne). It built on developments in recent years, by further increasing the scientific diversity of the conference programme, whilst maintaining its depth and quality. A correspondingly diverse Programme Committee was set up, whose members are listed below. The final task of the Programme Committee has been the preparation of this special issue. In carrying out this work, as in preparing the scientific programme of the conference, the Programme Committee formed specialist subcommittees representing the different fields of plasma science. The chairmen of these subcommittees, in particular, accepted a very heavy workload on behalf of their respective research communities. It is a great pleasure to take this opportunity to thank: Emilia R Solano (CIEMAT, Madrid), magnetic confinement fusion; Jürgen Meyer-ter-Vehn (MPQ, Garching), laser-plasma interaction and beam plasma physics; and Jean-Luc Dorier (CRPP, Lausanne), dusty plasmas. The relatively few papers in astrophysical and basic plasma physics were co-ordinated by a small subcommittee which I led. Together with Peter Norreys (RAL, Chilton), we five constitute the editorial team for this special issue. The extensive refereeing load, compressed into a short time interval, was borne by the Programme Committee members and by many other experts, to whom this special issue owes much. 
We are also grateful to the Local Organizing Committee chaired by Henry Hutchinson (RAL, Chilton), and to the Plasma Physics and Controlled Fusion journal team (Institute of Physics Publishing, Bristol), for their work on this conference. At the 2004 European Physical Society Conference on Plasma Physics, plenary invited speakers whose talks spanned the entire field were followed, each day, by multiple parallel sessions which also included invited talks. Invited speakers in both these categories were asked to contribute papers to this special issue (the contributed papers at this conference, and at all recent conferences in this series, are archived at http://epsppd.epfl.ch). The Programme Committee is very grateful to the many invited speakers who have responded positively to this request. Invited papers appear here in their order of presentation during the week beginning 28 June 2004; this ordering provides an echo of the character of the conference, as it was experienced by those who took part. 
Programme Committee 2004 Professor Richard Dendy UKAEA Culham Division, UK (Chairman and guest editor) Dr Jean-Luc Dorier Centre de Recherches en Physique des Plasmas, Lausanne, Switzerland (Co-ordinator of dusty plasmas and guest editor) Professor Jürgen Meyer-ter-Vehn Max-Planck-Institut für Quantenoptik, Garching, Germany (Co-ordinator of laser-plasma interaction and beam plasma physics and guest editor) Dr Peter Norreys Rutherford Appleton Laboratory, Chilton, UK (Scientific Secretary and guest editor) Dr Emilia R Solano CIEMAT Laboratorio Nacional de Fusión, Madrid, Spain (Co-ordinator of magnetic confinement fusion and guest editor) Dr Shalom Eliezer Soreq Nuclear Research Centre, Israel Dr Wim Goedheer FOM-Instituut voor Plasmafysica, Rijnhuizen, Netherlands Professor Henry Hutchinson Rutherford Appleton Laboratory, Chilton, UK Professor John Kirk Max-Planck-Institut für Kernphysik, Heidelberg, Germany Dr Raymond Koch Ecole Royale Militaire/Koninklijke Militaire School, Brussels, Belgium Professor Gerrit Kroesen Technische Universiteit Eindhoven, Netherlands Dr Martin Lampe Naval Research Laboratory, Washington DC, USA Dr Jo Lister Centre de Recherches en Physique des Plasmas, Lausanne, Switzerland Dr Paola Mantica Istituto di Fisica del Plasma, Milan, Italy Professor Tito Mendonca Instituto Superior Tecnico, Lisbon, Portugal Dr Patrick Mora École Polytechnique, Palaiseau, France Professor Lennart Stenflo Umeå Universitet, Sweden Professor Paul Thomas CEA Cadarache, Saint-Paul-lez-Durance, France Professor Friedrich Wagner Max-Planck-Institut für Plasmaphysik, Garching, Germany Professor Hannspeter Winter Technische Universität Wien, Austria
Time course of avascular necrosis of the femoral head in patients with pemphigus vulgaris.
Balighi, Kamran; Daneshpazhooh, Maryam; Aghazadeh, Nessa; Saeidi, Vahide; Shahpouri, Farzam; Hejazi, Pardis; Chams-Davatchi, Cheyda
2016-10-01
Pemphigus vulgaris (PV) is usually treated with systemic corticosteroids and immunosuppressants. Avascular necrosis (AVN) of the femoral head is a well-known severe complication of corticosteroid therapy. The characteristics of this serious complication in PV have remained unknown. Uncontrolled, retrospective study of all PV-associated AVN cases diagnosed at an Iranian clinic for autoimmune bullous diseases between 1985 and 2013. From the medical records of 2,321 PV patients examined, 45 cases (1.93%) of femoral AVN were identified; thirty of these were men. Mean age at AVN diagnosis was 47.4 ± 14.2 years. The mean interval between the diagnosis of PV and the onset of AVN was 25.3 ± 18.3 months. With the exception of eight cases (17.8%), AVN developed in the majority of patients within three years of the PV diagnosis. The mean cumulative prednisolone dose in patients with AVN was 13,115.8 ± 7,041.1 mg. There was a strong correlation between total prednisolone dose and time to onset of AVN (p = 0.001). In patients with a history of alendronate use, this interval was significantly shorter (p = 0.01). AVN is a severe complication of corticosteroid treatment in patients with PV. It is observed in 2% of patients and occurs mainly within the first three years of treatment. In patients receiving higher doses of prednisolone, AVN tends to occur earlier. © 2016 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Gaskins, Matthew; Dittmann, Martin; Eisert, Lisa; Werner, Ricardo Niklas; Dressler, Corinna; Löser, Christoph; Nast, Alexander
2018-03-01
According to a survey conducted in 2012, the management of antithrombotic agents during dermatologic surgery in Germany was highly heterogeneous. In 2014, an evidence-based guideline on this topic was published for the first time. An anonymous survey of the same sample was conducted on the management of antithrombotics and on familiarity with the guideline. Results were reported as relative frequencies and compared with those from 2012. 208 questionnaires were analyzed (response rate: 36.6%). The large majority of dermatologists reported performing minor procedures while continuing therapy with phenprocoumon, low-dose acetylsalicylic acid (≤ 100 mg), and clopidogrel, as well as with direct oral anticoagulants. For major procedures, however, management remained heterogeneous, particularly among office-based dermatologists. Overall, the proportion of dermatologists managing phenprocoumon, acetylsalicylic acid, and clopidogrel in accordance with the guideline increased. For example, whereas in 2012, 53.8% of hospital-based and 36.3% of office-based dermatologists performed a large excision while continuing low-dose acetylsalicylic acid, 90.2% and 57.8%, respectively, did so in 2017 (phenprocoumon: from 33.8% and 11.9% to 63.9% and 29.9%; clopidogrel: from 36.9% and 23.2% to 63.9% and 30.6%). Among hospital-based physicians, a high proportion were familiar with the guideline and found it helpful. An increase in guideline-conform behavior was observed for all procedures. For major procedures, despite clear improvement, greater efforts are needed to implement the guideline and to identify barriers to implementation. © 2018 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Big Biology: Supersizing Science During the Emergence of the 21st Century
Vermeulen, Niki
2017-01-01
Is biology the youngest member of the Big Science family? While increased collaboration in biological research became the subject of heated discussion in the wake of the Human Genome Project, debates and reflections mostly remained polemical and showed limited appreciation for the diversity and explanatory power of the concept of Big Science. At the same time, scholars in science and technology studies have avoided the term Big Science in their descriptions of the changing research landscape. This interdisciplinary article combines a conceptual analysis of Big Science with diverse data and ideas from a multi-method study of several large research projects in biology. The aim is to develop an empirically grounded, nuanced, and analytically useful understanding of Big Biology and to move beyond the normative debates with their simple dichotomies and rhetorical positions. Although the concept of Big Science can be seen as a fashion in science policy (by now perhaps even an old-fashioned concept), my innovative argument is that its analytical use directs our attention to the expansion of collaboration in the life sciences. The analysis of Big Biology reveals differences from Big Physics and other forms of Big Science, namely in the patterns of research organization, the technologies used, and the societal contexts in which it operates. Reflections on Big Science, Big Biology, and their relations to knowledge production can thus place recent claims about fundamental changes in life science research in historical context. PMID:27215209
Incidence of autoimmune bullous diseases in Serbia: a 20-year retrospective study.
Milinković, Mirjana; Janković, Slavenka; Medenica, Ljiljana; Nikolić, Miloš; Reljić, Vesna; Popadić, Svetlana; Janković, Janko
2016-10-01
Most previous studies of the clinical and epidemiological features of autoimmune bullous diseases (AIBD) have focused primarily on a single disease entity or only one disease group; only a few studies have investigated the incidence of different AIBD. In the present study, our aim was to consider the entire spectrum of AIBD, to determine the incidence of the most common AIBD, and to examine temporal trends in their occurrence in central Serbia over a 20-year period. We retrospectively recruited 1,161 AIBD cases newly diagnosed in central Serbia from January 1991 to December 2010. Diagnosis was based on strict clinical, histological, and immunohistological assessment. The following incidence rates were determined for the individual diseases: 4.35 per one million inhabitants per year (pmi/year) for pemphigus, 4.47 pmi/year for pemphigoid, 1.42 pmi/year for dermatitis herpetiformis (DH), 0.25 pmi/year for IgA dermatosis, and 0.08 pmi/year for epidermolysis bullosa acquisita. Over the period studied, the age-adjusted incidence rate increased significantly for pemphigus and, in particular, for pemphigoid, while it decreased, albeit not significantly, for DH. Our study addresses for the first time the incidence rates of the entire spectrum of AIBD in Serbia and examines temporal trends in their occurrence over a 20-year period. To the best of our knowledge, a finding similar to ours, namely that the incidence rates of pemphigus and pemphigoid are comparable, has not been published before. © 2016 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
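The per-disease rates and the total case count reported above are mutually consistent, and the population denominator they imply can be backed out with simple incidence-rate arithmetic. A short sketch, assuming the reported figures are crude rates over the full 20-year period with a constant population (a simplification; the study itself reports age-adjusted trends):

```python
# Reported incidence rates, per one million inhabitants per year (pmi/year)
rates = {"pemphigus": 4.35, "pemphigoid": 4.47, "dermatitis herpetiformis": 1.42,
         "IgA dermatosis": 0.25, "epidermolysis bullosa acquisita": 0.08}
total_cases, years = 1161, 20

total_rate = sum(rates.values())  # combined AIBD incidence, pmi/year
# cases = rate * population(millions) * years  =>  solve for population
implied_population_millions = total_cases / (total_rate * years)
print(f"combined rate: {total_rate:.2f} pmi/year; "
      f"implied population: {implied_population_millions:.2f} million")
```

The back-calculation lands near 5.5 million inhabitants, a plausible figure for central Serbia, which suggests the published rates and case count fit together.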
Yasuhara-Bell, Jarred; Alvarez, Anne M
2015-03-01
The genus Clavibacter contains one recognized species, Clavibacter michiganensis. Clavibacter michiganensis is subdivided into subspecies based on host specificity and bacteriological characteristics, with Clavibacter michiganensis subsp. michiganensis causing bacterial canker of tomato. Clavibacter michiganensis subsp. michiganensis is often spread through contaminated seed, leading to outbreaks of bacterial canker in tomato production areas worldwide. The frequent occurrence of non-pathogenic Clavibacter michiganensis subsp. michiganensis-like bacteria (CMB) is a concern for seed producers because Clavibacter michiganensis subsp. michiganensis is a quarantine organism and detection of a non-pathogenic variant may result in destruction of an otherwise healthy seed lot. A thorough biological and genetic characterization of these seed-associated CMB strains was performed using standard biochemical tests, cell wall analyses, metabolic profiling using Biolog, and single-gene and multilocus sequence analyses. Combined, these tests revealed two distinct populations of seed-associated members of the genus Clavibacter that differed from each other, as well as from all other described subspecies of Clavibacter michiganensis. DNA-DNA hybridization values are 70% or higher, justifying placement into the single recognized species, C. michiganensis, but other analyses justify separate subspecies designations. Additionally, strains belonging to the genus Clavibacter isolated from pepper also represent a distinct population and warrant separate subspecies designation. On the basis of these data we propose subspecies designations for separate non-pathogenic subpopulations of Clavibacter michiganensis: Clavibacter michiganensis subsp. californiensis subsp. nov. and Clavibacter michiganensis subsp. chilensis subsp. nov. for seed-associated strains represented by C55(T) (= ATCC BAA-2691(T) = CFBP 8216(T)) and ZUM3936(T) (= ATCC BAA-2690(T) = CFBP 8217(T)), respectively.
Recognition of separate subspecies is essential for improved international seed testing operations. © 2015 IUMS.
A Smart System for the Energy Transition - the Transmission System Operator in the Digital Future
NASA Astrophysics Data System (ADS)
Pflaum, Rainer; Egeler, Tobias
Transmission grids ensure a reliable supply of electrical energy to households, commerce, and industry and are thus the foundation of a modern economy and society. The now irreversible developments of the national and European energy transition confront the transmission system operator with new and major challenges in its core tasks: the construction and operation of grids, market and grid access, and the integration of renewable energies. Both decentralized generation close to consumption and centralized generation far from consumption must be managed and balanced with demand in order to guarantee system stability. In such a system, renewable energies must also contribute to system and market integration. All this requires more data in order to ensure dynamic response capabilities across the overall system. Only "digitalization" creates the conditions needed to master this complexity. Digitalization is therefore a core element of this transformation of the transmission system operator: on the one hand it contributes to the emergence of the new challenges, and on the other it helps to provide the tools to meet them. This article shows how digitalization is changing the tasks and instruments of the transmission system operator. Starting from the current tasks of a transmission system operator and the applicable legal framework, smart elements and tools that are already in use today or will become necessary in the coming years are described under the heading "Notwendiges Set für morgen" (the necessary toolset for tomorrow). Subsequently, several examples from different areas illustrate concrete uses of digitalization by the transmission system operator. A brief outlook focusing on the further process of change concludes the article.
Kastaun, Sabrina; Brown, Jamie; Brose, Leonie S; Ratschen, Elena; Raupach, Tobias; Nowak, Dennis; Cholmakow-Bodechtel, Constanze; Shahab, Lion; West, Robert; Kotz, Daniel
2017-05-02
The prevalence of tobacco smoking in Germany is high (~27%). Monitoring of national patterns of smoking behaviour and data on the "real-world" effectiveness of cessation methods are needed to inform policies and develop campaigns aimed at reducing tobacco-related harm. In England, the Smoking Toolkit Study (STS) has been tracking such indicators since 2006, resulting in the adaptation of tobacco control policies. However, findings cannot be directly transferred into the German health policy context. The German Study on Tobacco Use (DEBRA: "Deutsche Befragung zum Rauchverhalten") aims to provide such nationally representative data. In June 2016, the study started collecting data from computer-assisted, face-to-face household interviews in people aged 14 years and older. Over a period of 3 years, a total of ~36,000 respondents will complete the survey with a new sample of ~2000 respondents every 2 months (=18 waves). This sample will report data on demographics and the use of tobacco and electronic (e-)cigarettes. Per wave, about 500-600 people are expected to be current or recent ex-smokers (<12 months since quitting). This sample will answer detailed questions about smoking behaviour, quit attempts, exposure to health professionals' advice on quitting, and use of cessation aids. Six-month follow-up data will be collected by telephone. The DEBRA study will be an important source of data for tobacco control policies, health strategies, and future research. The methodology is closely aligned to the STS, which will allow comparisons with data from England, a country with one of the lowest smoking prevalence rates in Europe (18%). This study has been registered at the German Clinical Trials Register ( DRKS00011322 ) on 25th November 2016.
NASA Astrophysics Data System (ADS)
McEvoy, Helen C.; Simpson, Robert; Machin, Graham
2004-04-01
The use of infrared tympanic thermometers for monitoring patient health is widespread. However, studies into the performance of these thermometers have questioned their accuracy and repeatability. To give users confidence in these devices, and to provide credibility in the measurements, it is necessary for them to be tested using an accredited, standard blackbody source, with a calibration traceable to the International Temperature Scale of 1990 (ITS-90). To address this need the National Physical Laboratory (NPL), UK, has recently set up a primary ear thermometer calibration (PET-C) source for the evaluation and calibration of tympanic (ear) thermometers over the range from 15 °C to 45 °C. The overall uncertainty of the PET-C source is estimated to be ±0.04 °C (k = 2). The PET-C source meets the requirements of the European Standard EN 12470-5: 2003 Clinical thermometers. It consists of a high emissivity blackbody cavity immersed in a bath of stirred liquid. The temperature of the blackbody is determined using an ITS-90 calibrated platinum resistance thermometer inserted close to the rear of the cavity. The temperature stability and uniformity of the PET-C source was evaluated and its performance validated. This paper provides a description of the PET-C along with the results of the validation measurements. To further confirm the performance of the PET-C source it was compared to the standard ear thermometer calibration sources of the National Metrology Institute of Japan (NMIJ), Japan and the Physikalisch-Technische Bundesanstalt (PTB), Germany. The results of this comparison will also be briefly discussed. The PET-C source extends the capability for testing ear thermometers offered by the NPL body temperature fixed-point source, described previously. An update on the progress with the commercialisation of the fixed-point source will be given.
Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes.
Meister, H; Willmeroth, M; Zhang, D; Gottwald, A; Krumrey, M; Scholze, F
2013-12-01
The energy resolved efficiency of two bolometer detector prototypes for ITER with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge made it possible to cross-check the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.
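The quoted detection fractions at 20 keV can be sanity-checked against simple Beer-Lambert attenuation in a homogeneous Pt layer, which is the same model the authors fit at the Pt-L3 edge. A short sketch; the mass attenuation coefficient used here is an assumed tabulated value for Pt near 20 keV, not a number from the paper:

```python
import math

MU_RHO_PT_20KEV = 69.0   # cm^2/g, assumed tabulated value for Pt near 20 keV
RHO_PT = 21.45           # g/cm^3, density of platinum

def absorbed_fraction(thickness_um: float) -> float:
    """Beer-Lambert absorbed fraction A = 1 - exp(-mu * t) in a Pt layer."""
    mu = MU_RHO_PT_20KEV * RHO_PT   # linear attenuation coefficient, cm^-1
    t_cm = thickness_um * 1e-4      # micrometres -> centimetres
    return 1.0 - math.exp(-mu * t_cm)

for t in (4.5, 12.5):
    print(f"{t} um Pt absorbs {absorbed_fraction(t):.0%} of 20 keV photons")
```

With these assumed inputs the model yields roughly 49% for the 4.5 μm absorber and 84% for the 12.5 μm absorber, close to the ~50% and ~80% measured.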
Melchart, Dieter; Eustachi, Axel; Gronwald, Stephan; Wühr, Erich; Wifling, Kristina; Bachmeier, Beatrice E
2018-01-01
There is a global trend toward stronger active involvement of individuals in maintaining and restoring their health. The Competence Centre for Complementary Medicine and Naturopathy (CoCoNat) of the Technical University of Munich (TUM) has developed a lifestyle concept to enable each individual to manage his or her health - Individual Health Management (IHM) - and a web-based health portal named Virtual Tool for Education, Reporting, Information and Outcomes (VITERIO ® ), which addresses these needs for practice and research. The objectives of this study were to establish a core set of questionnaires for a self-assessment program on certain risk indications and comprehensive protective factors of health, and to develop and enhance 1) tools for individual feedback, longitudinal self-monitoring, self-assessment, and (self-)care planning; 2) training packages; 3) open notes and records for provider and patient; and 4) tools for monitoring groups and single participants on various indicators for individual coaching and scientific evaluation. The CoCoNat of TUM, the Faculty for Applied Health Science of Technische Hochschule Deggendorf, the VITERIO ® company, the IHM campus network, and the Erich Rothenfußer Foundation, Munich, form a consortium responsible for content, research strategy, technical production and implementation, postgraduate education for IHM coaches, implementation of IHM in various settings, and funding resources. A data set of indicators for health screening and self-monitoring of findings, symptoms, health behavior, and attitudes is integrated into the web-based health portal VITERIO ® . The article introduces some of the graphical solutions implemented in these tools and gives examples of daily use. Behavioral change and adaptation of attitudes and personal values are difficult issues in health education and lifestyle medicine.
To address this problem best, the implementation of a patient-centric, performance measures-based program including open records and a blended learning concept were elaborated. The combination of an individual web-based health portal with personal coaching allows the implementation of IHM in everyday practice.
Testa, Antonella; Ballarini, Francesca; Giesen, Ulrich; Gil, Octávia Monteiro; Carante, Mario P; Tello, John; Langner, Frank; Rabus, Hans; Palma, Valentina; Pinto, Massimo; Patrono, Clarice
2018-06-01
There is a continued need for further clarification of various aspects of radiation-induced chromosomal aberration, including its correlation with radiation track structure. As part of the EMRP joint research project, Biologically Weighted Quantities in Radiotherapy (BioQuaRT), we performed experimental and theoretical analyses on chromosomal aberrations in Chinese hamster ovary cells (CHO-K1) exposed to α particles with final energies of 5.5 and 17.8 MeV (absorbed doses: ∼2.3 Gy and ∼1.9 Gy, respectively), which were generated by the microbeam at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany. In line with the differences in linear energy transfer (approximately 85 keV/μm for 5.5 MeV and 36 keV/μm for 17.8 MeV α particles), the 5.5 MeV α particles were more effective than the 17.8 MeV α particles, both in terms of the percentage of aberrant cells (57% vs. 33%) and aberration frequency. The yield of total aberrations increased by a factor of ∼2, although the increase in dicentrics plus centric rings was less pronounced than in acentric fragments. The experimental data were compared with Monte Carlo simulations based on the BIophysical ANalysis of Cell death and chromosomal Aberrations model (BIANCA). This comparison allowed interpretation of the results in terms of critical DNA damage [cluster lesions (CLs)]. More specifically, the higher aberration yields observed for the 5.5 MeV α particles were explained by taking into account that, although the nucleus was traversed by fewer particles (nominally, 11 vs. 25), each particle was much more effective (by a factor of ∼3) at inducing CLs. This led to an increased yield of CLs per cell (by a factor of ∼1.4), consistent with the increased yield of total aberrations observed in the experiments.
The subjective experience of collaboration in interprofessional tutor teams: A qualitative study.
Weber, Tobias; Hoffmann, Henriette
2016-01-01
The Center for Interprofessional Training in Medicine at the Faculty of Medicine Carl Gustav Carus at the Technische Universität Dresden, Germany, has offered courses covering interprofessional material since the winter semester 2014/15. The unusual feature of these courses is that they are co-taught by peer tutors from medicine and nursing. This study investigates the subjective experiences of these tutors during the collaborative preparation and teaching of these tutorials with the aim of identifying the effects of equal participation in the perceptions and assessments of the other professional group. Semi-structured, guideline-based interviews were held with six randomly selected tutors. The interviews were analyzed using structuring content analysis. The results show that collaborative work led to reflection, mostly by the university student tutors, on the attitudes held. However, the co-tutors from each professional group were perceived to different degrees as being representative of those in their profession. Asked to master a shared assignment in a non-clinical context, the members of the different professional groups met on equal footing, even if the medical students had already gathered more teaching experience and thus mostly assumed a mentoring role over the course of working on and realizing the teaching units. The nursing tutors were primarily focused on their role as tutor. Both professional groups emphasized that prior to the collaboration they had an insufficient or no idea about the theoretical knowledge or practical skills of the other professional group. Overall, the project was rated as beneficial, and interprofessional education was endorsed. In the discussion, recommendations based on the insights are made for joint tutor training of both professional groups. According to these recommendations, harmonizing the teaching abilities of all tutors is essential to ensure equality during cooperation. 
Ideally, training programs should be attended together by medical and nursing students to emphasize their shared identity as "tutor".
Poppinga, Daniela; Delfs, Bjoern; Meyners, Jutta; Langner, Frank; Giesen, Ulrich; Harder, Dietrich; Poppe, Bjoern; Looe, Hui K
2018-05-04
This study aims at the experimental determination of the diameters and thicknesses of the active volumes of solid-state photon-beam detectors for clinical dosimetry. The 10 MeV proton microbeam of the PTB (Physikalisch-Technische Bundesanstalt, Braunschweig) was used to examine two synthetic diamond detectors, type microDiamond (PTW Freiburg, Germany), and the silicon detectors Diode E (PTW Freiburg, Germany) and Razor Diode (Iba Dosimetry, Germany). The knowledge of the dimensions of their active volumes is essential for their Monte Carlo simulation and their applications in small-field photon-beam dosimetry. The diameter of the active detector volume was determined from the detector current profile recorded by radially scanning the proton microbeam across the detector. The thickness of the active detector volume was determined from the detector's electrical current, the number of protons incident per time interval and their mean stopping power in the active volume. The mean energy of the protons entering this volume was assessed by comparing the measured and the simulated influence of the thickness of a stack of aluminum preabsorber foils on the detector signal. For all detector types investigated, the diameters measured for the active volume closely agreed with the manufacturers' data. For the silicon Diode E detector, the thickness determined for the active volume agreed with the manufacturer's data, while for the microDiamond detectors and the Razor Diode, the thicknesses measured slightly exceeded those stated by the manufacturers. The PTB microbeam facility was used to analyze the diameters and thicknesses of the active volumes of photon dosimetry detectors for the first time. A new method of determining the thickness values with an uncertainty of ±10% was applied. The results appear useful for further consolidating detailed geometrical knowledge of the solid-state detectors investigated, which are used in clinical small-field photon-beam dosimetry. 
© 2018 American Association of Physicists in Medicine.
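The thickness determination described above reduces to simple charge accounting: each proton deposits S·t of energy in the active layer and creates S·t/ε charge carriers. A sketch with hypothetical numbers (the current, proton rate and stopping power below are illustrative, not measured values from the study; ε is the silicon value, so this corresponds to the diode case):

```python
E_PAIR_SI = 3.6   # eV per electron-hole pair in silicon (diode case)
Q_E = 1.602e-19   # C, elementary charge

def active_thickness_um(current_a, protons_per_s, stopping_ev_per_um, e_pair=E_PAIR_SI):
    """Active-volume thickness from detector current, proton rate and mean stopping power:
    each proton deposits S*t eV in the active layer, creating S*t/e_pair charge carriers."""
    pairs_per_proton = current_a / protons_per_s / Q_E
    return pairs_per_proton * e_pair / stopping_ev_per_um

# All numeric inputs below are hypothetical, for illustration only
t_um = active_thickness_um(current_a=4.8e-13, protons_per_s=1000, stopping_ev_per_um=10.6e3)
print(f"{t_um:.2f} um")
```

Inverting the same relation is how the detector current, proton rate and mean stopping power yield the active thickness; the stated ±10% uncertainty then propagates mainly from the proton rate and the mean energy entering the volume.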
Kisch, Rebecca; Bergmann, Antje; Koller, Daniela; Leidl, Reiner; Mansmann, Ulrich; Mueller, Martin; Sanftenberg, Linda; Schelling, Joerg; Sundmacher, Leonie; Voigt, Karen; Grill, Eva
2018-01-01
Introduction Mobility limitations have a multitude of different negative consequences on elderly patients including decreasing opportunities for social participation, increasing the risk for morbidity and mortality. However, current healthcare has several shortcomings regarding mobility sustainment of older adults, namely a narrow focus on the underlying pathology, fragmentation of care across services and health professions and deficiencies in personalising care based on patients’ needs and experiences. A tailored healthcare strategy targeted at mobility of older adults is still missing. Objective The objective is to develop multiprofessional care pathways targeted at mobility sustainment and social participation in patients with vertigo/dizziness/balance disorders (VDB) and osteoarthritis (OA). Methods Data regarding quality of life, mobility limitation, pain, stiffness and physical function is collected in a longitudinal observational study between 2017 and 2019. General practitioners (GPs) recruit their patients with VDB or OA. Patients who visited their GP in the last quarter will be identified in the practice software based on VDB and OA-related International Classification of Diseases 10th Revision. Study material will be sent from the practice to patients by mail. Six months and 12 months after baseline, all patients will receive a mail directly from the study team containing the follow-up questionnaire. GPs fill out questionnaires regarding patient diagnostics, therapy and referrals. Ethics and dissemination The study was approved by the ethical committee of the Ludwig-Maximilians-Universität München and of the Technische Universität Dresden. Results will be published in scientific, peer-reviewed journals and at national and international conferences. Results will be disseminated via newsletters, the project website and a regional conference for representatives of local and national authorities. PMID:29680815
Electronic Structure and Morphology of Graphene Layers on SiC
NASA Astrophysics Data System (ADS)
Ohta, Taisuke
2008-03-01
Recent years have witnessed the discovery and the unique electronic properties of graphene, a sheet of carbon atoms arranged in a honeycomb lattice. The unique linear dispersion relation of charge carriers near the Fermi level (``Dirac fermions'') leads to exciting transport properties, such as an unusual quantum Hall effect, and has aroused scientific and technological interest. On the way towards graphene-based electronics, knowledge of the electronic band structure and the morphology of epitaxial graphene films on silicon carbide substrates is imperative. We have studied the evolution of the occupied band structure and the morphology of graphene layers on silicon carbide by systematically increasing the layer thickness. Using angle-resolved photoemission spectroscopy (ARPES), we examine this unique 2D system in its development from a single layer to multilayers, through characteristic changes in the π band, the highest occupied state, and in particular in the dispersion relation along the out-of-plane electron wave vector. The evolution of the film morphology is evaluated by the combination of low-energy electron microscopy and ARPES. By exploiting the sensitivity of graphene's electronic states to the charge carrier concentration, changes in the on-site Coulomb potential leading to a change of the π and π* bands can be examined using ARPES. We demonstrate that, in a graphene bilayer, the gap between the π and π* bands can be controlled by selectively adjusting the relative carrier concentrations, which suggests a possible application of the graphene bilayer for switching functions in electronic devices. This work was done in collaboration with A. Bostwick, J. L. McChesney, and E. Rotenberg at the Advanced Light Source, Lawrence Berkeley National Laboratory, K. Horn at the Fritz-Haber-Institut, K. V. Emtsev and Th. Seyller at the Lehrstuhl für Technische Physik, Universität Erlangen-Nürnberg, and F. El Gabaly and A. K. 
Schmid at National Center for Electron Microscopy, Lawrence Berkeley National Laboratory.
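The tunable bilayer gap can be illustrated with the simplest gapped-Dirac toy model, in which an on-site potential asymmetry Δ opens a gap at the band extremum. This is a deliberate simplification, not the full bilayer tight-binding band structure, and the ħv_F value is an approximate textbook number:

```python
import math

HBAR_VF = 6.6  # eV·Å, approximate ħv_F for graphene (v_F ≈ 1e6 m/s)

def band_energy(k_inv_ang, gap_ev=0.0):
    """Toy gapped-Dirac dispersion |E±(k)| = sqrt((ħ v_F k)^2 + (Δ/2)^2);
    a simplification, not the full bilayer tight-binding result."""
    return math.sqrt((HBAR_VF * k_inv_ang) ** 2 + (gap_ev / 2.0) ** 2)

sep = 2.0 * band_energy(0.0, gap_ev=0.2)  # π-π* separation at the band extremum
print(sep)  # equals the induced gap Δ at k = 0
```

For Δ = 0 the model recovers the linear Dirac cone; a nonzero Δ, set in the experiment by the relative carrier concentrations of the two layers, opens the gap that the ARPES measurements resolve.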
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, T
Purpose: Since 2008 the Physikalisch-Technische Bundesanstalt (PTB) has been offering the calibration of ¹²⁵I brachytherapy sources in terms of the reference air-kerma rate (RAKR). The primary standard is a large air-filled parallel-plate extrapolation chamber. The measurement principle is based on the fact that the air-kerma rate is proportional to the increment of ionization per increment of chamber volume at chamber depths greater than the range x₀ of secondary electrons originating from the electrode. Methods: Two methods for deriving the RAKR from the measured ionization charges are: (1) to determine the RAKR from the slope of the linear fit to the so-called 'extrapolation curve', the measured ionization charges Q vs. plate separations x, or (2) to differentiate Q(x) and to derive the RAKR by a linear extrapolation towards zero plate separation. For both methods, correcting the measured data for all known influencing effects before the evaluation method is applied is a precondition. However, the discrepancy of their results is larger than the uncertainty given for the determination of the RAKR with either method. Results: A new approach to derive the RAKR from the measurements is investigated as an alternative. The method was developed from the ground up, based on radiation transport theory. A conversion factor C(x₁, x₂) is applied to the difference of charges measured at the two plate separations x₁ and x₂. This factor is composed of quotients of three air-kerma values calculated for different plate separations in the chamber: the air kerma Ka(0) for plate separation zero, and the mean air kermas at the plate separations x₁ and x₂, respectively. The RAKR determined with method (1) yields 4.877 µGy/h, and with method (2) 4.596 µGy/h. The application of the alternative approach results in 4.810 µGy/h. Conclusion: The alternative method is to be established in the future.
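The disagreement between methods (1) and (2) can be demonstrated on synthetic data: both recover the same slope when Q(x) is exactly linear, but they diverge as soon as a small nonlinear residual is present. The quadratic perturbation below is purely illustrative, not a model of the actual chamber:

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic extrapolation curve: ideally Q(x) = k*x; the small quadratic term
# mimics a residual, depth-dependent perturbation (illustrative values only).
k, c = 5.0, 0.02
xs = [1.0, 2.0, 3.0, 4.0, 5.0]        # plate separations
qs = [k * x + c * x * x for x in xs]  # "measured" charges

# Method (1): slope of the linear fit to Q vs x
_, slope = linear_fit(xs, qs)

# Method (2): finite differences dQ/dx at interval midpoints, extrapolated to x -> 0
mids = [(x1 + x2) / 2 for x1, x2 in zip(xs, xs[1:])]
derivs = [(q2 - q1) / (x2 - x1)
          for x1, q1, x2, q2 in zip(xs, qs, xs[1:], qs[1:])]
intercept, _ = linear_fit(mids, derivs)

print(slope, intercept)  # the two methods disagree once Q(x) is not perfectly linear
```

Method (1) averages the curvature over the whole fitted range and is biased by it, while method (2) extrapolates the local derivative back to zero separation and recovers k here; a real chamber has further influencing effects, which is the motivation for the transport-theory-based conversion factor.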
Statthaler, Karina; Schwarz, Andreas; Steyrl, David; Kobler, Reinmar; Höller, Maria Katharina; Brandstetter, Julia; Hehenberger, Lea; Bigga, Marvin; Müller-Putz, Gernot
2017-12-28
In this work, we share our experiences from the world's first CYBATHLON, an event organized by the Eidgenössische Technische Hochschule Zürich (ETH Zürich), which took place in Zurich in October 2016. It is a championship for severely motor-impaired people using assistive prototype devices to compete against each other. Our team, the Graz BCI Racing Team MIRAGE91 from Graz University of Technology, participated in the discipline "Brain-Computer Interface Race". A brain-computer interface (BCI) is a device facilitating control of applications via the user's thoughts. Prominent applications include assistive technology such as wheelchairs, neuroprostheses or communication devices. In the CYBATHLON BCI Race, pilots compete in a BCI-controlled computer game. We report on setting up our team, the customization of the BCI to our pilot including long-term training, and the final BCI system. Furthermore, we describe our CYBATHLON participation and analyze our result. We found that our pilot was compliant over the whole period and that we could significantly reduce the average runtime between start and finish from initially 178 s to 143 s. After the release of the final championship specifications with a shorter track length, the average runtime converged to 120 s. We successfully participated in the qualification race at the CYBATHLON 2016, but performed notably worse than during training, with a runtime of 196 s. We speculate that shifts in the features due to nonstationarities in the electroencephalogram (EEG), but also arousal, are possible reasons for the unexpected result. Potential counteracting measures are discussed. The CYBATHLON 2016 was a great opportunity for our student team. We consolidated our theoretical knowledge and turned it into practice, allowing our pilot to play a computer game. However, further research is required to make BCI technology invariant to non-task-related changes of the EEG.
PREFACE: Brazil MRS Meeting 2014
NASA Astrophysics Data System (ADS)
2015-11-01
The annual meetings, organized by the Brazilian materials research society - B-MRS, are amongst the most important discussion forums in the area of materials science and engineering in Brazil, with growing interest from the national and international scientific community. In the last 4 years, more than 1,500 participants have attended the B-MRS meetings, promoting an auspicious environment for the presentation and discussion of scientific and technological work in the materials science area. The XIII Brazilian Materials Research Society Meeting was held from 28 September to 02 October, 2014, in João Pessoa, PB, Brazil. The Meeting brought together more than 1650 participants from the whole of Brazil and from 28 other countries. More than 2100 abstracts were accepted for presentation, distributed among 19 Symposia following the format used in traditional meetings of Materials Research Societies. These involved topics such as: synthesis of new materials, computer simulations, optical, magnetic and electronic properties, traditional materials such as clays and cements, advanced metals, carbon and graphene nanostructures, nanomaterials for nanostructures, energy storage systems, composites, surface engineering and others. A novelty was a symposium dedicated to innovation and technology transfer in materials research. The program also included 7 Plenary Lectures presented by internationally renowned researchers: Alberto Salleo from Stanford University, United States of America; Roberto Dovesi from Università degli Studi di Torino, Italy; Luís Antonio F. M. Dias Carlos from Universidade de Aveiro, Portugal; Jean Marie Dubois from Institut Jean-Lamour, France; Sir Colin Humphreys from University of Cambridge, England; Karl Leo from Technische Universität Dresden, Germany; Robert Chang from Northwestern University, Evanston, United States of America. 
The number of participants in the B-MRS meetings has been growing continuously, and at this meeting we had almost 2200 presentations distributed among plenary, invited and oral presentations and posters. From these presentations, each symposium chose the best oral and poster presentations given by students, which were recognized with the ''Bernhard Gross Award''. Some papers from these awarded works are presented in these proceedings.
NASA Astrophysics Data System (ADS)
Büermann, L.
2016-09-01
Materials used for the production of protective devices against diagnostic medical X-radiation described in the international standard IEC 61331-3 need to be specified in terms of their lead attenuation equivalent thickness according to the methods described in IEC 61331-1. In May 2014 the IEC published the second edition of these standards, which contains improved methods for the determination of attenuation ratios and the corresponding lead attenuation equivalent thicknesses of lead-reduced or lead-free materials. These methods include the measurement of scattered photons behind the protective material, which were hitherto neglected but are becoming more important because of the increasing use of lead-reduced or even lead-free materials. Such materials can offer the same protective effect but are up to 20% lighter and also easier to dispose of. The new method is based on attenuation ratios measured with the so-called ``inverse broad beam condition''. Since the corresponding measurement procedure is new and in some respects more complex than the methods used in the past, it was regarded as helpful to have a description of how such measurements can reliably be performed. This technical report describes in detail the attenuation ratio measurements and the corresponding procedures for the lead equivalent determinations of sample materials using the method with the inverse broad beam condition as carried out at the Physikalisch-Technische Bundesanstalt (PTB). PTB still offers material testing and certification for the responsible German notified body. In addition to the description of the measurements at PTB, a short technical guide is provided for testing laboratories which intend to establish this kind of protective material certification. 
The guide includes technical recommendations for the testing equipment like X-ray facilities, reference lead sheets and radiation detectors; special procedures for the determination of the lead attenuation equivalent thickness; their uncertainties and the necessary contents of the test certificate.
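In the simplest narrow-beam picture, which deliberately ignores the scattered photons that the inverse broad-beam method is designed to capture, the lead equivalent thickness follows directly from matching attenuation ratios. A sketch under that simplification; the lead attenuation coefficient is an assumed value, not a figure from the report:

```python
import math

MU_PB = 60.0  # 1/cm, assumed effective linear attenuation coefficient of Pb for this beam quality

def lead_equivalent_mm(sample_transmission, mu_pb=MU_PB):
    """Lead thickness with the same attenuation ratio as the sample, in a
    narrow-beam (Beer-Lambert) picture that ignores scattered photons."""
    return 10.0 * (-math.log(sample_transmission) / mu_pb)

d_mm = lead_equivalent_mm(0.05)  # a sample transmitting 5% of the primary beam
print(f"{d_mm:.2f} mm Pb")
```

For lead-reduced materials the scattered component behind the sample is not negligible, so the narrow-beam result above systematically misstates the protective effect; that is precisely why the second edition of the standard prescribes the broad-beam measurement instead.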
V-SUIT Model Validation Using PLSS 1.0 Test Results
NASA Technical Reports Server (NTRS)
Olthoff, Claas
2015-01-01
The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universität München since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB®-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and also dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system-level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data, making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low-fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data, with special focus on absolute values during the steady-state phases and dynamic behavior during the transition between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and those that are sufficiently close to the test results. 
Finally, lessons learned from the modelling and validation process are given in combination with implications for the future development of other PLSS models in V-SUIT.
Experimental determination of gravitomagnetic effects by means of ring lasers
NASA Astrophysics Data System (ADS)
Tartaglia, Angelo
2013-08-01
A new experiment aimed at the detection of the gravito-magnetic Lense-Thirring effect at the surface of the Earth will be presented; the name of the experiment is GINGER. The proposed technique is based on the behavior of light beams in ring lasers, also known as gyrolasers. A three-dimensional array of ring lasers will be attached to a rigid "monument"; each ring will have a different orientation in space. Within the space-time of a rotating mass the propagation of light is indeed anisotropic; part of the anisotropy is purely kinematical (Sagnac effect), part is due to the interaction between the gravito-electric field of the source and the kinematical motion of the observer (de Sitter effect), and finally there is a contribution from the gravito-magnetic component of the Earth (gravito-magnetic frame dragging, or Lense-Thirring effect). In a ring laser a light beam traveling counterclockwise is superposed on another beam traveling in the opposite sense. The anisotropy in the propagation leads to standing waves with slightly different frequencies in the two directions; the final effect is a beat frequency proportional to the size of the instrument and its effective rotation rate in space, including the gravito-magnetic drag. Current laser techniques and the performance of the best existing ring lasers allow at the moment a sensitivity within one order of magnitude of the accuracy required for the detection of gravito-magnetic effects, so the objective of GINGER is within the range of feasibility; the aim is to improve the sensitivity by a couple of orders of magnitude with respect to the present. The experiment will be underground, probably in the Gran Sasso National Laboratories in Italy, and is based on an international collaboration among four Italian groups, the Technische Universität München and the University of Canterbury in Christchurch (NZ).
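The beat frequency scales with the ring geometry and the projected rotation rate as δf = 4A/(λP) · Ω·n, which is why large rings are so sensitive. A sketch for an illustrative large HeNe ring; the geometry and latitude below roughly match a 4 m × 4 m instrument and are assumptions for illustration, not figures from the text:

```python
import math

OMEGA_EARTH = 7.2921e-5  # rad/s, Earth's rotation rate

def sagnac_beat_hz(area_m2, perimeter_m, wavelength_m, omega_projected):
    """Ring-laser beat frequency: delta_f = 4*A/(lambda*P) * Omega_projected."""
    return 4.0 * area_m2 / (wavelength_m * perimeter_m) * omega_projected

# Illustrative geometry: a 4 m x 4 m HeNe ring at ~49 deg latitude (assumed values)
lat = math.radians(49.1)
f_beat = sagnac_beat_hz(16.0, 16.0, 633e-9, OMEGA_EARTH * math.sin(lat))
print(f"{f_beat:.0f} Hz")  # a few hundred Hz from Earth rotation alone
```

The Lense-Thirring contribution is roughly nine orders of magnitude below the kinematical Sagnac signal of Earth rotation, which motivates both the underground siting and the targeted improvement in sensitivity.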
Geology of ultra-high-pressure rocks from the Dabie Shan, Eastern China
NASA Astrophysics Data System (ADS)
Schmid, Robert
2001-02-01
A multidisciplinary study has been carried out to contribute to the understanding of the geologic evolution of the largest known occurrence of ultra-high-pressure (UHP) rocks on Earth, the Dabie Shan of eastern China. Geophysical data, collected along a ca. 20 km E-W trending seismic line in the eastern Dabie Shan, indicate that the crust comprises three layers. The upper crust has a homogeneously low reflectivity and exhibits roughly subhorizontal reflectors down to ca. 15 km. It is therefore interpreted to portray a crustal UHP slab thrust over non-UHP crust. An abrupt change in intensity and geometry of the observed reflectors marks the boundary of a mid- to lower-crustal zone which extends down to ca. 33 km. This crustal zone likely represents cratonal Yangtze crust that was unaffected by the Triassic UHP event and which acted as the footwall during exhumation of the crustal wedge. Strong and continuous reflectors occurring at ca. 33-40 km depth most likely trace the Moho at the base of the crust. Any trace of a crustal root that may have formed in response to collision tectonics is therefore not preserved. A shallow tomographic velocity model based on inversion of the first arrivals was additionally constructed. This model clearly images the distinct lithologies on both sides of the Tan Lu fault. Sediments to the east exhibit velocities of about 3.4-5.0 km s^-1, whereas the gneisses have 5.2-6.0 km s^-1. The geometry of the velocity isolines may trace the structures present in the rocks. Thus the sediments dip shallowly towards the fault, whereas isoclinal folds are imaged in the gneisses. Field data from the UHP unit of the Dabie Shan enable definition of basement-cover sequences that represent sections of the former passive margin of the Yangtze craton. One of the cover sequences, the Changpu unit, still displays a stratigraphic contact with basement gneisses, while the other, the Ganghe unit, has no corresponding basement exposure. 
The latter unit is in tectonic contact with the basement of the former via a greenschist-facies blastomylonite. The Changpu unit consists chiefly of calc-arenitic metasediments intercalated with meta-basalts, whereas the Ganghe unit contains arenitic-volcanoclastic metasediments that are likewise associated with meta-basalts. The basement comprises a variety of felsic gneisses, ranging from preserved eclogite- to greenschist-facies parageneses, and locally contains mafic-ultramafic meta-plutons in addition to minor basaltic rocks. Metabasites of all lithologies are eclogite-facies rocks or their retrogressed equivalents, which, with the exception of those from the Ganghe unit, bear coesite and thus testify to a UHP metamorphic overprint. The mineral chemistry of the analysed samples reveals large compositional variations among the main minerals, i.e. garnet and omphacite, indicating either distinct protoliths or different degrees of interaction with their host rocks. Contents of ferric iron in omphacites with low total Fe were determined by wet-chemical titration and found to be rather high, i.e. 30-40%. However, an even more conservative estimate of 50% is applied in the corresponding calculations, in order to be comparable with previous studies. Textural constraints and compositional zonation patterns are compatible with equilibrium conditions during peak metamorphism followed by a retrogressive overprint. P-T data are calculated with special focus on the application of the garnet-omphacite-phengite barometer, combined with Fe-Mg exchange thermometers. Maximum pressures range from 42-48 kbar (for the Changpu unit) to ~37 kbar (for the Ganghe unit and basement rocks). Temperatures during the eclogite-facies metamorphism reached ca. 750 °C. Although the sample suite reveals variable peak pressures, the temperatures are in reasonable agreement. 
Pressure differences are interpreted to be due to strongly Ca-dominated garnet (up to 50 mol % grossular in the Changpu unit) and modification of peak-compositions during retrogressive metamorphism. The integrated geological data presented in this thesis allow it to be concluded that, i) basement and cover rocks are present in the Dabie Shan and both experienced UHP conditions ii) the Dabie Shan is the metamorphic equivalent of the former passive margin of the Yangtze craton iii) felsic gneisses undergoing UHP metamorphism are affected by volume changes due to phase transitions (qtz coe), which directly influence the tectono-metamorphic processes iv) initial differences in temperature may account for the general lack of lower crustal rocks in UHP-facies Um das Verständnis der geologischen Entwicklung des größten bekannten Vorkommens von ultra-hochdruck (UHP) Gesteinen auf der Erde, des Dabie Shan im östlichen China, zu erhöhen, wurde eine multidisziplinäre Studie durchgeführt. Geophysikalische Daten wurden entlang einer ca. 20 km langen seismischen Linie im östlichen Dabie Shan gesammelt. Diese reflektionsseismischen Daten zeigen, dass die Kruste aus drei Lagen besteht. Die Oberkruste besitzt eine durchgehend niedrige Reflektivität und meist subhorizontale Reflektoren bis in eine Tiefe von ca. 15 km. Aufgrund dieser Charakteristika wird diese Zone als UHP-bezogener krustaler Keil interpretiert, der auf nicht UHP Kruste überschoben wurde. Ein abrupter Wechsel in der Geometrie aber auch Intensität der Reflektoren markiert die Grenze zu einer mittel- bis unterkrustalen Zone, die sich bis ca. 33 km Tiefe erstreckt. Diese Zone repräsentiert wahrscheinlich kratonale Yangtze Kruste, die von der triassischen UHP-Orogenese nicht erfasst wurde, aber während der Exhumierung das Liegende relativ zum UHP Keil war. Starke und kontinuierliche Reflektoren im Tiefenintervall von 33-40 km bilden höchstwahrscheinlich die Moho an der Basis der Kruste ab. 
Relikte einer Krustenwurzel, die sich wahrscheinlich während der Kollisionstektonik gebildet hatte, sind nicht sichtbar. Ein flaches tomographisches Geschwindigkeitsmodell, das auf der Inversion der Ersteinsätze gründet, konnte zusätzlich erstellt werden. Dieses Modell bildet deutlich die unterschiedlichen Lithologien auf beiden Seiten der Tan Lu Störung ab. Sedimente östlich der Störung zeigen Geschwindigkeiten von 3.4 - 5.0 km* s^-1, wohingegen die Gneise im Westen 5.2 - 6.0 km*s^-1 aufweisen. Die Geometrie der Geschwindigkeits-Isolinien kann als Ausdruck der Strukturen der Gesteine angenommen werden. Somit zeigen die Sedimente ein nordwestliches Einfallen zur Störung hin, wohingegen isoklinale Falten in den Gneisen abgebildet werden. Geländedaten aus der UHP Einheit des Dabie Shan ermöglichen die Definition von Grundgebirgs- und Deckeinheiten, die Teile des ehemaligen passiven Kontinentalrandes des Yangtze Kratons repräsentieren. Eine der Deckeinheiten, die Changpu Einheit, besitzt nach wie vor einen stratigraphischen Kontakt zu den Grundgebirgs-Gneisen. Der anderen Einheit hingegen, der Ganghe Einheit, fehlt ein entsprechendes Grundgebirge. Diese Einheit steht vielmehr über einen Blasto-Mylonit in tektonischem Kontakt zum Grundgebirge der vorherigen. Die Changpu Einheit baut sich aus kalk-arenitischen Metasedimenten auf, die mit Metabasalten assoziiert sind. Die Ganghe Einheit wird von arenitisch-vulkanoklastischen Metasedimenten, die ebenfalls mit metabasaltischen Gesteinen vergesellschaftet sind, dominiert. Das Grundgebirge baut sich aus diversen felsischen Gneisen auf, die von reliktisch eklogitfaziell bis grünschieferfaziell ausgeprägt sind, und in denen, zusätzlich zu Metabasalten, sporadisch mafisch-ultramafische Meta-Plutone auftreten. Mit Ausnahme der Ganghe Einheit, führen die Metabasite Coesit und belegen somit das UHP Ereignis. 
Die Mineralchemie der analysierten Proben dokumentiert deutliche Variationen in der Zusammensetzung der Hauptminerale, Granat und Omphazit, was entweder unterschiedliche Protolithe oder unterschiedliche Grade von Stoffaustausch mit den Wirtsgesteinen reflektiert. Gehalte von dreiwertigem Eisen in Omphaziten mit geringen Gesamteisengehalten, wurden mittels Titration bestimmt, wobei sich Werte von 30-40 % ergaben. Dennoch wurde ein noch konservativerer Wert von 50% dreiwertigem Eisen in den entsprechenden Berechnungen angenommen, hauptsächlich, um mit anderen Arbeiten vergleichbar zu sein. Texturen und chemische Zonierungen in den Mineralen sind kompatibel mit Gleichgewichtsbedingungen während dem Höhepunkt der Metamorphose, der retrograd überprägt wird. P-T Daten wurden mit deutlicher Betonung auf das Granat-Omphazit-Phengit Barometer, das mit Fe-Mg Austausch-Thermometern kombiniert wurde, berechnet. Höchstdrucke reichen von 42-48 kbar (für die Changpu Einheit) bis ca. 37 kbar (für das Grundgebirge und die Ganghe Einheit). Während der eklogitfaziellen Metamorphose wurden Temperaturen von ca. 750 °C erreicht. Obwohl die maximalen Drucke deutlich schwanken, sind die Temperaturbestimmungen in guter Übereinstimmung. Die Druckschwankungen können zum einen durch deutlich Ca-dominierte Granate (bis zu 50 mol% Grossular in der Changpu Einheit) und/oder zum anderen durch Modifikationen der Mineralzusammensetzungen während der retrograden Metamorphose erklärt werden. 
The integrated geological data presented allow the following conclusions: (i) basement and cover units occur in the Dabie Shan, and both experienced UHP metamorphism; (ii) the Dabie Shan is the metamorphic equivalent of the former passive continental margin of the Yangtze craton; (iii) felsic gneisses undergoing UHP metamorphism are affected by volume changes caused by large-scale phase transformations (quartz → coesite), which directly influence the tectono-metamorphic processes; (iv) initial differences in temperature may be responsible for the general absence of lower-crustal rocks in UHP facies.
Turgeon, Y; Whitaker, H A
2000-01-01
The nineteenth century witnessed many advances in neuroscientific concepts. Notable among them are Charles Bell's (1774-1842) and François Magendie's (1783-1855) identification of sensory and motor pathways, Thomas Henry Huxley's (1825-1895) elaboration of evolutionary theory in the context of comparative neuroanatomy, and Emil du Bois-Reymond's (1818-1896) and Hermann von Helmholtz's (1821-1894) work in experimental neurophysiology and on the concept of nervous energy. In Germany, the idea that the nervous system consisted of two elements, one that generated nervous energy and another that conducted it throughout the body, had wide currency in the mid-nineteenth century. In France, Pierre Jean Georges Cabanis (1757-1808), physician, philosopher, and one of the founders of modern psychophysiology, argued that the brain is the part of the body in which electricity is stored. In his Rapports du Physique et du Moral de l'Homme, published between 1796 and 1802 (translated into German under the title Verhältnis der Seele zum Körper (1808)), Cabanis proposed new ideas on brain function, on the brain's own sensibility, on the concept of will, and on the chemical basis of nervous activity. In the Rapports, Cabanis proposed a theory of how brain and nerves relate to thought and behavior. Foreshadowing later developments in neuropsychology, he suggested that different parts of the nervous system have separate functions. Although Cabanis had many interesting ideas about brain function, he has been largely ignored by historians of neuroscience; e.g., he is mentioned briefly in Clark and Jacyna (1989), in only two footnotes in Neuburger (1897/1981), and not at all in Finger (1994). Cabanis's far-reaching theory of how the brain works helped shape understanding of the general notion of nervous energy in nineteenth-century European neuroscience.
Biology instead of Philosophy? Evolutionary Explanations of Culture and Their Limits
NASA Astrophysics Data System (ADS)
Illies, Christian
More than seventy years ago, in a cave near Hohlenstein-Stadel in present-day Baden-Württemberg, a female figure was found that could not be assigned to any known species, nor even unambiguously to the hominids. Because of her appearance she soon became known as the "lion woman" (she is now referred to as the "lion human", since the figure lacks the genitals that would settle such questions, and in times of gender mainstreaming such determinations are preferably avoided), for she had a human, upright, hairless body with female curves, yet also the mane, eyes, ears and muzzle of a lion. A very distant relative of the Minotaur, so it seemed, and yet far older than any inhabitant of Olympus, for the figurine, almost 30 cm tall, was probably carved from mammoth ivory in the Old Stone Age, about 32,000 years ago. We do not know whether it served cultic purposes or whether a child played with it, whether it was revered and feared as a talisman for the hunt or as a shaman wearing a lion mask. But the lion woman suggests that even at the dawn of culture, humans must have reflected on their own closeness to, and distance from, the animal. The question of human self-location confronts us in this figure, and it shapes many testimonies of human reflection that the study of antiquity presents to us. In the concept of the "animal rationale", coined with reference to Aristotle, it finally found its classic answer, long authoritative for the Occident: the human being as the animal whose specific characteristic is the endowment with reason, which at once sets it apart from all other animals and places it above them. But where exactly does the boundary run? And how can the human being be both at once?
The Aristotelian definition does not answer these questions about our double nature; rather, it elevates the open riddle, as it were, to the essential definition of the human being.
Hilp, M; Senjuk, S
2001-06-01
USP 1995 (The United States Pharmacopeia, 23rd ed. (1995), potassium iodide p. 1265, sodium iodide p. 1424), PH. EUR. 1997 (European Pharmacopoeia, 3rd ed., Council of Europe, Strasbourg (1997), potassium iodide p. 1367, sodium iodide p. 1493) and JAP 1996 (The Japanese Pharmacopoeia, 13th ed. (1996), potassium iodide p. 578, sodium iodide p. 630) determine iodide with the ICl method (J. Am. Chem. Soc. 25 (1903) 756-761; Z. Anorg. Chem. 36 (1903) 76-83; Fresenius Z. Anal. Chem. 106 (1936) 12-23; Arzneibuch-Kommentar, Wissenschaftliche Erläuterungen zum Europäischen Arzneibuch, Wissenschaftliche Verlagsgesellschaft mbH, Stuttgart, Govi-Verlag - Pharmazeutischer Verlag GmbH, Eschborn, 12th suppl. (1999), K10 p. 2), using chloroform, which is toxic and hazardous to the environment. Without the application of chlorinated hydrocarbons, USP 2000 (The United States Pharmacopeia, 24th ed. (2000), potassium iodide p. 1368, sodium iodide p. 1535) and Brit 1999 (British Pharmacopoeia, London (1999), Appendix VIII C, p. A162) titrate iodide with the redox indicator amaranth. A titration with potentiometric indication giving two end-points, at the steps of I(2) and [ICl(2)](-), is described. Because of the high concentration of hydrochloric acid required for the ICl method, the determination with DBH (1,3-dibromo-5,5-dimethylhydantoin; 1,3-dibromo-5,5-dimethyl-2,4-imidazolidinedione) can be recommended and is performed easily. Similarly, the iodide content of gallamine triethiodide may be analyzed with DBH by a visual two-phase titration in water and ethyl acetate, or with potentiometric indication in a mixture of 2-propanol and water. During the removal of the excess of DBH, 4-bromo-triethylgallamine (2,2',2"-[1-bromo-benzene-2,3,4-triyltris(oxy)]N,N,N-triethylethanium) is formed.
NASA Astrophysics Data System (ADS)
Unzicker, Alexander
Loud applause arose in Salon III/IV of the Marriott Hotel in Crystal City, in the US state of Virginia. In the overcrowded conference room everyone stared, spellbound, at the screen, which showed nothing more than a sober diagram of numerous data points and a sweeping curve. Only a peculiar group of people could be moved to such emotion by it: physicists at the annual meeting of the Astronomical Society, whose storm of enthusiasm continued for minutes. What had happened? The data plotted in the diagram confirmed, with unprecedented accuracy, a fundamental law of nature governing the thermal radiation of hot bodies. Discovered by Max Planck in 1900, it now shone forth in almost mathematical purity. Even more sensational was the origin of the data: microwave signals of various frequencies that came not from a terrestrial laboratory but from a hot primordial state of the universe! A fireball of hydrogen and helium, still without any of the structures that would one day make life possible, had released its light back then. For more than ten billion years it travelled before reaching the detectors of the man-made satellite COBE, which had transmitted the data a few days earlier. Whenever I let all this run through my mind like a film, I get goosebumps, as if I could actually feel the now extremely cooled radiation. Its uniform distribution in space also makes clear that we must not imagine ourselves to live at a special place in the universe; intelligent aliens could have evolved anywhere since then! Should they, improbable as it is, really look over our shoulders from time to time, then on that afternoon of 13 January 1990, when the talk was given, they would surely have nodded their large heads in approval.
Modelling the Influence of Land Use on Flood Generation at the Meso-Scale
NASA Astrophysics Data System (ADS)
Niehoff, Daniel
2001-10-01
Since 1990, several of the large European river basins have been affected repeatedly by extreme floods. As both the landscape and the river systems in large parts of Central Europe have undergone major changes in the past, the search for the causes of this accumulation of extreme events has also addressed the impact of human activities on flooding. River training, surface sealing, intensive agricultural land use, consolidation of farmland, and damage to forests are only some examples and consequences of anthropogenic interference with the landscape.
But due to the diversity of the processes and factors involved, so far it can only be estimated how far these interferences have changed the flood situation. The main target of this thesis is therefore to describe systematically in which way, to what extent, and under which circumstances land use influences storm-runoff generation and, subsequently, river discharge. This is investigated by means of exemplary model applications at the hydrological meso-scale. For this task, the deterministic and distributed hydrological model wasim-eth was chosen for its well-balanced mixture of physically based and conceptual approaches. In the framework of this thesis, the model has been extended to cope with several phenomena that are important when characterizing the influence of land use on flood generation: (1) Preferential flow in macropores is treated by a division of the soil into macropores and a soil matrix. This so-called double-porosity approach allows for fast infiltration and percolation beyond the hydraulic conductivity of the soil matrix. (2) Siltation expresses itself within the model as a deterioration of infiltration conditions at the soil surface, depending on precipitation intensity and the degree of vegetation cover. (3) The heterogeneous appearance of built-up areas, consisting of both sealed and pervious surfaces, is taken into account by dividing each partial area into an unsealed part and a sealed part connected to the sewer system. (4) Decentralized storage can be simulated for natural depressions as well as for specific infiltration measures with defined infiltration conditions. The extended model is exemplarily applied to three meso-scale tributaries of the Rhine. These three catchments, with areas between 100 and 500 km², were chosen with regard to their prevailing land use: one heavily urbanized, one dominated by agricultural use, and one mainly forested.
For these three catchments, spatially explicit land-use and land-cover scenarios were developed, and their impact on storm-runoff generation is simulated using the extended hydrological model. In this context, urbanization, infiltration measures in settlement areas, conversion of farmland to set-aside areas, altered agricultural management practices, afforestation, and storm damage in forests are taken into account. These changes influence the interception of rainfall, its infiltration into the soil, the subsurface flow processes near the soil surface and, for example in the case of sewer systems, also runoff concentration. The hydrological simulations demonstrate that sealing of the soil surface is the most intensive of these interventions in the natural conditions and therefore results in the strongest (negative) changes of the flooding situation in a catchment. In addition, the simulations show that a simple alteration of the interception capacity does not yield significant changes in catchment response, because the storage capacity of vegetation surfaces is rather low compared with the volume of storm events that normally lead to significant floods. More pronounced changes arise from modifications of the infiltration conditions. The limits of the methodology chosen for this thesis become obvious when simulating altered agricultural management practices: due to the complexity of the processes involved, their mathematical description and parameterization are difficult and therefore afflicted with high uncertainty. Moreover, the modelling results show that sweeping statements on the influence of land use on flood generation are untenable because of the paramount importance of the climatic and physiographic boundary conditions. Climatic boundary conditions comprise precipitation intensity and duration as well as the moisture conditions before a storm event.
The physiographic boundary conditions are given by the geomorphological and geological catchment properties. Furthermore, with increasing scale there is a shift in the relative importance of the different types of rainfall as well as of the different geophysical catchment properties. The spatial and temporal scale for which the results are valid therefore has to be clearly defined. This is consistently taken into account within this thesis, in contrast to many other studies on this topic. Depending on boundary conditions and spatial scale, the findings allow the following statements regarding the influence of land-use changes on storm-runoff generation: (1) For intensive convective storm events with generally low antecedent soil moisture, the influence of land use is greater than for long-lasting advective storm events with low rainfall intensities, because in the first case changes in the infiltration conditions matter more than at low precipitation intensities. (2) In small catchments, where small-scale convective cells can cause a flood, the influence of land use is accordingly greater than in large river basins like the Rhine basin, where long-lasting advective rainfall (possibly combined with snowmelt) is relevant. (3) In areas with good storage properties, such as thick, permeable soils over pervious rock, the influence of land use is greater than in areas with thin soils and only slightly permeable bedrock, because when infiltration conditions deteriorate, more storage space for precipitation is lost in areas with good storage properties than elsewhere. See also: http://opus.kobv.de/ubp/volltexte/2005/398/
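Extension (3), the split of each built-up cell into a sealed part draining to the sewer and an unsealed part that infiltrates, can be sketched as follows. This is an illustrative simplification, not code from wasim-eth, and all names and numbers are assumed:

```python
def partition_rainfall(rain_mm, seal_frac, infiltration_capacity_mm):
    """Split rainfall on a partially sealed cell (illustrative only).

    The sealed fraction routes its rain directly to the sewer system;
    the unsealed fraction infiltrates up to its capacity, with the
    excess becoming surface runoff.
    """
    sewer = rain_mm * seal_frac                    # sealed part -> sewer
    on_soil = rain_mm * (1.0 - seal_frac)          # unsealed part
    infiltration = min(on_soil, infiltration_capacity_mm)
    surface_runoff = on_soil - infiltration
    return sewer, infiltration, surface_runoff

# Example: 20 mm of rain on a cell that is 40 % sealed, with 10 mm
# of infiltration capacity during the event:
print(partition_rainfall(20.0, 0.4, 10.0))  # (8.0, 10.0, 2.0)
```

The split conserves mass by construction: sewer flow, infiltration, and surface runoff always sum to the incoming rainfall.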
Raab, Jennifer; Haupt, Florian; Scholz, Marlon; Matzke, Claudia; Warncke, Katharina; Lange, Karin; Assfalg, Robin; Weininger, Katharina; Wittich, Susanne; Löbner, Stephanie; Beyerlein, Andreas; Nennstiel-Ratzel, Uta; Lang, Martin; Laub, Otto; Dunstheimer, Desiree; Bonifacio, Ezio; Achenbach, Peter; Winkler, Christiane; Ziegler, Anette-G
2016-01-01
Introduction: Type 1 diabetes can be diagnosed at an early presymptomatic stage by the detection of islet autoantibodies. The Fr1da study aims to assess whether early staging of type 1 diabetes (1) is feasible at a population-based level, (2) prevents severe metabolic decompensation observed at the clinical manifestation of type 1 diabetes, and (3) reduces psychological distress through preventive teaching and care. Methods and analysis: Children aged 2-5 years in Bavaria, Germany, will be tested for the presence of multiple islet autoantibodies. Between February 2015 and December 2016, 100,000 children will be screened by primary care paediatricians. Islet autoantibodies are measured in capillary blood samples using a multiplex three-screen ELISA. Samples with ELISA results >97.5th centile are retested using reference radiobinding assays. A venous blood sample is also obtained to confirm the autoantibody status of children with at least two autoantibodies. Children with confirmed multiple islet autoantibodies are diagnosed with pre-type 1 diabetes. These children and their parents are invited to participate in an education and counselling programme at a local diabetes centre. Depression and anxiety, and the burden of early diagnosis are also assessed. Results: Of the 1027 Bavarian paediatricians, 39.3% are participating in the study. Overall, 26,760 children were screened between February 2015 and November 2015. Capillary blood collection was sufficient in volume for islet autoantibody detection in 99.46% of the children; the remaining 0.54% had insufficient blood volume collected. Of the 26,760 capillary samples tested, 0.39% were positive for at least two islet autoantibodies. Discussion: Staging for early type 1 diabetes within a public health setting appears to be feasible. The study may set new standards for the early diagnosis of type 1 diabetes and education.
Ethics and dissemination: The study was approved by the ethics committee of Technische Universität München (Nr. 70/14). PMID:27194320
SU-E-T-195: Commissioning the Neutron Production of a Varian TrueBeam Linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irazola, L; Brualla, L; Rosello, J
2015-06-15
Purpose: The purpose of this work is the characterization of a new Varian TrueBeam™ facility in terms of neutron production, in order to estimate neutron equivalent dose in organs during radiotherapy treatments. Methods: The existing methodology [1] was used with the reference SRAMnd detector, calibrated in terms of thermal neutron fluence at the reference field operated by PTB (Physikalisch-Technische Bundesanstalt) at the GeNF (Geesthacht Neutron Facility) with the GKSS reactor FRG-1 [2]. Thermal neutron fluence was evaluated for the five available modes: 15 MV, and 10 and 6 MV with and without flattening filter (FF and FFF, respectively). Irradiation conditions are as described in [3]. In addition, three different collimator-MLC configurations were studied for 15 MV: (a) collimator of 10×10 cm² and MLC fully retracted (reference), (b) field sizes of 20×20 cm² and 10×10 cm² for collimator and MLC, respectively, and (c) collimator and MLC aperture of 10×10 cm². Results: The thermal fluence rate at the "reference point" [3], as a consequence of the neutron production, obtained for configuration (a) at 15 MV is (1.45±0.11)×10⁴ n·cm⁻²/MU. Configurations (b) and (c) gave fluences of 96.6% and 97.8% of the reference (a). Neutron production falls to 8.6% and 5.7% of the reference for the 10 MV FF and FFF beams, respectively, and to 2.8% and 0.1% for the 6 MV FF and FFF modes, respectively. Conclusion: This work evaluates thermal neutron production of the Varian TrueBeam™ system for organ equivalent dose estimation. The small difference between collimator-MLC configurations shows the universality of the methodology [3]. Neutron production decreases when lowering the energy from 15 to 10 MV and is almost negligible at 6 MV. Moreover, a lower neutron contribution is observed for the FFF modes. [1] Phys Med Biol, 2012;57:6167-6191. [2] Radiat Meas, 2010;45:1513-1517. [3] Med Phys, 2015;42:276-281.
NASA Astrophysics Data System (ADS)
Priruenrom, T.; Sabuga, W.; Konczak, T.
2013-01-01
The bilateral supplementary comparison APMP.M.P-S4 on pressure measurements in the range (60 to 350) kPa of gauge pressure in gas media was organized by the National Institute of Metrology of Thailand, NIMT, as the pilot laboratory, in comparison with the Physikalisch-Technische Bundesanstalt of Germany, PTB. The objective of this comparison was to check the equivalence of the gas pressure standards of NIMT and PTB. The measurement period covered November to December 2012. NIMT provided a transfer standard, a WC-WC piston-cylinder assembly (PCA) with a nominal effective area of 10 cm² manufactured by Fluke Corporation, DHI. The measurements were performed at pressures of (60, 100, 150, 200, 250, 300 and 350) kPa. The NIMT laboratory standard was a pressure balance with a 10 cm² PCA manufactured by DHI, serial number 0693. The PTB laboratory standard was a pressure balance with a 10 cm² PCA manufactured by Desgranges et Huot (DH), serial number 288. The results of this comparison show that the relative difference of the effective area values obtained by NIMT and PTB is not larger than 4.3 ppm, corresponding to En = 0.26. This confirms that the gas pressure standards maintained by the two institutes in the pressure range (60 to 350) kPa in gauge mode are equivalent within their claimed uncertainties. The result of this comparison supports the calibration and measurement capabilities (CMC) of NIMT in this pressure range.
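The quoted En = 0.26 follows the usual normalised-error criterion: the difference between the two results divided by the root-sum-square of their expanded (k = 2) uncertainties, with |En| ≤ 1 indicating equivalence. A minimal sketch (the uncertainty figures below are invented for illustration; the institutes' actual budgets are not given in the abstract):

```python
import math

def en_number(diff, u_exp_a, u_exp_b):
    """Normalised error E_n: difference between two results divided by
    the root-sum-square of their expanded (k = 2) uncertainties."""
    return abs(diff) / math.sqrt(u_exp_a ** 2 + u_exp_b ** 2)

# Hypothetical example in relative terms (ppm of effective area):
# a 3 ppm difference with expanded uncertainties of 3 ppm and 4 ppm
print(en_number(3.0, 3.0, 4.0))  # 0.6 -> |E_n| <= 1, results equivalent
```

Applied in reverse, the reported pair (difference ≤ 4.3 ppm, En = 0.26) implies a combined expanded uncertainty of roughly 17 ppm for the two effective-area determinations.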
Giacco, Domenico; Bird, Victoria Jane; McCrone, Paul; Lorant, Vincent; Nicaise, Pablo; Pfennig, Andrea; Bauer, Michael; Ruggeri, Mirella; Lasalvia, Antonio; Moskalewicz, Jacek; Welbel, Marta; Priebe, Stefan
2015-11-25
Mental healthcare organisation can pursue either specialisation, that is, distinct clinicians and teams for inpatient and outpatient care, or personal continuity of care, that is, the same primary clinician for a patient across the two settings. Little systematic research has compared these approaches, and existing studies on the subject have serious methodological shortcomings. Yet costly reorganisations of services have been carried out in different European countries, inconsistently aiming at either specialisation or personal continuity of care. More reliable evidence is required on whether specialisation or continuity of care is more effective and cost-effective, and whether this varies for different patient groups and contexts. In a natural experiment, we aim to recruit at least 6000 patients consecutively admitted to inpatient psychiatric care in Belgium, Germany, Italy, Poland, and the UK. In each country, care approaches supporting specialisation and personal continuity coexist. Patients will be followed up at 1 year to compare outcomes, costs and experiences. Inclusion criteria are: 18 years of age or older; clinical diagnosis of psychosis, affective disorder or anxiety/somatisation disorder; sufficient command of the language of the host country; absence of cognitive deterioration and/or organic brain disorders; and capacity to provide informed consent. Ethical approval was obtained in all countries: (1) England: NRES Committee North East-Newcastle & North Tyneside (ref: 14/NE/1017); (2) Belgium: Comité d'Ethique hospitalo-facultaire des Cliniques St-Luc; (3) Germany: Ethical Board, Technische Universität Dresden; (4) Italy: Comitati Etici per la sperimentazione clinica (CESC) delle provincie di Verona, Rovigo, Vicenza, Treviso, Padova; (5) Poland: Komisja Bioetyczna przy Instytucie Psychiatrii i Neurologii w Warszawie. We will disseminate the findings through scientific publications and a study-specific website.
At the end of the study, we will develop recommendations for policy decision-making, and organise national and international workshops with stakeholders. ISRCTN registry: ISRCTN40256812.
Absolute Calibration of Si iRMs used for Measurements of Si Paleo-nutrient proxies
NASA Astrophysics Data System (ADS)
Vocke, R. D., Jr.; Rabb, S. A.
2016-12-01
Silicon isotope variations (reported as δ30Si and δ29Si, relative to NBS28) in silicic acid dissolved in ocean waters, in biogenic silica and in diatoms are extremely informative paleo-nutrient proxies. The resolution and comparability of such measurements depend on the quality of the isotopic reference materials (iRMs) defining the delta scale. We report new absolute Si isotopic measurements on the iRMs NBS28 (RM 8546 - Silica Sand), Diatomite, and Big Batch using the Avogadro measurement approach, and compare them with prior assessments of these iRMs. The Avogadro Si measurement technique was developed by the German Physikalisch-Technische Bundesanstalt (PTB) to provide a precise and highly accurate method to measure absolute isotope ratios in highly enriched 28Si (99.996%) material. These measurements are part of an international effort to redefine the kg and the mole based on the Planck constant h and the Avogadro constant NA, respectively (Vocke et al., 2014 Metrologia 51, 361; Azuma et al., 2015 Metrologia 52, 360). This approach produces absolute Si isotope ratio data with lower levels of uncertainty than the traditional "Atomic Weights" method of absolute isotope ratio measurement calibration. This is illustrated in Fig. 1, where absolute Si isotopic measurements on SRM 990, separated by more than 40 years of advances in instrumentation, are compared. The availability of this new technique does not mean that absolute Si isotope ratios are, or ever will be, preferable for routine Si isotopic measurements seeking isotopic variations in nature; they are not. However, by determining the absolute isotope ratios of all the Si iRM scale artifacts, such iRMs become traceable to the metric system (SI), thereby automatically conferring on all artifact-based δ30Si and δ29Si measurements traceability to the base SI unit, the mole.
Such traceability should help reduce the potential of bias between different iRMs and facilitate the replacement of delta-scale artefacts when they run out. Fig. 1 Comparison of absolute isotopic measurements of SRM 990 using two radically different approaches to absolute calibration and mass bias corrections.
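The δ notation used throughout is a simple relative deviation from the scale-defining reference material, reported in per mil. A minimal sketch of the conversion, with illustrative placeholder ratio values rather than measured data:

```python
# Convert an absolute isotope ratio to delta notation (per mil)
# relative to the scale-defining reference material NBS28.
# The numeric ratios below are illustrative placeholders, not measured data.

def delta_permil(r_sample, r_reference):
    """delta = (R_sample / R_reference - 1) * 1000, in per mil."""
    return (r_sample / r_reference - 1.0) * 1000.0

R_NBS28 = 0.0334725    # hypothetical absolute 30Si/28Si ratio of NBS28
R_sample = 0.0334692   # hypothetical absolute 30Si/28Si ratio of a diatom sample

d30Si = delta_permil(R_sample, R_NBS28)
print(f"delta30Si = {d30Si:+.3f} per mil")
```

Anchoring the reference ratio (here the hypothetical NBS28 value) to an absolute, SI-traceable measurement is exactly what makes every delta value derived from it traceable as well.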
Hydrogenation of benzaldehyde via electrocatalysis and thermal catalysis on carbon-supported metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Yang; Sanyal, Udishnu; Pangotra, Dhananjai
Abstract: Selective reduction of benzaldehyde to benzyl alcohol on C-supported Pt, Rh, Pd, and Ni in the aqueous phase was conducted using either H2 directly (thermal catalytic hydrogenation, TCH) or in situ electrocatalytically generated hydrogen (electrocatalytic hydrogenation, ECH). In TCH, the intrinsic activity of the metals at room temperature and 1 bar H2 increased in the sequence Rh/C < Pt/C < Pd/C, while Ni/C was inactive at these conditions due to surface oxidation in the absence of a cathodic potential. The reaction follows a Langmuir-Hinshelwood mechanism, with the second hydrogen addition to the adsorbed hydrocarbon being the rate-determining step. All tested metals were active in ECH of benzaldehyde, although hydrogenation competes with the hydrogen evolution reaction (HER). The minimum cathodic potentials needed to obtain appreciable ECH rates were identical to the onset potentials of HER. Above this onset, the relative rates of H recombining to H2 and of H addition to the hydrocarbon determine the selectivity between ECH and HER. Accordingly, the selectivity of the metals towards ECH increases in the order Ni/C < Pt/C < Rh/C < Pd/C. Pd/C shows exceptionally high ECH selectivity due to its surprisingly low HER reactivity under the reaction conditions. Acknowledgements: The authors would like to thank the groups of Hubert A. Gasteiger at the Technische Universität München and of Jorge Gascon at the Delft University of Technology for advice and valuable discussions. The authors are grateful to Nirala Singh, Erika Ember, Gary Haller, and Philipp Rheinländer for fruitful discussions. We are also grateful to Marianne Hanzlik for TEM measurements and to Xaver Hecht and Martin Neukamm for technical support. Y.S. would like to thank the Chinese Scholarship Council for financial support.
The research described in this paper is part of the Chemical Transformation Initiative at Pacific Northwest National Laboratory (PNNL), conducted under the Laboratory Directed Research and Development Program at PNNL, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy.
AMISS - Active and passive MIcrowaves for Security and Subsurface imaging
NASA Astrophysics Data System (ADS)
Soldovieri, Francesco; Slob, Evert; Turk, Ahmet Serdar; Crocco, Lorenzo; Catapano, Ilaria; Di Matteo, Francesca
2013-04-01
The FP7-IRSES project AMISS (Active and passive MIcrowaves for Security and Subsurface imaging) is built on a well-integrated network of research institutions from EU, Associated and Third Countries (National Research Council of Italy, Italy; Technische Universiteit Delft, The Netherlands; Yildiz Technical University, Turkey; Bauman Moscow State Technical University, Russia; Usikov Institute for Radio-physics and Electronics and State Research Centre of Superconductive Radioelectronics "Iceberg", Ukraine; and University of Sao Paulo, Brazil), with the aim of achieving scientific advances in microwave and millimeter-wave imaging systems and techniques for societal issues of security and safety. In particular, the partners involved are leaders in the scientific areas of passive and active imaging and are sharing their complementary knowledge to address two main research lines. The first concerns the design, characterization and performance evaluation of new passive and active microwave devices, sensors and measurement set-ups able to mitigate clutter and increase information content. The second addresses the requirements for making state-of-the-art processing tools compliant with the instrumentation developed in the first line, suitable for electromagnetically complex scenarios, and able to exploit the unexplored possibilities offered by the new instrumentation.
The main goals of the project are: 1) development/improvement and characterization of new sensors and systems for active and passive microwave imaging; 2) set-up, analysis and validation of state-of-the-art and novel data processing approaches for GPR in critical-infrastructure and subsurface imaging; 3) integration of state-of-the-art and novel imaging hardware and characterization approaches to tackle realistic situations in security, safety and subsurface prospecting applications; and 4) development and feasibility study of bio-radar technology (system and data processing) for vital-signs detection and the detection/characterization of human beings in complex scenarios. These goals will be pursued through a programme of research activities and researcher secondments covering a period of three years. ACKNOWLEDGMENTS: This research has been performed in the framework of the "Active and Passive Microwaves for Security and Subsurface imaging (AMISS)" EU 7th Framework Marie Curie Actions IRSES project (PIRSES-GA-2010-269157).
Raab, Jennifer; Haupt, Florian; Scholz, Marlon; Matzke, Claudia; Warncke, Katharina; Lange, Karin; Assfalg, Robin; Weininger, Katharina; Wittich, Susanne; Löbner, Stephanie; Beyerlein, Andreas; Nennstiel-Ratzel, Uta; Lang, Martin; Laub, Otto; Dunstheimer, Desiree; Bonifacio, Ezio; Achenbach, Peter; Winkler, Christiane; Ziegler, Anette-G
2016-05-18
Type 1 diabetes can be diagnosed at an early presymptomatic stage by the detection of islet autoantibodies. The Fr1da study aims to assess whether early staging of type 1 diabetes (1) is feasible at a population-based level, (2) prevents severe metabolic decompensation observed at the clinical manifestation of type 1 diabetes and (3) reduces psychological distress through preventive teaching and care. Children aged 2-5 years in Bavaria, Germany, will be tested for the presence of multiple islet autoantibodies. Between February 2015 and December 2016, 100 000 children will be screened by primary care paediatricians. Islet autoantibodies are measured in capillary blood samples using a multiplex three-screen ELISA. Samples with ELISA results >97.5th centile are retested using reference radiobinding assays. A venous blood sample is also obtained to confirm the autoantibody status of children with at least two autoantibodies. Children with confirmed multiple islet autoantibodies are diagnosed with pre-type 1 diabetes. These children and their parents are invited to participate in an education and counselling programme at a local diabetes centre. Depression and anxiety, and burden of early diagnosis are also assessed. Of the 1027 Bavarian paediatricians, 39.3% are participating in the study. Overall, 26 760 children have been screened between February 2015 and November 2015. Capillary blood collection was sufficient in volume for islet autoantibody detection in 99.46% of the children. The remaining 0.54% had insufficient blood volume collected. Of the 26 760 capillary samples tested, 0.39% were positive for at least two islet autoantibodies. Staging for early type 1 diabetes within a public health setting appears to be feasible. The study may set new standards for the early diagnosis of type 1 diabetes and education. The study was approved by the ethics committee of Technische Universität München (Nr. 70/14). Published by the BMJ Publishing Group Limited. 
Final report on the torque key comparison CCM.T-K1.2, measurand torque: 0 N.m, 500 N.m, 1000 N.m
NASA Astrophysics Data System (ADS)
Röske, Dirk
2015-01-01
The purpose of the CIPM subsequent bilateral comparison CCM.T-K1.2 was to link another participant, namely the National Institute of Metrology (Thailand), in short NIMT, to the CCM.T-K1 torque key comparison. The measuring capabilities up to 1000 N.m of dead-weight torque standard machines with supported lever were investigated. The pilot laboratory was the same in both comparisons—it was the Physikalisch-Technische Bundesanstalt (PTB, Braunschweig, Germany). The same two very stable torque transducers with well-known properties were used as travelling standards. The measurements at the participating laboratory were carried out between November 2007 and February 2008. According to the technical protocol, torque steps of 500 N.m and 1000 N.m had to be measured both in clockwise and anticlockwise directions. Corrections had to be applied to the results reported by the participants taking into account the use of different amplifiers, the creep (due to different loading times of the machines) and the environmental conditions in the laboratories (temperature and relative humidity of the ambient air). The results of the pilot laboratory in this bilateral comparison are in very good agreement with the same results obtained in the CCM.T-K1 comparison. For each of the transducers, the two torque steps and both senses of direction of the torque vector, the key comparison reference value of the CCM.T-K1 was taken, and the results of participant NIMT were calculated with respect to these values. The agreement between the results is very good. The smallest expanded (k = 2) relative uncertainty of the machine stated by the participant is 1 × 10-4. The results of the comparison support this uncertainty statement. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. 
The final report has been peer-reviewed and approved for publication by CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
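Agreement statements in key comparisons of this kind are commonly expressed through the degree of equivalence (the difference from the key comparison reference value) and a normalized error. A minimal sketch of that arithmetic, assuming uncorrelated uncertainties and using invented numbers; the actual NIMT and reference values are in the report's Appendix B and are not reproduced here:

```python
import math

def degree_of_equivalence(x_lab, u_lab, x_ref, u_ref, k=2.0):
    """Return (d, U_d, En): the difference from the reference value,
    its expanded (k=2) uncertainty, and the normalized error.
    u_lab and u_ref are standard uncertainties, assumed uncorrelated;
    |En| <= 1 is the usual criterion for agreement."""
    d = x_lab - x_ref
    U_d = k * math.sqrt(u_lab**2 + u_ref**2)
    En = d / U_d if U_d > 0 else float("nan")
    return d, U_d, En

# Invented example for a 1000 N.m torque step:
d, U_d, En = degree_of_equivalence(1000.03, 0.05, 1000.00, 0.04)
print(f"d = {d:.3f} N.m, U(d) = {U_d:.3f} N.m, En = {En:.2f}")
```

With these invented numbers |En| is well below 1, the situation the report describes as "very good agreement".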
Discussion of thermal extraction chamber concepts for Lunar ISRU
NASA Astrophysics Data System (ADS)
Pfeiffer, Matthias; Hager, Philipp; Parzinger, Stephan; Dirlich, Thomas; Spinnler, Markus; Sattelmayer, Thomas; Walter, Ulrich
The Exploration group of the Institute of Astronautics (LRT) of the Technische Universität München focuses on long-term scenarios and a sustainable human presence in space. One of the enabling technologies in this long-term perspective is in-situ resource utilization (ISRU). For future crewed missions to the Moon and Mars, the use of ISRU appears both useful and intended. The activities presented in this paper focus on Lunar ISRU, which basically incorporates both the extraction of Lunar oxygen from natural rock and the extraction of solar-wind-implanted particles (SWIPs) from regolith dust. Presently the group at the LRT is examining possibilities for the extraction of SWIPs, which may provide several gaseous components (such as H2 and N2) valuable to a human presence on the Moon. As a major stepping stone, a Lunar demonstrator/verification experiment payload is being designed for the near future. This experiment, LUISE (LUnar ISru Experiment), will comprise a thermal process chamber for heating regolith dust (grain size below 500 µm), a solar thermal power supply, a sample distribution unit and a trace-gas analysis unit. The first project stage includes the detailed design and analysis of the extraction chamber concepts and of the thermal process involved in the removal of SWIPs from Lunar regolith dust. The technique of extracting solar wind volatiles from regolith has been outlined by several sources; heating the material to a threshold value appears to be the most reasonable approach. The present paper gives an overview of concepts for thermal extraction chambers to be used in the LUISE project and evaluates in detail the pros and cons of each concept. The special boundary conditions set by solar thermal heating of the chambers, as well as the material properties of regolith in a Lunar environment, will be discussed; both greatly influence the design of the extraction chamber.
The performance of the chamber concepts is discussed with respect to the desired target temperature using the ESARAD/ESATAN software. Additionally, a figure of merit for the homogeneity of sample heating, as a measure of the effectiveness of each concept, is presented and discussed.
Magnesium Diboride Current Leads
NASA Technical Reports Server (NTRS)
Panek, John
2010-01-01
A recently discovered superconductor, magnesium diboride (MgB2), can be used to fabricate conducting leads for cryogenic applications. Discovered to be superconducting in 2001, MgB2 has the advantage of remaining superconducting at higher temperatures than the previously used material, NbTi. The purpose of these leads is to provide 2 A of electricity to motors located in a 1.3 K environment; the supplying environment is a relatively warm 17 K. The leads are required to survive temperature fluctuations in the 5 K and 11 K heat sinks and not to conduct excessive heat into the 1.3 K environment. Test data showed that each lead in the assembly could conduct 5 A at 4 K, which, when scaled to 17 K, still provided more than the required 2 A. The lead assembly consists of 12 steel-clad MgB2 wires, a tensioned Kevlar support, a thermal heat-sink interface at 4 K, and base plates. The wires are soldered to heavy copper leads at the 17 K end, and to thin copper-clad NbTi leads at the 1.3 K end. The leads were designed, fabricated, and tested at the Forschungszentrum Karlsruhe - Institut für Technische Physik before inclusion in Goddard's XRS (X-Ray Spectrometer) instrument onboard the Astro-E2 spacecraft. A key factor is that MgB2 remains superconducting up to 30 K, which means that it does not introduce joule heating as a resistive wire would. Because the required temperature range is 1.3-17 K, this provides a large margin of safety; previous designs lost superconductivity at around 8 K. The disadvantage of MgB2 is that it is a brittle ceramic, and making thin wires from it is challenging; the solution was to encase the leads in thin steel tubes for strength. Previous designs were so brittle as to risk instrument survival. MgB2 leads can be used in any cryogenic application where small currents need to be conducted below 30 K. Because previous designs would superconduct only up to 8 K, this new design is ideal for the 8-30 K range.
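The quoted scaling from 5 A at 4 K to more than the required 2 A at 17 K is consistent with a simple linear critical-current model in which Ic falls to zero at the critical temperature Tc = 30 K. This is a rough illustrative model only, not the actual analysis used for the flight leads:

```python
def critical_current(T, T_ref, I_ref, T_c=30.0):
    """Linear critical-current model: Ic(T) = I_ref * (T_c - T) / (T_c - T_ref),
    falling to zero at T_c. Rough illustration only; real Ic(T) curves
    are material- and geometry-dependent."""
    if T >= T_c:
        return 0.0
    return I_ref * (T_c - T) / (T_c - T_ref)

# Measured 5 A at 4 K, scaled to the 17 K supply environment:
I_17K = critical_current(17.0, 4.0, 5.0)
print(f"Ic(17 K) = {I_17K:.2f} A")  # 2.50 A, comfortably above the required 2 A
```

The linear model gives 5 A × (30 − 17)/(30 − 4) = 2.5 A, matching the abstract's statement that the scaled capacity still exceeds 2 A.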
FORS at the Very Large Telescope of the European Southern Observatory
NASA Astrophysics Data System (ADS)
1998-09-01
First scientific observing instrument delivers impressive images. In keeping with its tight schedule, the ESO Very Large Telescope project (VLT project) is being realized on Cerro Paranal in northern Chile: the first of the four 8.2-m unit telescopes will reach full operational readiness early next year. On 15 September 1998 another important milestone was passed successfully, on time and within budget. Only a few days after its installation at the first 8.2-m unit telescope of the VLT (UT1), FORS1 (FOcal Reducer and Spectrograph) began observations as the first of a suite of powerful and complex scientific instruments, recording a series of excellent astronomical images from the outset. This significant event opens a wealth of new opportunities for European astronomy. FORS, a pinnacle of complexity: FORS1 and its future twin instrument (FORS2) are the result of one of the most thorough and advanced technological studies ever carried out for an instrument of ground-based astronomy. This unique instrument is now installed at the Cassegrain focus and, despite its dimensions of 3 x 1.5 m (weight 2.3 t), almost disappears beneath the huge 53 m² Zerodur mirror. To make optimal use of the large mirror area and the excellent image quality of UT1, FORS was designed specifically to study the faintest and most distant objects in the universe. Soon this complex VLT instrument will allow European astronomers to push the current observational horizons decisively outward. The two FORS instruments are multi-purpose instruments that can be used in several different observing modes.
For example, images can be taken at two different image scales (magnifications), and spectra of single or multiple objects can be recorded at different spectral resolutions. Fast switching between the observing modes allows, for instance, the imaging of distant galaxies followed immediately by their spectroscopy, from which their stellar composition and distance, among other things, can be determined. As one of the most powerful astronomical instruments of its kind, FORS1 will be a true workhorse for the study of the distant universe. Building FORS: The FORS project is being carried out under ESO contract by a consortium of three German astronomical institutes, the Landessternwarte Heidelberg and the university observatories of Göttingen and München. By the end of the project the participating institutes will have invested about 180 man-years of work. The Landessternwarte Heidelberg led the project; it also designed the entire optical system, procured the components of the imaging optics and the auxiliary optics for spectroscopy and polarimetry, and wrote the dedicated computer software used to process and analyse the data delivered by FORS. In addition, a telescope simulator was built in the observatory's workshop, with which all essential functions of FORS could be tested in Europe before the instrument was transported to Paranal (Chile). The Universitäts-Sternwarte Göttingen designed, manufactured and assembled the entire mechanics of FORS. Most of the precision parts, in particular the multi-slit unit, were made in the observatory's precision-mechanics workshop.
The procurement of the large instrument housings and flanges, the computer analyses of the mechanical and thermal stability of the sensitive spectrograph, and the production of the special tools for handling, maintenance and alignment were likewise the responsibility of this observatory, as were the tests of the numerous opto- and electromechanical functions. The Universitäts-Sternwarte München was responsible for project management, for integration and testing of the complete instrument in the laboratory, for the planning and installation of all electronics and electromechanics, and for the development and testing of the entire software by which FORS is fully computer-controlled in all its parts (e.g. filter and grism wheels, shutters, the slit unit for multi-object spectroscopy, masks, all optical components, electric motors, encoders, etc.). In addition, computer software was written for preparing the complex astronomical observations with FORS and for monitoring the behaviour of the instrument through continuous checks of the scientific data collected. In return for building FORS, the astronomers of the three participating institutes of the FORS consortium receive a certain number of nights of "guaranteed observing time" at the VLT. During this observing time various research projects will be carried out, with topics ranging from small bodies in the outer solar system, through studies of stars in their final stages and the gas clouds they eject, to the exploration of distant galaxies and quasars that shed light on the early epochs of our universe. First tests of FORS1 at the VLT-UT1: a great success. After careful preparation, the FORS consortium has now begun the commissioning of the instrument.
This includes a thorough verification of the specified performance at the telescope, checks of correct operation under software control from the control room on Paranal, and, at the end of this process, a demonstration that the instrument fulfils its intended scientific purpose. In the course of these tests the commissioning team on Paranal obtained a series of images of various astronomical objects, some of which are reproduced here. They were all taken with FORS at standard resolution (field size 6.8 x 6.8 arcminutes, pixel size 0.20 arcseconds) and illustrate some of the impressive capabilities of the new instrument. Spiral galaxy NGC 1288 (ESO PR Photo 37a/98): a colour image of the spiral galaxy NGC 1288, taken during FORS's first observing night ("first light" night). The first photo shows a three-colour image of the beautiful spiral galaxy NGC 1288 in the southern constellation Fornax. PR Photo 37a/98 covers the entire field imaged with the 2048 x 2048 pixel CCD camera. It was composed from three CCD exposures taken in different colours under good seeing during the first-light night (15 September 1998). This galaxy, about 200,000 light-years across, lies at a distance of roughly 300 million light-years; its recession velocity is 4500 km/s. Technical information: Photo 37a/98 is a composite of three exposures in the filters B (420 nm, 6 minutes exposure), V (530 nm, 3 minutes) and I (800 nm, 3 minutes) taken during a period of 0.7 arcsecond seeing. The field shown is 6.8 x 6.8 arcminutes. North is to the left, east at the bottom.
Distant galaxy cluster (ESO PR Photos 37b/98 and 37c/98): an unusual galaxy cluster in the vicinity of the quasar PB5763; Photo 37c/98 is an enlargement of 37b/98 showing more detail of the unusual cluster. These photos were reproduced from a 5-minute near-infrared exposure, also taken during FORS1's first-light night (15 September 1998). PR Photo 37b/98 shows a region of sky near the quasar PB5763 that also contains an unusual, very distant cluster of galaxies. It consists of a large number of faint galaxies that have not yet been studied in detail. This cluster is a good example of the kind of objects on which much FORS observing time will be spent once regular operations have begun. An enlargement of the same field is reproduced in PR Photo 37c/98; it shows the individual members of this galaxy cluster in detail. Note in particular the interesting spindle-shaped galaxy that appears to have an equatorial ring. Besides a beautiful spiral galaxy, many other faint galaxies can be seen; they are either dwarf galaxies belonging to the cluster or lie much further away in the background. Technical information: PR Photos 37b/98 (reproduced as a negative) and 37c/98 (positive) come from an exposure taken through an I filter (near-infrared, 800 nm) in 0.8 arcsecond seeing. The exposure time was 5 minutes, and a flat-field correction was applied. The fields shown are 6.8 x 6.8 and 2.5 x 2.3 arcminutes, respectively. North is to the upper left, east to the lower left.
Spiral galaxy NGC 1232 (ESO PR Photos 37d/98 and 37e/98): a colour image of the spiral galaxy NGC 1232, taken on 21 September 1998; Photo 37e/98 is an enlargement of the centre of 37d/98. This spectacular image of the large spiral galaxy NGC 1232 (Photo 37d/98) was obtained on 21 September 1998 under good observing conditions. It was composed from three individual exposures in ultraviolet, blue and red light. The colours of the different regions are clearly visible: the central region contains older, reddish stars (Photo 37e/98), while the spiral arms are populated by young, bluish stars and red star-forming regions. Note the disturbed companion galaxy at the left edge (Photo 37d/98), which resembles the Greek letter theta. NGC 1232 lies 20 degrees south of the celestial equator in the constellation Eridanus. Although the galaxy is about 100 million light-years away, the excellent image quality reveals an incredible wealth of detail. At this distance the side of the field corresponds to about 200,000 light-years, roughly twice the size of our Milky Way. Technical information: Photos 37d/98 and 37e/98 are composites of three exposures in the filters U (360 nm, 10 minutes exposure), B (420 nm, 6 minutes) and R (600 nm, 2 minutes 30 seconds) taken during a period of 0.7 arcsecond seeing. The fields shown are 6.8 x 6.8 and 1.6 x 1.8 arcminutes, respectively. North is up, east to the left. Note: [1] This press release is issued jointly (in English and German) by the European Southern Observatory, the Landessternwarte Heidelberg and the university observatories of Göttingen and München.
An English version of this press release is also available. Access to ESO press information: ESO press releases are made available on the World Wide Web (URL: http://www.eso.org/outreach/press-rel/). ESO press photos may be published provided the European Southern Observatory is credited as the source.
Zhu, Hao; Ye, Lin; Richard, Ann; Golbraikh, Alexander; Wright, Fred A.; Rusyn, Ivan; Tropsha, Alexander
2009-01-01
Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by means of high-throughput screening. Objective: A wealth of available biological data requires new computational approaches to link chemical structure, in vitro data, and potential adverse health effects. Methods and results: A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by the Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: one group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models to predict the group affiliation based on chemical descriptors only. Third, we developed k-nearest neighbor continuous QSAR models for each subclass to predict LD50 values from chemical descriptors. All models were extensively validated using special protocols.
Conclusions: The novelty of this modeling approach is that it uses the relationships between in vivo and in vitro data only to inform the initial construction of the hierarchical two-step QSAR models. Models resulting from this approach employ chemical descriptors only for external prediction of acute rodent toxicity. PMID:19672406
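The two-step scheme described above (classify a compound into the "linear IC50-LD50" group or the remainder, then predict LD50 by k-nearest-neighbor regression within the predicted group) can be sketched in miniature. Everything below (the 2-D descriptor vectors, group labels, LD50 values, and k) is invented toy data; the authors' actual models used rich chemical descriptors and extensive validation protocols:

```python
import math

def knn(train_X, train_y, x, k):
    """Return the labels/values of the k nearest neighbors of x
    in Euclidean descriptor space."""
    dists = sorted((math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
    return [y for _, y in dists[:k]]

def predict_ld50(X, groups, ld50, x, k=3):
    """Two-step prediction: (1) classify the compound into a group by
    majority vote among its k nearest neighbors, (2) predict LD50 by
    kNN regression using only training compounds of that group."""
    votes = knn(X, groups, x, k)
    group = max(set(votes), key=votes.count)            # step 1: classification
    in_group = [(xi, yi) for xi, gi, yi in zip(X, groups, ld50) if gi == group]
    gX, gy = zip(*in_group)
    neighbors = knn(list(gX), list(gy), x, min(k, len(gy)))
    return group, sum(neighbors) / len(neighbors)       # step 2: regression

# Toy descriptor vectors (2-D) with group labels and LD50 values (mg/kg):
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9), (0.15, 0.15), (0.85, 0.85)]
groups = ["linear", "linear", "other", "other", "linear", "other"]
ld50 = [120.0, 150.0, 900.0, 1100.0, 135.0, 1000.0]

group, pred = predict_ld50(X, groups, ld50, (0.12, 0.18))
print(group, round(pred, 1))  # linear 135.0
```

Restricting the regression to the predicted subclass is what lets the hierarchical model exploit the in vitro/in vivo relationship without using IC50 values at prediction time.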
Eskandari, Mehdi; Jani, Soghra; Kazemi, Mahsa; Zeighami, Habib; Yazdinezhad, Alireza; Mazloomi, Sahar; Shokri, Saeed
2016-01-01
Objective: Epididymo-orchitis (EO) potentially results in reduced fertility in up to 60% of affected patients. The anti-inflammatory effects of Korean red ginseng (KRG) and its ability to act as an immunoenhancer, together with the beneficial effects of this ancient herbal medicine on the reproductive systems of animals and humans, led us to evaluate its protective effects against acute EO. Materials and Methods: This animal experimental study was conducted in the Department of Anatomical Sciences, Faculty of Medicine, Zanjan University of Medical Sciences (ZUMS), Zanjan, Iran during 2013-2015. We divided 50 Wistar rats into the following five groups (n=10 per group): i. Control, intact animals; ii. Vehicle, phosphate-buffered saline (PBS) injection into the vas deferens; iii. KRG, an intraperitoneal (IP) injection of KRG; iv. EO, an injection of uropathogenic Escherichia coli (UPEC) strain M39 into the vas deferens; and v. EO/KRG, injections of both UPEC strain M39 and KRG. The treatment lasted seven days. We then evaluated sperm parameters, number of germ cell layers, Johnsen's criteria, germ cell apoptosis, body weight and relative sex organ weights. Results: Acute EO increased the relative weight of the prostate and seminal vesicles (P≤0.05). It also reduced sperm quality, including total motility, sperm concentration (P≤0.01), and the percentage of normal sperm (P≤0.001). Moreover, acute EO decreased Miller's (P≤0.05) and Johnsen's scores and increased apoptotic indexes of spermatogenic cells (P≤0.001). KRG treatment decreased prostate weight gain (P≤0.05) and improved the percentage of sperm with normal morphology, total motility (P≤0.01), and progressive motility (P≤0.05). The apoptotic indexes of spermatogenic cells were reduced (P≤0.001), whereas both Johnsen's (P≤0.01) and Miller's criteria increased in the KRG-treated EO testes (P≤0.05). Conclusion: KRG ameliorated the devastating effects of EO on sperm retrieved from either the epididymis or the testicle in rats.
PMID:27602327
Zhu, Hao; Ye, Lin; Richard, Ann; Golbraikh, Alexander; Wright, Fred A; Rusyn, Ivan; Tropsha, Alexander
2009-08-01
Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public-private consortia have been formed with the goal of improving chemical safety assessment by the means of high-throughput screening. A wealth of available biological data requires new computational approaches to link chemical structure, in vitro data, and potential adverse health effects. A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC(50)) and in vivo rodent median lethal dose (LD(50)) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure-activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD(50) values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC(50) and LD(50). However, a linear IC(50) versus LD(50) correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC(50) and LD(50) values: One group comprises compounds with linear IC(50) versus LD(50) relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models to predict the group affiliation based on chemical descriptors only. Third, we developed k-nearest neighbor continuous QSAR models for each subclass to predict LD(50) values from chemical descriptors. All models were extensively validated using special protocols. 
The novelty of this modeling approach is that it uses the relationships between in vivo and in vitro data only to inform the initial construction of the hierarchical two-step QSAR models. Models resulting from this approach employ chemical descriptors only for external prediction of acute rodent toxicity.
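The hierarchical scheme described in this abstract (a classification model for group affiliation followed by a per-group continuous kNN model, with external prediction using chemical descriptors only) can be sketched in a few lines. The synthetic descriptors, group rule, LD(50) values, and k = 5 below are illustrative stand-ins, not the ZEBET data or the study's actual model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 5 chemical descriptors per compound and a
# group label (1 = linear IC50/LD50 relationship, 0 = remaining compounds).
X = rng.normal(size=(200, 5))
group = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
# Each group gets its own (assumed) descriptor-to-LD50 mapping plus noise.
ld50 = np.where(group == 1, 2.0 + X[:, 1], 4.0 - X[:, 2]) + 0.05 * rng.normal(size=200)

def knn_predict(X_train, y_train, x, k=5):
    """Plain k-nearest-neighbour prediction in descriptor space."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    return y_train[idx].mean()

def predict_ld50(x, k=5):
    # Step 1: predict group affiliation from chemical descriptors only
    # (majority vote of the k nearest neighbours).
    g = int(round(knn_predict(X, group.astype(float), x, k)))
    # Step 2: continuous kNN model restricted to the predicted subclass.
    mask = group == g
    return knn_predict(X[mask], ld50[mask], x, k)

pred = np.array([predict_ld50(x) for x in X])
print(round(np.corrcoef(pred, ld50)[0, 1], 3))  # in-sample correlation
```

Note that, as in the abstract, the in vivo/in vitro relationship is used only to define the groups during training; prediction itself consumes descriptors alone.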
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebenau, Melanie, E-mail: melanie.ebenau@tu-dortmun
Purpose: Plastic scintillation detectors are promising candidates for the dosimetry of low- to medium-energy photons but quantitative knowledge of their energy response is a prerequisite for their correct use. The purpose of this study was to characterize the energy dependent response of small scintillation detectors (active volume <1 mm(3)) made from the commonly used plastic scintillator BC400. Methods: Different detectors made from BC400 were calibrated at a number of radiation qualities ranging from 10 to 280 kV and at a (60)Co beam. All calibrations were performed at the Physikalisch-Technische Bundesanstalt, the National Metrology Institute of Germany. The energy response in terms of air kerma, dose to water, and dose to the scintillator was determined. Conversion factors from air kerma to dose to water and to dose to the scintillator were derived from Monte Carlo simulations. In order to quantitatively describe the energy dependence, a semiempirical model known as unimolecular quenching or Birks’ formula was fitted to the data and from this the response to secondary electrons generated within the scintillator material BC400 was derived. Results: The detector energy response in terms of air kerma differs for different scintillator sizes and different detector casings. It is therefore necessary to take attenuation within the scintillator and in the casing into account when deriving the response in terms of dose to water from a calibration in terms of air kerma. The measured energy response in terms of dose to water for BC400 cannot be reproduced by the ratio of mean mass energy-absorption coefficients for polyvinyl toluene to water but shows evidence of quenching. The quenching parameter kB in Birks’ formula was determined to be kB = (12.3 ± 0.9) mg MeV(-1) cm(-2). Conclusions: The energy response was quantified relative to the response to (60)Co which is the common radiation quality for the calibration of therapy dosemeters.
The observed energy dependence could be well explained with the assumption of ionization quenching as described by Birks’ formula. Plastic scintillation detectors should be calibrated at the same radiation quality that they will be used at, and changes of the spectrum within the application need to be considered. The authors' results can be used to evaluate the range of validity of a given calibration.
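For reference, the unimolecular quenching (Birks) relation fitted above is commonly written as follows; this is the standard textbook form, with symbols chosen here for illustration rather than reproduced from the paper:

```latex
% Birks' formula: specific light output vs. electron stopping power
\frac{\mathrm{d}L}{\mathrm{d}x}
  = \frac{S \,\dfrac{\mathrm{d}E}{\mathrm{d}x}}
         {1 + kB \,\dfrac{\mathrm{d}E}{\mathrm{d}x}}
```

where L is the scintillation light output, S the scintillation efficiency, dE/dx the stopping power of the secondary electrons, and kB the quenching parameter, determined in this study as (12.3 ± 0.9) mg MeV(-1) cm(-2) for BC400.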
ERIS: preliminary design phase overview
NASA Astrophysics Data System (ADS)
Kuntschner, Harald; Jochum, Lieselotte; Amico, Paola; Dekker, Johannes K.; Kerber, Florian; Marchetti, Enrico; Accardo, Matteo; Brast, Roland; Brinkmann, Martin; Conzelmann, Ralf D.; Delabre, Bernard A.; Duchateau, Michel; Fedrigo, Enrico; Finger, Gert; Frank, Christoph; Rodriguez, Fernando G.; Klein, Barbara; Knudstrup, Jens; Le Louarn, Miska; Lundin, Lars; Modigliani, Andrea; Müller, Michael; Neeser, Mark; Tordo, Sebastien; Valenti, Elena; Eisenhauer, Frank; Sturm, Eckhard; Feuchtgruber, Helmut; George, Elisabeth M.; Hartl, Michael; Hofmann, Reiner; Huber, Heinrich; Plattner, Markus P.; Schubert, Josef; Tarantik, Karl; Wiezorrek, Erich; Meyer, Michael R.; Quanz, Sascha P.; Glauser, Adrian M.; Weisz, Harald; Esposito, Simone; Xompero, Marco; Agapito, Guido; Antichi, Jacopo; Biliotti, Valdemaro; Bonaglia, Marco; Briguglio, Runa; Carbonaro, Luca; Cresci, Giovanni; Fini, Luca; Pinna, Enrico; Puglisi, Alfio T.; Quirós-Pacheco, Fernando; Riccardi, Armando; Di Rico, Gianluca; Arcidiacono, Carmelo; Dolci, Mauro
2014-07-01
The Enhanced Resolution Imager and Spectrograph (ERIS) is the next-generation adaptive optics near-IR imager and spectrograph for the Cassegrain focus of the Very Large Telescope (VLT) Unit Telescope 4, which will soon make full use of the Adaptive Optics Facility (AOF). It is a high-Strehl AO-assisted instrument that will use the Deformable Secondary Mirror (DSM) and the new Laser Guide Star Facility (4LGSF). The project has been approved for construction and has entered its preliminary design phase. ERIS will be constructed in a collaboration including the Max-Planck-Institut für extraterrestrische Physik, the Eidgenössische Technische Hochschule Zürich and the Osservatorio Astrofisico di Arcetri and will offer 1-5 μm imaging and 1-2.5 μm integral field spectroscopic capabilities with a high Strehl performance. Wavefront sensing can be carried out with an optical high-order NGS Pyramid wavefront sensor, or with a single laser in either an optical low-order NGS mode, or with a near-IR low-order mode sensor. Due to its highly sensitive visible wavefront sensor, and separate near-IR low-order mode, ERIS provides a large sky coverage with its 1' patrol field radius that can even include AO stars embedded in dust-enshrouded environments. As such it will replace, with a much improved single conjugated AO correction, the most scientifically important imaging modes offered by NACO (diffraction-limited imaging in the J to M bands, Sparse Aperture Masking and Apodizing Phase Plate (APP) coronagraphy) and the integral field spectroscopy modes of SINFONI, whose instrumental module, SPIFFI, will be upgraded and re-used in ERIS. As part of the SPIFFI upgrade a new higher resolution grating and a science detector replacement are envisaged, as well as PLC-driven motors. To accommodate ERIS at the Cassegrain focus, an extension of the telescope back focal length is required, with modifications of the guider arm assembly.
In this paper we report on the status of the baseline design. We also report on the main science goals of the instrument, ranging from exoplanet detection and characterization to high-redshift galaxy observations, and briefly describe the SINFONI-SPIFFI upgrade strategy, which is part of the ERIS development plan, and the overall project timeline.
PREFACE: 13th International Workshop on Slow Positron Beam Techniques and Applications (SLOPOS13)
NASA Astrophysics Data System (ADS)
2014-04-01
These proceedings originate from the 13th International Workshop on Slow Positron Beam Techniques and Applications (SLOPOS13), which was held at the campus of the Technische Universität München in Garching between 15 and 20 September 2013. This event is part of a series of triennial SLOPOS conferences. In total 123 delegates from 21 countries participated in SLOPOS13. The excellent scientific program comprised 50 talks and 58 posters presented during two poster sessions. It was very impressive to learn about novel technical developments on positron beam facilities and the wide range of their applications all over the world. The workshop reflected the large variety of positron beam experiments covering fundamental studies, e.g., for efficient production of anti-hydrogen, as well as applied research on defects in bulk materials, thin films, surfaces, and interfaces. The topics comprised: • Positron transport and beam technology • Pulsed beams and positron traps • Defect profiling in bulk and layered structures • Nanostructures, porous materials, thin films • Surfaces and interfaces • Positronium formation and emission • Positron interactions with atoms and molecules • Many positrons and anti-hydrogen • Novel experimental techniques The international advisory committee of SLOPOS awarded student prizes for the best presented scientific contributions to a team of students from Finland, France, and the NEPOMUC team at TUM. The conference was overshadowed by the sudden death of Professor Klaus Schreckenbach immediately before the workshop. In commemoration of him as a spiritus rector of the neutron-induced positron source, a minute's silence was held. We are most grateful for the hard work of the Local Organising Committee, the help of the International Advisory Committee, and all the students for their friendly and efficient support during the meeting.
The workshop could not have occurred without the generous support of the Heinz Maier-Leibnitz Zentrum (MLZ), Deutsche Forschungsgemeinschaft (DFG), and IOP Publishing. Finally we would like to thank all attendees for their outstanding scientific contributions to SLOPOS13, and for the fruitful scientific discussions, also in an informal atmosphere during the social events. We are looking forward to SLOPOS14 in Japan in 2016! Christoph Hugenschmidt and Christian Piochacz (Guest Editors) Garching, March 2014 Further conference and committee information, as well as the conference picture, can be viewed in the pdf.
Ebenau, Melanie; Radeck, Désirée; Bambynek, Markus; Sommer, Holger; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion
2016-08-01
Plastic scintillation detectors are promising candidates for the dosimetry of low- to medium-energy photons but quantitative knowledge of their energy response is a prerequisite for their correct use. The purpose of this study was to characterize the energy dependent response of small scintillation detectors (active volume <1 mm(3)) made from the commonly used plastic scintillator BC400. Different detectors made from BC400 were calibrated at a number of radiation qualities ranging from 10 to 280 kV and at a (60)Co beam. All calibrations were performed at the Physikalisch-Technische Bundesanstalt, the National Metrology Institute of Germany. The energy response in terms of air kerma, dose to water, and dose to the scintillator was determined. Conversion factors from air kerma to dose to water and to dose to the scintillator were derived from Monte Carlo simulations. In order to quantitatively describe the energy dependence, a semiempirical model known as unimolecular quenching or Birks' formula was fitted to the data and from this the response to secondary electrons generated within the scintillator material BC400 was derived. The detector energy response in terms of air kerma differs for different scintillator sizes and different detector casings. It is therefore necessary to take attenuation within the scintillator and in the casing into account when deriving the response in terms of dose to water from a calibration in terms of air kerma. The measured energy response in terms of dose to water for BC400 cannot be reproduced by the ratio of mean mass energy-absorption coefficients for polyvinyl toluene to water but shows evidence of quenching. The quenching parameter kB in Birks' formula was determined to be kB = (12.3 ± 0.9) mg MeV(-1) cm(-2). The energy response was quantified relative to the response to (60)Co which is the common radiation quality for the calibration of therapy dosemeters. 
The observed energy dependence could be well explained with the assumption of ionization quenching as described by Birks' formula. Plastic scintillation detectors should be calibrated at the same radiation quality that they will be used at, and changes of the spectrum within the application need to be considered. The authors' results can be used to evaluate the range of validity of a given calibration.
CAWR: Two institutions join forces in a cluster by addressing the grand challenges of water research
NASA Astrophysics Data System (ADS)
Jaeckel, Greta; Braeckevelt, Mareike
2017-04-01
The Center for Advanced Water Research (CAWR) brings together the water competences of two German research institutions: the Helmholtz Centre for Environmental Research - UFZ and the Technische Universität Dresden (TUD). Highly qualified scientists are jointly tackling some of the key challenges in the water sector in an outstanding breadth of research topics and at the same time with profound disciplinary expertise. Our mission is "Save water for humans and environment", because water of good quality and adequate quantity is a fundamental basis of life for humankind and the environment. In many global challenges, such as food or energy security, human health and ecosystems, flood defence and droughts, or the provision of drinking water and sanitation systems, water is becoming a very critical element for a sustainable society in Germany, in Europe and worldwide. The CAWR focusses its work on the fields of research, education & training as well as transfer. The CAWR was established in 2013. Over 3 years the activities within the three pillars and the six thematic priority research fields ((1) understanding processes: water cycle and water quality, (2) water quantity and scarcity in the regional context, (3) urban water systems, (4) methods of data collection and information processing, (5) societal and climate change, (6) water governance) were presented to: • the scientific community (newsletters, publication highlights, workshops with different new formats, conferences) • national and international stakeholders from policy, industry and society (workshops, opinion papers) • public media (TV, radio stations, newspapers, brochures, video clips via YouTube…) This PICO presentation by Greta Jäckel (scientific management of CAWR) shows which tools for the presentation of research results are useful and what influence they have on different target groups.
Examples of both effective and less successful instruments for presenting important water-related research results in the media are part of this presentation. Publications: Krueger, E.; Jäckel, G.; Krebs, P.; Berendonk, T.; Borchardt, D.; Kolditz, O.; Maas, H.-G.; Feger, K.-H.; Weitere, M.; Bernhofer, C.; Gawel, E.; Reese, M.; Köck, W.; Klauer, B.; Merz, R.; Dietrich, P.; Müller, R.; Fleckenstein, J.; Bernard, L.; Liedl, R. (2013): „Integrated Water Ressources Management in the context of Global Change: Center for Advanced Water Research"
PREFACE: The 395th Wilhelm and Else Heraeus Seminar: `Time-dependent phenomena in Quantum Mechanics'
NASA Astrophysics Data System (ADS)
Kleber, Manfred; Kramer, Tobias
2008-03-01
The 395th Wilhelm and Else Heraeus Seminar: `Time-dependent phenomena in Quantum Mechanics' took place at the Heinrich Fabri Institute in Blaubeuren, Germany, 12-16 September 2007. The conference covered a wide range of topics connected with time-dependent phenomena in quantum mechanical systems. The 20 invited talks and 15 short talks with posters at the workshop covered topics ranging from the historical debate between Schrödinger, Dirac and Pauli about the role of time in Quantum Mechanics (a debate sometimes carried out in footnotes) to the almost direct observation of electron dynamics on the attosecond time-scale. Semiclassical methods, time-delay, monodromy, variational principles and quasi-resonances are just some of the themes which are discussed in more detail in the papers. Time-dependent methods also shed new light on energy-dependent systems, where the detour of studying the time-evolution of a quantum state allows one to solve previously intractable problems. Additional information is available at the conference webpage http://www.quantumdynamics.de The organizer would like to thank all speakers, contributors, session chairs and referees for their efforts in making the conference a success. We also gratefully acknowledge the generous financial support from the Wilhelm and Else Heraeus Foundation for the conference and the production of this special volume of Journal of Physics: Conference Series. Manfred Kleber Physik Department T30, Technische Universität München, 85747 Garching, Germany mkleber@ph.tum.de Tobias Kramer Institut I: Theoretische Physik, Universität Regensburg, 93040 Regensburg, Germany tobias.kramer@physik.uni-regensburg.de Guest Editors
Front row (from left): W Schleich, E J Heller, J B Delos, H Friedrich, K Richter, M Kleber, P Kramer, M Man'ko, A del Campo, V Man'ko, M Efremov, A Ruiz, M O Scully Middle row: A Zamora, R Aganoglu, T Kramer, J Eiglsperger, H Cruz, P Raab, I Cirac, G Muga, J Larson, V Dodonov, W Becker Back row: A Eckardt, A Siddiki, K Vafayi, M Holthaus, E Räsänen, M Rodriguez, O Kullie, D Milošević, J Briggs, A Ribeiro, (not in the picture W Zwerger)
Total Column Greenhouse Gas Monitoring in Central Munich: Automation and Measurements
NASA Astrophysics Data System (ADS)
Chen, Jia; Heinle, Ludwig; Paetzold, Johannes C.; Le, Long
2016-04-01
It is challenging to use in-situ surface measurements of CO2 and CH4 to derive emission fluxes in urban regions. Surface concentrations typically have high variance due to the influence of nearby sources, and they are strongly modulated by mesoscale transport phenomena that are difficult to simulate in atmospheric models. The integrated amount of a tracer through the whole atmosphere is a direct measure of the mass loading of the atmosphere given by emissions. Column measurements are insensitive to vertical redistribution of tracer mass, e.g. due to growth of the planetary boundary layer, and are also less influenced by nearby point sources, whose emissions are concentrated in a thin layer near the surface. Column observations are more compatible with the scale of atmospheric models and hence provide stronger constraints for inverse modeling. In Munich we are aiming at establishing a regional sensor network with differential column measurements, i.e. total column measurements of CO2 and CH4 inside and outside of the city. The inner-city station is equipped with a compact solar-tracking Fourier transform spectrometer (Bruker EM27/SUN) on the campus of Technische Universität München, and our measurements started in Aug. 2015. The measurements over the seasons will be shown, as well as preliminary emission studies using these observations. To deploy the compact spectrometers for stationary monitoring of the urban emissions, an automatic protection and control system is mandatory and a challenging task. It will allow solar measurements whenever the sun is out and reliable protection of the instrument when it starts to rain. We have developed a simplified and highly reliable concept for the enclosure, aiming for a fully automated data collection station without the need for local human interaction. Furthermore, we are validating and combining the OCO-2 satellite-based measurements with our ground-based measurements.
For this purpose, we have developed a software tool that permits spatial, temporal and quality data filtering and selection from the OCO-2 database. We observed inconsistencies between nadir and glint measurements near Munich on consecutive days with similar weather conditions in August 2015. To visualize our regional sensor network, we have developed software to generate KML files, which enables us to display and browse the results of our measurement site, OCO-2 measurements as well as future satellite tracks.
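A filtering tool of the kind described (spatial, temporal and quality selection of satellite soundings) can be sketched as below; the `Sounding` record, the quality-flag convention and the coordinate box around Munich are illustrative assumptions for the sketch, not the actual OCO-2 file format or the authors' implementation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sounding:
    lat: float
    lon: float
    time: datetime
    xco2: float        # column-averaged CO2 dry-air mole fraction (ppm)
    quality_flag: int  # assumed convention: 0 = good quality

def select_soundings(soundings, lat_range, lon_range, t_range, max_flag=0):
    """Apply spatial, temporal and quality filters in one pass."""
    lat0, lat1 = lat_range
    lon0, lon1 = lon_range
    t0, t1 = t_range
    return [s for s in soundings
            if lat0 <= s.lat <= lat1
            and lon0 <= s.lon <= lon1
            and t0 <= s.time <= t1
            and s.quality_flag <= max_flag]

# Hypothetical soundings around Munich (approx. 48.1 N, 11.6 E), August 2015.
data = [
    Sounding(48.2, 11.6, datetime(2015, 8, 10, 12), 398.1, 0),
    Sounding(48.1, 11.5, datetime(2015, 8, 11, 12), 399.4, 1),  # flagged, rejected
    Sounding(52.5, 13.4, datetime(2015, 8, 10, 12), 397.0, 0),  # outside the box
]
munich = select_soundings(data, (47.6, 48.6), (11.0, 12.1),
                          (datetime(2015, 8, 1), datetime(2015, 9, 1)))
print(len(munich))  # → 1: only the good-quality Munich sounding survives
```

The same selection output can then feed a KML generator for display, as the abstract describes.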
PRESCILA: a new, lightweight neutron rem meter.
Olsher, Richard H; Seagraves, David T; Eisele, Shawna L; Bjork, Christopher W; Martinez, William A; Romero, Leonard L; Mallett, Michael W; Duran, Michael A; Hurlbut, Charles R
2004-06-01
Conventional neutron rem meters currently in use are based on 1960s technology that relies on a large neutron moderator assembly surrounding a thermal detector to achieve a rem-like response function over a limited energy range. Such rem meters present an ergonomic challenge, being heavy and bulky, and have caused injuries during radiation protection surveys. Another defect of traditional rem meters is a poor high-energy response above 10 MeV, which makes them unsuitable for applications at high-energy accelerator facilities. Proton Recoil Scintillator-Los Alamos (PRESCILA) was developed as a low-weight (2 kg) alternative capable of extended energy response, high sensitivity, and moderate gamma rejection. An array of ZnS(Ag) based scintillators is located inside and around a Lucite light guide, which couples the scintillation light to a side-view bialkali photomultiplier tube. The use of both fast and thermal scintillators allows the energy response function to be optimized for a wide range of operational spectra. The light guide and the borated polyethylene frame provide moderation for the thermal scintillator element. The scintillators represent greatly improved versions of the Hornyak and Stedman designs from the 1950s, and were developed in collaboration with Eljen Technology. The inherent pulse height advantage of proton recoils over electron tracks in the phosphor grains eliminates the need for pulse shape discrimination and makes it possible to use the PRESCILA probe with standard pulse height discrimination provided by off-the-shelf health physics counters. PRESCILA prototype probes have been extensively tested at both Los Alamos and the German Bureau of Standards, Physikalisch-Technische Bundesanstalt. Test results are presented for energy response, directional dependence, linearity, sensitivity, and gamma rejection. Initial field tests have been conducted at Los Alamos and these results are also given.
It is concluded that PRESCILA offers a viable, ergonomically superior, alternative to traditional rem meters that is effective for a wide range of neutron fields. The probe is capable of excellent sensitivity (40 counts per minute per microSv h-1 for 241AmBe) and extended energy response to beyond 20 MeV. Directional response is uniform (+/-15%) over a wide range of energies. Response linearity has been characterized to over 20 mSv h-1. Gamma rejection is effective in gamma fields up to 2 mSv h-1. The PRESCILA technology has been commercialized and is now offered under license by Ludlum Measurements, Inc.
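The quoted sensitivity (40 counts per minute per microSv h-1 for 241AmBe) implies a straightforward count-rate-to-dose-rate conversion, sketched here; the function name and the user-supplied live time are illustrative, not part of the instrument's actual readout:

```python
# Sensitivity quoted in the abstract for a 241AmBe field.
SENSITIVITY_CPM_PER_USV_H = 40.0

def dose_rate_usv_per_h(counts, live_time_min):
    """Convert a count observed over a live time into an ambient dose rate.

    counts: total counts recorded; live_time_min: counting time in minutes.
    Returns the dose rate in microSv/h under the quoted 241AmBe calibration.
    """
    cpm = counts / live_time_min
    return cpm / SENSITIVITY_CPM_PER_USV_H

# 1200 counts in 3 minutes -> 400 cpm -> 10 microSv/h
print(dose_rate_usv_per_h(1200, 3.0))  # → 10.0
```

In practice the calibration factor is field-dependent, which is why the abstract reports the energy and directional response separately.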
Strauss, G; Strauss, M; Lüders, C; Stopp, S; Shi, J; Dietz, A; Lüth, T
2008-10-01
PROBLEM DEFINITION: The goal of this work is the integration of the information from intraoperative EMG monitoring of the facial nerve into the radiological data of the petrous bone. The following hypotheses are to be examined: (I) The facial nerve (N. VII) can be located intraoperatively with high reliability by the stimulation probe, and a computer program is able to discriminate true-positive EMG signals from false-positive artifacts. (II) The course of the facial nerve can be registered in three dimensions from EMG signals on a nerve model in a laboratory test; the individual measured points of the nerve can be combined into a route model, and the route model can be integrated into the data of digital volume tomography (DVT). (I) Intraoperative EMG signals of the facial nerve were classified in 128 measurements by automatic software. The results were correlated with the actual intraoperative situation. (II) A nerve phantom was designed and a DVT data set was provided. The phantom was registered with a navigation system (Karl Storz NPU, Tuttlingen, Germany). The stimulation probe of the EMG system was tracked by the navigation system. The navigation system was extended by a processing unit (MiMed, Technische Universität München, Germany), so that the classified EMG parameters can be received, processed and combined into a model of the facial nerve route. The operability was examined at 120 (10 x 12) measuring points. The examined algorithm for classifying EMG signals of the facial nerve gave correct results in all measurement events. In all 10 attempts it succeeded in visualizing the nerve route as a three-dimensional model. The different sizes of the individual measuring points correctly reflect the corresponding values of Istim and UEMG. This work proves the feasibility of automatic classification of an intraoperative EMG signal of the facial nerve by a processing unit.
Furthermore the work shows the feasibility of tracking the position of the stimulation probe and of integrating it into a model of the route of the facial nerve (e.g. in DVT). The reliability with which the position of the nerve can be captured by the stimulation probe is also included in the resulting route model.
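The abstract does not state how the processing unit discriminates true-positive EMG responses from artifacts; as a hedged illustration only, a minimal rule-based classifier on response amplitude and latency might look like this (the thresholds and latency window are invented for the sketch, not values from the study):

```python
def classify_emg(amplitude_uV, latency_ms,
                 min_amp=100.0, lat_window=(3.0, 8.0)):
    """Toy rule: count a signal as a true facial-nerve response only if it
    exceeds an amplitude threshold AND falls inside a physiologically
    plausible latency window after the stimulus. A large deflection at
    near-zero latency is treated as a stimulation artifact. All thresholds
    here are illustrative assumptions."""
    lo, hi = lat_window
    return amplitude_uV >= min_amp and lo <= latency_ms <= hi

print(classify_emg(250.0, 5.0))  # plausible response → True
print(classify_emg(900.0, 0.4))  # immediate large spike (artifact) → False
```

A real implementation would additionally correlate the response with the stimulation current Istim, as the study's correlation of UEMG with Istim suggests.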
NASA Astrophysics Data System (ADS)
Wold, Kari
Successfully interacting with those from different cultures is essential to excel in any field, particularly when global, transnational collaborations in the workplace are increasingly common. However, many higher education students in engineering are not explicitly taught how to display the global competency skills desired by future employers. To display global competency skills means students must be able to visibly respect and recognize differences among those from different cultures. Global competency also means students must be able to show they can adjust their behaviors and integrate others' ideas when working with those with cultural backgrounds other than their own. While these skills are now deemed essential for future engineers, many institutions are struggling with determining which strategies and activities are universally effective in allowing students to practice the global competency skills now crucial for success. Immersing engineering students in interactive role-playing simulations in transnational environments is one way institutions are encouraging students to illustrate and develop global competency skills. Role-playing simulations in transnational education provide environments where students adopt roles, interact with other students, and together explore and address realistic global problems. However, no studies have addressed whether or how role-playing simulations can help develop global competency in transnational engineering courses, students' perceptions regarding whether they change their abilities to display global competency in those environments, and their perspectives on the effectiveness of using role-playing simulations for this purpose.
To address this gap, this study assesses the impact of two successive role-playing simulations involving nuclear energy policy in a transnational course involving engineering students from the University of Virginia in Charlottesville, Virginia, and from Technische Universität Dortmund in Dortmund, Germany. The differences in students' self-reports regarding whether their behaviors showing global competency skills changed were insignificant between pretests and posttests. However, data obtained from observations, surveys, and interviews showed students did increase their abilities to display global competency, and they believed role-playing simulations were useful in helping them do so. Findings from this study inform program designers and instructors on how to help students display, and improve their abilities to display, the global competency skills that will help them succeed in the world that awaits them.
NASA Astrophysics Data System (ADS)
2012-11-01
Leadership Team of the IAHR Committee for Hydraulic Machinery and Systems Eduard EGUSQUIZA, UPC Barcelona, Spain, Chair François AVELLAN, EPFL-LMH, Switzerland, Past Chair Richard K FISHER, Voith Hydro Inc., USA, Past Chair Fidel ARZOLA, Edelca, Venezuela Michel COUSTON, Alstom Hydro, France Niklas DAHLBÄCK, Vattenfall, Sweden Normand DESY, Andritz VA TECH Hydro Ltd., Canada Chisachi KATO, University of Tokyo, Japan Andrei LIPEJ, Turboinstitut, Slovenia Torbjørn NIELSEN, NTNU, Norway Romeo SUSAN-RESIGA, 'Politehnica' University Timisoara, Romania Stefan RIEDELBAUCH, Stuttgart University, Germany Albert RUPRECHT, Stuttgart University, Germany Qing-Hua SHI, Dong Fang Electrical Machinery Co., China Geraldo TIAGO, Universidade Federal de Itajubá, Brazil International Advisory Committee Shouqi YUAN (principal) Jiangsu University China QingHua SHI (principal) Dong Fang Electrical Machinery Co. China Fidel ARZOLA EDELCA Venezuela Thomas ASCHENBRENNER Voith Hydro GmbH & Co. KG Germany Anton BERGANT Litostroj Power doo Slovenia B C BHAOYAL Research & Technology Centre India Hermod BREKKE NTNU Norway Stuart COULSON Voith Hydro Inc. USA Paul COOPER Fluid Machinery Research Inc USA V A DEMIANOV Power Machines OJSC Russia Bart van ESCH Technische Universiteit Eindhoven Netherlands Arno GEHRER Andritz Hydro Graz Austria Akira GOTO Ebara Corporation Japan Adiel GUINZBURG The Boeing Company USA D-H HELLMANN KSB AG Germany Ashvin HOSANGADI Combustion Research and Flow Technology USA Byung-Sun HWANG Korea Institute of Material Science Korea Toshiaki KANEMOTO Kyushu Institute of Technology Japan Mann-Eung KIM Korean Register of Shipping Korea Jiri KOUTNIK Voith Hydro GmbH & Co.
KG Germany Jinkook LEE Eaton Corporation USA Young-Ho LEE Korea Maritime University Korea Woo-Seop LIM Hyosung Goodsprings Inc Korea Jun MATSUI Yokohama National University Japan Kazuyoshi MIYAGAWA Mitsubishi H I Ltd Japan Christophe NICOLET Power Vision Engineering Srl Switzerland Maryse PAGE Hydro Quebec IREQ, Varennes Canada Etienne PARKINSON Andritz Hydro Ltd. Switzerland B V S S S PRASAD Indian Institute of Technology Madras India Stefan RIEDELBAUCH Stuttgart University Germany Michel SABOURIN Alstom Hydro Canada Inc Canada Bruno SCHIAVELLO Flowserve Corporation USA Katsumasa SHIMMEI Hitachi Ltd Japan Christoph SINGRÜN VDMA Germany Aleš SKOTAK CKD Blansko Engineering, a s Czech Republic Toshiaki SUZUKI Toshiba Corporation Japan Andy C C TAN Queensland University of Technology Australia Geraldo TIAGO FILHO Universidade Federal de Itajuba Brazil Thi C VU Andritz Hydro Ltd Canada Satoshi WATANABE Kyushu University Japan S H WINOTO National University of Singapore Singapore Woo-Seong WOO STX Institute of Technology Korea International Technical Committee François AVELLAN (principal) EPFL-LMH Switzerland Xingqi LUO (principal) Xi'an University of Technology China Martin BÖHLE Kaiserslautern University Germany Gerard BOIS ENSAM France Young-Seok CHOI KITECH Korea Luca d'AGOSTINO University of Pisa Italy Eduard EGUSQUIZA Polytechnical University Catalonia Spain Arpad FAY University of Miskolcz Hungary Richard FISHER Voith Hydro Inc USA Regiane FORTES-PATELLA Institute Polytechnique de Grenoble France Aleksandar GAJIC University of Belgrade Serbia José GONZÁLEZ Universidad de Oviedo Spain François GUIBAULT Ecole Polytechnique de Montreal Canada Toshiaki IKOHAGI Tohoku University Japan Chisachi KATO University of Tokyo Japan Kwang-Yong KIM Inha University Korea Youn-Jea KIM Sungkyunkwan University Korea Smaine KOUIDRI Université Pierre et Marie Curie (Paris 6) France Shengcai LI Warwick University UK Adrian LUNGU Dunarea de Jos University of Galati Romania Torbjørn K NIELSEN NTNU Norway Michihiro NISHI Tsinghua University China Peter PELZ Darmstadt University Germany Frantisek POCHYLY Brno University Czech Republic Albert RUPRECHT University of Stuttgart Germany Rudolf SCHILLING Technische Universität München Germany Wei SHYY HKUST Hong Kong, China Romeo SUSAN-RESIGA Politehnica University of Timisoara Romania Kazuhiro TANAKA Kyushu Institute of Technology Japan Yoshinobu TSUJIMOTO Osaka University Japan Local Organizing Committee Chairman Yulin WU Tsinghua University Beijing Executive Chairman Zhengwei WANG Tsinghua University Beijing Members Shuliang CAO Tsinghua University Beijing Cichang CHEN South West University of Petroleum Chengdu Hongxun CHEN Shanghai University Shanghai Jiang DAI China Sanxia General Co Yichang Huashu DOU National University of Singapore Singapore Fengqin HAN Huanan University of Sci & Tech Guangzhou Kun LI Hefei Inst of General Machinery Hefei Rennian LI Lanzhou University of Sci & Tech Lanzhou Wanhong LI National Natural Science Foundation of China Beijing Chao LIU Yangzhou University Yangzhou Li LU China Inst of Water Resources and Hydropower Research Beijing Xingqi LUO Xi'an University of Tech Xi'an Zhenyue MA Dalian University of Sci & Tech Dalian Jiegang MU Zhejiang University of Tech Hangzhou Daqing QIN Harbin Electric Machinery Group Harbin Fujun WANG China Agriculture University Beijing Guoyu WANG Beijing Institute of Technology (BIT) Beijing Leqin WANG Zhejiang University Hangzhou Yuzhen WU NERCSPV Beijing Hongyuan XU Tsinghua University Beijing Jiandong YANG Wuhan University Wuhan Minguan YANG Jiangsu University Zhenjiang Shouqi YUAN Jiangsu University Zhenjiang Lefu ZHANG Harbin Electric Machinery Group Harbin Lixiang ZHANG Yunnan University of Sci & Tech Kunming Shengchang ZHANG Zhejiang University of Tech Hangzhou Kun ZHAO China Water & Electric Consulting Corp Beijing Yuan ZHENG Hehai University Nanjing Jianzhong ZHOU Huazhong University of Sci & Tech Wuhan Lingjiu ZHOU China Agriculture
University Beijing Hongwu ZHU China Petroleum University Beijing Zuchao ZHU Zhejiang Sci-Tech University Hangzhou Secretaries Shuhong LIU (Academic), liushuhong@tsinghua.edu.cn Xianwu LUO (Registration), luoxw@tsinghua.edu.cn Baoshan ZHU (Finance), bszhu@mail.tsinghua.edu.cn
Hermann Wilhelm Abich in the Caucasus: on his two-hundredth birthday
NASA Astrophysics Data System (ADS)
Seibold, Ilse; Seibold, Eugen
2006-11-01
Hermann Abich was born in 1806 in Berlin and died in 1886 in Graz. He grew up in a wealthy family that maintained friendly relations with famous scientists such as Alexander von Humboldt, Leopold von Buch and Carl Ritter. After his studies in Heidelberg and Berlin he turned to extensive fieldwork at the volcanoes of Italy. In 1833-1834 he published excellent petrological and chemical results and soon gained a solid scientific reputation. He was therefore appointed Professor of Geology and Mineralogy at the prestigious Russian university in Dorpat (now Tartu, Estonia) in 1842. In 1844 the Russian authorities sent him to Armenia. For the next three decades his fieldwork, documented in about 190 publications, concentrated on the Great and Lesser Caucasus. This was a period of Russian expansion to the south, with long-lasting regional fighting, but he enjoyed the support of powerful governors. He was an indefatigable and enthusiastic explorer as well as a precise observer and draughtsman. His interests covered many fields: morphology, glaciology, structural geology, volcanology including thermal springs, mineral resources from hydrocarbons, coal and salt to ores, and stratigraphy and paleontology as a basis for geological maps. He also gave advice on practical problems and was active in meteorology, botany and archaeology. Altogether he became "the Father of Caucasus Geology". The following sketch stresses only three aspects of his activities. He was one of the first pioneers of hydrocarbon exploration, especially around the anticlines with the mud volcanoes near Baku. In many respects, however, his fundamental ideas were erroneous. He explained the structure of the Great Caucasus by the traditional theories of Leopold von Buch and Élie de Beaumont: the Caucasus anticline "was elevated by forces acting from beneath". Following them, he tried to discover regularities in the strike of mountain chains.
Similarly, he treated volcanism like Alexander von Humboldt and Leopold von Buch, with their two groups of phenomena: voluminous, mostly basaltic "elevation craters" versus isolated, mostly trachytic and relatively small cones of "true volcanoes". In spite of the isolation of the Caucasus region he continuously cultivated contacts with leading geologists in Europe and was honoured by many institutions. He left Russia in 1876 for Vienna, planning to write there the final monograph volumes on his investigations, but he died before he could complete them.
Zechmeister, T.C.; Farnleitner, A.H.; Rocke, T.E.; Pittner, F.; Rosengarten, R.; Mach, R.L.; Herzig, A.; Kirschner, A.K.T.
2002-01-01
Botulism is one of the most important bird diseases worldwide; it is caused by intoxication with botulinum neurotoxin C1 (BoNt-C1), which is produced by toxigenic clostridia under appropriate conditions. Avian botulism regularly leads to large losses among the migrating bird populations breeding and resting at the saltwater pools of the Austrian national park Neusiedler See-Seewinkel. Despite its ethical dubiousness and high technical expense, the mouse bioassay is still used as the routine standard method for the detection of BoNt-C1. In line with the 3R concept, in vitro alternative methods for the qualitative detection of BoNt-C1 (immunostick ELISA) and of a corresponding BoNt-C1 gene fragment (nested PCR) were established. To estimate the BoNt-C1 production potential, the methods were tested on sediment samples from different saltwater pools subjected to cultivation conditions appropriate for in vitro BoNt-C1 production. With the mouse bioassay, 52 out of 77 samples were found to have a positive toxin production potential. The immunostick ELISA showed a sensitivity similar to that of the mouse bioassay and exhibited a highly significant positive correlation with it in detecting BoNt-C1 (r=0.94; p<0.001). The nested PCR approach revealed more positive BoNt-C1 gene fragment detections than the direct toxin analysis approaches. A weak correlation with the mouse bioassay was discernible (r=0.21; p=0.07); no correlation was found with the immunostick ELISA (r=0.09; p=0.46). Evidently, the PCR approach detected the BoNt-C1 gene fragment in some samples in which no toxin expression had occurred. It is therefore suggested that the qualitative immunostick ELISA represents a potential in vitro alternative to the mouse bioassay for assessing the BoNt-C1 production potential in environmental samples, whereas qualitative BoNt-C1 gene fragment detection via PCR led to an overestimation of the actual toxin production potential.
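The agreement statistics reported above (e.g. r=0.94 between immunostick-ELISA and mouse-bioassay) are plain Pearson correlations between assay outcomes. A minimal sketch of how such a coefficient is computed, on invented 0/1 outcomes that are not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))

# Hypothetical positive(1)/negative(0) results for 10 samples tested
# with two methods -- purely illustrative, NOT the study's data.
bioassay = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
elisa    = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]

r = pearson_r(bioassay, elisa)   # one discordant sample lowers r below 1
```

With graded (rather than binary) toxin readouts the same formula applies unchanged.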
NASA Astrophysics Data System (ADS)
Schiel, Detlef; Rienitz, Olaf
2011-01-01
This comparison 'Hg in natural water' was a follow-up to the pilot studies CCQM-P100.1 and CCQM-P100.2. The aim of this comparison was to demonstrate the capability of national metrology institutes to measure the Hg mass concentration in a natural water sample at the very low concentration level of γ(Hg) ≈ 70 ng/L as required by the EQS. In this way it served to help implement the European Water Framework Directive (WFD). This comparison was an activity of the Inorganic Analysis Working Group (IAWG) of CCQM and was piloted by Physikalisch-Technische Bundesanstalt (PTB, Braunschweig, Germany) with the help of the co-organizers Bundesanstalt für Materialforschung und -prüfung (BAM, Berlin, Germany), Laboratoire National de Métrologie et d'Essais (LNE, Paris, France), and the Joint Research Centre-Institute for Reference Materials and Measurements (EC-JRC-IRMM, Geel, Belgium). The following laboratories participated in this key comparison (in alphabetical order): BAM (Germany) EC-JRC-IRMM (European Union) KRISS (Republic of Korea) LGC (United Kingdom) LNE (France) NIST (United States of America) NMIA (Australia) NRC (Canada) PTB (Germany) SP (Sweden) The majority of participants applied isotope dilution mass spectrometry (IDMS) using sector field or quadrupole inductively coupled plasma MS (ICP-MS) in combination with cold vapour (CV) generation as the analytical technique. NRC reported a combined result of ID-CV-ICP-MS and CV atomic absorption spectrometry (CV-AAS). SP applied a standard addition method on a sector field ICP-MS, while BAM made use of an external 5-point calibration on a CV atomic fluorescence spectrometer (AFS). The key comparison reference value (KCRV) was agreed upon during the IAWG meeting in April 2010 at BIPM as the sum of the added Hg content calculated from the gravimetric sample preparation and the Hg matrix content of the water used for sample preparation (determined and validated on two independent pathways). 
Accordingly, the degrees of equivalence were calculated. The main text of this report appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
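The isotope dilution principle most participants applied can be illustrated numerically: a spike of known amount and known (enriched) isotopic composition is blended with the sample, and the measured blend ratio yields the analyte amount. The two-isotope system below uses invented abundances and amounts, not the actual Hg isotope data or the participants' full equations (which include mass-bias and molar-mass corrections):

```python
def idms_amount(n_spike, R_sample, R_spike, R_blend, x_ref, y_ref):
    """
    Amount of analyte in the sample from a single isotope dilution.
    R_* are isotope amount ratios n(a)/n(b) of sample, spike and blend;
    x_ref and y_ref are the abundances of reference isotope b in sample
    and spike.  Follows from the isotope balance of the blend.
    """
    return n_spike * (y_ref / x_ref) * (R_blend - R_spike) / (R_sample - R_blend)

# Illustrative two-isotope system (all numbers made up):
x_a, x_b = 0.30, 0.70            # sample: 30 % isotope a, 70 % isotope b
y_a, y_b = 0.95, 0.05            # spike strongly enriched in isotope a
R_sample, R_spike = x_a / x_b, y_a / y_b

# Forward-construct a blend from known amounts, then recover n_sample:
n_sample_true, n_spike = 2.0e-9, 1.0e-9      # mol, hypothetical
R_blend = (n_sample_true * x_a + n_spike * y_a) / \
          (n_sample_true * x_b + n_spike * y_b)

n_sample = idms_amount(n_spike, R_sample, R_spike, R_blend, x_b, y_b)
```

The round trip (construct a blend, recover the sample amount) is a useful self-check of the algebra.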
A study of planar Richtmyer-Meshkov instability in fluids with Mie-Grüneisen equations of state
NASA Astrophysics Data System (ADS)
Ward, G. M.; Pullin, D. I.
2011-07-01
We present a numerical comparison study of planar Richtmyer-Meshkov instability with the intention of exposing the role of the equation of state. Results for Richtmyer-Meshkov instability in fluids with Mie-Grüneisen equations of state derived from a linear shock-particle speed Hugoniot relationship (Jeanloz, J. Geophys. Res. 94, 5873, 1989; McQueen et al., High Velocity Impact Phenomena (1970), pp. 294-417; Menikoff and Plohr, Rev. Mod. Phys. 61(1), 75 1989) are compared to those from perfect gases under nondimensionally matched initial conditions at room temperature and pressure. The study was performed using Caltech's Adaptive Mesh Refinement, Object-oriented C++ (AMROC) (Deiterding, Adaptive Mesh Refinement: Theory and Applications (2005), Vol. 41, pp. 361-372; Deiterding, "Parallel adaptive simulation of multi-dimensional detonation structures," Ph.D. thesis (Brandenburgische Technische Universität Cottbus, September 2003)) framework with a low-dissipation, hybrid, center-difference, limiter patch solver (Ward and Pullin, J. Comput. Phys. 229, 2999 (2010)). Results for single and triple mode planar Richtmyer-Meshkov instability when a reflected shock wave occurs are first examined for mid-ocean ridge basalt (MORB) and molybdenum modeled by Mie-Grüneisen equations of state. The single mode case is examined for incident shock Mach numbers of 1.5 and 2.5. The planar triple mode case is studied using a single incident Mach number of 2.5 with initial corrugation wavenumbers related by k1=k2+k3. Comparison is then drawn to Richtmyer-Meshkov instability in perfect gases with matched nondimensional pressure jump across the incident shock, post-shock Atwood ratio, post-shock amplitude-to-wavelength ratio, and time nondimensionalized by Richtmyer's linear growth time constant prediction. Differences in start-up time and growth rate oscillations are observed across equations of state. 
Growth rate oscillation frequency is seen to correlate directly to the oscillation frequency for the transmitted and reflected shocks. For the single mode cases, further comparison is given for vorticity distribution and corrugation centerline shortly after shock interaction. Additionally, we examine single mode Richtmyer-Meshkov instability when a reflected expansion wave is present for incident Mach numbers of 1.5 and 2.5. Comparison to perfect gas solutions in such cases yields a higher degree of similarity in start-up time and growth rate oscillations. The formation of incipient weak waves in the heavy fluid driven by waves emanating from the perturbed transmitted shock is observed when an expansion wave is reflected.
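A Mie-Grüneisen equation of state referenced to a linear shock-particle speed Hugoniot, of the kind used above for MORB and molybdenum, can be sketched as follows. The molybdenum-like parameters are approximate textbook values, and the closure Γρ = Γ0ρ0 is an assumption of this sketch, not necessarily the paper's exact formulation:

```python
# Mie-Gruneisen EOS referenced to a linear U_s = c0 + s*u_p Hugoniot.
RHO0   = 10.2e3   # kg/m^3, initial density (roughly molybdenum)
C0     = 5.1e3    # m/s, bulk sound speed (approximate)
S      = 1.23     # Hugoniot slope (approximate)
GAMMA0 = 1.6      # Gruneisen parameter at rho0 (approximate)

def hugoniot_pressure(rho):
    """Pressure on the principal Hugoniot at density rho (compression side)."""
    eta = 1.0 - RHO0 / rho                  # compression measure
    return RHO0 * C0**2 * eta / (1.0 - S * eta)**2

def mie_gruneisen_pressure(rho, e):
    """p(rho, e), with e the specific internal energy measured from the
    unshocked reference state, assuming Gamma*rho = Gamma0*rho0."""
    eta = 1.0 - RHO0 / rho
    p_h = hugoniot_pressure(rho)
    e_h = p_h * eta / (2.0 * RHO0)          # Hugoniot energy (Rankine-Hugoniot)
    return p_h + GAMMA0 * RHO0 * (e - e_h)
```

By construction the pressure vanishes in the reference state and reduces to the Hugoniot pressure when the energy equals the Hugoniot energy.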
NASA Astrophysics Data System (ADS)
Köhler, Ulf; Nevas, Saulius; McConville, Glen; Evans, Robert; Smid, Marek; Stanek, Martin; Redondas, Alberto; Schönenborn, Fritz
2018-04-01
Three reference Dobsons (the regional standard Dobsons No. 064, Germany, and No. 074, Czech Republic, as well as the world standard No. 083, USA) were optically characterized at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig in 2015 and at the Czech Metrology Institute (CMI) in Prague in 2016 within the EMRP ENV 59 project "Traceability for atmospheric total column ozone". Slit functions and the related parameters of the instruments were measured and compared with G. M. B. Dobson's specifications in his handbook. All Dobsons show a predominantly good match of the slit functions and the peak (centroid) wavelengths, with deviations between -0.11 and +0.12 nm and differences of the full width at half maximum (FWHM) between 0.13 and 0.37 nm compared to the nominal values at the shorter wavelengths. Slightly larger deviations of the FWHMs from the nominal Dobson data, up to 1.22 nm, can be seen at the longer wavelengths, especially for the slit function of the long D wavelength. However, the differences between the effective absorption coefficients (EACs) for ozone derived using Dobson's nominal values of the optical parameters on the one hand and the measured values on the other hand are not too large, for both the old Bass-Paur (BP) and the new IUP (Institut für Umweltphysik, University of Bremen) absorption cross sections. Their inclusion in the calculation of the total ozone column (TOC) leads to changes of significantly less than ±1 % at the AD wavelength pair and between -1 and -2 % at the CD wavelength pair on the BP scale. The effect on the TOC on the IUP scale is somewhat larger at the AD wavelength pair, up to +1 % (D074), and smaller at the CD wavelength pair, from -0.44 to -1.5 %. Besides this gain in the metrological quality of the data, which is needed for trend analyses and satellite validation, it will also become possible to explain uncommon behaviours of field Dobsons during calibration services, especially once the newly developed transportable device TuPS (tuneable portable radiation source) from CMI proves its capability to provide results similar to those of the stationary setups in the laboratories of national metrology institutes. The field Dobsons could then be optically characterized during regular calibration campaigns as well. A corresponding publication will be prepared using the results of TuPS-based measurements of more than 10 Dobsons in field campaigns in 2017.
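The sensitivity of the retrieved TOC to the effective absorption coefficients can be illustrated with the basic Dobson relation X = N/(Δα·μ), where N is the measured double-pair ratio, Δα the differential ozone absorption coefficient and μ the ozone air-mass factor (the Rayleigh and aerosol terms of the full Dobson equation are omitted here, and all numbers are invented for illustration):

```python
def toc_dobson(N, delta_alpha, mu):
    """Simplified Dobson double-pair retrieval: X = N / (delta_alpha * mu)."""
    return N / (delta_alpha * mu)

# Hypothetical measurement: N and mu fixed, only the effective
# absorption coefficient is re-evaluated (values are made up).
N, mu = 0.6, 1.5
alpha_nominal  = 1.432           # (atm cm)^-1, illustrative
alpha_measured = 1.446           # about 1 % larger, illustrative

x_old = toc_dobson(N, alpha_nominal, mu)
x_new = toc_dobson(N, alpha_measured, mu)
rel_change = (x_new - x_old) / x_old   # = alpha_nominal/alpha_measured - 1
```

A 1 % larger EAC thus lowers the retrieved TOC by about 1 %, which is the size of the scale effects quoted above.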
Phytoremediation of spoil coal dumps in Western Donbass (Ukraine)
NASA Astrophysics Data System (ADS)
Klimkina, Iryna; Kharytonov, Mykola; Wiche, Oliver; Heilmeier, Hermann
2017-04-01
At present, about 150,000 hectares of fertile land in Ukraine are occupied by spoil dumps, and this figure increases every year. With the technology used, about 1500 m3 of adjacent strata are dumped at the surface for every 1000 tons of coal mined. Apart from taking land out of use, waste dumps drastically change the natural landscape and pollute air, soil and water sources as a result of water and wind erosion as well as self-ignition processes. A serious concern exists with respect to the Western Donbass coal mining region in Ukraine, where coal is extracted underground and the solid wastes comprise both spoil dumps and wastes from coal processing. Sulphides, mostly pyrite (up to 4% of the waste material), are widely distributed in the freshly deposited waste heaps of Western Donbass. The oxidation of pyrite in the presence of oxygen and water is accompanied by a sharp drop in pH from the surface layer into the spoil dumps (from 5.2-6.2 to 3.9-4.2 in soil substrates with chernozem and from 8.3-8.4 to 6.7-7.2 in soil substrates with red-brown clay, stabilizing in the dump material in both cases at 2.9-3.2). The low pH transforms a number of toxic metals and other elements present in the waste rock (e.g. Fe, Al, Mn, Zn, Mo, Co, As, Cd, Bi, Pb, U) into mobile forms. To stabilize these elements and reduce metal mobility, the most resistant plants occurring naturally in such ecosystems can be used. On coal spoil dumps in Western Donbass the dominant species is Bromopsis inermis and the subdominant Artemisia austriaca; also widespread are Festuca spp., Lathyrus tuberosus, Inula sp., Calamagrostis epigeios, Lotus ucrainicus, and Vicia spp. Identification of plants tolerant to the target metals is a key issue in phytotechnology for soil restoration.
It is hypothesized that plants growing naturally on coal spoil dumps can be candidates for phytostabilization, phytoextraction (phytoaccumulation) and phytomining techniques. Results on the accumulation of target elements in the above- and below-ground biomass of abundant plant species will be used to discuss their phytoremediation potential for spoil coal dumps in Western Donbass. The research is being carried out in the framework of the DAAD project "Biotechnology in Mining - Integration of New Technologies into Educational Practice" and a cooperation between Technische Universität Bergakademie Freiberg, Germany, and the National Mining University, Dnipro, Ukraine.
NASA Astrophysics Data System (ADS)
2012-11-01
All papers published in this volume of Journal of Physics: Conference Series have been peer reviewed through processes administered by the editors of the 26th IAHR Symposium on Hydraulic Machinery and Systems proceedings. Reviews were conducted by expert referees from the International Technical Committee to the professional and scientific standards expected of a proceedings journal published by IOP Publishing. The members of the Scientific Committee who selected and reviewed the papers included in the Proceedings of the 26th IAHR Symposium on Hydraulic Machinery and Systems are: Yulin WU Tsinghua University China François AVELLAN EPFL-LMH Switzerland (principal) Xingqi LUO Xi'an University of Sci & Tech China Martin BÖHLE Kaiserslautern University Germany Gerard BOIS Arts et Métiers ParisTech France Luca D'AGOSTINO University of Pisa Italy Eduard EGUSQUIZA Polytechnical University Catalonia Spain Richard FISHER Voith Hydro Inc USA Regiane FORTES-PATELLA Institute Polytechnique de Grenoble France Aleksandar GAJIC University of Belgrade Serbia Wei YANG China Agriculture University China YinLu YOUNG University of Michigan USA Adrian LUNGU Dunarea de Jos University of Galati Romania Arpad FAY University of Miskolcz Hungary José GONZÁLEZ Universidad de Oviedo Spain Baoshan ZHU Tsinghua University China Hongxun CHEN Shanghai University China Chisachi KATO University of Tokyo Japan Zhenyue MA Dalian University of Sci & Tech China Honggang FAN Tsinghua University China François GUIBAULT Ecole Polytechnique de Montreal Canada Pengcheng GUO Xian University of Technology China Leqing WANG Zhejiang University China Toshiaki IKOHAGI Tohoku University Japan Jiandong YANG Wuhan University China Jianzhong ZHOU Huazhong University of Sci & Tech China Jinwei LI NULL China Rennian LI Lanzhou University of Sci & Tech China Houlin LIU NULL China Juan LIU Tsinghua University China Shuhong LIU Tsinghua University China Xianwu LUO Tsinghua University China Michihiro NISHI Tsinghua 
University China Peter PELZ Darmstadt University Germany František POCHYLY Brno University Czech Republic Rudolf SCHILLING Technische Universität München Germany Minguan YANG Jiangsu University China Smaine KOUIDRI Université Pierre et Marie Curie (Paris 6) France Kazuhiro TANAKA Kyushu Institute of Technology Japan Xuelin TANG Tsinghua University China Yoshinobu TSUJIMOTO Osaka University Japan Fujun WANG China Agriculture University China Guoyu WANG Beijing University of Sci & Tech China Wenwu SONG NULL China Zhengwei WANG Tsinghua University China Hongyuan XU Tsinghua University China Lefu XIAO NULL China Fan YANG Tsinghua University China Yuan ZHENG Hehai University China Zhigang ZUO Tsinghua University China Hongwu ZHU China Petroleum University China Lixiang ZHANG Yunnan University of Sci & Tech China Shengchang ZHANG Zhejiang University of Tech China
Pilge, Stefanie; Kreuzer, Matthias; Karatchiviev, Veliko; Kochs, Eberhard F; Malcharek, Michael; Schneider, Gerhard
2015-05-01
It is claimed that the bispectral index (BIS) and state entropy reflect an identical clinical spectrum, the hypnotic component of anaesthesia. So far, it is not known to what extent different devices display similar index values while processing identical electroencephalogram (EEG) signals. To compare BIS and state entropy during analysis of identical EEG data, with inspection of the raw EEG input to detect potential causes of erroneous index calculation. Offline re-analysis of EEG data from a randomised, single-centre controlled trial using the Entropy Module and an Aspect A-2000 monitor. Klinikum rechts der Isar, Technische Universität München, Munich. Forty adult patients undergoing elective surgery under general anaesthesia. Blocked randomisation of 20 patients per anaesthetic group (sevoflurane/remifentanil or propofol/remifentanil). Isolated forearm technique for differentiation between consciousness and unconsciousness. Prediction probability (PK) of state entropy to discriminate consciousness from unconsciousness. Correlation and agreement between state entropy and BIS from deep to light hypnosis. Analysis of raw EEG compared with index values that conflicted with clinical examination, using frequency measures (frequency bands/Spectral Edge Frequency 95) and visual inspection for physiological EEG patterns (e.g. beta or delta arousal), pathophysiological features such as high-frequency signals (electromyogram/high-frequency EEG or eye fluttering/saccades), different types of electro-oculogram or epileptiform EEG, and technical artefacts. PK of state entropy was 0.80 and of BIS 0.84; the correlation coefficient of state entropy with BIS was 0.78. Nine percent of BIS and 14% of state entropy values disagreed with clinical examination. The highest incidence of disagreement occurred after state transitions, in particular for state entropy after loss of consciousness during sevoflurane anaesthesia.
EEG sequences that led to false 'conscious' index values often showed high-frequency signals and eye blinks. High-frequency EEG/electromyogram signals were pooled because a separation into EEG and fast electro-oculogram (for example eye fluttering or saccades) on the basis of a single EEG channel may not be very reliable. These signals led to a higher Spectral Edge Frequency 95 and a higher ratio of relative beta and gamma band power than EEG signals indicating an adequate 'unconscious' classification. The frequency of other assignable artefacts (e.g. technical or movement artefacts) was negligible, and these were excluded from the analysis. High-frequency signals and eye blinks may account for index values that falsely indicate consciousness. Compared with BIS, state entropy showed more false classifications of the clinical state at the transition between consciousness and unconsciousness.
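The prediction probability (PK) quoted above measures how well an index orders patients by clinical state; for a binary state it reduces to a concordance statistic (the ROC area). A minimal sketch with invented index values, not the study's data:

```python
def prediction_probability(indicator, state):
    """
    Smith's prediction probability (PK) for a binary clinical state:
    the probability that a randomly chosen conscious/unconscious pair
    is ordered correctly by the index, ties counted half.  For a binary
    state this equals the area under the ROC curve.
    """
    pairs = concordant = tied = 0
    n = len(state)
    for i in range(n):
        for j in range(n):
            if state[i] == 1 and state[j] == 0:   # one conscious, one not
                pairs += 1
                if indicator[i] > indicator[j]:
                    concordant += 1
                elif indicator[i] == indicator[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / pairs

# Hypothetical index values; 1 = conscious, 0 = unconscious.
index = [90, 85, 50, 40, 60, 35]
state = [1, 1, 1, 0, 0, 0]
pk = prediction_probability(index, state)   # one discordant pair out of nine
```

A PK of 1.0 means perfect discrimination; 0.5 means no better than chance.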
Mobile radiography CQI: an inter-national study.
Kamat, M R; Wein, B; Cohan, R
1999-01-01
Mobile or bedside radiography has been and remains a staple diagnostic and follow-up tool used readily by many medical disciplines, such as cardiology, surgery, orthopedics, pediatrics and neonatology. Ironically, in the past a student or the least qualified technologist was sent to perform the bedside exam. Moreover, it was almost expected that poor but acceptable film quality would result, or that repeat films would be needed. Inefficiency with respect to exam quality, exam time or film repeats can be costly: the price of inefficiency is the cost of doing things incorrectly or not in the most efficient manner, rather than operating in an ideal manner. The purpose of this study was to compare the total cost of inefficiently organized, scheduled and performed mobile radiography at three large teaching hospitals in different locations and with diverse patient loads, as a means of determining how best to increase utilization and performance. The study was performed at the 489-bed New England Deaconess Hospital (NEDH), the 644-bed Sentara Norfolk General Hospital (SNGH), and the 1500-bed Rheinisch-Westfälische Technische Hochschule (RWTH) in Aachen, Germany. Similar standardized study methods were used at all three institutions, where extended observation of mobile utilization, areas of inefficiency, time wasted per episode and the number of episodes per time period were determined. Data were logged during three standardized time periods, summed, and then multiplied by the technologist hourly pay rate. This sum was extrapolated over 52 weeks to give the total annual cost of inefficiently organized mobile radiography. For NEDH the total cost of inefficiency was $75,453 and for SNGH $49,586, while for RWTH it was $9,519. Eighteen areas of inefficiency were identified and grouped, such as lack of spatial cohesiveness and lack of communication leading to film duplication.
While inefficiencies in the delivery of hospital-based health care are well known, this study attempts to quantify and assign a dollar value to each process found to be inefficient. Key inefficiencies were found to be common at large hospitals, whether in the United States or in Europe. These impairments are responsible for a disproportionate share of overall inefficiency, and their elimination (achievable by simple solutions) would result in drastic cost reductions (ranging from 40-75% at the institutions studied). This study is therefore important in view of spiralling costs, as it provides a key component of total quality management (TQM) in radiology and a continuous quality improvement (CQI) tool for mobile radiography specifically.
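The annual cost figure for each site follows the simple extrapolation described above: observed wasted time per episode, times episode count, times the technologist hourly rate, scaled to 52 weeks. A sketch with made-up inputs, not the study's observations:

```python
def annual_inefficiency_cost(minutes_wasted_per_episode,
                             episodes_per_week,
                             hourly_rate_usd,
                             weeks_per_year=52):
    """Annualized cost of time wasted on mobile exams (illustrative model)."""
    hours_per_week = minutes_wasted_per_episode * episodes_per_week / 60.0
    return hours_per_week * hourly_rate_usd * weeks_per_year

# Hypothetical site: 12 min wasted per episode, 150 episodes/week, $22/h.
cost = annual_inefficiency_cost(12.0, 150, 22.0)
```

The real study additionally broke the wasted time down into eighteen grouped areas of inefficiency before costing them.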
Application of ray-traced tropospheric slant delays to geodetic VLBI analysis
NASA Astrophysics Data System (ADS)
Hofmeister, Armin; Böhm, Johannes
2017-08-01
The correction of tropospheric influences via so-called path delays is critical for the analysis of observations from space geodetic techniques such as very long baseline interferometry (VLBI). In standard VLBI analysis, the a priori slant path delays are determined using the concept of zenith delays, mapping functions and gradients. The a priori use of ray-traced delays, i.e., tropospheric slant path delays determined by ray-tracing through the meteorological data of numerical weather models (NWM), serves as an alternative way of correcting the influences of the troposphere on the VLBI observations within the analysis. In the research presented here, the application of ray-traced delays to the VLBI analysis of sessions in a time span of 16.5 years is investigated. Ray-traced delays have been determined with the program RADIATE (see Hofmeister in Ph.D. thesis, Department of Geodesy and Geophysics, Faculty of Mathematics and Geoinformation, Technische Universität Wien. http://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-3444, 2016) utilizing meteorological data provided by NWM of the European Centre for Medium-Range Weather Forecasts (ECMWF). In comparison with a standard VLBI analysis including tropospheric gradient estimation, the application of the ray-traced delays to an analysis that uses the same parameterization except for the a priori slant path delay handling and the wet mapping factors used for the zenith wet delay (ZWD) estimation improves the baseline length repeatability (BLR) at 55.9% of the baselines at the sub-millimetre level. If no tropospheric gradients are estimated within the compared analyses, 90.6% of all baselines benefit from the application of the ray-traced delays, which leads to an average improvement of the BLR of 1 mm. The effects of the ray-traced delays on the terrestrial reference frame are also investigated.
A separate assessment of the RADIATE ray-traced delays is carried out by comparison to the ray-traced delays from the National Aeronautics and Space Administration Goddard Space Flight Center (NASA GSFC) (Eriksson and MacMillan in http://lacerta.gsfc.nasa.gov/tropodelays, 2016) with respect to the analysis performances in terms of BLR results. If tropospheric gradient estimation is included in the analysis, 51.3% of the baselines benefit from the RADIATE ray-traced delays at sub-mm difference level. If no tropospheric gradients are estimated within the analysis, the RADIATE ray-traced delays deliver a better BLR at 63% of the baselines compared to the NASA GSFC ray-traced delays.
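The "zenith delay + mapping function + gradients" decomposition that ray-traced delays replace can be sketched as below. The 1/sin(e) mapping and the gradient mapping used here are crude illustrations, not the VMF-type or RADIATE formulas, and all numeric values are hypothetical:

```python
import math

def slant_delay(elev_rad, azim_rad, zhd, zwd, grad_n, grad_e):
    """A priori tropospheric slant delay from zenith delays and gradients,
    using a crude 1/sin(e) mapping function (illustrative only; real
    analyses use separate hydrostatic and wet mapping functions)."""
    mf = 1.0 / math.sin(elev_rad)                            # symmetric part
    mf_g = 1.0 / (math.sin(elev_rad) * math.tan(elev_rad))   # gradient part
    return mf * (zhd + zwd) + mf_g * (grad_n * math.cos(azim_rad)
                                      + grad_e * math.sin(azim_rad))

# Hypothetical site values: ZHD 2.3 m, ZWD 0.15 m, mm-level gradients.
d = slant_delay(math.radians(20.0), math.radians(45.0),
                2.3, 0.15, 0.5e-3, -0.3e-3)
```

Ray-tracing instead integrates the refractivity from the NWM along the actual bent signal path, so no mapping-function or gradient model is needed a priori.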
NASA Astrophysics Data System (ADS)
Hager, P.; Czupalla, M.; Walter, U.
2010-11-01
In this paper we report on the development of a dynamic MATLAB SIMULINK® model of the water and electrolyte balance inside the human body. This model is part of an environmentally sensitive dynamic human model for the optimization and verification of environmental control and life support systems (ECLSS) in space flight applications. An ECLSS provides all vital supplies for supporting human life on board a spacecraft. As human space flight today focuses on medium- to long-term missions, the strategy in ECLSS is shifting to closed-loop systems, for which dynamic stability and function over long durations are essential. However, the only evaluation and rating methods for ECLSS up to now are either expensive trial-and-error breadboarding strategies or static and semi-dynamic simulations. In order to overcome this mismatch, the Exploration Group at Technische Universität München (TUM) is developing a dynamic environmental simulation, the "Virtual Habitat" (V-HAB). The central element of this simulation is the dynamic and environmentally sensitive human model. The water subsystem simulation of the human model discussed in this paper is of vital importance for the efficiency of possible ECLSS optimizations, as an over- or under-scaled water subsystem would have an adverse effect on the overall mass budget. On the other hand, water has a pivotal role in the human organism. Water accounts for about 60% of the total body mass and is a reactant and product of numerous metabolic reactions. It is a transport medium for solutes and, due to its high evaporation enthalpy, provides the most potent medium for heat load dissipation. In a system engineering approach the human water balance was worked out by simulating the human body's subsystems and their interactions. The body fluids were assumed to reside in three compartments: blood plasma, interstitial fluid and intracellular fluid.
In addition, the active and passive transport of water and solutes between those compartments was modeled dynamically. A kidney model regulates the electrolyte concentration in body fluids (osmolality) in narrow confines and a thirst mechanism models the urge to ingest water. A controlled exchange of water and electrolytes with other human subsystems, as well as with the environment, is implemented. Finally, the changes in body composition due to muscle growth are accounted for. The outcome of this is a dynamic water and electrolyte balance, which is capable of representing body reactions like thirst and headaches, as well as heat stroke and collapse, as a response to its work load and environment.
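The three-compartment balance can be sketched as a small ODE system with osmotically driven water exchange, integrated explicitly. The conductances, time step and initial osmolalities below are invented for illustration; the actual V-HAB model is far more detailed (active transport, kidney and thirst regulation):

```python
def step_water_balance(v, osm, k, dt):
    """
    One explicit-Euler step of water exchange between three compartments
    (plasma, interstitial, intracellular), connected as
    plasma <-> interstitial <-> intracellular.
    v: volumes [L]; osm: solute amounts [osmol] (held fixed here);
    k: conductances.  q > 0 means water flows toward the second,
    more concentrated compartment of the pair.
    """
    c = [osm[i] / v[i] for i in range(3)]   # osmolality of each compartment
    q_pi = k[0] * (c[1] - c[0])             # plasma -> interstitial flow
    q_ic = k[1] * (c[2] - c[1])             # interstitial -> intracellular flow
    return [v[0] - q_pi * dt,
            v[1] + (q_pi - q_ic) * dt,
            v[2] + q_ic * dt]

# Plasma ~3 L, interstitial ~11 L, intracellular ~28 L (typical adult);
# plasma starts slightly hypertonic, e.g. after a saline load.
v   = [3.0, 11.0, 28.0]
osm = [3.0 * 0.30, 11.0 * 0.29, 28.0 * 0.29]   # osmol
for _ in range(1000):
    v = step_water_balance(v, osm, k=[0.05, 0.05], dt=1.0)
```

Total water is conserved by construction, and water shifts into the hypertonic plasma until osmolalities equalize, which is the mechanism the kidney and thirst models then act on.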
Reinforced concrete members under service conditions: research report
NASA Astrophysics Data System (ADS)
Eligehausen, Rolf; Kordina, Karl; Schießl, Peter
2000-09-01
Foreword. Part I: Crack widths (Gert König). 1 A mechanical model for improving the accuracy of crack-width predictions under service conditions (Gert König and Michael Fischer). 1.1 Introduction and objectives. 1.2 Test programme. 1.3 Measurement techniques. 1.4 Loading and test procedure. 1.5 References. 2 Crack widths and deformation increase of prestressed members under repeated load and restraint loading (Gert König and Michael Fischer). 2.1 Introduction and objectives. 2.2 Test programme. 2.3 Evaluation. 2.4 Outlook. 2.5 References. 3 Cracking behaviour of concrete under sudden cooling (Viktor Mechtcherine and Harald S. Müller). 3.1 Introduction. 3.2 Experimental investigations. 3.3 Formulation of a constitutive law for thermally loaded concrete. 3.4 Crack development in a concrete slab under thermal shock. 3.5 Summary. 3.6 References. 4 Steel-fibre concrete under service conditions with sustained loading (Bo Soon Kang, Bernd Schnütgen and Friedhelm Stangenberg). 4.1 Introduction. 4.2 Effect of steel fibres in concrete. 4.3 Test programme. 4.4 Investigations of bond behaviour. 4.5 Investigations of behaviour under bending. 4.6 Theoretical investigations. 4.7 References. 5 Experimental investigations on reinforced concrete tension members under repeated loading to determine the stiffening contribution of the concrete between cracks (Petra Seibel and Gerhard Mehlhorn). 5.1 Introduction. 5.2 Approaches for determining the contribution of the concrete between cracks according to Eurocode 2, Model Code 90 and Günther. 5.3 Experimental investigations. 5.4 Results. 5.5 Summary. 5.6 References. 6 Cracking and deformation behaviour of precast prestressed concrete girders under service conditions, with particular attention to concrete age (Monika Maske, Heinz Meichsner and Lothar Schubert). 6.1 Introduction. 6.2 Description of the precast girders. 6.3 Loading tests. 6.4 Results. 6.5 Summary. 6.6 References.
Part II: Bond (Rolf Eligehausen). 1 A mechanical model describing the bond behaviour between steel and concrete (Gert König, Nguyen V. Tue and Wolfgang Kurz). 1.1 Introduction. 1.2 Description of force transfer between steel and concrete. 1.3 Presentation of the model. 1.4 Material laws for calculating the deformation of the truss. 1.5 Comparison between test and model. 1.6 Summary and outlook. 1.7 References. 2 Bond under non-static loading (Rainer Koch and György L. Balázs). 2.1 Overview of the tests performed. 2.2 Test specimens and materials. 2.3 Test equipment. 2.4 Tests and results. 2.5 Summary and outlook. 2.6 References. 3 Load-bearing and deformation behaviour of reinforced concrete structures under service loading (Thomas M. Sippel and Rolf Eligehausen). 3.1 Introduction. 3.2 General remarks. 3.3 Computational model and material models. 3.4 Comparison between tests and calculation. 3.5 Parameter studies. 3.6 Simplified computational model. 3.7 Summary. 3.8 References. 4 Bond behaviour of post-tensioned, grouted tendons under service conditions (Josef Hegger, Norbert Will and Heiner Cordes). 4.1 Introduction. 4.2 Bond behaviour of tendons. 4.3 Time-dependent bond effects. 4.4 Tests under sustained static loading. 4.5 Tests under sustained dynamic loading. 4.6 Design proposal for bond parameters. 4.7 Summary. 4.8 References. 5 Stress redistribution in cross-sections with mixed reinforcement (Josef Hegger, Heiner Cordes and Matthias Rudlof). 5.1 Problem statement and objectives. 5.2 Stress redistribution with mixed reinforcement. 5.3 Tests on concentrically loaded tension members. 5.4 Test results. 5.5 Determination and comparison of bond parameters. 5.6 Summary. 5.7 References. Part III: Structural members (Karl Kordina). 1 Influence of longitudinal loading on the inclination of shear cracks (Marek Los and Ulrich Quast). 1.1 Introduction.
1.2 Uncracked state. 1.3 Cracked state. 1.4 FEM calculations. 1.5 Summary. 1.6 References. 2 Effects of different deformation behaviour under loading and unloading on the stresses in the serviceability state (Jochen Keysberg). 2.1 Introduction and objectives. 2.2 Models for the moment-curvature relationship. 2.3 Program for nonlinear analysis. 2.4 Influence of load cycles on nonlinear analyses. 2.5 Summary. 2.6 References. 3 3D analysis of beam-column connections of normal- and high-strength concrete under cyclic loading (Josko Ozbolt, Yijun Li and Rolf Eligehausen). 3.1 Introduction. 3.2 Material model and FE discretization. 3.3 Numerical analysis. 3.4 Conclusions. 3.5 Summary. 3.6 References. 4 The influence of free vibrations caused by dynamic loading on the deterioration of a structure (Manfred Specht and Michael Kramp). 4.1 Motivation of the research project. 4.2 Research objectives. 4.3 Test girders, test procedure and test results. 4.4 Evaluation. 4.5 Results for the system identification of reinforced concrete structures. 4.6 References. 5 Local shrinkage and temperature gradients in reinforced near-surface zones of concrete structures (Josef Eibl and Stephan Kranz). 5.1 Problem statement. 5.2 Calculation of temperature and moisture fields. 5.3 Numerical model for stress analysis in concrete. 5.4 Tests performed. 5.5 Computational investigations. 5.6 Summary. 5.7 References. 6 Penetration behaviour of liquids at flexural cracks (Gert König and Christian Brunsch). 6.1 Problem statement. 6.2 Experimental investigations. 6.3 Development of a model for the computational estimation of the time-dependent penetration of water into flexural cracks of reinforced concrete members. 6.4 Summary and discussion of the test series. 6.5 References. 7 Durability problems of open basins (György Iványi, Wilhelm Buschmeyer and Udo Paas). 7.1 Introduction.
7.2 Survey of existing structures. 7.3 Field investigations. 7.4 Laboratory investigations. 7.5 Calculations. 7.6 Design and construction criteria. 7.7 References. Part IV: Corrosion and fatigue (Peter Schießl). 1 Fatigue corrosion of prestressing steel (Herbert Kupfer and Hans H. Müller). 1.1 Research objective. 1.2 Corrosion tests on prestressing steels. 1.3 Methods for detecting incipient cracks. 2 Corrosion fatigue of steel in concrete members (J. W. Weber, Peter Schießl and Jörg Moersch). 2.1 General remarks. 2.2 Main influences on corrosion fatigue cracking. 2.3 Aim of the investigations. 2.4 Test specimens and concretes. 2.5 Chloride exposure. 2.6 Results. 2.7 Summary. 2.8 References. 3 Investigations of the crack corrosion behaviour of prestressing steels under service conditions (Jörg Moersch and Peter Schießl). 3.1 Introduction and aim. 3.2 Investigation programme. 3.3 Results. 3.4 Summary of the results. 3.5 References. 4 Fatigue strength of reinforced concrete exposed to seawater (Ulf Nürnberger and Willibald Beul). 4.1 Introduction. 4.2 Mechanism. 4.3 Fatigue strength investigations. 4.4 Conclusions. 4.5 References. 5 Hydrogen-induced stress corrosion cracking of prestressing steels under pulsating tensile loading (Ulf Nürnberger and Willibald Beul). 5.1 Introduction. 5.2 Investigations. 5.3 Conclusion. 5.4 References. 6 Self-healing and reinforcement corrosion at through cracks in reinforced concrete permeated by weakly acidic water (Wieland Ramm and Michaela Biscoping). 6.1 Overview. 6.2 Introduction. 6.3 Test programme. 6.4 Test results. 6.5 Summary. 6.6 References. 7 Investigations of fretting fatigue in partially prestressed members (Heiner Cordes, Josef Hegger and Jens U. Neuser). 7.1 Introduction. 7.2 State of the research. 7.3 Test setup and procedure. 7.4 Test results. 7.5 Summary and outlook. 7.6 References. Part V: Young concrete (Ferdinand S. Rostásy).
1 Ermittlung und Berechnung des Nullspannungstemperaturgradienten im jungen Beton (Rupert Springenschmid und Jean-Louis Bostvironnois). 1.1 Einleitung. 1.2 Der Nullspannungstemperaturgradient. 1.3 Ergebnisse und Schußfolgerungen. 1.4 Literatur. 2 Experimentelle Ermittlung der Verformungskennwerte von jungem Beton und der Zwangspannungen in situ (Markus Plannerer und Rupert Springenschmid). 2.1 Einleitung. 2.2 Versuche zur Ermittlung der Verformungskennwerte und der Zwangspannungen. 2.3 Laborergebnisse. 2.4 In-situ-Ergebnisse. 2.5 Zusammenfassung. 2.6 Literatur. 3 Werkstoffeigenschaften jungen Betons--Experimente und Modellierung (Ferdinand S. Rostásy und Alex-W. Gutsch). 3.1 Einleitung. 3.2 Versuche und Modellbildung. 3.3 Zusammenfassung. 3.4 Literatur.
Rendon, J S; Swinton, M; Bernthal, N; Boffano, M; Damron, T; Evaniew, N; Ferguson, P; Galli Serra, M; Hettwer, W; McKay, P; Miller, B; Nystrom, L; Parizzia, W; Schneider, P; Spiguel, A; Vélez, R; Weiss, K; Zumárraga, J P; Ghert, M
2017-05-01
As tumours of bone and soft tissue are rare, multicentre prospective collaboration is essential for meaningful research and evidence-based advances in patient care. The aim of this study was to identify barriers and facilitators encountered in large-scale collaborative research by orthopaedic oncological surgeons involved or interested in prospective multicentre collaboration. All surgeons who were involved, or had expressed an interest, in the ongoing Prophylactic Antibiotic Regimens in Tumour Surgery (PARITY) trial were invited to participate in a focus group to discuss their experiences with collaborative research in this area. The discussion was digitally recorded, transcribed and anonymised. The transcript was analysed qualitatively, using an analytic approach which aims to organise the data in the language of the participants with little theoretical interpretation. The 13 surgeons who participated in the discussion represented orthopaedic oncology practices from seven countries (Argentina, Brazil, Italy, Spain, Denmark, United States and Canada). Four categories and associated themes emerged from the discussion: (1) the need for collaboration in the field of orthopaedic oncology, owing to the rarity of the tumours and the need for high-level evidence to guide treatment; (2) motivational factors for participating in collaborative research, including establishing proof of principle, learning opportunities, answering a relevant research question and being part of a collaborative research community; (3) barriers to participation, including funding, personal, institutional, trial and administrative barriers; and (4) facilitators for participation, including institutional facilitators, leadership, authorship, trial set-up and the support of centralised study coordination. Orthopaedic surgeons involved in an ongoing international randomised controlled trial (RCT) were motivated by many factors to participate. 
There were a number of barriers to and facilitators for their participation. There was a collective sense of fatigue experienced in overcoming these barriers, which was mirrored by a strong collective sense of the importance of, and need for, collaborative research in this field. The experiences were described as essential educational first steps to advance collaborative studies in this area. Knowledge gained from this study will inform the development of future large-scale collaborative research projects in orthopaedic oncology. Cite this article: J. S. Rendon, M. Swinton, N. Bernthal, M. Boffano, T. Damron, N. Evaniew, P. Ferguson, M. Galli Serra, W. Hettwer, P. McKay, B. Miller, L. Nystrom, W. Parizzia, P. Schneider, A. Spiguel, R. Vélez, K. Weiss, J. P. Zumárraga, M. Ghert. Barriers and facilitators experienced in collaborative prospective research in orthopaedic oncology: A qualitative study. Bone Joint Res 2017;6:-314. DOI: 10.1302/2046-3758.65.BJR-2016-0192.R1. © 2017 Ghert et al.
NASA Astrophysics Data System (ADS)
Degrez, Gérard; van der Mullen, Joost
2011-01-01
It is with pleasure and pride that we present the selected contributions from participants of the 11th High-Tech Plasma Processes conference. This conference, which took place in Brussels from June 28 to July 2, 2010, is based on a European forum with a history of more than twenty years. The conference series started as a thermal plasma conference and gradually expanded to include other topics and fields as well. HTPP 11 was organized in collaboration with the Belgian Interuniversity Attraction Pole (IAP): Physical chemistry of Plasma-surface Interactions (PSI-ψ). The program was devised by the plasma group of the Technische Universiteit Eindhoven in collaboration with the IAP, the Association Arc Electrique and the International Scientific Committee. The organization was guided by the Steering Committee and supervised by the two founding members, Jacques Amouroux and Pierre Fauchais. HTPP aims to bring together different scientific communities to facilitate contacts between science, technology and industry, providing a platform for the exploration of elementary processes in and by plasmas. This implies that, apart from fundamental topics, considerable attention is paid to new plasma applications; plasma engineering in Europe is one of the main driving forces behind HTPP. The conference supports the dissemination of methods for plasma diagnostics and monitoring and the exchange of models for plasma sources and plasma applications. A novelty of HTPP 11 was the model market: a special type of poster session where running models were demonstrated and spectators were challenged to assemble their own plasma models using one of the available construction platforms. For the first time in this series of conferences, the proceedings are published in two companion issues: Journal of Physics D: Applied Physics, which presents a selection of papers including invited and keynote papers, and the Journal of Physics: Conference Series. 
The present volume of the Journal of Physics: Conference Series includes 21 papers devoted to various branches of plasma physics. In line with the objectives of the HTPP conference, you will find papers on plasma sources, diagnostics and theory, covering the fields of thermal and non-thermal (even cold) plasmas, plasma-electrode interactions, surface treatment, synthesis, light generation and transport, and on applications in the fields of environmental technologies, biochemistry, and aeronautical and space sciences. We would like to thank the members of the various committees, the participants who sent their contributions, and the referees, who did an excellent job helping to improve the manuscripts. We greatly appreciate the financial support from the conference sponsors: Association Arc Electrique, Belspo (Belgian Science Policy), Fonds National de la Recherche Scientifique, Ocean Optics Inc., Technifutur - Pôle Génie Mécanique, and Solvay S.A. Gérard Degrez, Chairman of the Local Organizing Committee; Joost van der Mullen, Chairman of the Steering Committee
Combining the Observations from Different GNSS (Invited)
NASA Astrophysics Data System (ADS)
Dach, R.; Lutz, S.; Schaer, S.; Bock, H.; Jäggi, A.; Meindl, M.; Ostini, L.; Thaller, D.; Steinbach, A.; Beutler, G.; Steigenberger, P.
2009-12-01
For a very long time, GPS has clearly dominated the use of GNSS measurements for scientific purposes. This picture is changing: we are moving from a GPS-only to a multi-GNSS world. This is reflected, e.g., by the change of the meaning of the abbreviation IGS in March 2005 from International GPS Service to International GNSS Service. The current situation can be described as follows: GPS has the leading role among the GNSS because it has provided a very stable satellite constellation over many years. Some of the currently active GPS satellites are nearly 15 years old. These old satellites are expected to be decommissioned within the next years. On the other hand, due to the increasing number of active GLONASS satellites and the improved density of multi-GNSS tracking stations in the IGS network, the quality of the GLONASS orbits has improved drastically during the last years. The European Galileo system is under development: currently two test satellites (GIOVE-A and GIOVE-B) are in orbit, and the IOV (in-orbit validation) phase will start soon. The first test satellites for the Chinese Compass system are also in space. For maximum benefit, the observations of these GNSS will in future be processed in a combined multi-GNSS analysis. CODE (Center for Orbit Determination in Europe) is a joint venture between the Astronomical Institute of the University of Bern (AIUB, Bern, Switzerland), the Federal Office of Topography (swisstopo, Wabern, Switzerland), the Federal Agency for Cartography and Geodesy (BKG, Frankfurt am Main, Germany), and the Institut für Astronomische und Physikalische Geodäsie of the Technische Universität München (IAPG/TUM, Munich, Germany). It acts as one of the global analysis centers of the IGS and started in May 2003 with a rigorously combined processing of GPS and GLONASS measurements for the final, rapid, and even ultra-rapid product lines. 
All contributions from CODE to the IGS are in fact multi-GNSS products; the only exception is the satellite and receiver clock corrections. The procedure to derive the satellite and receiver clock corrections is in transition from the currently operational GPS-only mode to a multi-GNSS mode including GPS and GLONASS. When CODE started its multi-GNSS processing more than six years ago, the network density and the number of active GLONASS satellites were very limited. Nowadays this situation has changed, which puts us in a position to review the strategy for combining the measurements from different GNSS in the data analysis. The presentation will discuss the advantages and disadvantages of the highest (only one constant inter-system bias) and lowest (a minimum number of common parameters) possible correlation between the observations of the individual GNSS.
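The "highest correlation" case mentioned above, in which all observations of a second system share one constant inter-system bias (ISB) with respect to GPS, can be sketched as a small least-squares problem. The following is only an illustrative toy example, not CODE's actual processing: the satellite counts, noise level, and bias values are invented, and real analyses estimate many more parameters per epoch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-epoch residuals (geometry already removed): every
# pseudorange contains the receiver clock error; GLONASS observations
# additionally contain one constant inter-system bias (ISB).
clk_true = 12.4   # receiver clock error [m] (invented)
isb_true = 3.7    # GPS -> GLONASS inter-system bias [m] (invented)

n_gps, n_glo = 8, 6
res_gps = clk_true + rng.normal(0.0, 0.3, n_gps)
res_glo = clk_true + isb_true + rng.normal(0.0, 0.3, n_glo)

# Design matrix: column 0 = clock (all satellites), column 1 = ISB (GLONASS only)
A = np.zeros((n_gps + n_glo, 2))
A[:, 0] = 1.0
A[n_gps:, 1] = 1.0
y = np.concatenate([res_gps, res_glo])

x, *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"estimated clock = {x[0]:.2f} m, estimated ISB = {x[1]:.2f} m")
```

The "lowest correlation" alternative would instead estimate a separate clock parameter per system, at the cost of more unknowns per epoch.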
NASA Astrophysics Data System (ADS)
Statnikov, V.; Stephan, S.; Pausch, K.; Meinke, M.; Radespiel, R.; Schröder, W.
2016-06-01
The turbulent wake of a generic space launcher wind tunnel model with an underexpanded nozzle jet is investigated experimentally and numerically to gain insight into how the intricate wake flow phenomena of space vehicles at higher stages of the flight trajectory vary with increasing Mach number. The experiments are carried out at M_∞ = 3 and M_∞ = 6 in the Ludwieg tube test facility of the Institute of Fluid Mechanics at the Technische Universität Braunschweig, while the corresponding time-resolved computations are performed by the Institute of Aerodynamics at RWTH Aachen University using a zonal RANS-LES approach. A strong alteration of the wake topology with increasing Mach number due to the changing pressure ratio at the nozzle exit is found. At M_∞ = 3, the moderate underexpansion rate of p_e/p_∞ ≈ 5 leads to the formation of a recirculation region with an elongated triangular cross-section reaching to the nozzle exit. At M_∞ = 6, a substantially stronger expansion of the jet plume (p_e/p_∞ ≈ 100) causes the formation of a cavity region with a quadrangular cross-section. The stronger deflection towards the nozzle at M_∞ = 3 results in lower mean and rms wall pressure ratios than at M_∞ = 6. However, due to the higher freestream pressure value at the lower Mach number, the relation of the absolute values is reversed, making the lower supersonic regime more critical with respect to dynamic structural loads. This observation is confirmed by an overall good agreement between numerical and experimental data at characteristic positions on the base and nozzle wall. Furthermore, it was shown that undesired effects of the strut support are present in the wake along the whole circumference. For M_∞ = 3 the strut influence is found to be particularly intense. The spectral analysis of wall pressure fluctuations reveals fundamental differences in the dynamic behavior of the two investigated wake flow regimes. 
At M_∞ = 3, a dominant frequency range around Sr_D ≈ 0.2 associated with the inner dynamics of the recirculation bubble is found at the base, while on the nozzle a broad-band low-frequency content of substantially higher amplitudes is detected, which is a footprint of the gradual realignment of the turbulent shear layer along the nozzle wall. The spectra at M_∞ = 6 are characterized by several sharp high-frequency peaks at Sr_D ≥ 0.8. A strong correlation between the strut-supported wind tunnel configuration and the axisymmetric free-flight case is found for the peaks at Sr_D ≈ 0.85, which are known to be caused by the radial flapping motion of the shear layer along the cavity.
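The Strouhal numbers quoted above are converted to dimensional frequencies via Sr_D = f·D/U_∞. A minimal sketch of this conversion follows; the diameter and freestream velocity used here are placeholder values for illustration only, not the actual Ludwieg-tube conditions of the study.

```python
# Convert a Strouhal number Sr_D = f * D / U_inf into a dimensional frequency.
def frequency_from_strouhal(sr_d, diameter_m, u_inf_ms):
    """Dominant frequency f [Hz] corresponding to Strouhal number Sr_D."""
    return sr_d * u_inf_ms / diameter_m

# Placeholder model diameter and freestream velocity (NOT the study's values)
D = 0.108   # [m]
U = 580.0   # [m/s]

for sr in (0.2, 0.85):
    f = frequency_from_strouhal(sr, D, U)
    print(f"Sr_D = {sr:4.2f}  ->  f = {f:7.1f} Hz")
```

The same base-flow dynamics thus map to very different absolute frequencies depending on model scale and tunnel conditions, which is why results are reported in Strouhal form.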
A catchment-scale groundwater model including sewer pipe leakage in an urban system
NASA Astrophysics Data System (ADS)
Peche, Aaron; Fuchs, Lothar; Spönemann, Peter; Graf, Thomas; Neuweiler, Insa
2016-04-01
Keywords: pipe leakage, urban hydrogeology, catchment scale, OpenGeoSys, HYSTEM-EXTRAN. Wastewater leakage from subsurface sewer pipe defects leads to contamination of the surrounding soil and groundwater (Ellis & Revitt, 2002; Wolf et al., 2004). Leakage rates at pipe defects have to be known in order to quantify contaminant input. Due to the inaccessibility of subsurface pipe defects, direct (in-situ) measurements of leakage rates are tedious and associated with a high degree of uncertainty (Wolf, 2006). Previously proposed catchment-scale models simplify leakage rates by neglecting unsaturated zone flow or by reducing the spatial dimensions (Karpf & Krebs, 2013; Boukhemacha et al., 2015). In the present study, we present a physically based 3-dimensional numerical model incorporating flow in the pipe network, in the saturated zone and in the unsaturated zone to quantify leakage rates on the catchment scale. The model consists of the pipe network flow model HYSTEM-EXTRAN (itwh, 2002) coupled to the subsurface flow model OpenGeoSys (Kolditz et al., 2012). We also present the newly developed coupling scheme between the two flow models. Leakage functions specific to a pipe defect are derived from simulations of pipe leakage using spatially refined grids around the pipe defects. In order to minimize the computational effort, these leakage functions are built into the presented numerical model using unrefined grids around the pipe defects. The resulting coupled model is capable of efficiently simulating spatially distributed pipe leakage coupled with subsurface water flow in a 3-dimensional environment. References: Boukhemacha, M. A., Gogu, C. R., Serpescu, I., Gaitanaru, D., & Bica, I. (2015). A hydrogeological conceptual approach to study urban groundwater flow in Bucharest city, Romania. Hydrogeology Journal, 23(3), 437-450. doi:10.1007/s10040-014-1220-3. Ellis, J. B., & Revitt, D. M. (2002). Sewer losses and interactions with groundwater quality. Water Science and Technology, 45(3), 195-202. 
itwh (2002). Modellbeschreibung, Institut für technisch-wissenschaftliche Hydrologie GmbH, Hannover. Karpf, C. & Krebs, P. (2013). Modelling of groundwater infiltration into sewer systems. Urban Water Journal, 10:4, 221-229, DOI: 10.1080/1573062X.2012.724077. Kolditz, O., Bauer, S. et al. (2012). OpenGeoSys: an open source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THM/C) processes in porous media. Env. Earth Sci. 67(2):589-599. Wolf, L., Held, I., Eiswirth, M., & Hötzl, H. (2004). Impact of leaky sewers on groundwater quality. Acta Hydrochimica et Hydrobiologica, 32(4-5), 361-373. doi:10.1002/aheh.200400538. Wolf, L. (2006). Influence of leaky sewer systems on groundwater resources beneath the city of Rastatt, Germany. Dissertation, University of Karlsruhe.
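The idea of a precomputed, defect-specific leakage function can be sketched as follows. The abstract does not state the functional form actually used, so this toy example assumes a power law Q = a·Δh^b fitted to a handful of hypothetical leakage rates of the kind a spatially refined simulation might produce for one defect; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical training data for ONE pipe defect, as a refined-grid model
# might produce it: head difference pipe-soil vs. leakage rate (invented).
dh = np.array([0.1, 0.2, 0.5, 1.0, 2.0])            # head difference [m]
q  = np.array([0.021, 0.033, 0.060, 0.090, 0.139])  # leakage rate [l/s]

# Fit an assumed power law Q = a * dh**b via a line in log-log space.
b, log_a = np.polyfit(np.log(dh), np.log(q), 1)
a = np.exp(log_a)

def leakage(dh_m):
    """Leakage rate [l/s] of this defect at head difference dh_m [m]."""
    return a * dh_m ** b

print(f"fitted leakage function: Q = {a:.3f} * dh^{b:.2f}")
```

A cheap closed-form function like this can then be evaluated at every defect of the coarse catchment-scale grid without resolving the defect geometry itself.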
Thermal use of groundwater: International legislation and ecological considerations
NASA Astrophysics Data System (ADS)
Hähnlein, S.; Griebler, C.; Blum, P.; Bayer, P.
2009-04-01
Groundwater fulfills various functions for nature, animals and humans. Certainly, groundwater has the highest relevance as a freshwater resource. Another increasingly important issue, especially considering rising oil and gas prices, is the use of aquifers as renewable energy reservoirs. In view of these two somewhat conflicting uses, it seems important to define legal regulations and management strategies in which the exploitation and the protection of aquifers are balanced. Thermal use of groundwater with, e.g., ground source heat pump (GSHP) systems results in temperature anomalies (cold or heat plumes) in the subsurface. The extension of these temperature plumes has to be known in order to interpret their influence on adjacent geothermal installations. Beside this technological constraint, there exists an ecological one: man-made thermal anomalies may have undesirable effects on the groundwater ecosystem. To promote geothermal energy as an economically attractive, sustainable and environmentally friendly energy source, such constraints have to be integrated in regulations, planning and maintenance (Hähnlein et al. 2008a,b). The objective of this study is to review the current legal status of the thermal use of groundwater and to present first results on how the ecosystem is influenced. • Legal viewpoint: The international legal situation on thermal groundwater use is very heterogeneous. Nationally and internationally, there is no consistent legal situation. Minimum distances between GSHP systems and temperature limits for heating and cooling the groundwater vary strongly. Until now there are no scientifically based thresholds, and it also remains legally unresolved which temperature changes are detrimental. This is due to the fact that no ecological and economical parameters have been established for sustainable groundwater use. • Ecological viewpoint: First results show that temperature changes arising with the thermal use of groundwater can noticeably influence the composition of biocoenoses. 
For a sound quantification and interpretation of an ecologically sustainable thermal use of groundwater, more data from lab experiments and in situ surveys are needed. We conclude that for a sustainable use of groundwater, legally binding minimum distances between adjacent installations are crucial. However, they have to be based on geological arguments. Relative temperature limits for the cooling and heating of groundwater should also be defined, to avoid negative changes in the groundwater ecosystem. Overall, there is a need for a legal framework, ideally developed nationally and internationally, which thoroughly addresses legal, technical, ecological and economical aspects. References: Hähnlein, S., Grathwohl, P., Bayer, P., Blum, P. (2008a): Cold plumes of ground source heat pumps: Their length and legal situation. EGU, Vienna. Hähnlein, S., Kübert, M., Bayer, P., Walker-Hertkorn, S., Blum, P. (2008b): Rechtliche und technische Aspekte einer nachhaltigen thermischen Grundwassernutzung. FH-DGG Tagung, Göttingen.
Kamarizaite, Fe3+3(AsO4)2(OH)3 · 3H2O, a new mineral species, arsenate analogue of tinticite
NASA Astrophysics Data System (ADS)
Chukanov, N. V.; Pekov, I. V.; Möckel, S.; Mukhanova, A. A.; Belakovsky, D. I.; Levitskaya, L. A.; Bekenova, G. K.
2010-12-01
Kamarizaite, a new mineral species, has been identified in the dump of the Kamariza Mine, Lavrion mining district, Attica Region, Greece, in association with goethite, scorodite, and jarosite. It was named after its type locality. Kamarizaite occurs as fine-grained monomineralic aggregates (up to 3 cm across) composed of platy crystals up to 1 μm in size and submicron kidney-shaped segregations. The new mineral is yellow to beige, with a light yellow streak. The Mohs hardness is about 3. No cleavage is observed. The density measured by hydrostatic weighing is 3.16(1) g/cm3, and the calculated density is 3.12 g/cm3. The wavenumbers of absorption bands in the IR spectrum of kamarizaite are (cm-1; s is a strong band, w is a weak band): 3552, 3315s, 3115, 1650w, 1620w, 1089, 911s, 888s, 870, 835s, 808s, 614w, 540, 500, 478, 429. According to TG and IR data, complete dehydration and dehydroxylation in vacuum (with a weight loss of 15.3(1)%) occurs in the temperature range 110-420°C. Mössbauer data indicate that all iron in kamarizaite is octahedrally coordinated Fe3+. Kamarizaite is optically biaxial, positive: n_min = 1.825, n_max = 1.835, n_mean = 1.83(1) (for a fine-grained aggregate). The chemical composition of kamarizaite (electron microprobe, average of four point analyses) is as follows, wt %: 0.35 CaO, 41.78 Fe2O3, 39.89 As2O5, 1.49 SO3, 15.3 H2O (from TG data); the total is 98.81. The empirical formula calculated on the basis of (AsO4,SO4)2 is Ca0.03Fe3+2.86(AsO4)1.90(SO4)0.10(OH)2.74 · 3.27H2O. The idealized formula is Fe3+3(AsO4)2(OH)3 · 3H2O. Kamarizaite is an arsenate analogue of orthorhombic tinticite, space group Pccm, Pcc2, Pcmm, Pcm21, or Pc2m; a = 21.32(1), b = 13.666(6), c = 15.80(1) Å, V = 4603.29(5) Å3, Z = 16. The strongest reflections of the X-ray powder diffraction pattern [d, Å (I, %) (hkl)] are: 6.61 (37) (112, 120), 5.85 (52) (311), 3.947 (100) (004, 032, 511), 3.396 (37) (133, 431), 3.332 (60) (314), 3.085 (58) (621, 414, 324). 
The type material of kamarizaite is deposited in the Mineralogical Collection of Technische Universität Bergakademie Freiberg, Germany, inventory number 82199.
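The empirical formula coefficients quoted in the record can be reproduced from the reported oxide analysis by the standard mineralogical recalculation: convert wt % oxides to cation moles, normalize to (As + S) = 2 atoms per formula unit, then obtain OH from charge balance and molecular water from the total water content. The following sketch does exactly that with the published wt % values.

```python
# Recompute the empirical formula of kamarizaite from the reported oxide
# analysis (wt %), normalized to (As + S) = 2 apfu as stated in the text.
M  = {"CaO": 56.08, "Fe2O3": 159.69, "As2O5": 229.84, "SO3": 80.06, "H2O": 18.015}
wt = {"CaO": 0.35,  "Fe2O3": 41.78,  "As2O5": 39.89,  "SO3": 1.49,  "H2O": 15.3}

mol = {ox: wt[ox] / M[ox] for ox in wt}          # moles of each oxide per 100 g
f = 2.0 / (2 * mol["As2O5"] + mol["SO3"])        # normalization factor: (As,S) = 2

Ca = mol["CaO"] * f
Fe = 2 * mol["Fe2O3"] * f
As = 2 * mol["As2O5"] * f
S  = mol["SO3"] * f

# OH from charge balance; remaining H2O from TG water (each OH counts as 1/2 H2O)
OH  = 2 * Ca + 3 * Fe - 3 * As - 2 * S
H2O = mol["H2O"] * f - OH / 2

print(f"Ca{Ca:.2f} Fe{Fe:.2f} (AsO4){As:.2f} (SO4){S:.2f} (OH){OH:.2f} . {H2O:.2f} H2O")
```

The result matches the published empirical formula (Ca 0.03, Fe 2.86, AsO4 1.90, SO4 0.10, OH 2.74, H2O 3.27) to within rounding of the intermediate values.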
Status report on the cold neutron source of the Garching neutron research facility FRM-II
NASA Astrophysics Data System (ADS)
Gobrecht, K.; Gutsmiedl, E.; Scheuer, A.
2002-01-01
The new high flux research reactor of the Technical University of Munich (Technische Universität München, TUM) will be equipped with a cold neutron source (CNS). The centre of the CNS will be located in the D2O reflector tank at 400 mm from the reactor core axis, close to the thermal neutron flux maximum. The power of 4500 W developed by nuclear heating in the 16 l of liquid deuterium at 25 K, and in the structures, is removed by a two-phase thermal siphon, avoiding film boiling and flooding. The thermal siphon is a single tube with counter-current flow. It is inclined by 10° from the vertical and optimised for a deuterium flow rate of 14 g/s. The optimisation of the structure design and material, as well as safety aspects, will be discussed. Those parts of the structure that are exposed to a high thermal neutron flux are made from Zircaloy 4 and 6061-T6 aluminium. Structure failure due to embrittlement of the structure material under high fast neutron flux is very improbable during the lifetime of the CNS (30 years). A double, in-pile even triple, containment with an inert gas liner guarantees that there is no explosion risk and no tritium contamination of the environment. Adding a few percent of hydrogen (H2) to the deuterium (D2) will improve the moderating properties of our relatively small moderator volume. Nearly all of the hydrogen is bound in the form of HD molecules. A long-term change of the hydrogen content in the deuterium is avoided by storing the mixture not in a gas buffer volume but as a metal hydride at low pressure. The metal hydride storage system contains two getter beds, one with 250 kg of LaCo3Ni2, the other with 150 kg of ZrCo0.8Ni0.2. Each bed can take the total gas inventory; both beds together can absorb the total gas inventory in <6 min at a pressure <3 bar. The new reactor will have 13 beam tubes, 4 of which view the CNS, including two for very cold (VCN) and ultra-cold neutron (UCN) production. 
The latter will take place in the horizontal beam tube SR4, which will house an additional cryogenic moderator (e.g. solid deuterium). More than 60% of the experiments foreseen in the new neutron research facility will use cold neutrons from the CNS. The mounting of the hardware components of the CNS into the reactor started in the spring of 2000. The CNS went into trial operation at the end of 2000.
NASA Astrophysics Data System (ADS)
Cong, Xiaoying; Balss, Ulrich; Eineder, Michael
2015-04-01
The atmospheric delay due to vertical stratification, the so-called stratified atmospheric delay, has a great impact on both interferometric and absolute range measurements. In our recent research [1][2][3], centimeter-level range accuracy has been demonstrated with Corner Reflector (CR) based measurements by applying an atmospheric delay correction using Zenith Path Delay (ZPD) corrections derived from nearby Global Positioning System (GPS) stations. For global usage, an effective method has been introduced to estimate the stratified delay from global 4-dimensional Numerical Weather Prediction (NWP) products: the direct integration method [4][5]. Two products provided by the European Centre for Medium-Range Weather Forecasts (ECMWF), ERA-Interim and the operational data, are used to integrate the stratified delay. In order to assess the integration accuracy, a validation approach is investigated based on ZPD derived from six permanent GPS stations located in different meteorological conditions. Range accuracy at the centimeter level is demonstrated using both ECMWF products. Further experiments have been carried out in order to determine the best interpolation method by analyzing the temporal and spatial correlation of the atmospheric delay using both ECMWF and GPS ZPD. Finally, the integrated atmospheric delays in the slant direction (Slant Path Delay, SPD) have been applied instead of the GPS ZPD for CR experiments at three different test sites with more than 200 TerraSAR-X High Resolution SpotLight (HRSL) images. The delay accuracy is around 1-3 cm, depending on the location of the test site due to the local water vapor variation, and on the acquisition time/date. [1] Eineder M., Minet C., Steigenberger P., et al. Imaging geodesy - Toward centimeter-level ranging accuracy with TerraSAR-X. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(2): 661-671. [2] Balss U., Gisinger C., Cong X. Y., et al. 
Precise Measurements on the Absolute Localization Accuracy of TerraSAR-X on the Base of Far-Distributed Test Sites. EUSAR 2014: Proceedings of the 10th European Conference on Synthetic Aperture Radar. VDE, 2014: 1-4. [3] Eineder M., Balss U., Gisinger C., et al. TerraSAR-X pixel localization accuracy: Approaching the centimeter level. Geoscience and Remote Sensing Symposium (IGARSS), 2014 IEEE International. IEEE, 2014: 2669-2670. [4] Cong X., Balss U., Eineder M., et al. Imaging Geodesy - Centimeter-Level Ranging Accuracy With TerraSAR-X: An Update. IEEE Geoscience and Remote Sensing Letters, 2012, 9(5): 948-952. [5] Cong X. SAR Interferometry for Volcano Monitoring: 3D-PSI Analysis and Mitigation of Atmospheric Refractivity. München, Technische Universität München, Dissertation, 2014.
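The direct integration method evaluates ZPD = 10⁻⁶ ∫ N(z) dz along the vertical. As a self-contained illustration, the sketch below uses the Smith-Weintraub refractivity N = 77.6·P/T + 3.73·10⁵·e/T² (standard textbook constants, not necessarily those of the cited papers) on crude synthetic exponential profiles; in the actual method, P, T and e come from ECMWF model levels.

```python
import numpy as np

# Synthetic stand-in profiles for the NWP model levels (illustrative only)
z = np.linspace(0.0, 30000.0, 3001)           # height above sea level [m]
P = 1013.25 * np.exp(-z / 8000.0)             # total pressure [hPa]
T = np.maximum(288.15 - 0.0065 * z, 216.65)   # temperature [K], capped at tropopause
e = 12.0 * np.exp(-z / 2000.0)                # water vapor partial pressure [hPa]

# Smith-Weintraub refractivity (textbook constants)
N = 77.6 * P / T + 3.73e5 * e / T**2          # [N-units]

# Direct integration: ZPD = 1e-6 * integral of N over height (trapezoidal rule)
zpd = 1e-6 * float(np.sum(0.5 * (N[1:] + N[:-1]) * np.diff(z)))
print(f"ZPD ≈ {zpd:.2f} m")                   # of order 2.5 m at sea level
```

Slant delays are then obtained from the ZPD with a mapping function, or by integrating along the actual line of sight as in the cited work.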
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenz, Daniel; Wolf, Felix
2016-02-17
The PRIMA-X (Performance Retargeting of Instrumentation, Measurement, and Analysis Technologies for Exascale Computing) project is the successor of the DOE PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing) project, which addressed the challenge of creating a core measurement infrastructure that would serve as a common platform both for integrating leading parallel performance systems (notably TAU and Scalasca) and for developing next-generation scalable performance tools. The PRIMA-X project shifts the focus away from the refactoring of robust performance tools towards a retargeting of the parallel performance measurement and analysis architecture for extreme scales. The massive concurrency, asynchronous execution dynamics, hardware heterogeneity, and multi-objective requirements (performance, power, resilience) that characterize exascale systems impose fundamental constraints on the ability to carry forward existing performance methodologies. In particular, per-thread observation techniques must be de-emphasized to significantly reduce the otherwise unsustainable flood of redundant performance data. Instead, it will be necessary to assimilate multi-level resource observations into macroscopic performance views, from which resilient performance metrics can be attributed to the computational features of the application. This requires a scalable framework for node-level and system-wide monitoring and runtime analyses of dynamic performance information. Also, the interest in optimizing parallelism parameters with respect to performance and energy drives the integration of tool capabilities in the exascale environment further. Initially, PRIMA-X was a collaborative project between the University of Oregon (lead institution) and the German Research School for Simulation Sciences (GRS). Because Prof. 
Wolf, the PI at GRS, accepted a position as full professor at Technische Universität Darmstadt (TU Darmstadt) starting February 1st, 2015, the project ended at GRS on January 31st, 2015. This report reflects the work accomplished at GRS until then; the work is expected to be continued at TU Darmstadt. The first main accomplishment of GRS is the design of different thread-level aggregation techniques. We created a prototype capable of aggregating the thread-level information in performance profiles using these techniques. The next step will be the integration of the most promising techniques into the Score-P measurement system and their evaluation. The second main accomplishment is a substantial increase in Score-P's scalability, achieved by improving the design of the system-tree representation in Score-P's profile format. We developed a new representation and a distributed algorithm to create the scalable system-tree representation. Finally, we developed a lightweight approach to MPI wait-state profiling. Former algorithms needed either piggybacking, which can cause significant runtime overhead, or tracing, which comes with its own set of scaling challenges. Our approach works with local data only and is thus scalable with very little overhead.
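The kind of thread-level aggregation described above can be sketched as follows. The concrete techniques evaluated in the project are not specified in this report excerpt, so this is only one plausible, hypothetical form: collapsing per-thread timings for a call path into a fixed-size statistical summary, so the profile size no longer grows with the thread count.

```python
# Hypothetical sketch of thread-level aggregation in a performance
# profile: per-thread timings for one call path are replaced by a
# fixed-size statistical summary (min / max / mean / thread count).
from statistics import mean

def aggregate_threads(per_thread_times):
    """Collapse a list of per-thread timings for one call path into a
    summary whose size is independent of the number of threads."""
    return {
        "min": min(per_thread_times),
        "max": max(per_thread_times),
        "mean": mean(per_thread_times),
        "n_threads": len(per_thread_times),
    }

# A call path measured on 8 threads shrinks to four numbers; the large
# max/mean gap still exposes the one slow (imbalanced) thread:
summary = aggregate_threads([1.2, 1.3, 1.1, 1.4, 1.2, 1.3, 5.0, 1.2])
```

The summary deliberately keeps extreme values, since load imbalance, one of the main things per-thread data is used for, survives the aggregation.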
NASA Astrophysics Data System (ADS)
Frodl, Peter
Translated title: "Do Quantum Mechanics Force us to Drastically Change our View of the World? Thoughts and Experiments after Einstein, Podolsky and Rosen". Since the advent of quantum mechanics there have been attempts to interpret it as a statistical theory about ensembles of individual classical systems. The conditions under which hidden-variable theories giving deterministic descriptions of these individual systems can be regarded as classical were formulated by Einstein, Podolsky and Rosen in 1935: 1. Physical systems are in principle separable. 2. If it is possible to predict with certainty the value of a physical quantity without disturbing the system under consideration, then there exists an element of physical reality corresponding to this physical quantity. Taken together, as Bell showed in 1964, these conditions are fundamentally incompatible with quantum mechanics and untenable in the light of more recent experiments, which once again confirm quantum mechanics as the correct theory. To understand their results we must either abandon the assumption, taken for granted in classical physics, that physical systems are separable, or revise our concept of physical reality. An examination of the concept of separability and some considerations on the problem of measuring observables show that a revision of the concept of physical reality is unavoidable. The revised concept of reality should be compatible with both classical physics and quantum mechanics, so as to allow a unified picture of the world.
NASA Astrophysics Data System (ADS)
Oberholzer, Hans-Rudolf; Holenstein, Hildegard; Mayer, Jochen; Leifeld, Jens
2010-05-01
Humus balances are simple mathematical tools used by farmers for assessing the overall performance of their management in terms of soil organic matter changes. They are based on humus reproduction factors, which themselves depend mainly on crop rotation, residue management, and the amount and type of organic fertilization. Dynamic models, on the other hand, are typically complex, need more detailed input data, and are designed to calculate the time course of the soil carbon content. In both cases, thorough validation is needed to utilize their potential for estimating carbon stock changes. We compared the results of three humus balance methods, SALCA-SQ (Neyroud 1997), the VDLUFA method (VDLUFA 2004) and Humod (Brock et al. 2008), and of the RothC model with measured soil carbon stocks in a long-term experiment in Switzerland for the period 1977-2005 (Fliessbach et al. 2007). The field trial comprises various minerally and organically fertilized treatments, the latter differing in the amount and composition of the organics applied. All methods were able to distinguish systematic management effects on soil organic carbon (SOC). However, only the SOC trajectories calculated with the dynamic model RothC matched measured stocks quantitatively. For both humus balances and dynamic modelling, the result strongly depended on the parameterization of the organic fertilizers, i.e. their stability and organic matter content. Therefore, incomplete information on the amount and composition of organic fertilizer, and the lack of knowledge about its potential for humus reproduction, is regarded as an uncertainty in both dynamic modelling and humus balance calculation, and seems to be a major drawback for the reliable application of these approaches at the regional scale. Our results stress the need for more detailed and harmonized databases of organic fertilizer composition and application rates.
References:
Brock C., Hoyer U., Leithold G., Hülsbergen K.-J., 2008. Entwicklung einer praxisanwendbaren Methode der Humusbilanzierung im ökologischen Landbau. Abschlussbericht zum Projekt 03OE084, http://forschung.oekolandbau.de, BÖL-Bericht-ID 16447, 184 pp.
Fliessbach A., Oberholzer H.-R., Gunst L., Mäder P., 2007. Soil organic matter and biological soil quality indicators after 21 years of organic and conventional farming. Agriculture, Ecosystems and Environment 118, 273-284.
Leifeld J., Reiser R., Oberholzer H.-R., 2009. Consequences of conventional vs. organic farming on soil carbon: Results from a 27-year field experiment. Agronomy Journal 101, 1204-1218.
Neyroud J.-A., 1997. La part du sol dans la production intégrée 1. Gestion de la matière organique et bilan humique. Revue suisse d'agriculture 29, 45-51.
VDLUFA, 2004. VDLUFA-Standpunkt: Humusbilanzierung - Methode zur Beurteilung und Bemessung der Humusversorgung von Ackerland. Verband Deutscher Landwirtschaftlicher Untersuchungs- und Forschungsanstalten, Selbstverlag.
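As an illustration of how such a humus balance works in principle, the following sketch sums crop-specific humus reproduction factors over a rotation and adds the contribution of organic fertilizers. All coefficients here are invented placeholders for illustration only, not values from SALCA-SQ, the VDLUFA method or Humod.

```python
# Illustrative humus-balance sketch. A negative crop factor means the
# crop depletes humus, a positive one means it accumulates humus; the
# numbers below are placeholders, not coefficients of any cited method.
CROP_FACTORS = {          # humus equivalents, kg humus-C per ha and year
    "sugar_beet": -760,   # humus-depleting row crop (placeholder)
    "winter_wheat": -280, # placeholder
    "clover_grass": 600,  # humus-accumulating ley (placeholder)
}
FERTILIZER_FACTORS = {    # kg humus-C per t fresh matter (placeholders)
    "farmyard_manure": 28,
    "compost": 46,
}

def humus_balance(rotation, fertilization):
    """Sum crop factors over the rotation, then add organic-fertilizer
    inputs; a positive result indicates net humus accumulation."""
    crops = sum(CROP_FACTORS[c] for c in rotation)
    organics = sum(FERTILIZER_FACTORS[f] * tons for f, tons in fertilization)
    return crops + organics

balance = humus_balance(
    ["sugar_beet", "winter_wheat", "clover_grass"],
    [("farmyard_manure", 20)],  # 20 t/ha applied over the rotation
)
```

The real methods differ precisely in how these factors are derived, which, as the abstract notes, is where much of the uncertainty lies.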
Hainschitz, I; Rieger, K; Siegl, H
2002-06-01
In Austria, guideline values of 3 μg/kg of ochratoxin A (OTA) for coffee, 0.3 μg/kg for fruit juices and 0.2 μg/kg for beer are under discussion. The laboratory of the food inspection authority of the state of Vorarlberg investigated the contribution of selected foodstuffs to the daily OTA intake and compared it with the recommendation of the Scientific Committee on Food (SCF) of the EC. The focal point of this study was on beverages (coffee, coffee substitutes, beer and fruit juices) and their ingredients. SUMMARY: The results for beer, fruit juice and coffee [Diagram 1] show that the majority of samples were only very slightly contaminated, if at all. For most samples the OTA level was below the detection limit of 0.3 μg/kg or 0.01 μg/l. Individual samples, however, were considerably contaminated, so that with heavy consumption (fruit juice in summer) an exceedance of the maximum intake proposed by the SCF cannot be ruled out. The results for coffee substitutes [Diagram 2] show a higher OTA contamination in more than half of the samples. If the maximum intake of 5 ng per day and kg of body weight proposed by the SCF is taken as a basis, a value of 0.3 μg/day results for a person weighing 60 kg. For a coffee substitute contaminated with 100 μg/kg OTA, this means that the consumption of just one cup (5-7 g of powder) clearly exceeds this maximum intake from this source alone. The intake from the rest of the diet, such as cereals, which are responsible for about half of the OTA intake, is not even taken into account here. The investigations show that compliance with the guideline values proposed in Austria presents no difficulties for beer, fruit juices and coffee. For coffee substitutes and dried fruits other than grapes [3], however, no guideline value has yet been proposed. The results show that, precisely for coffee substitutes and various dried fruits, guideline values in the order of 3 μg/kg and 10 μg/kg respectively would be helpful in view of the missing maximum levels, until maximum levels are laid down in Community law. With regard to the toxic effects of OTA described in the introduction, it is important for preventive consumer protection to identify such highly contaminated batches and withdraw them from the market. The main cause of OTA contamination is inadequate production hygiene in the producing countries, which is also where the greatest potential for reducing the OTA burden lies. Checking foods for ochratoxin A by the official food inspection authorities is a necessary contribution to protecting public health and to further reducing intake.
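The intake arithmetic quoted above can be checked directly from the figures in the text: 5 ng OTA per kg body weight and day, a 60 kg person, and one cup of roughly 6 g of powder contaminated at 100 μg/kg.

```python
# Back-of-the-envelope check of the OTA intake figures in the abstract.
def daily_limit_ug(body_weight_kg, limit_ng_per_kg=5):
    """SCF guideline intake in ug/day for a given body weight."""
    return body_weight_kg * limit_ng_per_kg / 1000  # ng -> ug

def intake_ug(concentration_ug_per_kg, portion_g):
    """OTA intake from one portion of a contaminated foodstuff."""
    return concentration_ug_per_kg * portion_g / 1000  # per kg -> per g

limit = daily_limit_ug(60)   # 0.3 ug/day for a 60 kg person
cup = intake_ug(100, 6)      # one cup: ~6 g of powder at 100 ug/kg
exceeds = cup > limit        # a single cup already exceeds the limit
```

With 0.6 μg from one cup against a 0.3 μg daily limit, the abstract's conclusion that this single source exceeds the proposed maximum intake follows immediately.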
NASA Astrophysics Data System (ADS)
Kitazawa, Yukihito; Matsumoto, Haruhisa; Okudaira, Osamu; Kimoto, Yugo; Hanada, Toshiya; Akahoshi, Yasuhiro; Pauline, Faure; Sakurai, Akira; Funakoshi, Kunihiro; Yasaka, Testuo
2015-04-01
The history of Japanese R&D into in-situ sensors for micro-meteoroid and orbital debris (MMOD) measurements is neither particularly long nor short. Research into active sensors started with the meteoroid observation experiment on the HITEN (MUSES-A) satellite of ISAS/JAXA, launched in 1990, which carried MDC (Munich Dust Counter) on-board sensors for micro-meteoroid measurement. This was a collaboration between Technische Universität München and ISAS/JAXA. The main impetus for the start of passive sensor research was SOCCOR, a late-1980s Japan-US mission that planned to capture cometary dust and return it to Earth. Although this mission was canceled, the research outcomes were employed in a JAXA micro-debris sample return mission using calibrated aerogel involving the Space Shuttle and the International Space Station. There have been many other important activities apart from the above, and the knowledge generated from them has contributed to JAXA's development of a new type of active dust sensor. JAXA and its partners have been developing a simple in-situ active dust sensor of a new type to detect dust particles ranging from a hundred micrometers to several millimeters. The distribution and flux of debris in this size range are not well understood and are difficult to measure using ground observations. However, it is important that the risk caused by such debris is assessed. In-situ measurement of debris in this size range is useful for 1) verifying meteoroid and debris environment models, 2) verifying meteoroid and debris environment evolution models, and 3) the real-time detection of explosions, collisions and other unexpected orbital events. A multitude of thin, conductive copper strips is formed at a fine pitch of 100 μm on a 12.5 μm thick film of nonconductive polyimide. An MMOD particle impact is detected when one or more strips are severed by the perforation caused by such an impact.
This sensor is simple to produce and use and requires almost no calibration, as it is essentially a digital system. Based on this sensor technology, the Kyushu Institute of Technology (Kyutech) has designed and developed an educational version of the sensor, which is currently on board the nano-satellite Horyu-II, built at Kyutech and launched on May 18, 2012 by JAXA. Although the sensor has a very small sensing area, sensor data were nonetheless successfully received. Moreover, a laboratory version of the sensor fitted on QSAT-EOS ("Tsukushi"), a small satellite, was launched in November 2014. This version was developed and manufactured by Japan's QPS Institute to evaluate the sensor's capability in hypervelocity impact experiments at JAXA. JAXA's flight version, to be employed on satellites and/or the ISS, will be ready soon, and a flight demonstration will be conducted on KOUNOTORI (HTV) in 2015. This paper reports on the R&D into in-situ measurement MMOD sensors at JAXA.
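The detection principle of the strip sensor lends itself to a simple sketch: model the strips as a boolean continuity array, treat each run of severed strips as one impact, and use the run length times the 100 μm pitch as a coarse size estimate. This illustrates the principle only; it is not JAXA's actual readout logic.

```python
# Sketch of the strip-sensor detection principle: conductive strips at
# a 100 um pitch; an impact severs one or more adjacent strips, and
# the length of the severed run gives a coarse impact-size estimate.
PITCH_UM = 100

def detect_impacts(strip_ok):
    """strip_ok: list of booleans, True if the strip still conducts.
    Returns (start_index, width_um) for each run of severed strips."""
    impacts, i = [], 0
    while i < len(strip_ok):
        if not strip_ok[i]:
            j = i
            while j < len(strip_ok) and not strip_ok[j]:
                j += 1  # extend the run of severed strips
            impacts.append((i, (j - i) * PITCH_UM))
            i = j
        else:
            i += 1
    return impacts

# Strips 3-5 severed by one impact -> one event roughly 300 um across:
events = detect_impacts([True, True, True, False, False, False, True])
```

Because the output is a digital severed/intact pattern, this also illustrates why the abstract says the sensor needs almost no calibration.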
Human posture experiments under water: ways of applying the findings to microgravity
NASA Astrophysics Data System (ADS)
Dirlich, Thomas
For the design and layout of human spacecraft interiors, the Neutral Body Posture (NBP) in microgravity is of great importance. The NBP has been defined as the stable, replicable and nearly constant posture the body "automatically" assumes when a human relaxes in microgravity. Furthermore, the NBP as published suggests that there is one standard neutral posture for all individuals. Published experiments from space, parabolic flights and under water, on the other hand, show strong inter-individual variations of neutral (relaxed) postures. This might originate from the quite small sample sizes of subjects analyzed or from the different experiment conditions, e.g. space versus under water. Since 2008 a collaborative research project focussing on human postures and motions in microgravity has been ongoing at the Technische Universität München (TUM). This collaborative effort is undertaken by the Institute of Astronautics (LRT) and the Institute of Ergonomics (LfE). Several test campaigns have been conducted in simulated microgravity under water using a specially designed standardized experiment setup. Stereometric HD video footage and anthropometric data from over 50 subjects (female and male) have been gathered in over 80 experiments. The video data are analyzed using the PCMAN software, developed by the LfE, resulting in a 3D volumetric CAD-based model of each subject and posture. Preliminary and ongoing analysis of the data offers evidence for the existence of intra-individually constant neutral postures, as well as continuously recurring relaxation strategies. But, as with the previously published data, the TUM experiments show quite a large variation of postures between individuals. These variations might be induced or influenced by the special environmental conditions of the underwater experiment. The present paper therefore discusses ways of standardizing the data and applying the findings gathered under water to real microgravity.
The following influences stemming from the differences between the underwater and the real microgravity environment were analyzed in greater detail: external forces (buoyancy and gravity), the required fixation, postural changes caused by breathing, and subject orientation relative to the gravitational vector. The goal of this analysis was to understand the respective effect of each environmental influence on the observed postures. Each of the different influences was then quantified and the postural change induced by it calculated. These were then combined using a specially programmed multi-body simulation tool, making it possible to recompute 3D posture data dynamically with respect to the environmental influences. The simulation is based on the volumetric 3D model of each subject and specific anthropometric data, such as body fat or muscle ratio, combined with external forces such as gravity and buoyancy. The recomputed data can then be compared independently of the environmental influences and re-evaluated, focussing again on possible inter-personal neutral posture archetypes in the subject group. Some examples of recomputed data and inter-personal findings will be given.
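One of the environmental corrections discussed above, the residual vertical force from buoyancy and gravity on a body segment, can be sketched as follows. The segment mass and volume are invented example values, and the actual TUM multi-body simulation is of course far more involved than this single-segment balance.

```python
# Sketch of the buoyancy-vs-gravity correction for one body segment
# under water: the residual vertical force vanishes only if the
# segment is exactly neutrally buoyant. Example values are invented.
G = 9.81            # gravitational acceleration, m/s^2
RHO_WATER = 1000.0  # water density, kg/m^3

def residual_force_n(mass_kg, volume_m3):
    """Net vertical force on a submerged segment.
    Positive = buoyancy-dominated (segment drifts upward)."""
    return (RHO_WATER * volume_m3 - mass_kg) * G

# A 3.5 kg forearm segment of 3.3 l volume is slightly denser than
# water, so it is pulled gently downward in the tank:
f = residual_force_n(3.5, 0.0033)
```

Small residual forces like this one act on every segment and can bias the relaxed posture, which is why the paper recomputes the underwater data before comparing it with real microgravity.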
Mixtures of Bosonic and Fermionic atoms
NASA Astrophysics Data System (ADS)
Albus, Alexander
2003-12-01
The theory of atomic Boson-Fermion mixtures in the dilute limit beyond mean field is considered in this thesis. Extending the formalism of quantum field theory, we derived expressions for the quasi-particle excitation spectra, the ground-state energy, and related quantities for a homogeneous system to first order in the dilute-gas parameter. In the framework of density functional theory we could carry over the previous results to inhomogeneous systems. We then determined the density distributions for various parameter values and identified three different phase regions: (i) a stable mixed regime, (ii) a phase-separated regime, and (iii) a collapsed regime. We found a significant contribution of exchange-correlation effects in the latter case. Next, we determined the shift of the Bose-Einstein condensation temperature caused by Boson-Fermion interactions in a harmonic trap due to the redistribution of the density profiles. We then considered Boson-Fermion mixtures in optical lattices. We calculated the criterion for stability against phase separation and identified the Mott-insulating and superfluid regimes both analytically, within a mean-field calculation, and numerically, by virtue of a Gutzwiller ansatz. We also found new frustrated ground states in the limit of very strong lattices. Note: The author is the recipient of the Carl Ramsauer Prize 2004, awarded by the Physikalische Gesellschaft zu Berlin for the best dissertation at each of the four universities Freie Universität Berlin, Humboldt-Universität zu Berlin, Technische Universität Berlin and Universität Potsdam. The aim of this thesis was the systematic theoretical treatment of mixtures of bosonic and fermionic atoms in a parameter regime suited to the description of current experiments with ultracold atomic gases.
First, the formalism of quantum field theory was extended to homogeneous atomic Boson-Fermion mixtures in order to calculate fundamental quantities such as quasi-particle spectra, the ground-state energy and quantities derived from it beyond mean-field theory. Using these results, a Boson-Fermion mixture in a trapping potential was described within density functional theory. From this the density profiles could be determined, and three regions in the phase diagram could be identified: (i) a region of a stable mixture, (ii) a region in which the species separate, and (iii) a region in which the system collapses. In the last of these three cases, exchange-correlation effects were significant. Furthermore, the shift of the critical temperature of Bose-Einstein condensation due to the Boson-Fermion interaction was calculated; this effect is caused by density redistributions arising from the interaction. Boson-Fermion mixtures in optical lattices were then considered. A stability criterion against phase separation was found, and conditions for a superfluid-to-Mott-insulator phase transition could be given, obtained both by a mean-field calculation and numerically within a Gutzwiller ansatz. Novel frustrated ground states were also found in the case of very strong lattices.
Ionosphere monitoring and forecast activities within the IAG working group "Ionosphere Prediction"
NASA Astrophysics Data System (ADS)
Hoque, Mainul; Garcia-Rigo, Alberto; Erdogan, Eren; Cueto Santamaría, Marta; Jakowski, Norbert; Berdermann, Jens; Hernandez-Pajares, Manuel; Schmidt, Michael; Wilken, Volker
2017-04-01
Ionospheric disturbances can affect technologies in space and on Earth, disrupting satellite and airline operations, communications networks and navigation systems. As the world becomes ever more dependent on these technologies, ionospheric disturbances as part of space weather pose an increasing risk to economic vitality and national security. Therefore, advance knowledge of the ionospheric state during space weather events is becoming more and more important. To promote scientific cooperation we recently formed a Working Group (WG) called "Ionosphere Predictions" within the International Association of Geodesy (IAG), under Sub-Commission 4.3 "Atmosphere Remote Sensing" of Commission 4 "Positioning and Applications". The general objective of the WG is to promote the development of ionosphere prediction algorithms and models based on the dependence of ionospheric characteristics on solar and magnetic conditions, combining data from different sensors to improve the spatial and temporal resolution and sensitivity while taking advantage of different sounding geometries and latencies. The work presented here makes it possible to compare total electron content (TEC) prediction approaches and results from the different centers contributing to this WG: the German Aerospace Center (DLR), the Universitat Politècnica de Catalunya (UPC), the Technische Universität München (TUM) and GMV. DLR developed a model-assisted TEC forecast algorithm that takes advantage of the actual trends of the TEC behavior at each grid point. Since this approach may fail during perturbations, characterized by large TEC fluctuations or ionization fronts, the trend information is merged with the current background model, which provides a stable climatological TEC behavior. The presented solution is a first step towards regularly providing forecast TEC services via SWACI/IMPC by DLR. The UPC forecast model is based on applying linear regression to a temporal window of TEC maps in the Discrete Cosine Transform (DCT) domain.
Performance tests are currently being conducted in order to improve UPC's predicted products for 1 and 2 days ahead. In addition, UPC is working to enable short-term predictions based on UPC real-time GIMs (labelled URTG) and is implementing an improved prediction approach. TUM developed a forecast method based on a time-series analysis of TEC products, which are either B-spline coefficients estimated by a Kalman filter or TEC grid maps derived from the B-spline coefficients. The forecast method uses a Fourier series expansion to extract the trend functions from the estimated TEC product; the trend functions are then carried forward to provide predicted TEC products. The forecast algorithm developed by GMV is based on the estimation of ionospheric delays from previous epochs using GNSS data and on the main dependence of ionospheric delays on solar and magnetic conditions. Since the ionospheric behavior is highly dependent on the region of the Earth, different region-based algorithmic modifications have been implemented in GMV's magicSBAS ionospheric algorithms to be able to estimate and forecast ionospheric delays worldwide. The different TEC prediction approaches outlined here will certainly help to advance the forecasting of ionospheric ionization.
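The DCT-domain regression idea behind the UPC model can be illustrated with a minimal sketch (not UPC's actual implementation): transform each past TEC map with a 2D DCT, fit a straight line to every coefficient over the temporal window, extrapolate one step ahead, and transform back.

```python
# Minimal sketch of DCT-domain TEC forecasting: per-coefficient linear
# regression over a temporal window of maps, extrapolated one step.
import numpy as np
from scipy.fft import dctn, idctn

def forecast_tec(maps):
    """maps: array (T, ny, nx) of past TEC maps at epochs 0..T-1.
    Returns the forecast map for epoch T."""
    T = maps.shape[0]
    coeffs = np.stack([dctn(m, norm="ortho") for m in maps])  # (T, ny, nx)
    t = np.arange(T)
    tc = t - t.mean()
    # Least-squares line fit for every DCT coefficient independently:
    slope = (tc[:, None, None] * (coeffs - coeffs.mean(axis=0))).sum(axis=0) \
            / (tc ** 2).sum()
    intercept = coeffs.mean(axis=0) - slope * t.mean()
    predicted = intercept + slope * T   # extrapolate one epoch ahead
    return idctn(predicted, norm="ortho")

# With a purely linear trend in the maps the extrapolation is exact:
base = np.random.default_rng(0).random((8, 8))
maps = np.stack([base + 0.1 * k for k in range(5)])
pred = forecast_tec(maps)   # should recover base + 0.5
```

Working in the DCT domain lets the smooth, large-scale part of the maps dominate the regression, which is the usual motivation for transform-domain forecasting of global ionosphere maps.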
NASA Astrophysics Data System (ADS)
Jantsch, Wolfgang; Ferry, David; Kuchar, Friedl
2008-11-01
This special issue of Journal of Physics: Condensed Matter is devoted to Günther Bauer, who celebrated his 65th birthday in September 2007. Günther has had a long career in condensed matter physics, but is known particularly for his studies of high-magnetic-field transport and optics in semiconductors and, more recently, for the discovery and x-ray analysis of self-organized 3D quantum dot crystals. However, his work is much broader than this, as indicated by the wide selection of topics represented in this special issue. Günther began his scientific career in Vienna, became associate professor at the University of Ulm after his habilitation at the Rheinisch-Westfälische Technische Hochschule Aachen, Germany, and then ascended to a full professorship at the Montanuniversität Leoben, Austria, in 1980. He subsequently moved to the Johannes Kepler Universität, Linz, Austria, in 1990. Apart from his outstanding scientific achievements, Günther is also known for organizing the biennial winterschool on 'New Developments in Solid State Physics', beginning in 1980. Initially he worked at this task together with Helmut Heinrich (until 2000) and Friedl Kuchar (1984 until now), and subsequently with Wolfgang Jantsch (who followed Helmut Heinrich in 2002). This winterschool was, and remains, very important in identifying new trends in solid state physics, and thus has a number of important proponents in the scientific community. It was at one of the first of these winterschools that the initial reports of the quantum Hall effect were presented, and it has been a meeting place where established experts (including three Nobel laureates) teach, and interact with, students and young researchers. It is important to know that Günther's excellent international connections were necessary to maintain the high level of the winterschool over almost 30 years.
After the first meeting in 1980 in Mariapfarr, these winterschools were held thirteen times in the castle of Mauterndorf, Province of Salzburg, Austria, and the village name Mauterndorf therefore became synonymous with this series within the semiconductor and nanoscience community. In 2008, the school had outgrown the available facilities in Mauterndorf and relocated to Bad Hofgastein, not very far from Mauterndorf but providing a substantially larger venue, with both facilities and accommodation for the ever-growing number of attendees. In the last few winterschools, the main topic has been 'Semiconductor Nanostructures—Physics and Applications'. In 2008, we had about 240 participants from around the world and an outstanding scientific program comprising the most exciting developments of the past two years. Indeed, most papers in this special issue of Journal of Physics: Condensed Matter were presented at the 2008 Mauterndorf meeting. In addition, a number of outstanding experts on nanoscience and long-term friends of the Mauterndorf winterschool volunteered to contribute in honour of Günther. We thank all of them for their efforts.
PREFACE: Asia-Pacific Interdisciplinary Research Conference 2011 (AP-IRC 2011)
NASA Astrophysics Data System (ADS)
Sandhu, Adarsh; Okada, Hiroshi; Maekawa, Toru; Okano, Ken
2012-03-01
Scientists, engineers, entrepreneurs and policymakers gathered at the first truly interdisciplinary conference held in the Asia-Pacific region (http://www.apirc.jp/). The inaugural Asia-Pacific Interdisciplinary Research Conference 2011 (AP-IRC 2011) was held at Toyohashi University of Technology (Toyohashi Tech) on 17-18 November 2011. The conference is a forum for enhancing mutual understanding between scientists, engineers, policymakers and experts from a wide spectrum of the pure and applied sciences, in order to resolve the daunting global issues facing mankind. The conference attracted approximately 300 participants, including delegates from France, Germany, India, Indonesia, Korea, Malaysia, Russia, Sweden, the United Kingdom, the USA and Vietnam. AP-IRC 2011 was chaired by Dr Yoshiyuki Sakaki, President of Toyohashi Tech, who opened the proceedings by stressing the importance of an interdisciplinary approach to research in resolving global scientific and technical issues. Recalling his own experience as the leader of Japan's efforts in the Human Genome Project, Sakaki also encouraged participants to make an effort to understand the sometimes difficult concepts and terminology of other areas of research. The presentations at AP-IRC 2011 were divided into three focus sessions: innovative mechano-magneto-electronic systems, life sciences, and green science and technology. A total of 174 papers were presented over the two-day conference, including eight by invited speakers.
Highlights of AP-IRC 2011 included a first-hand account by Masayoshi Esashi of the damage caused by the massive earthquake of March 2011 to experimental facilities at Tohoku University; the fascinating world of bees and the inborn numerical competence of humans and animals, by Hans J Gross; research on robots and cognition-enabled technical systems at Technische Universität München, by Sandra Hirche; the history of events leading to the invention of the world's strongest NdFeB permanent magnet, by Masato Sagawa; a novel method for the synthesis of graphene using bacteria extracted from a riverside in Toyohashi, by Toyohashi Tech scientists; and ambitious plans to harvest energy by laying massive numbers of solar cells in North Africa as part of the 'Sahara Solar Breeder (SSB) Plan' for a global clean-energy superhighway, described by Hideomi Koinuma. In addition to the technical sessions, the conference banquet included a short session during which the invited speakers described notable trends in research and policy in their parts of the world. The short speeches led to animated discussions among the delegates, particularly the young scientists and graduate students, who were able to talk directly with veteran researchers for a first-hand view of the issues raised during the day's presentations. In closing the conference, Professor Makoto Ishida, co-chair of the conference and vice-president of Toyohashi Tech, announced that the conference will be held annually, with AP-IRC 2012 scheduled for 15-16 November 2012 at the Irago Sea-Park & Spa Hotel in Aichi Prefecture, Japan. The PDF also contains lists of the committees involved.
NASA Astrophysics Data System (ADS)
Pauri, Massimo
2011-11-01
A critical re-examination of the history of the concepts of space (including the spacetime of general relativity and relativistic quantum field theory) reveals a basic ontological elusiveness of spatial extension while, at the same time, highlighting the fact that its epistemic primacy seems to be unavoidably imposed on us (as A. Einstein put it, "giving up the extensional continuum … is like trying to breathe in airless space"). On the other hand, Planck's discovery of the atomization of action leads to the fundamental recognition of an ontology of non-spatial, abstract entities (Quine) for the quantum level of reality (QT), as distinguished from the necessarily spatio-temporal, experimental revelations (measurements). The elementary quantum act (measured by Planck's constant) has neither duration nor extension, and any genuinely quantum process literally does not belong in the space and time of our experience. As Heisenberg stresses: "Whereas classical physics thus has as its subject an objective course of events in space and time, for whose existence its observation was completely irrelevant, quantum theory deals with processes which, so to speak, flash up as spatio-temporal phenomena only in the moments of observation, and about which, in the time in between, intuitive physical statements are meaningless." An admittedly speculative, hazardous conjecture is then advanced concerning the relation of such a quantum ontology to the role of the pre-phenomenal continuum (Husserl) in the perception of macroscopically distinguishable objects in the space and time of our experience. Although rather venturesome, it brings together important philosophical issues. Consistent with recent general results in work on the foundations of QT, it is assumed that the linearity of quantum dynamical evolution does not apply to the central nervous system of living beings at a certain level of the evolutionary ramification and at the pre-conscious stage of subjectivity.
Accordingly, corresponding to the onset of a non-linear dynamic evolution, a 'primary spatial' reduction is 'continually' taking place, thereby constituting the neural precondition for the experience of distinguishable macroscopic objects in the continuous spatial extension. While preventing the theoretically possible quantum superpositions of macroscopic objects from being perceivable by living beings, the 'primary reduction' has no effect on the standard processes concerning quantum-level entities involved in laboratory man-made experiments. In this connection, an experimental check which might falsify the conjecture is briefly discussed. The approach suggested here, if sound, leads to a naturalization of that part of Kant's Transcendental Aesthetics that can survive the Euclidean catastrophe. According to such naturalized transcendentalism, "space can well be transcendental without the axioms being so", in agreement with a well-known statement by Boltzmann. Finally, as far as QT is concerned, the conjecture entails that a quantum measurement scheme of the von Neumann type cannot even 'leave the ground', vindicating Bohr's viewpoint. A quantum theory of measurement, in a proper sense, turns out to be unnecessary and in fact impossible.
Hydrothermal Carbonization: a feasible solution to convert biomass to soil?
NASA Astrophysics Data System (ADS)
Tesch, Walter; Tesch, Petra; Pfeifer, Christoph
2013-04-01
The erosion of fertile soil is a severe problem arising right after peak oil (Myers 1996). That this issue is not only a problem of arid countries is shown by the fact that even the European Commission has defined milestones to address soil erosion in Europe (European Commission 2011). The application of bio-char produced by torrefaction or pyrolysis for the remediation, revegetation and restoration of depleted soils has recently started to gain momentum (Rillig 2010, Lehmann 2011, Beesley 2011). Hydrothermal carbonization (HTC) is a promising thermo-chemical process that can be applied to convert organic feedstock into fertile soil and water, two resources of high value in regions vulnerable to erosion. The great advantage of HTC is that organic feedstock (e.g. organic waste) can be used without any special pretreatment (e.g. drying), and so far no restrictions have been found regarding the composition of the organic matter. By applying HTC, the organic material is processed along a defined pathway in the Van Krevelen plot (Behrendt 2006). By stopping the process at an early stage, a nutrient-rich material can be obtained, which is known to be similar to terra preta. Considering that HTC-coal is rich in functional groups and is obtained from the process under "wet" conditions, it can be expected to allow soil bacteria to settle more easily than on bio-char derived by torrefaction or pyrolysis. In addition, up to 10 tons of process water per ton of organic waste can be gained (Vorlop 2009). Thus, as organic waste, loss of fertile soil and water scarcity become serious issues within the European Union, hydrothermal carbonization can provide a feasible solution to address these problems of our near future. The presentation reviews the different types of feedstock investigated for the HTC process so far and gives an overview of the current stage of development of this technology. 
References
Beesley L., Moreno-Jiménez E., Gomez-Eyles J.L., Harris H., Robinson B., Sizmur T.: A review of biochars' potential role in the remediation, revegetation and restoration of contaminated soils. Environmental Pollution 159, pp. 3269-3282, 2011.
Behrendt F.: Direktverflüssigung von Biomasse - Reaktionsmechanismen und Produktverteilungen. Institut für Energietechnik, Technische Universität Berlin; Studie im Auftrag der Bundesanstalt für Landwirtschaft und Ernährung, Projektnummer 114-50-10-0337/05-B, 2006.
European Commission: Roadmap to a Resource Efficient Europe. COM(2011) 571, 2011.
Lehmann J., Rillig M.C., Thies J., Masiello C.A., Hockaday W.C., Crowley D.: Biochar effects on soil biota - A review. Soil Biology & Biochemistry, pp. 1-25, 2011.
Myers N.: Environmental services of biodiversity. Proc. Natl. Acad. Sci. USA 93, pp. 2764-2769, 1996.
Rillig M.C., Wagner M., Salem M., Antunes P.M., George C., Ramke H.G., Titirici M.M., Antonietti M.: Material derived from hydrothermal carbonization: effects on plant growth and arbuscular mycorrhiza. Applied Soil Ecology 45, pp. 238-242, 2010.
Vorlop K.D., Schuchardt F., Prüße U.: Hydrothermale Carbonisierung - Analyse und Ausblicke. FNR-Fachgespräch, Berlin, 2009.
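The Van Krevelen plot mentioned above locates a material by its atomic O/C and H/C ratios; carbonization moves a feedstock toward lower values of both. As a minimal illustration (not taken from the cited studies; the mass fractions below are hypothetical example values), the two ratios can be computed from an elemental analysis:

```python
# Illustrative sketch: atomic O/C and H/C ratios for the Van Krevelen plot,
# computed from elemental mass fractions. All numbers are hypothetical.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def van_krevelen(mass_frac):
    """Return (O/C, H/C) atomic ratios from elemental mass fractions."""
    mol = {el: mass_frac[el] / ATOMIC_MASS[el] for el in ("C", "H", "O")}
    return mol["O"] / mol["C"], mol["H"] / mol["C"]

# Hypothetical raw biomass vs. HTC product: carbonization lowers both ratios.
raw = {"C": 0.45, "H": 0.06, "O": 0.49}
htc = {"C": 0.65, "H": 0.05, "O": 0.30}

oc_raw, hc_raw = van_krevelen(raw)
oc_htc, hc_htc = van_krevelen(htc)
print(f"raw biomass: O/C={oc_raw:.2f}, H/C={hc_raw:.2f}")
print(f"HTC product: O/C={oc_htc:.2f}, H/C={hc_htc:.2f}")
```

Plotting such pairs for feedstock and product traces the carbonization pathway the abstract refers to.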
An Experiment to Search for Systematic Effects in Long-Lived Radioactive Decays
NASA Astrophysics Data System (ADS)
Reuter, Cassie A.
Fritz Zwicky first inferred the existence of "Dunkle Materie," or "dark matter," in 1933, when he realized that galaxy clusters must consist predominantly of non-luminous matter. Since then, mounting evidence has shown that a paltry 4% of the energy density of the universe is baryonic matter; the energy density of the universe is, in fact, dominated by dark matter and dark energy. Despite the evidence for dark matter, there is a long-standing discrepancy in the interpretation of results from direct dark matter detection experiments. The Italian DArk MAtter project (DAMA) has claimed since 1999 to have detected WIMPs, a particular dark matter candidate. However, other direct detection experiments provide results that directly contradict DAMA's claims. For years, the dark matter community has worked to reconcile the two opposing sets of results through improved direct detection experiments and alternative dark matter models. This thesis outlines the Modulation Experiment, which is designed to identify and quantify possible systematic sources of error that could explain the annually modulating signal attributed to dark matter by DAMA. We present a dedicated experiment for the long-term measurement of gamma emissions resulting from beta decays that provides high-quality data and allows for the identification of systematic influences. Up to 16 sources are monitored redundantly by 32 3x3" NaI(Tl) detectors in four separate setups across three continents. In each setup, monitoring of environmental and operational conditions facilitates correlation studies. The deadtime-free performance of the data acquisition system is confirmed and monitored by LED pulsers. Waveforms of all events are recorded individually, enabling a study of time-dependent effects spanning microseconds to years, using both time-binned and unbinned analyses. In this thesis, we show that the experiment is successfully acquiring data and that environmental effects are well understood. 
Because of its experimental design, the Modulation Experiment is particularly well suited to monitoring the decay rates of various isotopes. Though decay rates are generally considered to be Poisson processes, standards offices such as the National Institute of Standards and Technology (NIST) and the Physikalisch-Technische Bundesanstalt (PTB) have reported annually modulating rates due to an unknown influence. Some scientists hypothesize that these effects may be due to a solar neutrino influence; others have examined a potential link between solar events (e.g. flares and storms) and discrepancies in the decay rate. However, these effects may simply be by-products of seasonal effects. This thesis explores the reported claims of decay rate modulation and limits annual modulation amplitudes to < 5.95x10^-5 for Ti-44, 1.46x10^-4 for Co-60, and 1.8x10^-4 for Cs-137 at a 3σ confidence level. No additional periodicities were found to be statistically significant. The Modulation Experiment is beginning to explore the true nature of the impact of systematic effects on the measured decay rate. As data continue to be collected and more setups come online, we will be able to lower the statistical uncertainties on half-life measurements, measure or set further limits on time-dependent modulations, and search for correlations between locations.
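The annual-modulation limits above come from fitting a periodic term to long decay-rate time series. A minimal sketch of that idea (illustrative only, with synthetic data; this is not the thesis's actual analysis pipeline) is a linear least-squares fit of a constant plus an annual cosine/sine pair, from which the fractional amplitude is read off:

```python
# Sketch: extract an annual modulation amplitude from a binned, normalized
# decay-rate time series by linear least squares. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 3 * 365.25, 1.0)              # time in days, 3 years
true_amp = 2e-4                                  # injected fractional amplitude
rate = (1.0 + true_amp * np.cos(2 * np.pi * t / 365.25)
        + rng.normal(0.0, 5e-4, t.size))         # relative rate + noise

# Design matrix [1, cos(wt), sin(wt)]; phase-free amplitude = sqrt(a^2 + b^2)
w = 2 * np.pi / 365.25
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
amp = np.hypot(coef[1], coef[2]) / coef[0]       # fitted fractional modulation
print(f"fitted fractional amplitude: {amp:.2e}")
```

With enough data points, the fitted amplitude converges to the injected value; setting an upper limit then amounts to propagating the fit uncertainty on the two periodic coefficients.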
NASA Astrophysics Data System (ADS)
2014-05-01
A scientific session of the Physical Sciences Division of the Russian Academy of Sciences (DPS RAS), "Prospects of Studies in Neutrino Particle Physics and Astrophysics," devoted to the centenary of B M Pontecorvo, was held on 2-3 September 2013 at the JINR international conference hall (Dubna, Moscow region). The following reports were put on the session agenda as posted on the website http://www.gpad.ac.ru of the RAS Physical Sciences Division:
(1) Kudenko Yu G (Institute for Nuclear Research, RAS, Moscow; Moscow Institute of Physics and Technology, Dolgoprudnyi, Moscow region; National Research Nuclear University MEPhI, Moscow) "Long-baseline neutrino accelerator experiments: results and prospects";
(2) Spiering Ch (Deutsches Elektronen-Synchrotron (DESY), Germany) "Results obtained by IceCube and prospects of neutrino astronomy";
(3) Barabash A S (Alikhanov Institute for Theoretical and Experimental Physics, Moscow) "Double beta decay experiments: current status and prospects";
(4) Bilenky S M (Joint Institute for Nuclear Research, Dubna, Moscow region; Technische Universität München, Garching, Germany) "Bruno Pontecorvo and the neutrino";
(5) Olshevskiy A G (Joint Institute for Nuclear Research, Dubna, Moscow region) "Reactor neutrino experiments: results and prospects";
(6) Gavrin V N (Institute for Nuclear Research, RAS, Moscow) "Low-energy neutrino research at the Baksan Neutrino Laboratory";
(7) Gorbunov D S (Institute for Nuclear Research, RAS, Moscow) "Sterile neutrinos and their role in particle physics and cosmology";
(8) Derbin A V (Konstantinov Petersburg Nuclear Physics Institute, Gatchina, Leningrad region) "Solar neutrino experiments";
(9) Rubakov V A (Institute for Nuclear Research, RAS, Moscow) "Prospects of studies in the field of neutrino particle physics and astrophysics."
An article by V N Gavrin, close in essence to talk 6, was published in Usp. Fiz. Nauk 181 (9) 975 (2011) [Phys. Usp. 54 (9) 941 (2011)]. 
Articles by V A Rubakov, close in essence to talk 9, were published in Usp. Fiz. Nauk 182 (10) 1017 (2012); 181 (6) 655 (2011) [Phys. Usp. 55 (10) 949 (2012); 54 (6) 633 (2011)]. Articles based on talks 1-5, 7, and 8 are published below.
• Long-baseline neutrino accelerator experiments: results and prospects, Yu G Kudenko, Physics-Uspekhi, 2014, Volume 57, Number 5, Pages 462-469
• High-energy neutrino astronomy: a glimpse of the promised land, Ch Spiering, Physics-Uspekhi, 2014, Volume 57, Number 5, Pages 470-481
• Double beta decay experiments: current status and prospects, A S Barabash, Physics-Uspekhi, 2014, Volume 57, Number 5, Pages 482-488
• Bruno Pontecorvo and the neutrino, S M Bilenky, Physics-Uspekhi, 2014, Volume 57, Number 5, Pages 489-496
• Reactor neutrino experiments: results and prospects, A G Olshevskiy, Physics-Uspekhi, 2014, Volume 57, Number 5, Pages 497-502
• Sterile neutrinos and their role in particle physics and cosmology, D S Gorbunov, Physics-Uspekhi, 2014, Volume 57, Number 5, Pages 503-511
• Solar neutrino experiments, A V Derbin, Physics-Uspekhi, 2014, Volume 57, Number 5, Pages 512-524
NASA Astrophysics Data System (ADS)
Raab, A.; Raab, T.; Takla, M.; Nicolay, A.; Müller, F.; Rösler, H.; Bönisch, E.
2012-04-01
Active lignite mines in Lower Lusatia (Brandenburg, Germany) are a controversially discussed issue. Though lignite mining destroys whole landscapes, it offers the opportunity to investigate prehistory and landscape development on a landscape scale. Since 2010, scientists from the Brandenburgische Technische Universität (BTU) Cottbus and archaeologists from the Brandenburgisches Landesamt für Denkmalpflege und Archäologisches Landesmuseum (BLDAM) have collaborated to study human-environment interactions. Our study area is the open-cast mine Jänschwalde, one of four active lignite mines in Lower Lusatia, situated c. 150 km southeast of Berlin. Archaeological excavations have been carried out in the prefield over many years, and the outcome is manifold. Different approaches are combined for a comprehensive reconstruction: archaeological investigations, geomorphological/pedological studies and historical research. The archaeological fieldwork includes prospection, the opening of test trenches and area excavations. These outcrop situations provide a view into the stratigraphy and are, to some extent, jointly used for archaeological and sedimentological/pedological studies. In addition, chronological information is obtained by different methods of relative and absolute (14C, OSL, dendrochronological) age determination. To build up a model of the landscape development, data (topographical maps, historical maps, physiogeographical information, etc.) are gathered and processed. The starting point for our research is the historic charcoal burning in the former "königliche Taubendorfer Forst" and its impact on the environment. In the study area, this trade was carried out from c. the 17th to the 19th century and is very well documented by about 700 excavated ground plans of circular upright kilns and another c. 300 prospected kilns. It is assumed that charcoal was produced for the nearby smelter at Peitz, where bog iron ore was processed from 1567 onwards. 
There is sedimentological proof of the relationship between logging/deforestation and the formation of wind-blown deposits. In addition, sedimentological/pedological studies of several test trenches (up to 150 m long and up to 150 cm deep) show that buried plough horizons are widespread. First results of radiocarbon dating of charcoal fragments from buried Ap horizons date to the Slavic Middle Ages (600-1200 AD). It is assumed that the eolian activity was triggered by deforestation and agricultural use. In conclusion, our results suggest that there are two major periods of eolian activity induced by human impact: the first was caused by expanding agriculture during the Slavic Middle Ages (600-1200 AD), and the second was induced by deforestation for charcoal burning between the 17th and 19th centuries. Future research concentrates on open questions such as to what extent the landscape was changed by human impact, and what the consequences were for the environment (soil loss, water balance, vegetation) and for the population. Furthermore, absolute and relative age determinations are needed to supplement the chronological information. For a comprehensive understanding, especially concerning the charcoal burning in the study area, archival studies are being carried out. The obtained data will be used to build a GIS-based model of the paleoenvironment, and it is intended to extend the model spatially and temporally.
Experiences gained by establishing the IAMG Student Chapter Freiberg
NASA Astrophysics Data System (ADS)
Ernst, Sebastian M.; Liesenberg, Veraldo; Shahzad, Faisal
2013-04-01
The International Association for Mathematical Geosciences (IAMG) Student Chapter Freiberg was founded in 2007 at the Technische Universität Bergakademie Freiberg (TUBAF) in Germany by national and international graduate and undergraduate students of various geoscientific and natural science disciplines. The major aim of the IAMG is to promote international cooperation in the application and use of mathematics in geosciences research and technology. The IAMG encourages students and young scientists to found and maintain student chapters, which can even receive limited financial support from the IAMG. Following this encouragement, generations of students at TUBAF have built up and established a broad range of activities, which might serve as an example and an invitation for other young scientists and institutions worldwide to run similar activities. We, some of the current and former students behind the student chapter, have organised talks, membership drives, student seminars, guest lectures, several short courses and even international workshops. Some notable short courses were held by invited IAMG distinguished lecturers. The topics included "Statistical analysis in the Earth Sciences using R - a language and environment for statistical computing and graphics", "Geomathematical Natural Resource Modeling" and "Introduction to Geostatistics for Environmental Applications and Natural Resources Evaluation: Basic Concepts and Examples". Furthermore, we conducted short courses ourselves, on topics including basic introductions to MATLAB, object-oriented programming concepts for geoscientists using MATLAB, and an introduction to the Keyhole Markup Language (KML). Most of these short courses lasted several days and provided an excellent and unprecedented teaching experience for us. 
Attending students credited us with filling gaps in our university's curriculum by providing in-depth, hands-on tutorials on topics that were merely mentioned in regular lectures. To date, the major highlights of our activity are two international workshops: MatGeoS 2008 and 2009. Over thirty scientists representing government agencies, academia and non-profit research organizations worldwide participated in our second workshop, and a number of interdisciplinary topics were intensively discussed. After the workshop, the decision was made to create a book based on the presented scientific work, to be edited by us, the students of the chapter. Eventually, we called for papers, organized a full-scale peer review and edited the book. It is scheduled to be published in the first quarter of 2013 and is entitled "Mathematical Geosciences: Theory, Methods and Applications". The whole organizing process proved to be another excellent lesson for us, as it had to be balanced against our demanding study and research activities. It was necessary to learn how to organize and handle the required communication and editing while pursuing our regular duties. We consider the activities of the IAMG Student Chapter Freiberg an example of what a group of enthusiastic and dedicated young professionals can achieve. Therefore, we encourage every similar group of students or "scientists in training" to simply try to do something beyond the requirements and to learn while doing it. We have proved that this is possible.
The German approach to emergency/disaster management.
Domres, B; Schauwecker, H H; Rohrmann, K; Roller, G; Maier, G W; Manger, A
2000-01-01
Disaster control and disaster relief in Germany are public tasks. Because the FRG is a federal republic, the government has shifted responsibility for administering these tasks to the 16 states, the so-called "Länder". The same holds for civil defense and civil protection in the case of military or international risks. The 16 states are also responsible for legislation on rescue services, fire fighting services and disaster control (natural and technical disasters). Counties and district-free cities are responsible for the organisation of these services. The German system is based on the principle of subsidiarity between official and private institutions. A number of official and private relief organisations are responsible for the execution of disaster relief tasks. In Germany the following organisations exist. Official (GO): Technisches Hilfswerk (THW/Federal Technical Support Service), Feuerwehren (fire brigades, professionals and volunteers), Academy for Emergency Planning and Civil Defense. Private (NGO): Arbeiter-Samariter-Bund Deutschland (ASB/Workers' Samaritan Association Germany), Deutsche Gesellschaft zur Rettung Schiffbrüchiger (DGzRS/German Maritime Search and Rescue Service), Deutsches Rotes Kreuz (DRK/German Red Cross), Johanniter-Unfall-Hilfe (JUH/St. John's Ambulance), Malteser Hilfsdienst (MHD/Maltese Relief Organisation). ASB, DRK, JUH and MHD are specialised in the fields of rescue, medical and welfare services and medical disaster relief. 80% of the German rescue service and 95% of German medical disaster relief are provided by these NGOs. NGOs and GOs employ more than 1.2 million volunteers and approx. 100,000 professionals. Rescue service is carried out by professionals, disaster relief by volunteers. The German constitution allows the federal army to be called in in case of disaster to support the disaster relief organisations (for example, the Oder River flood of 1997 and the "ICE" train crash of 1998). 
In all counties and district-free cities, disaster control staffs are set up by the administration. During disaster relief operations, an operational command is on site. In most counties and district-free cities, medical executives and rescue staff executives, along with fire executive officers, are responsible for the medical rescue organisation. All emergency physicians and medical executives have attended special training or a 520-hour training course (paramedics). All volunteers of the medical services in the disaster relief organisations are trained in separate special courses (90 hours). Over the last years, civil protection, disaster relief and rescue services in the FRG have been reorganised. In 1997, civil protection was reformed by a new federal act. Disaster relief of the "Länder" is supported by the Federal Government with about 9,000 vehicles and a budget for training. Emergency physicians have to take part in an eighty-hour course on emergency medicine from an interdisciplinary point of view; they are only allowed to go on rescue missions after having proved basic experience in emergency medicine and having completed at least an eighteen-month postgraduate training period. Senior emergency physicians receive an additional forty-hour theoretical and practical training after a minimum of three years of practice in rescue services. Special training courses to cope with disaster situations are offered for medical and non-medical personnel by different institutions and organisations.
NASA Astrophysics Data System (ADS)
Werner, Deljana
2002-05-01
In this work, catalytic antibodies for the hydrolysis of benzylphenylcarbamates, as well as numerous monoclonal antibodies against haptens, were successfully produced. Various hapten-protein conjugates were prepared and characterized using different coupling methods. To generate the hydrolytically active antibodies, inbred mice were immunized with KLH conjugates of four transition-state analogues (TSAs). Using hybridoma technology, various monoclonal antibodies against these TSAs were obtained, employing different immunization schemes as well as different inbred mouse strains and fusion techniques. In total, 32 monoclonal antibodies against the TSAs were selected. These antibodies were produced in large quantities and purified. To detect antibody-mediated catalysis, various methods were developed and employed, including immunological detection methods with anti-substrate and anti-product antibodies and a photometric method with dimethylaminocinnamaldehyde. The hydrolytic activity was detected with an enzyme sensor based on immobilized tyrosinase. The antibodies N1-BC1-D11, N1-FA7-C4, N1-FA7-D12 and R3-LG2-F9 hydrolyzed the benzylphenylcarbamates POCc18, POCc19 and substance 27. The hydrolytic activity of these antibodies was also demonstrated by HPLC. The catalytic antibody N1-BC1-D11 was characterized kinetically and thermodynamically. Michaelis-Menten kinetics were observed, with a Km of 210 µM, a vmax of 3 mM/min and a kcat of 222 min-1. These values are comparable to those of the few known diphenylcarbamate-cleaving abzymes. The rate enhancement of antibody N1-BC1-D11 was a factor of 10. The TSA Hei3 inhibited the hydrolytic activity, proving that hydrolysis takes place in the antigen-binding site. 
Furthermore, a linear relationship between antibody concentration and turnover rate was found. The thermodynamic equilibrium dissociation constant KD of the abzyme, 2.6 nM, indicates a very high affinity for the TSA. Only antibodies raised against the transition-state analogue Hei3 were hydrolytically active. It is assumed that the hydrolysis of benzylphenylcarbamates proceeds via an addition-elimination mechanism involving a tetrahedral transition state, of which Hei3 is the analogue. In the course of generating detection antibodies for monitoring substrate depletion during hydrolysis, anti-diuron antibodies were produced. One of these antibodies (B91-CG5) is specific for the herbicide diuron and has an IC50 of 0.19 µg/l and a lower detection limit of 0.04 µg/l. Another antibody (B91-KF5) cross-reacts with a range of similar herbicides. With these antibodies, a sensitive laboratory assay was established that allows diuron monitoring at the level of 0.1 µg/l set for plant-protection agents by the German drinking-water ordinance. The effect of the anti-diuron antibodies on diuron-inhibited photosynthesis was investigated in vitro and in vivo. It was shown that, both in isolated thylakoids and in intact algae, preincubation of the anti-diuron antibodies with diuron inactivates its photosynthesis-inhibiting effect. When electron transport in the isolated thylakoids or in algae was interrupted by diuron, the addition of the anti-diuron antibodies reactivated electron transfer. Attempts to produce catalytic antibodies for hydrolysis of arylcarbamates and arylureas: The aim of the investigations was to produce antibodies which are able to cleave herbicides resistant to naturally occurring enzymes. 
Structurally similar carbamate and urea derivatives were chosen for the experiments. Phosphonate derivatives were synthesized that mimic possible transition state analogues in structure and charge. Mice were immunized with 4 different derivatives after conjugating them to carrier proteins. 32 hybridomas were established that produce monoclonal antibodies binding to these derivatives. The possible cleavage of substrates was determined by immunoassays with monoclonal antibodies against the substrate and the products, and with a photometric method based on dimethylaminocinnamaldehyde. Cleavage products were measured with an amperometric method: an enzyme sensor based on immobilized tyrosinase, which oxidizes p-chlorophenol and phenol. The antibodies N1-BC1-D11, N1-FA7-C4, N1-FA7-D12 and R3-LG2-F9 hydrolysed the benzylphenylcarbamates POCc18, POCc19 and substance 27. The hydrolytic activity of these antibodies was also demonstrated by HPLC. The catalytic antibody N1-BC1-D11 was investigated kinetically and thermodynamically. Michaelis-Menten kinetics were observed (at pH 8.0: Km = 210 µM, vmax = 3 mM/min, kcat = 222 min-1). These values are in the range of those obtained for the antibody-catalysed hydrolysis of diphenylcarbamates. The rate enhancement of N1-BC1-D11 was a factor of 10. The reaction was completely inhibited by stoichiometric quantities of the transition state analogue Hei3. This is consistent with the affinity of the abzyme for Hei3 of 2.6 nM, determined by a BIAcore assay. Only antibodies generated against Hei3 showed hydrolytic activity. The hydrolysis of benzylphenylcarbamates presumably occurs via an addition-elimination mechanism involving a tetrahedral intermediate. In summary, this work presents the first example of antibody-catalysed hydrolysis of benzylphenylcarbamates. Monoclonal anti-diuron antibodies were generated that bind the herbicide diuron with an extremely low equilibrium dissociation constant. 
A sensitive immunoassay with a low detection limit of 0.2 nM for diuron was established. This is the most sensitive immunological method for detection of diuron known so far. These antibodies were also used in vitro and in vivo to prevent diuron-dependent inhibition of photosynthesis or to restore photosynthesis after inhibition. In isolated thylakoids prepared from spinach leaves (Spinacia oleracea L.) the diuron-inhibited Hill reaction was reconstituted immediately following the addition of the monoclonal antibodies. In an in vivo approach the photosynthetic oxygen evolution of the cell wall deficient mutant (cw 15) of the green alga Chlamydomonas reinhardtii Dangeard was monitored. The antibodies prevented the diuron-dependent inhibition of photosynthesis and restored photosynthesis after inhibition. Transgenic plants that synthesize and accumulate these antibodies or antibody fragments and are therefore diuron-resistant can be created.
Haass-Koffler, Carolina L; Naeemuddin, Mohammad; Bartlett, Selena E
2012-08-31
The most common software analysis tools available for measuring fluorescence images are designed for two-dimensional (2D) data; they rely on manual settings for the inclusion and exclusion of data points and on computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology, even in complex tissue sections. Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. 
Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module was developed to measure defined structures such as neurons, which are composed of dendrites, axons and spines (a tree-like structure). The module has been ingeniously utilized to make morphological measurements of non-neuronal cells; however, the output then describes an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell, a scientific project with the Eidgenössische Technische Hochschule that calculates the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be used to analyze fluorescence data that are not continuous, because it ideally builds the cell surface without void spaces. To our knowledge, no user-modifiable automated approach has yet been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information for an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced with MATLAB (MathWorks, Inc.). These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but little familiarity with computer applications, to quantify morphological changes in cell dynamics.
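The core of such shape-agnostic 3D quantification is treating each fluorescent object as an arbitrary connected set of voxels rather than fitting a predefined cell model. As a hedged sketch of that idea (this is not the authors' Imaris/MATLAB platform; the data and function names are invented for illustration), one can threshold a 3D intensity volume and measure each connected voxel component:

```python
# Illustrative sketch: threshold a 3D volume, label 6-connected foreground
# components with a BFS flood fill, and report each component's voxel volume.
# No predefined cell shape is assumed. All data here are synthetic.
import numpy as np
from collections import deque

def label_components(mask):
    """Label 6-connected foreground components in a 3D boolean array."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # voxel already assigned to a component
        current += 1
        queue = deque([seed])
        labels[seed] = current
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                               (0,-1,0), (0,0,1), (0,0,-1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(n, mask.shape)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
    return labels, current

# Synthetic "image": two bright blobs of arbitrary shape in an empty volume.
vol = np.zeros((20, 20, 20))
vol[2:5, 2:6, 2:4] = 1.0        # blob 1: 3*4*2 = 24 voxels
vol[10:15, 10:12, 10:13] = 1.0  # blob 2: 5*2*3 = 30 voxels

labels, n = label_components(vol > 0.5)
volumes = [int((labels == i).sum()) for i in range(1, n + 1)]
print(n, sorted(volumes))  # → 2 [24, 30]
```

Per-component statistics (volume, centroid, bounding box) then fall out of the label array, which is roughly the kind of morphometric output the platform automates at scale.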
Franz Ulinski, an Almost Forgotten Early Pioneer of Rocketry
NASA Astrophysics Data System (ADS)
Besser, B. P.
2002-01-01
During the early period of rocket development, several pioneers originating from the former Austro-Hungarian empire contributed their ideas to the new field of rocketry. The most well known, regarded as the "father of rocketry" in Western Europe, is Hermann Oberth. Others were Max Valier, Franz von Hoefft, Guido von Pirquet, Hermann Potocnik, Friedrich Schmiedl, Franz Ulinski and Eugen Saenger. Franz Ulinski (1890-1974) was born in Blosdorf, Moravia (now Mladějov, Czech Republic). After attending schools in Wels, Upper Austria, he started a career in the Austro-Hungarian Army in 1910. During his service he worked, from 1917, at an airplane engine plant in Fischamend and, in 1919/20, at the "Fliegerarsenal" (aircraft arsenal) in Vienna. At the end of 1920, the army of the remaining Republic of Austria had to reduce its forces severely, and Ulinski was retired without further payment. From 1917 he had also been enrolled at the College for Advanced Technology in Vienna ("Technische Hochschule Wien"), but he never graduated; instead, he attained the VDI engineering diploma as an autodidact (VDI = "Verein Deutscher Ingenieure", Association of German Engineers). During 1921-1924 he worked as a development engineer and later as a design engineer for a car factory. In 1925 he set up and ran his own company (a radio sales enterprise), and in 1929 an engineering workshop. From 1938 to 1945 he served first as technical staff and later as a design engineer at the Siebel-Flugzeugwerke (airplane factory) in Halle/Saale, Germany. After the Second World War he was employed as a design engineer at different engineering companies in Austria; he died in 1974 in Wels. Ulinski's first contact with the topic of space flight occurred during his time as a member of the Austro-Hungarian Army. 
In 1920, Ulinski was one of the first in the German-speaking part of Europe to publish an article with his ideas about space flight (three years before Hermann Oberth published his book on travelling into space). The Austrian flight magazine "Der Flug" (The Flight) printed a manuscript deposited by Ulinski at the Academy of Sciences in Vienna (October 1, 1919) in a special edition of December 1920. In this article Ulinski describes a space ship propelled by corpuscular rays. The energy for accelerating the electrons comes either from solar energy, previously transformed into electrical energy, or from the use of "intra-atomic" energy. Unfortunately, the study suffers from some serious errors in the description of the physics involved, but it can still be considered one of the first to propose the energy gained from solar radiation as a driving power for a spacecraft. During the Twenties, another of Ulinski's space ship designs gained some dubious publicity. The space ship consisted of a closed chamber within which the rocket was supposed to operate. The disagreement of this design with the laws of mechanics is rather obvious and brought Ulinski into disrepute in the rocket circles of the time. Two years ago the last known work of Ulinski with respect to rocketry was discovered. It is a typewritten manuscript of a talk he gave on March 24, 1941 at a VDI meeting in Halle/Saale with the title "The problem of rocket flight".
ogs6 - a new concept for porous-fractured media simulations
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Bilke, Lars; Fischer, Thomas; Rink, Karsten; Wang, Wenqing; Watanabe, Norihiro; Kolditz, Olaf
2015-04-01
OpenGeoSys (OGS) is a scientific open-source initiative for the numerical simulation of thermo-hydro-mechanical/chemical (THMC) processes in porous and fractured media, continuously developed since the mid-eighties. The basic concept is to provide a flexible numerical framework for solving coupled multi-field problems. OGS mainly targets applications in environmental geoscience, e.g. in the fields of contaminant hydrology, water resources management, waste deposits, or geothermal energy systems, but it has recently also been successfully applied to new topics in energy storage. OGS actively participates in several international benchmarking initiatives, e.g. DECOVALEX (waste management), CO2BENCH (CO2 storage and sequestration), SeSBENCH (reactive transport processes) and HM-Intercomp (coupled hydrosystems). Despite the broad applicability of OGS in geo-, hydro- and energy sciences, several shortcomings became obvious: computational efficiency was limited, and the code structure had grown too complicated for efficient further development. OGS-5 was designed for object-oriented FEM applications; in many multi-field problems, however, a certain flexibility of tailored numerical schemes is essential. Therefore, a new concept was designed to overcome the existing bottlenecks. The paradigms for ogs6 are: - flexibility of numerical schemes (FEM/FVM/FDM), - computational efficiency (PetaScale-ready), - developer- and user-friendliness. ogs6 has a module-oriented architecture based on thematic libraries (e.g. MeshLib, NumLib) on the large scale and uses an object-oriented approach for the small-scale interfaces. Use of a linear algebra library (Eigen3) for the mathematical operations, together with the ISO C++11 standard, increases the expressiveness of the code and makes it more developer-friendly. The new C++ standard also makes the template meta-programming code used for compile-time optimizations more compact.
We have transitioned the main code development to the GitHub code hosting platform (https://github.com/ufz/ogs). The very flexible revision control system Git, in combination with issue tracking, developer feedback and code review options, improves the code quality and the development process in general. The continuous testing procedure for the benchmarks, as established for OGS-5, is maintained. Additionally, unit tests, automatically triggered by any code change, are executed by two continuous integration frameworks (Jenkins CI, Travis CI), which build and test the code on different operating systems (Windows, Linux, Mac OS), in multiple configurations and with different compilers (GCC, Clang, Visual Studio). To further improve the testing possibilities, XML-based file input formats are introduced, which help with the automatic validation of user-contributed benchmarks. The first ogs6 prototype, version 6.0.1, has been implemented for solving generic elliptic problems. The next steps envisage the extension to transient, non-linear and coupled problems. Literature: [1] Kolditz O, Shao H, Wang W, Bauer S (eds) (2014): Thermo-Hydro-Mechanical-Chemical Processes in Fractured Porous Media: Modelling and Benchmarking - Closed Form Solutions. In: Terrestrial Environmental Sciences, Vol. 1, Springer, Heidelberg, ISBN 978-3-319-11893-2, 315pp. http://www.springer.com/earth+sciences+and+geography/geology/book/978-3-319-11893-2 [2] Naumov D (2015): Computational Fluid Dynamics in Unconsolidated Sediments: Model Generation and Discrete Flow Simulations, PhD thesis, Technische Universität Dresden.
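The benefit of XML-based input formats for automatic validation can be illustrated with a minimal check. The element names below (OpenGeoSysProject, mesh, processes) are assumptions for illustration only, and real ogs6 validation tooling is richer than this stdlib sketch:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample benchmark input; tag names are assumed, not taken
# from the ogs6 specification.
BENCHMARK = """<?xml version="1.0"?>
<OpenGeoSysProject>
    <mesh>square_1x1.vtu</mesh>
    <processes><process type="GROUNDWATER_FLOW"/></processes>
</OpenGeoSysProject>"""

def check_project(xml_text, required=("mesh", "processes")):
    """Parse the project file (raises ParseError if not well-formed) and
    return the list of required top-level elements that are missing."""
    root = ET.fromstring(xml_text)
    present = {child.tag for child in root}
    return [tag for tag in required if tag not in present]

missing = check_project(BENCHMARK)   # empty list: sample passes the check
```

This is the kind of check a CI pipeline can run on every user-contributed benchmark before the solver is ever invoked, which is what makes XML inputs attractive for automated testing.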
NASA Astrophysics Data System (ADS)
de Palézieux, Larissa; Loew, Simon; Zwahlen, Peter
2017-04-01
Within the scope of planning a hydropower pumped-storage plant in the Poschiavo valley by Lagobianco SA (Repower AG), numerous cored boreholes with depths of 50 to 300 m were drilled at elevations between 963 and 2538 m a.s.l. In several boreholes, Lugeon and transient pressure packer tests were executed at various depths, and pore water pressure sensors were installed in short monitoring intervals. Several of the boreholes intersect large suspended rock slides showing the characteristic zones of highly fragmented rock mass above a kakirite layer several tens of meters thick. This study presents long-term transient pressure records from these deep boreholes and relates them to seasonal recharge trends from snowmelt and summer rainstorm events. Annual pore pressure amplitudes at depths between 45 and 278 meters range between 4 and 40 meters. Recharge from snowmelt water production is obtained from the Degree-Day Method (Rango and Martinec, 1995), despite a considerable distance between the meteorological station and the location of the boreholes. First estimates of the storage properties of the aquifers intersected by the boreholes are determined by fitting a combined snowmelt and precipitation pressure function to the observed (delayed and attenuated) pore pressure records, using a convolution of the one-dimensional pressure diffusion equation for a semi-infinite aquifer of constant thickness (De Marsily, 1986). Initial hydraulic conductivity values were taken directly from hydraulic tests executed by Lagobianco SA in similar rock types (Figi et al., 2014). For most boreholes this strongly simplified approach yields remarkably good fits of the transient pressure records and specific storage/yield values, which vary significantly as a function of sensor depth below the piezometric level.
Values range from 1e-6 m-1 to 5e-4 m-1 for confined gneiss-schist aquifers, and around 3e-2 m-1 for phreatic aquifers where pore pressure sensors are located only 20-30 m below the phreatic surface. The obtained values for specific storage and the assumed values for hydraulic conductivity were then verified with a one-dimensional finite element free-surface hydraulic model under steady-state and transient conditions, again fitting the simulated values to the observed pore water pressure records. Boundary conditions were set to constant head at the foot of the column and to infiltration with a seepage face condition at the top of the column. The results support the hydraulic conductivity values obtained from the packer tests, with low permeabilities in the intact rock mass (K = 2e-8 to 3e-10 m/s) and a higher permeability in the rock slide masses (around 2e-6 m/s). Furthermore, the values for specific storage found by convolution could be confirmed. Finally, the complex local hydrogeology of an alpine mountain slope with a large suspended rock slide was investigated with a 2D finite element model under steady-state and transient conditions. Preliminary results support the theory of a hydraulic barrier at the base of large rock slides, with a perched aquifer above and partially unsaturated conditions below the sliding plane. REFERENCES De Marsily, G. (1986), Quantitative Hydrogeology (pp. 198-199). Masson. Figi, D., Brunold, F. & Zwahlen, P. (2014), Felskennwerte - Kennwertebericht, Projekt Lagobianco. Büro für Technische Geologie AG, Sargans. Rango, A., & Martinec, J. (1995), Revisiting the Degree-Day Method for Snowmelt Computations. JAWRA Journal of the American Water Resources Association, 31(4), 657-669.
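The two building blocks named in this abstract, degree-day snowmelt and convolution with the one-dimensional pressure diffusion response of a semi-infinite aquifer, can be sketched as follows. All parameter values are illustrative assumptions, not the study's calibrated results:

```python
import numpy as np

DDF = 4.0e-3        # degree-day factor, m water equivalent per degC per day (assumed)

def degree_day_melt(temp_c, t_crit=0.0):
    """Daily snowmelt water column (m/day) from mean air temperature,
    following the Degree-Day Method: melt = DDF * max(T - T_crit, 0)."""
    return DDF * np.maximum(temp_c - t_crit, 0.0)

def diffusion_kernel(z, diffusivity, n_days, dt=1.0):
    """Discretised impulse response of hydraulic head at depth z (m) for the
    1D pressure diffusion equation in a semi-infinite aquifer, with hydraulic
    diffusivity D = K / Ss (m^2/day)."""
    t = (np.arange(n_days) + 0.5) * dt        # interval midpoints avoid t = 0
    g = z / (2.0 * np.sqrt(np.pi * diffusivity * t**3)) \
        * np.exp(-z**2 / (4.0 * diffusivity * t))
    return g * dt                              # quadrature weights sum to <= 1

# Synthetic forcing: a sinusoidal annual temperature cycle drives a melt pulse.
days = np.arange(365)
temp = 10.0 * np.sin(2.0 * np.pi * (days - 80) / 365.0)
recharge = degree_day_melt(temp)

kernel = diffusion_kernel(z=50.0, diffusivity=500.0, n_days=365)
head_response = np.convolve(recharge, kernel)[:365]   # delayed and attenuated
```

Fitting the diffusivity (and a scaling related to specific storage/yield) to the observed pore pressure records then reduces to a one- or two-parameter optimization, which is what keeps the strongly simplified approach tractable.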
NASA Astrophysics Data System (ADS)
Hadamcik, E.; Renard, J.-B.; Lasue, J.; Levasseur-Regourd, A. C.
2007-08-01
1- Introduction Cometary and possibly interplanetary dust particles seem to be mainly made of agglomerates of submicron and micron-sized grains. These particles are among the most primitive in our solar system. Regoliths on asteroidal and planetary surfaces seem to be loose materials produced by meteorites impinging on the surface of small bodies. Comparing their physical properties is thus fundamental to understanding their evolution. To interpret remote observations of solar light scattered by dust particles and regoliths, it is necessary to use numerical and experimental simulations [1,2,3]. 2- PROGRA2 experiment PROGRA2 instruments are polarimeters; the light sources are two randomly polarized lasers (632.8 nm and 543.5 nm). Levitating particles (in microgravity or lifted by an air-draught) are studied by imaging polarimetry. Details on the instruments can be found in [4,5]. 3- Samples Two kinds of samples are studied: compact particles in the (1-400) micrometer size range and fluffy aggregates in the same size range, made from submicron and micron-sized grains. The materials are transparent silica and absorbing carbon. Some deposited particles are huge agglomerates of micron-sized grains produced by random ballistic deposition of single grains [6,7] or produced by evaporation of mixtures in alcohol of fluffy aggregates of submicron-sized grains. Two samples are made of silica spheres coated with a carbonaceous black compound. Cometary analogues are mixtures of silica and amorphous carbon, or Mg-Fe silicates mixed with amorphous carbon. 4- Results Phase curves and their main parameters (negative polarization at small phase angles and maximum polarization, Pmax, at 90-100° phase angle) for the different materials will be compared and related to the physical properties. For example, it is well known from numerical simulations and/or experiments that the maximum polarization decreases when the size (submicrometer range) of the grains increases [2,8,9].
An inverse rule is found for compact grains larger than the wavelength. Mixtures of fluffy silica and fine-grained amorphous carbon, or better Mg-Fe silicates with amorphous carbon, are excellent cometary particle analogues (as far as light scattering is concerned) if they are mixed with some compact micron-sized grains [9]. Nevertheless, the structure of the aggregates seems to play a major role in obtaining the negative branch found on the polarimetric phase curves for comets [10]. 5- Discussion and conclusions The purpose of the experiments is to help disentangle the different physical properties of dust particles that can be deduced from remote observations (cometary dust, regoliths). Differences between the main parameters influencing the variations of Pmax and the presence of a negative branch on the polarimetric phase curves for lifted and deposited particles (in huge agglomerates or not) will be discussed. Acknowledgments: Technische Universität Carolo-Wilhelmina, Braunschweig, Deutschland (Pr Blum, Dr Schräpler); University of New Mexico, Albuquerque, USA (Pr Rietmeijer); NASA Goddard Space Flight Center, Maryland, USA (Dr Nuth) References [1] A.C. Levasseur-Regourd, E. Hadamcik, JQSRT 79-80, 903 (2003) [2] J. Lasue, A.C. Levasseur-Regourd, JQSRT 100, 220 (2006) [3] J.-B. Renard et al., ASR 31, 2511 (2003) [4] J.-B. Renard et al., Appl. Opt. 91, 609 (2002) [5] E. Hadamcik et al., JQSRT 106, 74 (2007) [6] J. Blum, R. Schreapler, Phys. Rev. Lett. 93, 115031 (2004) [7] J. Blum et al., Astrophys. J. 652, 1768 (2006) [8] R. West, Appl. Opt. 30, 5216 (1991) [9] E. Hadamcik et al., JQSRT 100, 143 (2006) [10] E. Hadamcik et al., Icarus, in press (2007)
A man-induced landslide in Lower Austria: natural conditions versus man-made causes
NASA Astrophysics Data System (ADS)
Kittel, Roland; Ottner, Franz; Damm, Bodo; Terhorst, Birgit
2010-05-01
In many cases, the composition and characteristics of hillslope sediments are of particular importance for landslide research in low mountain areas. The interaction of geologic, geomorphologic, and hydrologic factors determines the susceptibility to mass movements, which is affected by human impact as well. The present study aims to investigate the factors that control mass movements, and their natural and anthropogenic drivers. On March 8th, 2009, a landslide of 30,000 to 50,000 m³ occurred that destroyed a large part of a sports ground in the village of Hintersdorf, municipality of St. Andrä-Wördern (Lower Austria). As a result of extensive water supply, ground liquefaction was initiated and the slide mass moved in the form of a mudflow about 200 m downslope. As a consequence, a small forest area and a fishpond were destroyed and an adjacent road was damaged. Shortly after the event, first studies were started, and they showed that the Hintersdorf landslide was triggered by extensive water saturation combined with hydrostatic pressure inside the slide mass. Heavy and long-lasting rainfall and the onset of snowmelt caused strong seepage and soil water saturation. Furthermore, insufficient ground drainage and the overflow of a small retention pond intensified the unfavourable impact on soil-mechanical stability. Further studies, including archive data analysis, field surveys, and laboratory analyses, showed that the high landslide susceptibility at the Hintersdorf site was caused by a bundle of controlling factors: The sports ground was built near the head of a trough valley that collects interflow and surface run-off from the surrounding slopes. The Flysch bedrock is covered extensively by clayey slope deposits. Furthermore, in the area of the valley head a waste deposit was operated until the 1980s, which resulted in a thick waste filling there. The Hintersdorf sports ground was constructed in 1984 on top of the waste body.
Preliminary results show that the hillslope sediments and soils in the landslide area are almost impermeable due to their high clay content. On the one hand, they seal the floor and prevent the penetration of polluted water; on the other hand, they provide a slide plane for mass movements. In contrast, the comparatively poorly consolidated waste body forms a water reservoir. Due to technical operations, for example the deposition and mechanical compaction of soil material during the construction of the Hintersdorf sports ground, the waste body was partly sealed. In summary, the unfavourable meteorological conditions during the first days of March 2009 caused an increased water pressure in the waste body, which triggered the landslide with its damage to forest and infrastructure in Hintersdorf. References Damm, B., Terhorst, B., 2009. A model of slope formation related to landslide activity in the Eastern Prealps, Austria. Geomorphology, doi:10.1016/j.geomorph.2009.11.001. Damm, B., Terhorst, B., Köttritsch, E., Ottner, F., Mayrhofer, M., 2008. Zum Einfluss bodenphysikalischer und bodenmechanischer Parameter in quartären Deckschichten auf Massenbewegungen im Wienerwald. Abh. Geol. B.-A. 62: 33-37. Terhorst, B., Damm, B., Peticzka, R., Köttritsch, E., 2009. Reconstruction of Quaternary landscape formation as a tool to understand present geomorphological processes in the Eastern Prealps (Austria). Quaternary International, 209: 66-78.
Modelling Seasonal Carbon Dynamics on Fen Peatlands
NASA Astrophysics Data System (ADS)
Giebels, Michael; Beyer, Madlen; Augustin, Jürgen; Roppel, Mario; Juszczak, Radoszlav; Serba, Tomasz
2010-05-01
In Germany, more than 99 % of fens have lost their carbon and nutrient sink function due to heavy drainage and agricultural land use, especially during the last decades, which has resulted in compaction and heavy peat loss (CHARMAN 2002; JOOSTEN & CLARKE 2002; SUCCOW & JOOSTEN 2001; AUGUSTIN et al. 1996; KUNTZE 1993). Fen peatlands therefore play an important part (4-5 %) in the national anthropogenic trace gas budget. But only a small part of the drained and agriculturally used fens in NE Germany can be restored. Knowledge of the influence of land use on trace gas exchange is important for mitigating the climate impact of anthropogenic peatland use. We study carbon exchange between soil and atmosphere on several fen peatland use areas at different sites in NE Germany. Our research covers peatlands of supposedly strongly climate-forcing land use (cornfield and intensive pasture) and of probably less forcing, alternative types (meadow and extensive pasture), as well as rewetted (formerly drained) areas and near-natural sites such as a low-degraded fen and a wetted alder woodland. We have measured trace gas fluxes with manual and automatic chambers in periodic routines since spring 2007. The chamber technique used is based on DROESLER (2005). In total we now do research at 22 sites in 5 different locations, covering agricultural, varying states of rewetted, and near-natural treatments. We present results of at least 2 years of measurements on our sites of varying types of agricultural land use. There we found significant differences in the annual carbon balances depending on the genesis of the observed sites and the seasonal dynamics. Annual balances were constructed by applying individual respiration and photosynthesis CO2 models for each measurement campaign, based on LLOYD-TAYLOR (1994) and Michaelis-Menten kinetics, respectively.
Crosswise comparison of the different site treatments, combined with the seasonal environmental observations, gives good hints for the identification of the main flux-driving parameters. Based on this procedure we developed a specific methane efflux model, mainly driven by the observed groundwater fluctuation and soil temperature. Depending on the observed timescale, the initial starting points of the model turned out to be remarkably different. We will also present suggestions for an advanced CO2 modelling, as the present approaches are each based on single parameters. Generally, our experience from the field studies shows that mono-parameterized models often fail to reproduce measured flux values. References: Augustin, J., Merbach, W., Käding, H., Schnidt, W. & Schalitz, G. 1996. Lachgas- und Methanemissionen aus degradierten Niedermoorstandorten Nordostdeutschlands unter dem Einfluß unterschiedlicher Bewirtschaftung. Alfred-Wegener-Stiftung (ed.): Von den Ressourcen zum Recycling: Geoanalytik-Geomanagement-Geoinformatik. Ernst & Sohn Verlag, Berlin. Charman, D. 2002: Peatland and environmental change. John Wiley & Sons, Ltd, Chichester. Droesler, M. 2005: Trace gas exchange and climatic relevance of bog ecosystems, Southern Germany. PhD thesis, TU München, München. Joosten, H. & Clarke, D. 2002: Wise use of mires and peatlands - background and principles including a framework for decision-making. International Mire Conservation Group and International Peat Society (eds.), Finland. Kuntze 1993: Moore als Senken und Quellen für C und N. Mitt. Deutsche Bodenkundliche Gesellschaft 69, 277-280. Lloyd, J. & Taylor, J. A. 1994: On the temperature dependence of soil respiration. Functional Ecology, Vol. 8, No. 3, pp. 315-323. Succow, M. & Joosten, H. 2001: Landschaftsökologische Moorkunde, 2nd edition. Schweizerbart'sche Verlagsbuchhandlung, Stuttgart.
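The two model families named in this abstract have simple closed forms. A minimal sketch, with illustrative parameter values rather than the fitted ones from the campaigns, is:

```python
import numpy as np

T0 = 227.13    # K, fixed constant in the Lloyd & Taylor (1994) model

def lloyd_taylor(t_kelvin, r10=2.0, e0=308.56):
    """Ecosystem respiration vs. temperature (Lloyd & Taylor 1994);
    r10 is the rate at 10 degC, e0 an activation-energy-like parameter (K).
    By construction the function returns r10 at T = 283.15 K (10 degC)."""
    return r10 * np.exp(e0 * (1.0 / (283.15 - T0) - 1.0 / (t_kelvin - T0)))

def michaelis_menten_gpp(par, alpha=0.02, gp_max=20.0):
    """Light response of gross photosynthesis as a rectangular hyperbola
    (Michaelis-Menten kinetics); alpha is the initial slope, gp_max the
    asymptotic uptake at saturating light."""
    return (alpha * par * gp_max) / (alpha * par + gp_max)

def nee(t_kelvin, par):
    """Net ecosystem exchange, efflux positive: respiration minus gross uptake."""
    return lloyd_taylor(t_kelvin) - michaelis_menten_gpp(par)
```

Fitting r10, e0, alpha, and gp_max per measurement campaign and then integrating both models over the year with measured temperature and PAR series yields annual balances of the kind described above.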
PREFACE: Fundamental Constants in Physics and Metrology
NASA Astrophysics Data System (ADS)
Klose, Volkmar; Kramer, Bernhard
1986-01-01
This volume contains the papers presented at the 70th PTB Seminar, the second on the subject "Fundamental Constants in Physics and Metrology", which was held at the Physikalisch-Technische Bundesanstalt in Braunschweig from October 21 to 22, 1985. About 100 participants from universities and various research institutes of the Federal Republic of Germany took part in the meeting. Besides a number of review lectures on various broader subjects, there was a poster session with a variety of topical contributed papers, ranging from the theory of the quantum Hall effect to reports on the status of the metrological experiments at the PTB. In addition, participants were offered the possibility to visit the PTB laboratories during the course of the seminar. During the preparation of the meeting we noticed that most of the general subjects to be discussed in the lectures are of great importance in connection with metrological experiments and should be made accessible to the scientific community. This eventually led to the idea of publishing the papers in a regular journal. We are grateful to the editor of Metrologia for providing this opportunity. We have included quite a number of papers from basic physical research. For example, certain aspects of high-energy physics and quantum optics, as well as the many-faceted role of Sommerfeld's fine-structure constant, are covered. We think that questions such as "What are the intrinsic fundamental parameters of nature?" or "What are we doing when we perform an experiment?" can shed new light on the art of metrology and potentially lead to new ideas. This appears especially necessary in view of the increasing importance of the fundamental constants and macroscopic quantum effects for the definition and realization of the physical units.
In some cases we have reached a point where the limitations of our knowledge of a fundamental constant and/or a physical unit originate in the shortcomings of our understanding of the underlying physics rather than in technical problems of the experiment. In this context it is worth mentioning that the quantum Hall effect, whose discovery by Klaus von Klitzing was recently rewarded with the Nobel Prize in physics, still needs further attention. Using this effect, resistances can be reproduced experimentally with extremely high precision. Nevertheless, we have severe difficulties in our present physical understanding of the mechanism which produces the plateaux in the Hall resistance. Lectures on "Quantum Non-Demolition" and "Determination of the Boltzmann Constant" have been included in order to show routes to "new frontiers" in metrology. Even "conventional" metrological concepts, when combined with modern technology, can provide surprises: although the Josephson effect has been known since 1962, a quantized voltage in the 1-volt range was realized experimentally only recently. The experiment was performed by making use of modern thin-film technology. In addition to providing a simple and precise voltage standard in a practically important regime, it also sets a new frontier in precision electrical metrology by demonstrating that, ultimately, the reproducibility of the unit of voltage is limited by that of the unit of time. We are indebted to a number of people who helped to organize the Seminar and to prepare this volume. Especially we would like to mention Mrs Inge Bode; without her continuous work the 70th PTB Seminar would not have been possible in the way we all experienced it. We also appreciate the help of R P Hudson and H Lotsch in achieving a fast publication of this volume. Financial support from the Helmholtz-Fond is gratefully acknowledged.
NASA Astrophysics Data System (ADS)
Kiss, Andrea; Wilson, Rob; Holawe, Franz; Strömmer, Elisabeth; Bariska, István.
2010-05-01
We present an almost 500-year May-July temperature reconstruction based on 24 biophysical series. 19 are vine-related series from Kőszeg, Szombathely and Sopron in Western Hungary (11 series, from the 1580s onwards), Vienna, Klosterneuburg and Perchtoldsdorf in Eastern Austria (7 series, at present from the 1520s onwards), and Bratislava in Western Slovakia (1 series, from the 1770s onwards). The first and largest group is the vine-related indicators, comprising dates of blossoming (1 series), beginning of grape ripening (3 series), dates of full ripening (2 series), beginning of grapevine harvest (8 series), starting dates of pressing out the juice (1 series, at present from the 1520s onwards), starting dates of tax collection (after the pressed juice: 1 series) and wine quality indices (3 series, from the 1580s onwards). The second main group of 4 indicators applied in the analysis are grain-harvest related indicators from Western Hungary, such as dates for the estimation of harvesting shares, which are strongly dependent on the beginning of grain harvesting (at present 2 series, from the 1640s onwards), and dates of grain tax collection (2 series, from the 1560s onwards). A third group, still in development, is mainly related to more natural vegetation and is strongly dependent on the full ripening of oak acorns: the starting dates of woodland pasture (at present 1 series, from the 1770s onwards). The above-mentioned historical series correlate well, over the 18th-19th century period, with Vienna May-July measured temperatures (Böhm et al. 2009). The complicated nature of these historical data is described (e.g. with respect to the normality of the data distribution), and we present methods to transform and composite the data into a homogeneous, homoscedastic time series that can be used for proxy-based calibration. Finally, a preliminary May-July temperature reconstruction is derived using modified dendrochronological methods (see Leijonhufvud et al.
2009). The Kőszeg, Szombathely, Sopron and Bratislava series and all presented analyses were developed within the framework of the EU project 'Millenium'. The Austrian series are partly based on published series (Pribram 1938, Lauscher 1985, Strömmer 2003), although in some cases modified and extended for this study, as well as on newly developed data. The present work is a continuation of the 'Analysis of late spring-summer temperatures for Western Hungary based on vine, grain tithes and harvest records', presented at the annual congress of the EGU in 2009 (Kiss and Wilson 2009). References Böhm, R., Jones, P.D., Hiebl, J., Frank, D., Brunetti, M. and Maugeri, M. 2009: The early instrumental warm-bias: a solution for long central European temperature series 1760-2007. Climatic Change, doi: 10.1007/s10584-009-9649-4. Kiss, A. and Wilson, R. 2009: Analysis of late spring-summer temperatures for Western Hungary based on vine, grain tithes and harvest records. Geophysical Research Abstracts Vol 11, EGU2009-10945-1. Lauscher, F. 1985: Beiträge zur Wetterchronik seit dem Mittelalter. In: Sitzungsberichte, Abtheilung II, Mathematische, Physikalische und Technische Wissenschaften, Band 194, Heft 1-3, pp. 93-131. Leijonhufvud, L., Wilson, R., Möberg, A., Söderberg, J., Retső, D. and Söderlind, U. 2009: Five centuries of Stockholm winter/spring temperatures reconstructed from documentary evidence and instrumental observations. Climatic Change, doi: 10.1007/s10584-009-9650-y. Pribram, A. F. 1938: Materialien zur Geschichte der Preise und Löhne in Österreich. Band I. Carl Ueberreuters Verlag, Wien, pp. 364-370. Strömmer, E. 2003: Klima-Geschichte. Methoden der Rekonstruktion und historische Perspektive. Ostösterreich 1700 bis 1830. Forschungen und Beiträge zur Wiener Stadtgeschichte 39. Franz Deuticke, Wien, pp. 59-71.
Seasonal Trace Gas Dynamics on Minerotrophic Fen Peatlands in NE-Germany
NASA Astrophysics Data System (ADS)
Giebels, Michael; Beyer, Madlen; Augustin, Jürgen; Minke, Merten; Juszczak, Radoszlav; Serba, Tomasz
2010-05-01
In Germany, more than 99 % of fens have lost their carbon and nutrient sink function due to heavy drainage and agricultural land use, especially during the last decades, which has resulted in compaction and heavy peat loss (CHARMAN 2002; JOOSTEN & CLARKE 2002; SUCCOW & JOOSTEN 2001; AUGUSTIN et al. 1996; KUNTZE 1993). Fen peatlands therefore play an important part (4-5 %) in the national anthropogenic trace gas budget. But only a small part of the drained and agriculturally used fens in NE Germany can be restored. Knowledge of the influence of land use on trace gas exchange is important for mitigating the climate impact of anthropogenic peatland use. We study carbon exchange between soil and atmosphere on several fen peatland use areas at different sites in NE Germany. Our research covers peatlands of supposedly strongly climate-forcing land use (cornfield and intensive pasture) and of probably less forcing, alternative types (meadow and extensive pasture), as well as rewetted (formerly drained) areas and near-natural sites such as a low-degraded fen and a wetted alder woodland. We have measured trace gas fluxes with manual and automatic chambers in periodic routines since spring 2007. The chamber technique used is based on DROESLER (2005). In total we now do research at 22 sites in 5 different locations, covering agricultural, varying states of rewetted, and near-natural treatments. We present results of at least 2 years of measurements and show significant differences in the annual trace gas balances depending on the genesis of the observed sites and the seasonal dynamics. Crosswise comparison of the different site treatments, combined with the seasonal environmental observations, gives good hints for the identification of the main flux-driving parameters. Notably, reduced land-use intensity, as a supposedly mitigating treatment, did not show the expected effect, whereas the normal meadow treatment surprisingly resulted in the lowest balances in both years.
To develop a further trace gas flux model, observations will continue at least until the end of 2011. Regarding restoration sites, we present newly installed locations for observing methane fluxes in particular. To corroborate our results (presented at last year's EGU conference, GIEBELS et al. 2009) from our site rewetted in 2005, we started observations at sites with advanced states of rewetting and alternative management, respectively. For example, one alternative approach to mitigating the heavy methane efflux after rewetting is observed at a site with removed vegetation canopy. Other experiments are conducted with freshly planted alders and reed grass. References: Augustin, J., Merbach, W., Käding, H., Schnidt, W. & Schalitz, G. 1996. Lachgas- und Methanemissionen aus degradierten Niedermoorstandorten Nordostdeutschlands unter dem Einfluß unterschiedlicher Bewirtschaftung. Alfred-Wegener-Stiftung (ed.): Von den Ressourcen zum Recycling: Geoanalytik-Geomanagement-Geoinformatik. Ernst & Sohn Verlag, Berlin. Charman, D. 2002: Peatland and environmental change. John Wiley & Sons, Ltd, Chichester. Droesler, M. 2005: Trace gas exchange and climatic relevance of bog ecosystems, Southern Germany. PhD thesis, TU München, München. Giebels, M., Augustin, J., Minke, M., Halle, E., Beyer, M., Ehrig, B., Leitholdt, E., Chojnicki, B., Juszczak, R., Serba, T. 2009: Anthropogenic impact on the carbon cycle of fen peatlands in NE-Germany. EGU General Assembly 2009. Joosten, H. & Clarke, D. 2002: Wise use of mires and peatlands - background and principles including a framework for decision-making. International Mire Conservation Group and International Peat Society (eds.), Finland. Kuntze 1993: Moore als Senken und Quellen für C und N. Mitt. Deutsche Bodenkundliche Gesellschaft 69, 277-280. Succow, M. & Joosten, H. 2001: Landschaftsökologische Moorkunde, 2nd edition. Schweizerbart'sche Verlagsbuchhandlung, Stuttgart.
Seasonal Carbon Dynamics on Selected Fen Peatland Sites in NE-Germany
NASA Astrophysics Data System (ADS)
Giebels, Michael; Beyer, Madlen; Augustin, Jürgen; Minke, Merten; Juszczak, Radoszlav; Serba, Tomasz
2010-05-01
In Germany, more than 99 % of fens have lost their carbon and nutrient sink function due to heavy drainage and agricultural land use, especially during the last decades, which has resulted in compaction and heavy peat loss (CHARMAN 2002; JOOSTEN & CLARKE 2002; SUCCOW & JOOSTEN 2001; AUGUSTIN et al. 1996; KUNTZE 1993). Fen peatlands therefore account for an important share (4-5 %) of the national anthropogenic trace gas budget. But only a small part of the drained and agriculturally used fens in NE Germany can be restored. Knowledge of the influence of land use on trace gas exchange is important for mitigating the climate impact of anthropogenic peatland use. We study the carbon exchange between soil and atmosphere for several types of fen peatland use at different sites in NE Germany. Our research covers peatlands under presumably strongly climate-forcing land use (cornfield and intensive pasture) and under probably less forcing, alternative types (meadow and extensive pasture), as well as rewetted (formerly drained) areas and near-natural sites such as a low-degraded fen and a wet alder woodland. We have measured trace gas fluxes with manual and automatic chambers in periodic routines since spring 2007. The chamber technique used is based on DROESLER (2005). In total we now conduct research at 22 sites in 5 different locations, covering agricultural treatments as well as varying states of rewetted and near-natural treatments. We present results from at least 2 years of measurements and show significant differences in the annual carbon balances depending on the genesis of the observed sites and on the seasonal dynamics. Crosswise comparison of different site treatments, combined with the seasonal environmental observations, gives good indications for identifying the main flux-driving parameters. Notably, reduced land-use intensity as a supposedly mitigating treatment did not show the expected effect, whereas a normal meadow treatment surprisingly resulted in the lowest CO2 balances in both years.
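The closed-chamber technique referred to above infers a flux from the rate of concentration change inside a chamber of known volume and footprint. A minimal sketch of the standard conversion (illustrative only, not the authors' code; chamber dimensions and ambient conditions are assumed values):

```python
# Flux from a closed-chamber measurement: the concentration slope
# (ppm/s) inside the chamber is converted to a mass flux per unit area:
# F = dC/dt * (V / A) * p * M / (R * T)

def chamber_flux(slope_ppm_s, volume_m3, area_m2,
                 temp_k=293.15, pressure_pa=101325.0, molar_mass=44.01):
    """CO2 flux in mg m-2 s-1 from the chamber concentration slope."""
    R = 8.314  # gas constant, J mol-1 K-1
    # ppm -> mole fraction, then ideal-gas conversion to mol m-3 s-1
    dconc = slope_ppm_s * 1e-6 * pressure_pa / (R * temp_k)
    # scale by chamber volume per footprint area, convert mol -> mg
    return dconc * molar_mass * 1000.0 * volume_m3 / area_m2
```

With a slope of 1 ppm/s in a 0.125 m3 chamber over a 0.25 m2 collar, this gives roughly 0.9 mg CO2 m-2 s-1; CH4 fluxes follow the same formula with its molar mass.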
To implement a further trace gas flux model, observations will continue at least until the end of 2011. Regarding restoration sites, we present newly installed locations for observing methane fluxes in particular. To corroborate the results from our site rewetted in 2005 (presented at last year's EGU conference, GIEBELS et al. 2009), we have started observing carbon exchange at sites with advanced states of rewetting and with alternative management, respectively. For example, one alternative approach to mitigating the heavy methane efflux after rewetting is observed at a site with removed canopy. Other experiments are conducted with freshly reforested alders and with reed grass. References: Augustin, J., Merbach, W., Käding, H., Schnidt, W. & Schalitz, G. 1996: Lachgas- und Methanemissionen aus degradierten Niedermoorstandorten Nordostdeutschlands unter dem Einfluß unterschiedlicher Bewirtschaftung. In: Alfred-Wegener-Stiftung (ed.): Von den Ressourcen zum Recycling: Geoanalytik - Geomanagement - Geoinformatik. Ernst & Sohn Verlag, Berlin. Charman, D. 2002: Peatland and environmental change. John Wiley & Sons, Chichester. Droesler, M. 2005: Trace gas exchange and climatic relevance of bog ecosystems, Southern Germany. PhD thesis, TU München, München. Giebels, M., Augustin, J., Minke, M., Halle, E., Beyer, M., Ehrig, B., Leitholdt, E., Chojnicki, B., Juszczak, R. & Serba, T. 2009: Anthropogenic impact on the carbon cycle of fen peatlands in NE Germany. EGU General Assembly 2009. Joosten, H. & Clarke, D. 2002: Wise use of mires and peatlands - background and principles including a framework for decision-making. International Mire Conservation Group and International Peat Society, Finland. Kuntze, H. 1993: Moore als Senken und Quellen für C und N. Mitt. Deutsche Bodenkundliche Gesellschaft 69, 277-280. Succow, M. & Joosten, H. 2001: Landschaftsökologische Moorkunde, 2nd edition. Schweizerbart'sche Verlagsbuchhandlung, Stuttgart.
NASA Astrophysics Data System (ADS)
John, Cédric Michaël
2003-08-01
This study investigated the slope carbonates of two Miocene carbonate systems: the Maltese Islands (central Mediterranean) and the Marion Plateau (northeastern Australia, drilled during ODP Leg 194). The aim of the study was to trace the impact of the Miocene cooling steps (events Mi1-Mi6) in these carbonate systems, especially the Mi3 event, which took place around 13.6 Ma and deeply impacted the marine oxygen isotope record. This event also profoundly impacted oceanographic and climatic patterns, eventually leading to the establishment of the modern ice-house world; in particular, East Antarctica became ice covered at that period. The rationale behind the present study was to investigate the impact that this event had on shallow-water systems in order to complement the deep-sea record and hence acquire a more global perspective on Miocene climate change. The Maltese Islands were investigated for trends in bulk-rock carbon and oxygen isotopes, as well as bulk-rock mineralogy, clay mineral analysis and organic geochemistry. Results showed that the mid-Miocene cooling event deeply impacted sedimentation at that location by changing sedimentation from carbonate to clay-rich sediments. Moreover, it was discovered that each phase of Antarctic glaciation, not just the major mid-Miocene event, resulted in higher terrigenous input on Malta. Mass accumulation rates revealed that this was linked to increased runoff during periods when Antarctica was glaciated, and thus that the carbonate sediments were “diluted” by clay-rich sediments. The model subsequently developed to explain this implies feedback from Antarctic glaciations creating cold, dense air masses that push the ITCZ northward, thus increasing precipitation on the North African subcontinent. Increased precipitation (or a stronger African monsoon) accelerated continental weathering and runoff, thus bringing more terrigenous sediment to the paleo-location of the slope sediments of Malta. 
Spectral analysis of carbonate content and organic matter geochemical analysis furthermore suggest that the clay-rich intervals are similar to sapropelic deposits. On the Marion Plateau, trends in oxygen and carbon isotopes were obtained by measuring Cibicidoides spp. foraminifera. Moreover, carbonate content was reconstructed using a chemical method (coulometry). Results show that the mid-Miocene cooling step profoundly affected this system: a major drop in the accumulation rate of carbonates occurs precisely at 13.8 Ma, around the time of the East Antarctic ice sheet formation. Moreover, sedimentation changes occurred at that time: carbonate fragments from neritic environments became less abundant, planktonic foraminifer content increased, and quartz and reworked glauconite were deposited. Conversely, a surprising result is that the major N12-N14 sea-level fall occurring around 11.5 Ma did not impact the accumulation of carbonates on the slope. This was unexpected, since carbonate platforms are very sensitive to sea-level changes. The model developed to explain why the mass accumulation rates of carbonates diminished around 13.6 Ma (Mi3 event) rather than at 11.5 Ma (N12-N14 event) suggests that oceanic currents controlled slope carbonate deposition on the Marion Plateau prior to the mid-Miocene, and that the mid-Miocene event considerably increased their strength, hence reducing the amount of carbonate deposited at slope sites. Moreover, by combining results from deep-sea oxygen isotopes with sea-level estimates based on coastal onlaps made during Leg 194, we constrain the amplitude of the N12-N14 sea-level fall to 90 meters. When the isotopic results from this study are integrated, this amplitude is lowered to 70 meters. A general conclusion of this work is that the mid-Miocene climatic shift did impact carbonate systems, at least at the two locations studied. 
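The mass accumulation rates used above to separate true changes in carbonate production from clay dilution are simply the product of linear sedimentation rate, dry bulk density and component fraction. A sketch with invented values (not data from the study):

```python
def mass_accumulation_rate(sed_rate_cm_kyr, dry_bulk_density_g_cm3, fraction):
    """Component mass accumulation rate (MAR) in g cm-2 kyr-1."""
    return sed_rate_cm_kyr * dry_bulk_density_g_cm3 * fraction

# Dilution: adding terrigenous flux lowers the carbonate *percentage*
# of the sediment even if the carbonate MAR itself is unchanged.
carb_mar = mass_accumulation_rate(2.0, 1.5, 0.8)  # 80 % carbonate interval
clay_mar = mass_accumulation_rate(2.0, 1.5, 0.2)
```

Comparing component MARs rather than raw carbonate percentages is what allows the "dilution" interpretation for the Malta clay-rich intervals.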
However, the nature of this response was highly dependent on the regional settings, in particular the presence of a land mass (Malta) and the absence of a barrier sheltering from the effects of the open ocean (Marion Plateau).
New Biosensing Principles for Hemoglobin-A1c Determination
NASA Astrophysics Data System (ADS)
Stöllner, Daniela
2002-06-01
Hemoglobin-A1c (HbA1c) is a hemoglobin subtype formed by non-enzymatic reaction of glucose with the N-terminus of the beta-polypeptide chains. 
As it reflects the glycemic status of diabetics over the preceding 8-12 weeks, the determination of HbA1c has become an established procedure in the management of diabetes mellitus. It is measured as a percentage of total hemoglobin: up to 5 % HbA1c is considered normal, whereas in diabetic subjects it can be elevated to 5-20 %. In addition to the amperometric biosensors for glucose self-monitoring that have been successfully applied in diabetes management, biosensors for HbA1c would be a useful supplement for comprehensive diabetes control. The objective of this work was to develop and compare amperometric biosensors for the determination of HbA1c based on enzymatic and immunochemical methods. For the enzyme-based HbA1c assay, a novel fructosamine oxidase (FAO) derived from the marine yeast Pichia pastoris, strain N1-1, was utilized. It recognizes and oxidatively degrades fructosyl-valine (FV), which corresponds to the glycated N-terminus of the beta-chain of HbA1c and is therefore regarded as a model compound for HbA1c. Hydrogen peroxide liberated by the FAO during FV conversion was indicated optically in a horseradish peroxidase (POD)-coupled reaction as well as electrochemically. For the biosensor, the FAO was embedded in polyvinyl alcohol-styrylpyridinium (PVA-SbQ) and fixed in front of a Pt electrode. However, the measuring range for FV did not cover the clinically relevant range of HbA1c. Low specificity was also assumed, since enzyme activity was obtained with glycated peptides, e.g. fructosyl-valine-glycine, that do not correspond to the glycated N-terminus of the hemoglobin beta-chain. For the immunosensor, two immunoassay formats - heterogeneous sandwich and heterogeneous competitive - were tested. The assays were designed as follows: the competitive immunoassay was based on the immobilized synthetic glycated pentapeptide fructosyl-valine-histidine-leucine-threonine-proline (glkPP) used as an HbA1c analogue. 
The peptide has an amino acid sequence corresponding to the N-terminus of the hemoglobin beta-chains and competes with the HbA1c of the sample for a glucose oxidase (GOD)-labelled anti-HbA1c antibody. In the sandwich-type assay, haptoglobin (Hp), a natural hemoglobin-binding molecule with antibody-like properties, was used as the bioreceptor for enriching total hemoglobin on the surface. In a subsequent step the HbA1c fraction was quantified with a GOD-labelled HbA1c-specific antibody. Cellulose dialysis membrane was used as the solid support for immobilization of Hp and glkPP near the sensor surface. For activation of the membrane, two reagents, 1,1'-carbonyldiimidazole (CDI) and 1-cyano-4-dimethylaminopyridinium tetrafluoroborate (CDAP), were compared with respect to the degree of activation and the coupling efficiency. Site-directed immobilization of Hp and glkPP was achieved by coupling Hp via its carbohydrate residue and glkPP via its C-terminus to the activated membrane using a bis-amine or bis-hydrazide spacer. The affinity membranes were placed in front of a modified Clark-type hydrogen peroxide electrode in an electrochemical measuring cell, and HbA1c analysis was carried out within the stirred cell. Detection of the bound GOD label was achieved by measuring the electrocatalytic oxidation of hydrogen peroxide at +600 mV vs. Ag/AgCl. The indication took only 3 s. For the competitive principle, a typical inhibition curve with a linear range between 0.25-30 µg/ml (3.9-465 nM, CV 3-9 %, 60 min per sample) HbA1c was obtained. Due to the high functional stability of the peptide, multiple regenerations of the affinity surface were possible without loss of binding capacity. With the sandwich assay configuration, the clinically relevant range could easily be covered (calibration curve: 5-50 % HbA1c, corresponding to 7.8-78 nM, CV 6-10 %, 3 h per sample).
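Reading a concentration off a calibration curve like the ones above amounts to inverting the measured current-concentration relation. A minimal stdlib sketch using log-linear interpolation between calibration points (the points and currents are invented for illustration, not the thesis data):

```python
import math

# Hypothetical calibration points: (HbA1c in ug/ml, sensor current in nA).
# In a competitive assay the signal *decreases* as analyte increases.
CAL = [(0.25, 95.0), (1.0, 80.0), (5.0, 50.0), (30.0, 12.0)]

def hba1c_from_current(current_na):
    """Invert the calibration curve by log-linear interpolation."""
    for (c0, i0), (c1, i1) in zip(CAL, CAL[1:]):
        if i1 <= current_na <= i0:  # signal falls with concentration
            frac = (i0 - current_na) / (i0 - i1)
            logc = math.log(c0) + frac * (math.log(c1) - math.log(c0))
            return math.exp(logc)
    raise ValueError("current outside calibrated range")
```

Interpolating on a log-concentration axis mirrors how sigmoidal immunoassay calibration curves are usually plotted; a real evaluation would rather fit a four-parameter logistic model.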
Are the low-lying isovector 1+ states scissors vibrations?
NASA Astrophysics Data System (ADS)
Faessler, A.
At the Technische Hochschule in Darmstadt, the group of Richter and coworkers found low-lying isovector 1+ states in deformed rare-earth nuclei in 1983/84. Such states had been predicted in the generalized Bohr-Mottelson model and in the interacting boson model no. 2 (IBA2). In the generalized Bohr-Mottelson model one allows for separate proton and neutron quadrupole deformations. If one includes only static proton and neutron deformations, the generalized Bohr-Mottelson model reduces to the two-rotor model. It describes the excitation energy of these states in good agreement with the data but overestimates the magnetic dipole transition probabilities by a factor of 5. In the interacting boson model (IBA2), where only the outermost nucleons participate in the excitation, the magnetic dipole transition probability is overestimated only by a factor of 2. The too-large collectivity in both models results from the fact that they concentrate the whole strength of the scissors vibration into one state. A microscopic description is needed to describe the spreading of the scissors strength over several states. For a microscopic determination of these scissors states one uses the quasi-particle random phase approximation (QRPA). But this approach has a serious difficulty: since the nucleus is rotated into the intrinsic system for the calculation, the state corresponding to a rotation of the whole nucleus is a spurious state. The usual procedure to remove this spuriosity is to invoke the Thouless theorem, which states that an operator commuting with the total Hamiltonian (here the total angular momentum, corresponding to a rotation of the whole system) produces the spurious state when applied to the ground state. It further states that this spurious state lies at zero excitation energy (it is degenerate with the ground state) and is orthogonal to all physical states. 
Thus the usual approach is to vary the quadrupole-quadrupole force strength so that one state lies at zero excitation energy and to identify it with the spurious state. This procedure assumes that the total angular momentum commutes with the total Hamiltonian. But this is not the case, since the total Hamiltonian contains a deformed Saxon-Woods potential. Thus one has to remove the spurious state explicitly. In our approach we do this by introducing a Lagrange multiplier for each excited state and requiring that these states be orthogonal to the spurious state, which is constructed explicitly by applying the total angular momentum operator to the ground state. To reduce the number of free parameters in the Hamiltonian, we take the Saxon-Woods potential for the deformed nuclei from the literature (with minor adjustments) and determine the proton-proton, neutron-neutron and proton-neutron quadrupole force constants by requiring that the Hamiltonian commute with the total angular momentum in the QRPA ground state. This yields equations fixing all three coupling constants of the quadrupole-quadrupole force, even allowing for isospin symmetry violation. The spin-spin force is taken from the Reid soft-core potential. A possible spin-quadrupole force has been taken from the work of Soloviev, but it turns out not to be important. The calculation shows that the strength of the scissors vibration is spread over many states. The main 1+ state at around 3 MeV has an overlap of the order of 14 % with the scissors state. 50 % of that state is spread over the physical states up to an excitation energy of 6 MeV; the rest is distributed over higher-lying states. The expectation value of the many-body Hamiltonian in the scissors vibrational state indicates an excitation energy of roughly 7 MeV above the ground state. The results also support the experimental finding that these states are mainly orbital excitations. The states are not very collective. 
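The orthogonality condition described above can be written compactly. Writing the spurious state as $|\mathrm{sp}\rangle \propto \hat J_x|0\rangle$, each excited QRPA state $|\nu\rangle$ acquires a Lagrange multiplier $\lambda_\nu$ enforcing the constraint (a schematic rendering of the procedure, not the authors' exact equations):

```latex
\delta\Big[\,\langle\nu|\hat H|\nu\rangle
  - \omega_\nu\,\langle\nu|\nu\rangle
  - \lambda_\nu\,\langle\nu|\hat J_x|0\rangle\,\Big] = 0,
\qquad\text{with}\qquad
\langle\nu|\hat J_x|0\rangle = 0 .
```

The constraint replaces the usual trick of tuning the force strength until one mode falls to zero energy, which is only justified when $[\hat H,\hat J_x]=0$ holds exactly.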
Normally only one proton and one neutron particle-hole pair participate with large amplitude in forming these states, but the protons and neutrons that are excited perform scissors-type vibrations.
SoCRocket: A Virtual Platform for SoC Design
NASA Astrophysics Data System (ADS)
Fossati, Luca; Schuster, Thomas; Meyer, Rolf; Berekovic, Mladen
2013-08-01
Both in the commercial and in the aerospace domain, the continuous increase of transistor density on a single die is leading to the production of more and more complex systems on a single chip, with an increasing number of components. This led to the introduction of the System-on-Chip (SoC) architecture, which integrates all the elements of a full system on a single circuit. This striving for efficient utilization of the available silicon has triggered several paradigm shifts in system design. Similarly to what happened in the early 1990s, when VHDL and Verilog took over from schematic design, today SystemC and Transaction Level Modeling [1] are about to further raise the design abstraction level. Such descriptions have to be accurate enough to describe the entire system throughout the phases of its development, and have to provide enough flexibility to be refined iteratively up to the point where the actual device can be produced using current process technology. Besides requiring new languages and methodologies, the complexity of current and future SoCs (SCOC3 [16] and NGMP [5] are examples in the space domain) forces the SoC design process to rely on pre-designed or third-party components. Components obtained from different providers, and even those designed by different teams of the same company, may be heterogeneous in several respects: design domains, interfaces, abstraction levels, granularity, etc. Therefore, component integration is required at the system level. Only by applying design re-use is it possible to design such complex SoCs successfully and on time. This transition to new languages and design methods is also motivated by the implementation in software of an increasing amount of system functionality. Hence the need for methodologies that enable early software development and allow analysis of the performance of the combined HW/SW system, as their design and configuration cannot be performed separately. 
Virtual prototyping is a key approach in this sense, enabling embedded software developers to start development earlier in the system design cycle and cutting the dependency on the physical system hardware. In order to successfully implement the described methodologies, access is required to a wide selection of IP cores (and related SystemC/TLM models) and to the latest Electronic Design Automation (EDA, [17]) tools. On the one hand, as far as the European space landscape is concerned, such IP cores are provided by the European Space Agency [4] and a few other suppliers (e.g. Aeroflex Gaisler with GRLIB [2]). On the other hand, for the related high-abstraction models and design methodologies (partly depicted in Figure 1), the European Space Agency, through the Technische Universität Braunschweig, has started the development of the SoCRocket virtual platform [8]. Together with the virtual platform infrastructure, SoCRocket contains a library of IP core models. The SoCRocket library has been built around the TrapGen LEON instruction set simulator [15]. The library contains a variety of SystemC simulation models, such as caches, a memory management unit, AMBA interconnect, a memory controller, memories, an interrupt controller, a timer and more. All models are TLM2.0 compliant and come in both loosely-timed and approximately-timed coding styles. As presented in more detail later on, runtime reconfiguration, the completeness of tools and models, and the fact that all simulation IPs have a freely available RTL counterpart differentiate SoCRocket from other commercially available virtual platforms. Moreover, due to their TLM2.0 compliance, the provided models are not bound to the SoCRocket environment but can be used with alternative tools, such as the Cadence Virtual Platform [3] or Synopsys Platform Architect [10]. 
The paper is organized as follows: Section 2 presents the architecture of SoCRocket and the related library of SystemC models. Finally, Section 3 shows how SoCRocket was used to optimize the design of a LEON3-based SoC targeted at the execution of an implementation of the CCSDS standard no. 123 for the lossless compression of hyperspectral images.
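SystemC/TLM itself is C++, but the essence of the loosely-timed coding style mentioned above - an initiator calls a target's blocking transport function and accumulates an annotated delay instead of simulating bus cycles - can be sketched in a few lines. This is a toy illustration in Python, not SoCRocket code; all class and method names are invented:

```python
class Memory:
    """Toy TLM-style target: blocking transport returns a delay annotation."""
    LATENCY_NS = 10

    def __init__(self, size):
        self.data = bytearray(size)

    def b_transport(self, cmd, addr, value=None):
        # Return (payload, delay in ns) rather than modeling bus cycles.
        if cmd == "read":
            return self.data[addr], self.LATENCY_NS
        self.data[addr] = value
        return None, self.LATENCY_NS


class Initiator:
    """Toy initiator: accumulates annotated delays (loosely-timed style)."""
    def __init__(self, target):
        self.target = target
        self.local_time_ns = 0

    def write(self, addr, value):
        _, delay = self.target.b_transport("write", addr, value)
        self.local_time_ns += delay

    def read(self, addr):
        value, delay = self.target.b_transport("read", addr)
        self.local_time_ns += delay
        return value
```

An approximately-timed model would instead break each transaction into timing phases; the loosely-timed trade-off shown here is what makes virtual platforms fast enough for early software development.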
NASA Astrophysics Data System (ADS)
Bandyopadhyay, A. K.; Woo, Sam Yong; Fitzgerald, Mark; Man, John; Ooiwa, Akira; Jescheck, M.; Jian, Wu; Fatt, Chen Soo; Chan, T. K.; Moore, Ken; El-Tawil, Alaaeldin A. E.
2003-01-01
This report summarizes the results of a regional key comparison (APMP-IC-2-97) under the aegis of the Asia Pacific Metrology Programme (APMP) for pressure measurements in gas media and in gauge mode from 0.4 MPa to 4.0 MPa. The transfer standard was a pressure balance with a piston-cylinder assembly of nominal effective area 8.4 mm2 (V-407), supplied by the National Metrology Institute of Japan (NMIJ). Ten standard laboratories from the APMP region, together with one specially invited laboratory from the EUROMET region, namely the Physikalisch-Technische Bundesanstalt (PTB), Germany, participated in this comparison. The comparison started in October 1998 and was completed in May 2001. The pilot laboratory prepared the calibration procedure [1] as per the guidelines of the APMP and the International Bureau of Weights and Measures (BIPM) [2-4]. Detailed instructions for performing this key comparison were provided in the calibration protocol [1], and the required data were described in: (1) Annex 3 - characteristics of the laboratory standards; (2) Annex 4 - the effective area (A'p'/mm2; the prime indicates values based on measured quantities) at 23 °C of the travelling standard as a function of nominal pressure (p'/MPa) (five cycles of both increasing and decreasing pressures at ten pre-determined pressure points); and (3) Annex 5 - the average effective area at 23 °C (A'p'/mm2) obtained for each pressure p'/MPa, with all uncertainty statements. The pilot laboratory processed the information and the data provided by the participants for these three annexes, starting with the information about the standards as provided in Annex 3. Based on this information, the participating laboratories are classified into two categories: (I) laboratories that maintain primary standards, and (II) laboratories that maintain standards loosely classified as secondary standards with clear traceability as per the norms of the BIPM. 
It is observed that, of these eleven laboratories, six maintain primary standards [Category (I)] and the remaining five are placed in Category (II). The data obtained were compiled and processed under the same program, as per the Consultative Committee for Mass and Related Quantities (CCM)/BIPM guidelines. From the data of Category (I) we evaluated the APMP reference value as a function of p'/MPa. We then estimated the relative difference of the A'p' values with respect to the APMP reference value for all participating laboratories, and observed that they agree well within their expanded uncertainties. We further estimated the effective area at null pressure and at 23 °C (A'0/mm2) and the pressure distortion coefficient (lambda'/MPa-1) of the transfer standard for all the participating laboratories. We then estimated the relative deviation of A'0/mm2 from the reference value for all eleven laboratories and compared it with their estimated expanded uncertainties. The result is once again extremely encouraging: all eleven laboratories agree within their estimated maximum expanded uncertainties. We also estimated the degree of equivalence between any two participating laboratories following a matrix mechanism; this again agrees extremely well within the relative standard uncertainty derived for the two participating laboratories. Finally, a new method has been introduced to evaluate these results and establish a link to CCM.P-K1c and EUROMET.M.P-K2 at two nominal pressures, near 1 MPa and 4 MPa. Again the results show agreement of all laboratories participating in the present comparison to within the estimated expanded uncertainties, using a coverage factor k = 2. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. 
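The two quantities compared across laboratories, A'0 and lambda', come from the standard pressure-balance model A'(p) = A'0 (1 + lambda' p), which is linear in p, so an ordinary straight-line least-squares fit recovers both. A stdlib sketch (the data points below are invented, not comparison results):

```python
def fit_a0_lambda(pressures_mpa, areas_mm2):
    """Least-squares fit of the pressure-balance model A'(p) = A0 * (1 + lam * p).

    The model is linear in p: A'(p) = A0 + (A0 * lam) * p, so a
    straight-line fit gives A0 as the intercept and lam as slope / A0.
    """
    n = len(pressures_mpa)
    sx = sum(pressures_mpa)
    sy = sum(areas_mm2)
    sxx = sum(p * p for p in pressures_mpa)
    sxy = sum(p * a for p, a in zip(pressures_mpa, areas_mm2))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope / intercept

# Synthetic check: nominal A0 = 8.4 mm2, assumed lam = 1e-6 / MPa
ps = [0.4, 1.0, 2.0, 3.0, 4.0]
areas = [8.4 * (1 + 1e-6 * p) for p in ps]
```

In the comparison itself each laboratory reports its own fitted A'0 and lambda', and the degrees of equivalence are built from the differences between those values and the reference value.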
The final report has been peer-reviewed and approved for publication by the APMP, according to the provisions of the Mutual Recognition Arrangement (MRA).
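A pressure balance's pressure-dependent effective area is conventionally modelled as A'p = A'0(1 + λ'p), which is how the zero-pressure area A'0 and distortion coefficient λ' described above are obtained from the measured A'p(p') series. A minimal sketch of that fit by linear least squares follows; the numerical values are illustrative only, not taken from the comparison:

```python
import numpy as np

def fit_effective_area(p, A_p):
    """Fit A'_p = A_0 * (1 + lambda * p) by linear least squares.

    Returns (A_0 in mm^2, lambda in 1/MPa).
    """
    # Linear in p: A'_p = A_0 + (A_0 * lambda) * p, so a degree-1
    # polynomial fit recovers both parameters.
    slope, intercept = np.polyfit(p, A_p, 1)
    A0 = intercept
    lam = slope / A0
    return A0, lam

# Synthetic example with plausible (hypothetical) values for an
# 8.4 mm^2 assembly: A_0 = 8.4000 mm^2, lambda = 1.0e-5 / MPa.
p = np.linspace(0.4, 4.0, 10)        # MPa, the range of the comparison
A_p = 8.4000 * (1 + 1.0e-5 * p)
A0, lam = fit_effective_area(p, A_p)
print(A0, lam)
```

Because the model is linear in p, the intercept gives A'0 directly and the slope divided by the intercept gives λ'.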
Influence of plant productivity over variability of soil respiration: a multi-scale approach
NASA Astrophysics Data System (ADS)
Curiel Yuste, J.
2009-04-01
To investigate the role of plant photosynthetic activity in the variation of soil respiration (SR), we used SR data obtained from manual sampling and from automatic soil respiration chambers placed at eddy flux tower sites. Plant photosynthetic activity was represented as Gross Primary Production (GPP), calculated from the half-hourly continuous measurements of Net Ecosystem Exchange (NEE). The role of plant photosynthetic activity in the variation of SR was investigated at different time-scales: data averaged hourly, daily and weekly were used to study the photosynthetic effect on diel variations of SR (hourly data), 15-day variations (daily averages), monthly variations (daily and weekly averages) and seasonal variations (weekly data). Our results confirm the important role of plant photosynthetic activity in the variation of SR at each of these time-scales. The effect of photosynthetic activity on SR was strong at the hourly time-scale (diel variations of SR). At half of the studied ecosystems, GPP was the best single predictor of diel variations of SR; at most of the studied sites, however, the combination of soil temperature and GPP was the best predictor. The effect of aboveground productivity on diel variations of SR lagged by 5 to 15 hours, depending on the ecosystem. At daily to monthly time-scales, variations of SR were in general better explained by the combination of temperature and moisture variations. However, 'jumps' in average weekly SR during the growing season yielded anomalously high values of Q10, in some cases above 1000, which probably reflect synoptic changes in photosynthate translocation driven by plant activity. Finally, although seasonal changes of SR were in general very well explained by temperature and soil moisture, the seasonality of SR was better correlated with the seasonality of GPP than with that of soil temperature and/or soil moisture.
Therefore, the magnitude of the seasonal variation in SR was in general controlled by the seasonality of substrate supply by plants (via photosynthate translocation and/or litter) to the soil. Although soil temperature and soil moisture exert a strong influence on the variation in SR, our results indicate that substrate supply by plant activity could play a more important role in the variability of soil respiration than previously expected.
1. CREAF (Centre de Recerca Ecológica i Aplicacions Forestals), Unitat d'Ecofisiologia i Canvi Global CREAF-CEAB-CSIC, Bellaterra (Barcelona), Spain (j.curiel@creaf.uab.es)
2. University of Antwerp (UA), Antwerp, Belgium (ivan.janssens@ua.ac.be)
3. Institute of Ecology, University of Innsbruck, Innsbruck, Austria (michael.bahn@uibk.ac.at)
4. UMR Ecologie et Ecophysiologie Forestières, Centre INRA de Nancy, France (longdoz@nancy.inra.fr)
5. ESPM, University of California at Berkeley, Berkeley, CA, USA (baldocchi@nature.berkeley.edu)
6. The Woods Hole Research Center, Falmouth, USA (edavidson@whrc.org)
7. Max-Planck-Institute for Biogeochemistry, Jena, Germany (markus.reichstein@bgc-jena.mpg.de)
8. Institute of Systems Biology and Ecology, Academy of Sciences of the Czech Republic, Czech Republic (manuel@brno.cas.cz)
9. Università degli studi della Tuscia, Viterbo, Italy (arriga@unitus.it)
10. Lawrence Berkeley National Laboratory, Berkeley, CA, USA (mstorn@lbl.gov)
11. Gembloux Agricultural University, Gembloux, Belgium (aubinet.m@fsagx.ac.be)
12. Fundacion CEAM (Centro de Estudios Ambientales del Mediterráneo), Valencia, Spain (arnaud@ceam.es)
13. Institute of Hydrology and Meteorology, Technische Universität Dresden, Pienner, Germany (gruenwald@forst.tu-dresden.de)
14. Department of Environmental Sciences, Second University of Naples, Caserta, Italy (ilaria.inglima@unina2.it)
15. CNRS-CEFE Montpellier, France (Laurent.MISSON@cefe.cnrs.fr)
16. Agenzia Provinciale per l'Ambiente, Bolzano, Italy (leonar@inwind.it)
17. University of Helsinki, Department of Forest Ecology, Helsinki, Finland (jukka.pumpanen@helsinki.fi)
18. Institute for the Study of Earth, Oceans and Space, University of New Hampshire, Durham, USA (andrew.richardson@unh.edu)
19. Institute of Plant Sciences, ETH Zurich, Zurich, Switzerland (nadine.ruehr@ipw.agrl.ethz.ch)
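The anomalously high apparent Q10 values (in some cases above 1000) reported in this abstract arise whenever a substrate-driven jump in respiration coincides with only a small temperature change, as the standard Q10 definition makes explicit. A small illustrative sketch (the values are invented, not taken from the study):

```python
def q10(r1, r2, t1, t2):
    """Apparent Q10: the factor by which respiration would change per
    10 degC, inferred from two (rate, temperature) observations.

    q10 = (r2 / r1) ** (10 / (t2 - t1))
    """
    return (r2 / r1) ** (10.0 / (t2 - t1))

# Ordinary temperature sensitivity: rate doubles over 10 degC -> Q10 = 2.
print(q10(2.0, 4.0, 10.0, 20.0))   # 2.0

# A substrate-driven 'jump': respiration doubles while temperature rises
# only 1 degC -> apparent Q10 = 2**10 = 1024, i.e. the kind of
# implausibly high value the abstract attributes to photosynthate pulses.
print(q10(2.0, 4.0, 15.0, 16.0))   # 1024.0
```

This is why a purely temperature-driven model breaks down during such events: the inferred Q10 stops describing temperature sensitivity and instead reflects the substrate pulse.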
EDITORIAL: Nanoscale metrology
NASA Astrophysics Data System (ADS)
Picotto, G. B.; Koenders, L.; Wilkening, G.
2009-08-01
Instrumentation and measurement techniques at the nanoscale play a crucial role not only in extending our knowledge of the properties of matter and processes in nanosciences, but also in addressing new measurement needs in process control and quality assurance in industry. Micro- and nanotechnologies are now facing a growing demand for quantitative measurements to support the reliability, safety and competitiveness of products and services. Quantitative measurements presuppose reliable and stable instruments and measurement procedures as well as suitable calibration artefacts to ensure the quality of measurements and traceability to standards. This special issue of Measurement Science and Technology presents selected contributions from the Nanoscale 2008 seminar held at the Istituto Nazionale di Ricerca Metrologica (INRIM), Torino, in September 2008. This was the 4th Seminar on Nanoscale Calibration Standards and Methods and the 8th Seminar on Quantitative Microscopy (the first being held in 1995). The seminar was jointly organized by the Nanometrology Group within EUROMET (The European Collaboration in Measurement Standards), the German Nanotechnology Competence Centre 'Ultraprecise Surface Figuring' (CC-UPOB), the Physikalisch-Technische Bundesanstalt (PTB) and INRIM. A special event during the seminar was the 'knighting' of Günter Wilkening from PTB, Braunschweig, Germany, as the 1st Knight of Dimensional Nanometrology. Günter Wilkening received the NanoKnight Award for his outstanding work in the field of dimensional nanometrology over the last 20 years. 
The contributions in this special issue deal with developments and improvements of instrumentation and measurement methods for scanning force microscopy (SFM), electron and optical microscopy, high-resolution interferometry, calibration of instruments and new standards, and new facilities and applications including critical dimension (CD) measurements on small and medium structures and nanoparticle characterization. The papers in the first part report on new or improved instrumentation: developments of metrology SFMs; improvements to SFM probes and scanning methods in the direction of nanoscale coordinate measuring machines and true 3D measurements; and progress on a 2D encoder based on a regular crystalline lattice. To ensure traceability to the SI unit of length, many highly sophisticated instruments are equipped with laser interferometers to measure small displacements in the nanometre range very accurately. Improving these techniques is still a challenge, and therefore several papers consider new interferometric techniques, improved sensors for nanodisplacement measurements, and the development of a deep-UV microscope for micro- and nanostructures. The tactile measurement of small structures also calls for better control of forces in the nano- and piconewton range; a nanoforce facility, based on a disk pendulum with electrostatic stiffness reduction and electrostatic force compensation, is presented for the measurement of small forces. In the second part, the contributions are related to calibration and correction strategies and standards, such as the development of test objects based on 3D silicon structures and of samples with irregular surface profiles, and their use for calibration. The shape of the tip and its influence on measurements is still a contentious issue and is addressed in several papers: the use of nanospheres for tip characterization, and a geometrical approach to reconstruction errors in tactile probing.
Molecular dynamics calculations, classical as well as ab initio (based on density functional theory), are used to discuss effects of tip-sample relaxation on the topography and to provide a better basis for estimating uncertainties in measurements of small particles or features. Some papers report on measurements of air refractivity fluctuations by phase modulation interferometry, angle-scale traceability by laser diffractometry, and an error separation method. The development of 3D surface roughness measurement standards from scratches is considered in one contribution: here a 2D autoregressive model was used to generate the software gauge data, which served as the basis for the manufacturing process by diamond turning. Contributions in the third part deal with applications including CD measurements on small and medium structures, the characterization of nanoparticles with diameters below 200 nm by electron microscopy, chemical nanoscale metrology by TXRF, and a study of the strength of nanotube bundles. We would like to thank all the authors for their contributions, and the referees for their time spent reviewing the papers and for their valuable and helpful comments. Additional thanks are extended to all involved in the production of this issue for their help and support.
NASA Astrophysics Data System (ADS)
Heintze, Gawan
2016-04-01
Gawan Heintze1,2, Matthias Drösler1, Ulrike Hagemann3 and Jürgen Augustin3
1University of Applied Sciences Weihenstephan-Triesdorf, Chair of Vegetation Ecology, Weihenstephaner Berg 4, 85354 Freising, Germany
2Technische Universität München, Chair of Plant Nutrition, Emil-Ramann-Str. 2, 85354 Freising, Germany
3Leibniz Centre for Agricultural Landscape Research (ZALF), Eberswalder Straße 84, 15374 Müncheberg, Germany
Together with industrial process-related emissions (8.1%), the current GHG emissions from agriculture (7.5%, or 70 million tonnes (Mt) of carbon dioxide (CO2) equivalents) represent, after energy-related emissions from the combustion of fossil fuels (83.7%), the second largest share of Germany's total annual emissions. To reduce the EU's CO2 emissions by 20% by 2020, the cultivation of energy crops for biogas production, ideally coupled with a subsequent return of the resulting residues in the form of biogas digestate, is intended as one key element in the pathway of renewable energy production. Despite the increasing cultivation of energy crops for biogas production, which aims to reduce the overall climate impact of the agricultural sector, it is still largely unknown how the application of ammonium-rich organic digestate affects field N2O emissions. Therefore, the collaborative research project "Potential for reducing the release of climate-relevant trace gases in the cultivation of energy crops for the production of biogas" was launched. The main objective of the study was to improve process understanding and to quantify the influence of mineral nitrogen fertilization, biogas digestate application, crop type and crop rotation, in order to obtain precise and generalizable statements on the exchange of trace gases such as nitrous oxide (N2O) and methane (CH4) and on the resulting climate impact.
Gas fluxes of N2O and CH4 were measured for three and a half years on two differently managed maize monoculture sites with different applied organic N amounts, and in a crop rotation system (FFA and FFB) with the same amounts of applied N but three different forms of N application (mineral N, mineral + organic N, organic N). The annual cumulative N2O exchange rates in the maize monoculture showed a clear dependence on the amount of applied organic fertilizer. Average annual cumulative exchange rates ranged from 1.65 ± 0.74 kg N ha-1 yr-1 to 11.03 ± 1.63 kg N ha-1 yr-1, the latter explainable by a twice as high amount of applied N compared to the conventionally fertilized site. The average annual cumulative CH4 exchange rates in the maize monoculture varied between -1.2 ± 0.46 kg C ha-1 yr-1 and 3.75 ± 0.48 kg C ha-1 yr-1, with measured CH4 fluxes around zero between the fertilizing events, indicating a minor role. For FFA and FFB, the average annual cumulative N2O exchange rates ranged from 1.45 ± 0.18 kg N ha-1 yr-1 to 3.5 ± 1.1 kg N ha-1 yr-1 and from 1.37 ± 0.57 kg N ha-1 yr-1 to 1.71 ± 0.29 kg N ha-1 yr-1, respectively, and showed lower values than comparable treatments in the maize monoculture, indicating in particular the different management effects. The determined average annual cumulative CH4 exchange rates ranged from 0.19 ± 0.6 kg C ha-1 yr-1 to 0.21 ± 0.45 kg C ha-1 yr-1 and from -0.8 ± 0.7 kg C ha-1 yr-1 to 1 ± 0.6 kg C ha-1 yr-1, and likewise played a minor role. Altogether, biogas digestate can be seen as a suitable alternative if the amounts of applied N are selected appropriately in combination with a customized management.
IN MEMORIAM: Hermann Anton Haus, 1925-2003
NASA Astrophysics Data System (ADS)
2004-08-01
Photograph Hermann Anton Haus, an Institute Professor at the Massachusetts Institute of Technology (MIT), was to have been a Keynote Speaker at the Fluctuations and Noise in Photonics and Quantum Optics Conference, from which the papers in this special issue derive. Sadly, on May 21, 2003 - less than two weeks before the conference - Professor Haus succumbed to a heart attack after arriving home in Lexington, Massachusetts, from his regular, 15-mile commute by bicycle from MIT. He was 77. Throughout his lengthy and illustrious career, Professor Haus had repeatedly and very successfully addressed problems of fluctuations and noise, with special focus on the fundamental issues that arise in quantum optics. To honour Professor Haus' legacy to our technical community, this special issue of Journal of Optics B: Quantum and Semiclassical Optics is dedicated to his memory. Professor Haus was born in Ljubljana, Slovenia, in the former Yugoslavia, on 8 August 1925. After attending the Technische Hochschule, Graz, and the Technische Hochschule, Wien, in Austria, he received his Bachelor of Science degree from Union College in Schenectady, New York in 1949. In 1951, he graduated from Rensselaer Polytechnic Institute with a Master of Science in Electrical Engineering, and came to MIT, where he earned his Doctorate of Science and joined the faculty in 1954. He was promoted to Associate Professor in 1958, to Professor in 1962, and to Elihu Thomson Professor in 1973. In 1986, he was conferred the honour of Institute Professor. Professor Haus had a lifelong fascination with noise. While still an undergraduate at Union College, he became aware of Norbert Wiener's theories of statistical phenomena - the new mathematics needed to understand and quantify the random fluctuations we refer to as noise. So it was that noise theory formed the core of Professor Haus' research during the 1950s: noise in electron beams, noise in microwave amplifiers, and noise in amplifier cascades. 
Two of his notable achievements from that era are his elegant four-terminal impedance transformation for the treatment of noise in electron beams [1], and the single noise measure for optimizing linear amplifier cascades that he developed with Richard B Adler [2]. In 1960 the first working laser was reported, and Professor Haus' noise work shifted from microwaves to higher frequencies - light waves - and to the most fundamental source of fluctuations, the inescapable noise introduced by quantum mechanics. In 1962, he and Charles H Townes used the number-phase uncertainty principle to derive the sensitivity advantage that optical homodyne detection enjoys over optical heterodyne detection [3]. That same year he and James A Mullen tied the fundamental noise limits on linear amplification to the quadrature-noise uncertainty principle [4]. Four years later he and Charles Freed reported the first measurements of photoelectron statistics for a laser operating below and above its oscillation threshold [5]. All three of these works have continuing echoes through more recent research on the quantum theory of coherent detection, the noise limits of phase-insensitive and phase-sensitive amplifiers, and quantum noise measurements via photodetection. It took some time for laser technology to fulfil its initial promise of inexpensive, long-haul, broadband communications, and Professor Haus' work on modelocked, distributed-feedback, and soliton lasers played no small role in that development. Nevertheless, from the 1980s onward, Professor Haus' research interest returned again and again to quantum noise. In collaboration with colleagues from the Raytheon Company he showed that their ring-laser gyroscope was operating at the noise limit set by the number-phase uncertainty principle [6]. 
In collaboration with Nobuyuki Imoto and Yoshihisa Yamamoto from Nippon Telegraph and Telephone Research Laboratories he proposed a practical route to the quantum nondemolition (QND) measurement of photon number [7]. Together with James P Gordon he elucidated the quantum timing jitter that afflicts soliton propagation down optically-amplified fibre lines [8]. He and Masataka Shirasaki introduced the nonlinear Sagnac loop as a technique for generating squeezed states in optical fibre [9]. Together with his student Yinchieh Lai, Professor Haus established a physically-motivated decomposition that accounts for the various noise contributions seen in soliton squeezing [10]. The impact of these works has continued to reverberate through more recent efforts devoted to optical QND measurements, long-haul soliton transmission systems, and Sagnac loop quantum-noise manipulation. During the last decade of Professor Haus' life, he revisited - in very modern terms - some topics that he had studied early in his career. In a collaboration between his group and researchers at Bell Laboratories he used amplified spontaneous emission noise measurements to accurately predict the performance of a 10-Gbit/s optically-preamplified receiver [11], thus reprising - in the context of broadband optical communications - issues of photodetection noise statistics that he had confronted 30 years earlier with Charles Freed. In an invited paper he described a single noise figure for amplification that is valid from radio to optical frequencies [12], i.e., in both the classical and quantum regimes, thus bringing him back, full circle, to his earliest interest in amplifier noise measures. The culmination of his life's work, however, was his book, Electromagnetic Noise and Quantum Optical Measurements. Published in 2000 [13], it is a distillation of 45 years of his research. Generations of students to come will learn quantum noise from this masterwork. 
Professor Haus authored or co-authored five books, published more than 300 articles, and presented his work at virtually every major conference and symposium on lasers, quantum electronics, and quantum optics around the world. He was one of very few engineers in the USA to become a member of both the National Academy of Engineering and the National Academy of Sciences. He was a Fellow of the American Academy of Arts and Sciences, the American Physical Society, the Institute of Electrical and Electronics Engineers, and the Optical Society of America. He received Guggenheim and Fulbright Fellowships and several honorary degrees, including one from the University of Vienna, and he received the Austrian government's Wittgenstein Prize for outstanding contributions to humanity. Professor Haus was selected by his MIT colleagues for the 1982-1983 James R Killian Faculty Achievement Award, the highest honour that the MIT faculty bestows. In 1984, the Optical Society of America recognized Professor Haus' contributions with its Frederic Ives Medal, the Society's highest award. In 1995, Professor Haus was awarded the National Medal of Science by President William Jefferson Clinton. In a 1998 interview, Professor Haus was asked about his philosophy of life. He replied, 'Try to do your best, because that's all part of the fun. The greatest thing is that once in a while something clicks. It happens every three or four years. It can't happen more often than that, except for some exceptional people.' Things clicked for Hermann Anton Haus. This happened not just once, not just every three or four years, but regularly and throughout his long career. He was a truly exceptional man. Jeffrey H Shapiro Massachusetts Institute of Technology, Cambridge, USA References [1] Haus H A 1955 Noise in one-dimensional electron beams J. Appl. Phys. 26 560-71 [2] Haus H A and Adler R B 1958 Optimum noise performance of linear amplifiers Proc.
IRE 46 1519-33 [3] Haus H A and Townes C H 1962 Comment on 'Noise in photoelectric mixing' Proc. IRE 50 1544 [4] Haus H A and Mullen J A 1962 Quantum noise in linear amplifiers Phys. Rev. 128 2407-13 [5] Freed C and Haus H A 1966 Photoelectron statistics produced by a laser operating below and above the threshold of oscillation IEEE J. Quantum Electron. QE-2 190-5 [6] Dorschner T A, Haus H A, Holz M, Smith I W and Statz H 1980 Laser gyro at quantum limit IEEE J. Quantum Electron. QE-16 1376-9 [7] Imoto N, Haus H A and Yamamoto Y 1985 Quantum nondemolition measurement of the photon number via the optical Kerr effect Phys. Rev. A 32 2287-92 [8] Gordon J P and Haus H A 1986 Random walk of coherently amplified solitons in optical fiber transmission Opt. Lett. 11 665-7 [9] Shirasaki M and Haus H A 1990 Squeezing of pulses in a nonlinear interferometer J. Opt. Soc. Am. B 7 30-4 [10] Haus H A and Lai Y 1990 Quantum theory of soliton squeezing: a linearized approach J. Opt. Soc. Am. B 7 386-92 [11] Wong W S, Haus H A, Jiang L A, Hansen P B and Margalit M 1998 Photon statistics of amplified spontaneous emission noise in a 10-Gbit/s optically preamplified direct-detection receiver Opt. Lett. 23 1832-34 [12] Haus H A 2000 Noise figure definition valid from RF to optical frequencies IEEE J. Sel. Topics Quantum Electron. 6 240-7 [13] Haus H A 2000 Electromagnetic Noise and Quantum Optical Measurements (Berlin: Springer)
Nomadic migration: a service environment for autonomic computing on the Grid
NASA Astrophysics Data System (ADS)
Lanfermann, Gerd
2003-06-01
In recent years, there has been a dramatic increase in available compute capacity. However, these "Grid resources" are rarely accessible as a continuous stream; rather, they appear scattered across various machine types, platforms and operating systems, coupled by networks of fluctuating bandwidth. It is becoming increasingly difficult for scientists to exploit the available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: migrating applications determine a new resource when old capacities are used up; spawning simulations launch algorithms on external machines to speed up the main execution; applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This "Grid Peer Services" infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in a service environment. The service environment that we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements, transfers the application's checkpoint and binary to the new host, and resumes the simulation.
Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with real-world examples that a traditional genome analysis program can be easily modified to perform self-determined migrations in this service environment. In den vergangenen Jahren ist es zu einer dramatischen Vervielfachung der verfügbaren Rechenzeit gekommen. Diese 'Grid-Ressourcen' stehen jedoch nicht als kontinuierlicher Strom zur Verfügung, sondern sind über verschiedene Maschinentypen, Plattformen und Betriebssysteme verteilt, die jeweils durch Netzwerke mit fluktuierender Bandbreite verbunden sind. Es wird für Wissenschaftler zunehmend schwieriger, die verfügbaren Ressourcen für ihre Anwendungen zu nutzen. Wir glauben, dass intelligente, selbstbestimmende Applikationen in der Lage sein sollten, ihre Ressourcen in einer dynamischen und heterogenen Umgebung selbst zu wählen: Migrierende Applikationen suchen eine neue Ressource, wenn die alte aufgebraucht ist. 'Spawning'-Anwendungen lassen Algorithmen auf externen Maschinen laufen, um die Hauptanwendung zu beschleunigen. Applikationen werden neu gestartet, sobald ein Absturz entdeckt wird. Alle diese Verfahren können ohne menschliche Interaktion erfolgen. Eine verteilte Rechenumgebung besitzt eine natürliche Unzuverlässigkeit. Jede Applikation, die mit einer solchen Umgebung interagiert, muss auf gestörte Komponenten reagieren können: schlechte Netzwerkverbindungen, abstürzende Maschinen, fehlerhafte Software. Wir konstruieren eine verlässliche Serviceinfrastruktur, indem wir der Serviceumgebung eine 'Peer-to-Peer'-Topologie aufprägen. Diese 'Grid Peer Service'-Infrastruktur beinhaltet Services wie Migration und Spawning sowie fundamentale Services zum Starten von Applikationen, zur Dateiübertragung und zur Auswahl von Rechenressourcen. Sie benutzt, wo immer möglich, existierende Grid-Technologie, um ihre Aufgaben durchzuführen.
Ein Applikations-Informations-Server arbeitet als generische Registratur für alle Teilnehmer in der Serviceumgebung. Die Serviceumgebung, die wir entwickelt haben, erlaubt es Applikationen, z.B. eine Relokationsanfrage an einen Migrationsserver zu stellen. Der Server sucht einen neuen Computer, basierend auf den übermittelten Ressourcen-Anforderungen. Er transferiert die Statusdatei der Applikation zu der neuen Maschine und startet die Applikation neu. Obwohl das umgebende Ressourcensubstrat nicht kontinuierlich ist, können wir kontinuierliche Berechnungen auf Grids ausführen, indem wir die Applikation migrieren. Wir zeigen mit realistischen Beispielen, wie sich z.B. ein traditionelles Genom-Analyse-Programm leicht modifizieren lässt, um selbstbestimmte Migrationen in dieser Serviceumgebung durchzuführen.
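The relocation cycle described in this abstract (request, resource selection against transmitted requirements, checkpoint transfer, resumption) can be sketched as follows. All class and method names here are hypothetical illustrations of the idea, not the actual Grid Peer Services API:

```python
# Toy sketch of self-determined migration: an application asks a
# migration server for a new host matching its resource requirements.
from dataclasses import dataclass

@dataclass
class Resource:
    host: str
    cpus: int
    mem_gb: int

@dataclass
class Checkpoint:
    state: bytes              # serialized application state

class MigrationServer:
    """Selects a new host matching the application's requirements."""
    def __init__(self, resources):
        self.resources = resources

    def select(self, min_cpus, min_mem_gb):
        candidates = [r for r in self.resources
                      if r.cpus >= min_cpus and r.mem_gb >= min_mem_gb]
        # Prefer the largest matching machine; None if nothing fits.
        return max(candidates, key=lambda r: r.cpus, default=None)

def migrate(server, checkpoint, min_cpus, min_mem_gb):
    """Relocation request: pick a host, stage the checkpoint, resume."""
    target = server.select(min_cpus, min_mem_gb)
    if target is None:
        return None           # no suitable resource; keep running in place
    # In the real environment the checkpoint and binary would now be
    # transferred to target.host and the simulation restarted there.
    return target.host

server = MigrationServer([Resource("nodeA", 8, 16), Resource("nodeB", 32, 64)])
print(migrate(server, Checkpoint(b""), min_cpus=16, min_mem_gb=32))  # prints "nodeB"
```

The key design point the abstract emphasizes survives even in this toy form: the decision to migrate is taken by the application itself, without human interaction, and failure to find a resource simply leaves the computation where it is.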
Dedication: phys. stat. sol. (a) 202/15
NASA Astrophysics Data System (ADS)
Albrecht, Martin
2005-12-01
The papers in this issue are dedicated to Professor Horst Paul Strunk on the occasion of his 65th birthday and his retirement from active teaching. This volume honours a scientist who has made a lasting impact on the field of electron microscopic characterisation of growth and relaxation phenomena in the epitaxial growth of semiconductors. Born in The Hague, The Netherlands, on 13 June 1940, he studied physics in Stuttgart, where he received his degree in Physics in 1968. He joined the group of Prof. Seeger at the Max-Planck-Institut für Metallforschung and defended his Ph.D. on defects in NaCl at Stuttgart University in 1973. He spent one year at Cornell University as a visiting professor before joining Technische Universität Hamburg-Harburg in 1983. There he created the Zentralbereich Elektronenmikroskopie and was a professor for materials analytics from 1983 until 1989. In 1989 he moved to the University of Erlangen-Nürnberg, where he established the Verbundlabor für hochauflösende Elektronenmikroskopie and directed the Lehrstuhl Mikrocharakterisierung at the Institut für Werkstoffwissenschaften of the same university. He spent two research periods at the Universities of Rennes in France and Campinas in Brazil. Together with his colleague Prof. Jürgen Werner he created the series of conferences on polycrystalline semiconductors, POLYSE, which he has been supervising with Jürgen Werner since 1990. The research activities of Horst P. Strunk are focused on the microstructure of materials and its relation to macroscopic physical properties. Main topics are dislocations, their formation and interaction mechanisms, strain relaxation, and fundamental mechanisms of epitaxial growth. The spectrum of materials covers a wide range, from metals through ionic crystals, e.g. NaCl, to elemental and compound semiconductors. From the beginning, the main tool of study has been the transmission electron microscope. However, Horst P.
Strunk recognised that a thorough understanding of materials problems would require the combined use of structural characterisation, advanced spectroscopy and modelling. He therefore complemented electron microscopic approaches with optical methods, e.g. Raman spectroscopy and cathodoluminescence. Modelling of strain states by finite elements and of defect structures by ab-initio calculations became an important topic, especially in recent years. It is characteristic of the scientific approach of Horst Strunk that methodological developments were not an end in themselves but linked to problems in solid state physics and materials science. Among the scientific works of Strunk, a few examples should be highlighted which mark important stages in his scientific career. Pioneering work was done on the influence of dislocations in the homoepitaxial growth of Si and GaAs, in collaboration with Elisabeth Bauser in Stuttgart: Strunk correlated growth spirals on the surface with the dislocations that caused these step sources. Studying the dislocation structure of heteroepitaxial Ge/GaAs layers, Strunk discovered a new dislocation multiplication source at work, later known as the Hagen-Strunk source, which had a strong impact on the understanding of relaxation processes by dislocations in heteroepitaxial semiconductor systems. Work on the electrical and structural properties of grain boundaries in silicon was performed together with Jürgen Werner; this was the starting point of a long-lasting research effort on photovoltaic materials that has accompanied Strunk to this day. Fundamental studies on heteroepitaxial growth were performed in the SiGe system grown from solution; in this context, finite element methods were established for the first time in the study of nanostructured materials. In recent years, correlated studies of the structural and optical properties of III-nitride heterostructures were performed by cathodoluminescence in the transmission electron microscope. The impact of Horst P.
Strunk's work is evident from the fact that his lab became part of collaborative international projects, based on the unique facilities at the Verbundlabor für Hochauflösende Elektronenmikroskopie and the profound knowledge of crystal growth and solid state physics present in his group. The articles in this issue contain original research results contributed by his friends, collaborators and former students. They are testimony to the lasting impact of Horst P. Strunk's work, and they express the authors' gratitude for having benefited from it. This volume gives us a unique opportunity to say thank you to Horst P. Strunk and to wish him a new period in his life that continues to be as scientifically fruitful as before, but less affected by the burden of administrative work than in recent years.
NASA Astrophysics Data System (ADS)
Heintze, Gawan
2017-04-01
Influence of soil organic C content on the greenhouse gas emission potential after application of biogas residues or cattle slurry - Results from a pot experiment Gawan Heintze1,2, Tim Eickenscheidt1, Urs Schmidhalter2 and Matthias Drösler1 1University of Applied Sciences Weihenstephan-Triesdorf, Chair of Vegetation Ecology, Weihenstephaner Berg 4, 85354 Freising, Germany 2Technische Universität München, Chair of Plant Nutrition, Emil-Ramann-Str. 2, 85354 Freising, Germany The European Union Renewable Energy Directive, which sets a binding target of 20% of final energy consumption from renewable sources by 2020, has markedly promoted the increase of biogas plants, particularly in Germany. As a consequence, a large amount of biogas residue remains as a by-product of the fermentation process. These residues are now widely used instead of mineral fertilizers or animal slurries to maintain soil fertility and productivity. However, to date, reports on the effect of biogas residue application on greenhouse gas (GHG) emission, compared to that of other organic fertilizers, are contradictory, and the underlying processes are not completely understood. GHG fluxes are often stated to be closely related to the quality of the raw material and to the type of soil to which the digestates are applied. This study addresses the questions (a) to what extent the applications of biogas digestate and cattle slurry differ in their GHG emission (CO2, CH4 and N2O) potential, and (b) how different soil organic carbon contents (SOCs) influence the rate of GHG exchange. We hypothesize i) that cattle slurry application enhances the CO2 and N2O fluxes compared to the biogas digestate due to the overall higher C and N input, and ii) that with increasing SOC and N content, higher emissions of CO2 and N2O can be expected. The study was conducted as a pot experiment. 
Biogas digestate and cattle slurry were applied to and incorporated into three different soil types with varying SOC contents (Cambisol, termed Clow; Mollic Gleysol, termed Cmedium; and Sapric Histosol, termed Chigh). The application rate was equivalent to 150 kg NH4+-N ha-1. GHG exchange (CO2, CH4 and N2O) was measured on five replicates over a period of 22 days using the closed chamber technique to simulate the high-risk situation of enhanced GHG emissions following organic fertilizer application in energy maize cultivation. Generally, it was found that the application of cattle slurry resulted in significantly higher CO2 and N2O fluxes compared to the application of biogas digestate. The total cumulative CO2 exchange rates after 22 days ranged from 137 ± 4.6 kg C ha-1 22 d-1 (Clow, control) to 885 ± 32.5 kg C ha-1 22 d-1 (Chigh, CS). The total cumulative N2O exchange rates ranged from 7.7 ± 6.1 g N ha-1 22 d-1 (Clow, control) up to 2000 ± 226 g N ha-1 22 d-1 (Cmedium, CS). No differences were found regarding the CH4 exchange, which was close to zero for all treatments. Total cumulative CH4 exchange rates ranged between -31 ± 32 g C ha-1 22 d-1 (Cmedium, control) and -167 ± 34 g C ha-1 22 d-1 (Chigh, CS). Calculated cumulative emissions revealed that 4% to 15% of the C derived from the organic fertilizer was emitted as CO2, and 0.06% to 0.67% of the applied N as N2O. Significantly higher CO2 emissions were observed at the Chigh treatments compared to the other two soil types investigated, whereas N2O emissions were significantly highest at the Cmedium treatments. The results clearly demonstrate the importance of soil type-adapted fertilization with respect to changing soil physical and environmental conditions. 
Considering the distinctly higher global warming potential (GWP) of N2O compared to CO2 (298:1; IPCC 2014), the present results revealed that soil type-specific 22-day cumulative N2O emissions contributed 8% of the total GWP balance at Clow, 25% at Cmedium and 4% at Chigh. Overall, soils rich in SOM appear to be more sensitive to changing physical soil conditions than soils with low SOM contents.
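The GWP bookkeeping described above (C and N2O-N masses converted to CO2 equivalents using the factor 298) can be sketched in a few lines. This is a minimal illustrative sketch: the function name and the flux values in the example are placeholders, not the measured data from the experiment.

```python
# Sketch: converting cumulative C and N fluxes into a CO2-equivalent
# greenhouse-gas balance. Input values below are illustrative placeholders.

M_CO2_PER_C = 44.0 / 12.0   # kg CO2 per kg CO2-C
M_N2O_PER_N = 44.0 / 28.0   # kg N2O per kg N2O-N
GWP_N2O = 298.0             # 100-year GWP of N2O relative to CO2 (IPCC 2014)

def n2o_share_of_gwp(co2_c_kg_ha: float, n2o_n_g_ha: float) -> float:
    """Fraction of the total CO2-equivalent balance contributed by N2O."""
    co2_eq_co2 = co2_c_kg_ha * M_CO2_PER_C                      # kg CO2-eq
    co2_eq_n2o = (n2o_n_g_ha / 1000.0) * M_N2O_PER_N * GWP_N2O  # kg CO2-eq
    return co2_eq_n2o / (co2_eq_co2 + co2_eq_n2o)

# Hypothetical cumulative fluxes per hectare over a 22-day period:
print(f"{n2o_share_of_gwp(600.0, 500.0):.1%}")
```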
EDITORIAL: Focus on Carbon Nanotubes
NASA Astrophysics Data System (ADS)
2003-09-01
The study of carbon nanotubes, since their discovery by Iijima in 1991, has become a full research field with significant contributions from all areas of research in solid-state and molecular physics and also from chemistry. This Focus Issue in New Journal of Physics reflects this active research, and presents articles detailing significant advances in the production of carbon nanotubes, the study of their mechanical and vibrational properties, electronic properties and optical transitions, and electrical and transport properties. Fundamental research, both theoretical and experimental, represents part of this progress. The potential applications of nanotubes will rely on the progress made in understanding their fundamental physics and chemistry, as presented here. We believe this Focus Issue will be an excellent guide for both beginners and experts in the research field of carbon nanotubes. It has been a great pleasure to edit the many excellent contributions from Europe, Japan, and the US, as well as from a number of other countries, and to witness the remarkable effort put into the manuscripts by the contributors. We thank all the authors and referees involved in the process. In particular, we would like to express our gratitude to Alexander Bradshaw, who invited us to put together this Focus Issue, and to Tim Smith and the New Journal of Physics staff for their extremely efficient handling of the manuscripts. 
Focus on Carbon Nanotubes Contents
Transport theory of carbon nanotube Y junctions (R Egger, B Trauzettel, S Chen and F Siano)
The tubular conical helix of graphitic boron nitride (F F Xu, Y Bando and D Golberg)
Formation pathways for single-wall carbon nanotube multiterminal junctions (Inna Ponomareva, Leonid A Chernozatonskii, Antonis N Andriotis and Madhu Menon)
Synthesis and manipulation of carbon nanotubes (J W Seo, E Couteau, P Umek, K Hernadi, P Marcoux, B Lukic, Cs Mikó, M Milas, R Gaál and L Forró)
Transitional behaviour in the transformation from active end planes to stable loops caused by annealing (M Endo, B J Lee, Y A Kim, Y J Kim, H Muramatsu, T Yanagisawa, T Hayashi, M Terrones and M S Dresselhaus)
Energetics and electronic structure of C70-peapods and one-dimensional chains of C70 (Susumu Okada, Minoru Otani and Atsushi Oshiyama)
Theoretical characterization of several models of nanoporous carbon (F Valencia, A H Romero, E Hernández, M Terrones and H Terrones)
First-principles molecular dynamics study of the stretching frequencies of hydrogen molecules in carbon nanotubes (Gabriel Canto, Pablo Ordejón, Cheng Hansong, Alan C Cooper and Guido P Pez)
The geometry and the radial breathing mode of carbon nanotubes: beyond the ideal behaviour (Jeno Kürti, Viktor Zólyomi, Miklos Kertesz and Sun Guangyu)
Curved nanostructured materials (Humberto Terrones and Mauricio Terrones)
A one-dimensional Ising model for C70 molecular ordering in C70-peapods (Yutaka Maniwa, Hiromichi Kataura, Kazuyuki Matsuda and Yutaka Okabe)
Nanoengineering of carbon nanotubes for nanotools (Yoshikazu Nakayama and Seiji Akita)
Narrow diameter double-wall carbon nanotubes: synthesis, electron microscopy and inelastic light scattering (R R Bacsa, E Flahaut, Ch Laurent, A Peigney, S Aloni, P Puech and W S Bacsa)
Sensitivity of single multiwalled carbon nanotubes to the environment (M Krüger, I Widmer, T Nussbaumer, M Buitelaar and C Schönenberger)
Characterizing carbon nanotube samples with resonance Raman scattering (A Jorio, M A Pimenta, A G Souza Filho, R Saito, G Dresselhaus and M S Dresselhaus)
FTIR-luminescence mapping of dispersed single-walled carbon nanotubes (Sergei Lebedkin, Katharina Arnold, Frank Hennrich, Ralph Krupke, Burkhard Renker and Manfred M Kappes)
Structural properties of Haeckelite nanotubes (Ph Lambin and L P Biró)
Structural changes in single-walled carbon nanotubes under non-hydrostatic pressures: x-ray and Raman studies (Sukanta Karmakar, Surinder M Sharma, P V Teredesai, D V S Muthu, A Govindaraj, S K Sikka and A K Sood)
Novel properties of 0.4 nm single-walled carbon nanotubes templated in the channels of AlPO4-5 single crystals (Z K Tang, N Wang, X X Zhang, J N Wang, C T Chan and Ping Sheng)
Lattice dynamics and symmetry of double wall carbon nanotubes (M Damnjanovic, E Dobardzic, I Milosevic, T Vukovic and B Nikolic)
Optical characterization of single-walled carbon nanotubes synthesized by catalytic decomposition of alcohol (Shigeo Maruyama, Yuhei Miyauchi, Yoichi Murakami and Shohei Chiashi)
Christian Thomsen, Technische Universität Berlin, Germany
Hiromichi Kataura, Tokyo Metropolitan University, Japan
Mercapturic acid and nucleoside adduct in urine as biomarkers in rats exposed to 1-hydroxymethylpyrene
NASA Astrophysics Data System (ADS)
Ma, Lan
2002-01-01
1-Methylpyrene (MP) is hepatocarcinogenic in newborn male mice. Through hydroxylation at the benzylic position and subsequent sulfonation, MP is activated to the DNA-reactive 1-sulfooxymethylpyrene (SMP). In the rat, exposure to the benzylic alcohol, 1-hydroxymethylpyrene (HMP), leads to DNA adduct formation in various tissues. A possible consequence of this toxification is the urinary excretion of the corresponding mercapturic acid and nucleoside adduct, which, owing to their origin, could serve as biomarkers. This work investigates the excretion of the mercapturic acid and of the N2-deoxyguanosine adduct in HMP-exposed rats. After administration of HMP or MP, less than 1% of the dose was excreted as MPMA in urine and faeces (0-48 h). Excretion took place mainly within the first 24 h after administration. MPdG could be identified neither in the urine nor in the faeces of the HMP-treated animals. After direct administration of SMP, MPdG was found in the urine only in very small amounts (less than 0.9 ppm within 12 h). Because of this small amount, MPdG is not suitable as a biomarker. MPMA, in contrast, is readily measurable analytically. It should therefore be investigated whether MPMA reflects the toxification of HMP. A prerequisite for this is knowledge of the metabolic profile of HMP. Comprehensive studies of the metabolism of HMP were therefore carried out. The results showed that more than 80% of the metabolites were excreted in oxidized form (PCS, its glucuronic acid conjugates, and phenolic sulfate esters of PCS). Accordingly, the oxidation of HMP to PCS plays a very important role in the detoxification and excretion of HMP. Furthermore, the enzymes alcohol dehydrogenase and aldehyde dehydrogenase were shown to be involved in the oxidation of HMP. Disulfiram and ethanol, inhibitors of these enzymes, were therefore used to modulate the detoxification in vivo. 
Changes in the toxification of HMP to SMP were assessed via the SMP concentration in plasma, the DNA adduct frequency and the MPMA excretion. Pretreatment with disulfiram or ethanol tended to increase the SMP concentration in plasma, the DNA adduct frequency in the liver and the MPMA excretion. It is noteworthy that a dose of as little as 0.2 g ethanol per kg body weight already led to statistically significant increases in MPMA excretion in female rats. 1-Methylpyrene is hepatocarcinogenic in rodents. It is metabolized primarily to 1-hydroxymethylpyrene (HMP) by various cDNA-expressed rat and human cytochromes P450. HMP is activated to a highly reactive sulfuric acid ester, 1-sulfooxymethylpyrene (SMP), by sulphotransferases. In the rat, this activation pathway leads to the formation of DNA adducts in various tissues. Possible consequences of the toxification could be the excretion of the corresponding mercapturic acid and nucleoside adduct in urine and feces. Because of their origin, these substances should reflect the toxification process and may be used as biomarkers. We investigated the excretion of 1-methylpyrenyl-mercapturic acid (MPMA) and of N2-(1-methylpyrenyl)-deoxyguanosine (MPdG) in urine and feces of HMP-treated rats. These studies showed that only a minor portion (< 1%) of the administered dose of HMP was excreted as mercapturic acid. MPdG could not be identified in urine or feces of HMP-treated rats. When rats were treated with the active species, sulfooxymethylpyrene, 0.9 ppm of the dose was found excreted within 12 h. We then investigated the alternative metabolic pathways of HMP. More than 50% of the dose (administered intraperitoneally) was excreted as free 1-pyrenyl carboxylic acid and its glucuronic acid conjugate, primarily in the urine. Other major urinary metabolites were phenolic sulpho conjugates of ring-oxidized 1-pyrenyl carboxylic acid (> 30%). 
Minor metabolites were phenolic sulpho conjugates of HMP (< 5%). The glucuronic acid conjugate of HMP was found in very small amounts. In total, > 80% of the metabolites excreted were oxidized at the exocyclic carbon. This side-chain oxidation, probably catalyzed by alcohol and aldehyde dehydrogenases, appears to represent a detoxification pathway. Indeed, administration of ethanol shortly before the administration of HMP to rats increased the levels of SMP detected in blood, of DNA adducts formed in tissues and of mercapturic acid excreted. These effects were observed even at very low dose levels of ethanol (0.2 g per kg body weight). Similar effects were shown after administration of disulfiram, an inhibitor of aldehyde dehydrogenase.
NASA Astrophysics Data System (ADS)
2011-09-01
Participants:
Abe, Takashi (University of Tokyo) tabe@nt.phys.s.u-tokyo.ac.jp
Amusia, Miron (Racah Institute of Physics, Jerusalem) amusia@vms.huji.ac.il
Baldo, Marcello (INFN Catania) baldo@ct.infn.it
Bansal, Manie (Panjab University, Chandigarh) bansalmanni@gmail.com
Barranco, Francisco (University of Seville) barranco@us.es
Bertsch, George (University of Washington, Seattle) bertsch@u.washington.edu
Bhagwat, Ameeya (CBS Mumbai) ameeya@kth.se
Borderie, Bernard (IPN Orsay) borderie@ipno.in2p3.fr
Carbonell, Jaume (LPSC Grenoble) jaume.carbonell@lpsc.in2p3.fr
Carlson, Joe (Los Alamos National Laboratory) carlson@lanl.gov
Colò, Gianluca (INFN - Università degli Studi di Milano) colo@mi.infn.it
Danielewicz, Pawel (NSCL, Michigan State University) danielewicz@nscl.msu.edu
Descouvemont, Pierre (Université Libre de Bruxelles) pdesc@ulb.ac.be
Dohet-Eraly, Jérémy (Université Libre de Bruxelles) jdoheter@ulb.ac.be
Draayer, Jerry (Louisiana State University) draayer@lsu.edu
Dufour, Marianne (IPHC, Université de Strasbourg) marianne.dufour@ires.in2p3.fr
Duguet, Thomas (CEA Saclay) thomas.duguet@cea.fr
Dukelsky, Jorge (CSIC Madrid) dukelsky@iem.cfmac.csic.es
Ebran, Jean-Paul (CEA-DAM, Arpajon) ebran@ipno.in2p3.fr
Freer, Martin (University of Birmingham) m.freer@bham.ac.uk
Fujii, Shinichiro (University of Tokyo) sfujii@cns.s.u-tokyo.ac.jp
Funaki, Yasuro (RIKEN Nishina Center, Wako) funaki@riken.jp
Grasso, Marcella (IPN Orsay) grasso@ipno.in2p3.fr
Hagino, Kouichi (Tohoku University) hagino@nucl.phys.tohoku.ac.jp
Hansen, Hubert (Université Claude Bernard Lyon 1) hansen@ipnl.in2p3.fr
Holzmann, Markus (LPMMC Grenoble) markus@lptl.jussieu.fr
Horiuchi, Hisashi (RCNP, Osaka University) horiuchi@rcnp.osaka-u.ac.jp
Horiuchi, Wataru (GSI Darmstadt) w.horiuchi@gsi.de
Hupin, Guillaume (GANIL, Caen) hupin@ganil.fr
Jin, Meng (Huazhong Normal University, Wuhan) jinm@iopp.ccnu.edu.cn
Kamimura, Masayasu (RIKEN Nishina Center, Wako) mkamimura@riken.jp
Kanada-En'yo, Yoshiko (Kyoto University) yenyo@ruby.scphys.kyoto-u.ac.jp
Kato, Kiyoshi (Hokkaido University, Sapporo) kato@nucl.sci.hokudai.ac.jp
Kawabata, Takahiro (Kyoto University) kawabata@scphys.kyoto-u.ac.jp
Khan, Elias (IPN Orsay) khan@ipno.in2p3.fr
Khodel, Victor (Kurchatov Institute, Moscow) vak@wuphys.wustl.edu
Kimura, Masaaki (Hokkaido University, Sapporo) masaaki@nucl.sci.hokudai.ac.jp
Lacroix, Denis (GANIL, Caen) lacroix@ganil.fr
Liang, Haozhao (Peking University, Beijing) hzliang@pku.edu.cn
Margueron, Jérôme (IPN Orsay) jerome.margueron@ipno.in2p3.fr
Massot, Elisabeth (IPN Orsay) massot@ipno.in2p3.fr
Meng, Jie (Peking University, Beijing) mengj@pku.edu.cn
Miller, Tomasz (Warsaw University of Technology) millert@student.mini.pw.edu.pl
Moghrabi, Kassem (IPN Orsay) moghrabi@ipno.in2p3.fr
Napolitani, Paolo (IPN Orsay) napolita@ipno.in2p3.fr
Neff, Thomas (GSI Darmstadt) t.neff@gsi.de
Nguyen, Van Giai (IPN Orsay) nguyen@ipno.in2p3.fr
Otsuka, Takaharu (University of Tokyo) otsuka@phys.s.u-tokyo.ac.jp
Pillet, Nathalie-Marie (CEA-DAM, Arpajon) nathalie.pillet@cea.fr
Qi, Chong (KTH Stockholm) chongq@kth.se
Ramanan, Sunethra (ICTP Trieste) sramanan@ictp.it
Ring, Peter (TU Munich) ring@ph.tum.de
Rios Huguet, Arnau (University of Surrey) a.rios@surrey.ac.uk
Rivet, Marie-France (IPN Orsay) rivet@ipno.in2p3.fr
Robledo, Luis (Universidad Autonoma de Madrid) luis.robledo@uam.es
Roca Maza, Xavier (INFN Milano) xavier.roca.maza@mi.infn.it
Röpke, Gerd (Rostock University) gerd.roepke@uni-rostock.de
Rowley, Neil (IPN Orsay) rowley@ipno.in2p3.fr
Sagawa, Hiroyuki (University of Aizu) sagawa@u-aizu.ac.jp
Sandulescu, Nicolae (IFIN-HH, Bucharest) sandulescu@theory.nipne.ro
Schuck, Peter (IPN Orsay) schuck@ipno.in2p3.fr
Sedrakian, Armen (Goethe Universität Frankfurt) sedrakian@th.physik.uni-frankfurt.de
Severyukhin, Alexey (JINR Dubna) sever@theor.jinr.ru
Sogo, Takaaki (IPN Orsay) sogo@ipno.in2p3.fr
Somà, Vittorio (CEA Saclay) vittorio.soma@cea.fr
Strinati, Giancarlo (Università di Camerino) giancarlo.strinati@gmail.com
Suhara, Tadahiro (Kyoto University) suhara@ruby.scphys.kyoto-u.ac.jp
Sukhoruchkin, Sergei (Petersburg Nuclear Physics Institute) sergeis@pnpi.spb.ru
Suzuki, Toru (Tokyo Metropolitan University) suzukitr@tmu.ac.jp
Suzuki, Toshio (Nihon University, Tokyo) suzuki@chs.nihon-u.ac.jp
Tarpanov, Dimitar (INRNE, Sofia) dimitert@yahoo.co.uk
Tohsaki-Suzuki, Akihiro (Osaka University) tohsaki@rcnp.osaka-u.ac.jp
Typel, Stefan (GSI Darmstadt) s.typel@gsi.de
Uesaka, Tomohiro (University of Tokyo) uesaka@cns.s.u-tokyo.ac.jp
Urban, Michael (IPN Orsay) urban@ipno.in2p3.fr
Van Isacker, Piet (GANIL, Caen) isacker@ganil.fr
Vigezzi, Enrico (INFN Milano) vigezzi@mi.infn.it
Viñas, Xavier (Universitat de Barcelona) xavier@ecm.ub.es
Vinh Mau, Nicole (IPN Orsay) vinhmau@ipno.in2p3.fr
Vitturi, Andrea (INFN Padova) vitturi@pd.infn.it
Von Oertzen, Wolfram (Helmholtz Zentrum Berlin) oertzen@helmholtz-berlin.de
Wambach, Jochen (Technische Universität Darmstadt) jochen.wambach@physik.tu-darmstadt.de
Wlazłowski, Gabriel (Warsaw University of Technology) gabrielw@if.pw.edu.pl
Yamada, Taiichi (Kanto Gakuin University, Yokohama) yamada@kanto-gakuin.ac.jp
Yoshida, Kenichi (RIKEN Nishina Center, Wako) kenichi.yoshida@riken.jp
Yoshida, Satoshi (Hosei University, Tokyo) s_yoshi@i.hosei.ac.jp
EDITORIAL: Advances in Measurement Technology and Intelligent Instruments for Production Engineering
NASA Astrophysics Data System (ADS)
Gao, Wei; Takaya, Yasuhiro; Gao, Yongsheng; Krystek, Michael
2008-08-01
Measurement and instrumentation have long played an important role in Production Engineering, supporting both the traditional field of manufacturing and the new field of micro/nano-technology. Papers published in this special feature were selected and updated from those presented at The 8th International Symposium on Measurement Technology and Intelligent Instruments (ISMTII 2007) held at Tohoku University, Sendai, Japan, on 24-27 September 2007. ISMTII 2007 was organized by ICMI (The International Committee on Measurements and Instrumentation), the Japan Society for Precision Engineering (JSPE, Technical Committee of Intelligent Measurement with Nanoscale), the Korean Society for Precision Engineering (KSPE), the Chinese Society for Measurement (CSM) and Tohoku University. The conference was also supported by the Center for Precision Metrology at UNC Charlotte and the Singapore Institute of Manufacturing Technology. A total of 220 papers, including four keynote papers, were presented at ISMTII 2007, covering a wide range of topics, including micro/nano-metrology, precision measurement, online & in-process measurement, surface metrology, optical metrology & image processing, biomeasurement, sensor technology, intelligent measurement & instrumentation, uncertainty, traceability & calibration, and signal processing algorithms. The guest editors recommended publication of updated versions of some of the best ISMTII 2007 papers in this special feature of Measurement Science and Technology. The first two papers were presented at ISMTII 2007 as keynote papers. Takamasu et al from The University of Tokyo report on uncertainty estimation for coordinate metrology, formulating methods for estimating the uncertainties of a coordinate measuring system after calibration. Haitjema, from Mitutoyo Research Center Europe, treats the most often used interferometric measurement techniques (displacement interferometry and surface interferometry) and their major sources of error. 
Among the other papers, two are related to length measurement, which forms the basis of dimensional measurement. Schödel et al from the Physikalisch-Technische Bundesanstalt (PTB) describe the current state of thermal expansion measurements with PTB's Precision Interferometer, which are based on observing the absolute length of samples by phase-stepping interferometry. Meiners-Hagen et al, also from PTB, investigate an improved method for compensating the refractive index of air in length measurements by optical interferometry, in which the air pressure and humidity are measured. Three papers concern surface metrology. Song et al from NIST (the National Institute of Standards and Technology) report topography measurement for determining the decay factors in surface replication of the Standard Casing to support ballistics measurements in the US. Takahashi et al from the University of Tokyo present a lateral resolution improvement for a total internal reflection fluorescence microscope that combines standing evanescent light with a scattering-distribution retrieval algorithm based on successive approximation. X Liu et al from Warwick University report a new investigation into how surface topography and friction affect touch-feel perception, with results showing that both the measured roughness and the friction coefficient correlate strongly with rough-smooth and grippy-slippery feelings. Measurement algorithms and calibration are described in the following three papers. Hessling from the SP Technical Research Institute of Sweden presents a new general framework for the dynamic evaluation of measurement systems, which separates physical experiments, analysis and signal processing into successive evaluation steps. Wübbeler et al from PTB illustrate the Monte Carlo method required for the numerical calculations of the probability-density-function approach, which has been proposed for the evaluation of measurement uncertainty. 
Neuschaefer-Rube et al, also from PTB, present procedures and standards to test tactile and optical microsensors and micro-computed tomography systems; these tests are similar to the established tests for classical coordinate measuring machines and assess local and global sensor characteristics. The last three papers are related to micro/nano-metrology and intelligent instrumentation. Jiang et al from Tohoku University describe the fabrication of piezoresistive nanocantilevers for ultra-sensitive force detection using spin-out diffusion, EB lithography and FAB etching. Y-C Liu et al from National Taiwan University develop an economical and highly sensitive optical accelerometer using a commercial optical pickup head. Michihata et al from Osaka University experimentally investigate the position-sensing properties and accuracy of a laser trapping probe for a nano-coordinate measuring machine. As guest editors, we believe that this special feature presents the newest advances in measurement technology and intelligent instruments, from basic research to applied systems, for Production Engineering. We would like to thank all the authors for their great contributions to this special feature and the referees for their careful reviews of the papers. We would also like to express our thanks and appreciation to Professor P Hauptmann, Editor-in-Chief of MST, for his kind offer to publish selected ISMTII 2007 papers in MST, and to the publishing staff of MST for their dedicated efforts that have made this special feature possible.
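The Monte Carlo approach to uncertainty evaluation mentioned above (Wübbeler et al) can be illustrated with a short sketch in the spirit of GUM Supplement 1: input distributions are propagated through a measurement model and the standard uncertainty is read off the output sample. The model Y = X1 * X2 and its input distributions here are toy assumptions, not taken from the papers discussed.

```python
# Sketch: Monte Carlo propagation of distributions for uncertainty
# evaluation. The measurement model and inputs are illustrative.
import random
import statistics

def monte_carlo_uncertainty(model, draws, n=100_000):
    """Propagate input samples through `model`; return (mean, std. uncertainty)."""
    samples = [model(*(draw() for draw in draws)) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

# Toy model Y = X1 * X2 with X1 ~ N(10, 0.1) and X2 ~ N(2, 0.05):
mean, u = monte_carlo_uncertainty(
    lambda x1, x2: x1 * x2,
    [lambda: random.gauss(10.0, 0.1), lambda: random.gauss(2.0, 0.05)],
)
print(f"y = {mean:.2f} +/- {u:.2f}")
```

The result can be checked against the linearized GUM propagation, u(y)^2 = (x2*u1)^2 + (x1*u2)^2, which gives about 0.54 here.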
Distributed computations in a dynamic, heterogeneous Grid environment
NASA Astrophysics Data System (ADS)
Dramlitsch, Thomas
2003-06-01
In order to meet the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's existing computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing, or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The speed of networks is increasing even faster than the average processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Owing to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with problems that do not occur in classical parallel computing, such as heterogeneity, authentication and slow networks, to mention only a few. Some of these problems, e.g. the allocation of distributed resources and the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite-differencing codes are implicitly designed for execution on a single supercomputer or cluster. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we close this gap. 
In this thesis, we will: show that the execution of classical parallel codes in Grid environments is possible but very slow; analyse this poor performance and identify bottlenecks in communication, unnecessary overhead and other causes of low performance; develop new, advanced parallelization algorithms that are aware of the Grid environment, generalizing the traditional parallelization schemes; implement and test these new methods, replacing and comparing them with the classical ones; and introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment. The higher the performance one can achieve for a single application by manual tuning for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis as well as in our implementation we tried to keep a balance between high performance and generality. None of our changes directly affect code at the application level, which makes our algorithms applicable to a whole class of real-world applications. The implementation of our work is done within the Cactus framework using the Globus Toolkit, since we consider these the most reliable and advanced programming frameworks for supporting computations in Grid environments. Nevertheless, we tried to be as general as possible: all methods and algorithms discussed in this thesis are independent of Cactus or Globus. The ever denser and faster networking of computers and computing centres via high-speed networks enables a new kind of distributed scientific computing, in which geographically distant computing resources can be combined into a single whole. The resulting virtual supercomputer, itself composed of several large machines, can be used to compute problems for which the individual machines are too small. 
The problems that cannot be solved numerically with today's computing capacities span all areas of modern science, from astrophysics, molecular physics, bioinformatics and meteorology to number theory and fluid dynamics, to name only a few fields. Depending on the type of problem and the solution method, such "meta-computations" are more or less difficult. In general, such computations become harder and less efficient the more communication takes place between the individual processes (or processors). The reason is that the bandwidths and latencies between two processors on the same supercomputer or cluster are, respectively, two to four orders of magnitude higher and lower than between processors hundreds of kilometres apart. Nevertheless, a time is now beginning in which it is possible to carry out computations on such virtual supercomputers even with communication-intensive programs. One large class of communication- and computation-intensive programs is the one concerned with solving differential equations by means of finite differences. It is precisely this class of programs, and its operation on a virtual supercomputer, that is treated in this dissertation. Methods for carrying out such distributed computations more efficiently are developed, analysed and implemented. The focus lies on analysing existing, classical parallelization algorithms and extending them so that they use available information about machines and networks (e.g. provided by the Globus Toolkit) for more efficient parallelization. As far as we know, such additional information is hardly used in relevant programs, since the majority of parallelization algorithms were implicitly designed for execution on single supercomputers or clusters.
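One core idea discussed in this thesis summary, making the parallelization aware of resource information rather than splitting work evenly, can be illustrated with a toy sketch. The following is not the thesis's algorithm: it is a minimal, hypothetical 1-D domain decomposition in which each host receives a share of the finite-difference grid proportional to an assumed speed rating, so that fast nodes do more work and slow WAN-connected nodes do less.

```python
# Sketch (illustrative, not from the thesis): speed-weighted 1-D domain
# decomposition for a finite-difference grid distributed over
# heterogeneous Grid resources. The speed ratings are hypothetical inputs,
# e.g. derived from resource information published by a Grid service.

def decompose(n_points: int, speeds: list[float]) -> list[int]:
    """Assign grid points to hosts proportionally to their speed ratings."""
    total = sum(speeds)
    shares = [int(n_points * s / total) for s in speeds]
    shares[0] += n_points - sum(shares)  # give the rounding remainder to host 0
    return shares

# Two fast cluster nodes and one slow remote node:
print(decompose(1000, [4.0, 4.0, 1.0]))
```

A classical even split would give each host 333-334 points regardless of its speed; the weighted split keeps the slow node from stalling every ghost-zone synchronization.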
Charged systems in bulk and at interfaces
NASA Astrophysics Data System (ADS)
Moreira, André Guérin
2001-05-01
One of the rules of thumb of colloid and surface physics is that most surfaces are charged when in contact with a solvent, usually water. This is the case, for instance, for charge-stabilized colloidal suspensions, where the surfaces of the colloidal particles are charged (usually with a charge of hundreds to thousands of e, the elementary charge), for monolayers of ionic surfactants sitting at an air-water interface (where the water-loving head groups become charged by releasing counterions), and for bilayers containing charged phospholipids (as in cell membranes). In this work, we look at some model systems that, although simplified versions of reality, are expected to capture some of the physical properties of real charged systems (colloids and electrolytes). We initially study the simple double layer, composed of a charged wall in the presence of its counterions. The charges at the wall are smeared out, and the dielectric constant is the same everywhere. The Poisson-Boltzmann (PB) approach gives asymptotically exact counterion density profiles around charged objects in the weak-coupling limit of systems with low-valent counterions, surfaces with low charge density and high temperature (or small Bjerrum length). Using Monte Carlo simulations, we obtain the profiles around the charged wall and compare them with both Poisson-Boltzmann theory (in the low-coupling limit) and the novel strong-coupling (SC) theory in the opposite limit of high couplings. In the latter limit, the simulations show that SC theory indeed leads to asymptotically correct density profiles. We also compare the Monte Carlo data with previously calculated corrections to the Poisson-Boltzmann theory, and we discuss in detail the methods used to perform the computer simulations. After studying the simple double layer in detail, we introduce a dielectric jump at the charged wall and investigate its effect on the counterion density distribution.
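The two limiting regimes compared above have simple closed-form counterion profiles at a single smeared-out charged wall, which makes the comparison easy to reproduce numerically. In rescaled units (distance measured in Gouy-Chapman lengths, density measured relative to the contact density fixed by the contact-value theorem), the PB profile decays algebraically while the SC profile decays exponentially, and both carry the same integrated counterion charge. A minimal sketch of these standard limiting laws, not the thesis's simulation code:

```python
import math

def pb_profile(z):
    """Poisson-Boltzmann (Gouy-Chapman) counterion density at a charged wall,
    in rescaled units: z in Gouy-Chapman lengths, density relative to the
    contact density 2*pi*l_B*sigma^2 (contact-value theorem)."""
    return 1.0 / (1.0 + z) ** 2

def sc_profile(z):
    """Strong-coupling limit: an exponential decay with the same contact
    value and the same integrated charge (electroneutrality)."""
    return math.exp(-z)

def integrate(f, a, b, n=100000):
    """Plain trapezoidal rule, good enough to check the normalization."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

if __name__ == "__main__":
    # Both limits satisfy the contact-value theorem at z = 0 ...
    print(pb_profile(0.0), sc_profile(0.0))          # 1.0 1.0
    # ... and carry the same total counterion charge per unit area;
    # the PB tail is algebraic, so its integral converges slowly.
    print(round(integrate(pb_profile, 0, 1000), 3))  # 0.999
    print(round(integrate(sc_profile, 0, 50), 3))    # 1.0
```

The crossover between these two limiting forms as the coupling parameter grows is exactly what the Monte Carlo simulations described in the abstract probe.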
As we will show, the Poisson-Boltzmann description of the double layer remains a good approximation at low coupling values, while the strong-coupling theory is shown to lead to the correct density profiles close to the wall (at all couplings). For very large couplings, only systems where the difference between the dielectric constants of the wall and of the solvent is small are shown to be well described by SC theory. Another experimentally relevant modification of the simple double layer is to make the charges at the plane discrete. The counterions are still assumed to be point-like, but we constrain the distance of approach between ions in the plane and counterions to a minimum distance D. The ratio between D and the distance between neighboring ions in the plane is, as we will see, one of the important quantities determining the influence of the discrete nature of the wall charges on the density profiles. Another parameter that plays an important role, as in the previous case, is the coupling: as we will demonstrate, systems with higher coupling are more subject to discretization effects than systems with a low coupling parameter. After studying the isolated double layer, we look at the interaction between two double layers. The system is composed of two equally charged walls at distance d, with the counterions confined between them. The charge at the walls is smeared out and the dielectric constant is the same everywhere. Using Monte Carlo simulations we obtain the inter-plate pressure in the global parameter space, and the pressure is shown to be negative (attraction) under certain conditions. The simulations also show that the equilibrium plate separation (where the pressure changes from attractive to repulsive) exhibits a novel unbinding transition. We compare the Monte Carlo results with the strong-coupling theory, which is shown to describe well the bound states of systems with moderate and high couplings.
The regime where the two walls are very close to each other is also shown to be well described by the SC theory. Finally, using a field-theoretic approach, we derive the exact low-density ("virial") expansion of a binary mixture of positively and negatively charged hard spheres (two-component hard-core plasma, TCPHC). The free energy obtained is valid for systems where the diameters d_+ and d_- and the charge valences q_+ and q_- of the positive and negative ions are unconstrained, i.e., the same expression can be used to treat dilute salt solutions (where typically d_+ ~ d_- and q_+ ~ q_-) as well as colloidal suspensions (where the difference in size and valence between macroions and counterions can be very large). We also discuss some applications of our results. One of the rules of thumb of colloid and surface physics is that most surfaces are charged when in contact with a solvent, usually water. This is the case, for example, for charge-stabilized colloidal suspensions, in which the surface of the colloidal particles is charged (usually with a charge of hundreds to thousands of elementary charges), for monolayers of ionic surfactants sitting at an air-water interface (where the hydrophilic head groups become charged by releasing counterions), and for bilayers containing charged phospholipids (such as cell membranes). In this work we consider several model systems which, although simplified versions of reality, can be expected to capture some of the physical properties of real charged systems (colloids and electrolytes).
Robust boosting via convex optimization
NASA Astrophysics Data System (ADS)
Rätsch, Gunnar
2001-12-01
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it can predict the associated label of unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules, also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predicts well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning-theoretic guarantees on the prediction performance on unseen examples. Recently, large-margin classification techniques emerged as a practical result of the theory of generalization, in particular boosting and support vector machines. A large margin implies good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum-margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large-scale constrained optimization problems whose solutions can be well characterized. To show this, we relate boosting methods to methods known from mathematical optimization and derive convergence guarantees for a quite general family of boosting algorithms. o How can boosting be made noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft-margin idea from support vector learning to boosting. We develop theoretically motivated, regularized algorithms that exhibit high noise robustness.
o How can boosting be adapted to regression problems? Boosting methods were originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression. We show that these leveraging algorithms have desirable theoretical and practical properties. o Can boosting techniques be useful in practice? The presented theoretical results are accompanied by simulation results, either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis and a drug discovery process. (Note: the author was awarded the Michelson Prize of the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best dissertation of 2001/2002.) This thesis considers statistical learning problems. Learning machines extract information from a given set of training examples so that they are able to predict properties, e.g. a class membership, of previously unseen examples. We consider the case in which the resulting classification or regression rule is composed of simple rules, the base hypotheses. So-called boosting algorithms iteratively generate a weighted sum of base hypotheses that predicts well on unseen examples. The thesis addresses the following issues: o The statistical learning theory suitable for analyzing boosting methods. We study learning-theoretic guarantees for bounding the prediction quality on unseen examples. Recently, so-called large-margin classification techniques have emerged as a practical result of this theory, in particular boosting and support vector machines. A large margin implies a high prediction quality of the decision rule. We therefore analyze how large the margin is in boosting and propose an improved algorithm that efficiently generates rules with maximum margin. o What is the connection between boosting and techniques of convex optimization? To analyze the properties of the resulting classification or regression rules, it is very important to understand whether and under which conditions iterative algorithms such as boosting converge. We show that such algorithms can be used to solve very large constrained optimization problems whose solutions can be well characterized. To this end, connections to the field of convex optimization are exhibited and exploited to give convergence guarantees for a large family of boosting-like algorithms. o Can boosting be made robust against measurement errors and outliers in the data? A problem of previous boosting methods is their relatively high sensitivity to measurement inaccuracies and errors in the training data. To remedy this, the so-called soft-margin idea, already used in support vector learning, is transferred to boosting. This leads to theoretically well-motivated, regularized algorithms that exhibit a high degree of robustness. o How can the applicability of boosting be extended to regression problems? Boosting methods were originally developed for classification problems. To extend their applicability to regression, the previous convergence results are used and new boosting-like regression algorithms are developed. We show that these algorithms have good theoretical and practical properties. o Is boosting practically applicable? The theoretical results presented are accompanied by simulation results, either to illustrate particular properties of the algorithms or to show that they indeed work well in practice and are directly usable. The practical relevance of the developed methods is illustrated by the analysis of chaotic time series and by industrial applications such as a power-consumption monitoring system and the development of new drugs.
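The iterative scheme summarized in the abstract, reweighting the training sample and combining weighted base hypotheses, can be made concrete with a textbook AdaBoost on one-dimensional threshold stumps. This is a generic illustration on a hypothetical toy data set, not one of the regularized or regression algorithms developed in the thesis:

```python
import math

def stumps(xs):
    """All 1-D threshold stumps h(x) = s if x > t else -s (a toy base class)."""
    for t in sorted(set(xs)):
        for s in (+1, -1):
            yield (s, t)

def stump_predict(h, x):
    s, t = h
    return s if x > t else -s

def adaboost(xs, ys, rounds):
    """Minimal AdaBoost: each round, pick the stump with the lowest weighted
    error e and weight it by alpha = 0.5 * ln((1 - e) / e)."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best, best_err = None, 1.0
        for h in stumps(xs):
            err = sum(wi for wi, x, y in zip(w, xs, ys)
                      if stump_predict(h, x) != y)
            if err < best_err:
                best, best_err = h, err
        if best_err == 0.0 or best_err >= 0.5:
            break  # perfect or useless weak learner
        alpha = 0.5 * math.log((1.0 - best_err) / best_err)
        ensemble.append((alpha, best))
        # Upweight mistakes, downweight correct examples, renormalize.
        w = [wi * math.exp(-alpha * y * stump_predict(best, x))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(h, x) for a, h in ensemble)
    return 1 if score >= 0 else -1

if __name__ == "__main__":
    # +++---++ pattern: no single stump is perfect, three boosted stumps are.
    xs = [1, 2, 3, 4, 5, 6, 7, 8]
    ys = [1, 1, 1, -1, -1, -1, 1, 1]
    f = adaboost(xs, ys, rounds=3)
    print([predict(f, x) for x in xs])  # [1, 1, 1, -1, -1, -1, 1, 1]
```

Each round upweights the examples the current ensemble handles badly, so the next stump concentrates on them; the margins of the combined rule on the training points are the quantity that the learning-theoretic analysis in the thesis relates to generalization performance.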
Campo, J L; Cobos, P
1994-01-12
Four lines of Tribolium castaneum were selected, in each of three replicates, for an increased ratio of (pupal-larval) to (adult-larval) weight gain, using selection for increased (pupal-larval) weight gain (PL), selection for decreased (adult-larval) weight gain (AL), direct selection for the ratio (R), and a linear selection index of larval, pupal and adult weights (I), respectively, for four generations. The linear index was calculated with economic weights of m(2)-m(3), m(3)-m(1) and m(1)-m(2), respectively, with m(1), m(2) and m(3) being the means for larval, pupal and adult weights. Selection to increase the ratio is considered a method of maximizing the mean response in (adult-larval) weight while controlling the response in (pupal-adult) weight, and a form of antagonistic selection to increase the weight gain during a given age period relative to the gain in another age period. Larval, pupal and adult weights were measured at 14, 21 and 28 days after adult emergence, respectively. The selected proportion was 20% in all lines. The response observed for the ratio differed significantly among lines (p < 0.01), with the I and AL lines showing the greatest responses. Line R was less effective in improving the objective of selection, while line PL appeared inappropriate. The observed responses for the numerator and denominator weight gains were positive in line PL and negative in the AL, R and I lines. All lines apart from PL decreased the (adult-larval) weight while holding (pupal-adult) weight constant. Larval weight had the greatest influence on the response for the objective of selection. The results for this ratio greater than 1 are compared with others' results for ratios smaller than 1, in which indirect selection for an increased numerator is the more efficient alternative to the selection index.
ZUSAMMENFASSUNG: Efficiency of selection methods for improving the ratio of pupal-larval to adult-larval weight gain in Tribolium. In each of three replicates, four lines of Tribolium castaneum were studied to increase the ratio (pupal weight - larval weight)/(adult weight - larval weight): line PL was selected to increase the difference (pupal weight - larval weight), line AL to decrease the difference (adult weight - larval weight), line R directly for the ratio, and line I via a linear index computed from larval, pupal and adult weights, over four generations. The linear index was calculated with the weights (m(2)-m(3)), (m(3)-m(1)) and (m(1)-m(2)), respectively, where m(1), m(2) and m(3) are the mean larval, pupal and adult weights. Selection to increase the ratio is a method of maximizing the mean response in adult-larval weight gain, and a form of antagonistic selection for increasing weight gain during one growth period relative to the gain during another growth period. The selected proportion was 20%. The observed response in the ratio differed significantly between lines (p < 0.01), the largest responses being observed in lines I and AL. Line R was least effective, while line PL appeared unsuited to improving the objective of selection. The responses observed in numerator and denominator were positive in line PL and negative in the other three lines. The results for this ratio greater than 1 were compared with those of other experiments for ratios smaller than 1, in which selection to increase the numerator is the most efficient alternative to the selection index.
RESUMEN: Efficiency of selection methods for increasing the ratio between pupal-larval weight gain and adult-larval weight gain in Tribolium. Four lines of Tribolium castaneum were selected in each of three replicates to increase the ratio (pupal weight - larval weight)/(adult weight - larval weight): line PL was selected to increase the difference (pupal weight - larval weight), line AL to decrease the difference (adult weight - larval weight), line R directly for the ratio, and line I via a linear index based on larval, pupal and adult weights, over four generations. The linear index was calculated with economic weights of (m(2)-m(3)), (m(3)-m(1)) and (m(1)-m(2)), respectively, where m(1), m(2) and m(3) are the mean larval, pupal and adult weights. Selection to increase this ratio is a method of maximizing the response in (adult weight - larval weight) while controlling the response in (pupal weight - adult weight), and is a form of antagonistic selection to increase weight gain during one age period relative to the gain during another. The selected proportion was 20%. The observed response in the ratio differed significantly between lines (p < 0.01), lines I and AL showing the largest responses. Line R was less effective, while line PL appeared unsuited to improving the objective of selection. The observed responses in numerator and denominator were positive in line PL and negative in the other three lines. The results for this ratio greater than 1 are compared with those of other experiments for ratios smaller than 1, in which selection to increase the numerator is the most efficient alternative to the selection index. 1994 Blackwell Verlag GmbH.
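The linear index used for line I above is simply a weighted sum of the three trait records, with economic weights built from the trait means. A small sketch, assuming hypothetical weight records; only the weight construction (m(2)-m(3), m(3)-m(1), m(1)-m(2)) and the 20% selected proportion come from the abstract:

```python
def index_weights(larval, pupal, adult):
    """Economic weights (m2-m3, m3-m1, m1-m2) built from the trait means
    m1 (larval), m2 (pupal), m3 (adult), as stated in the abstract."""
    m1 = sum(larval) / len(larval)
    m2 = sum(pupal) / len(pupal)
    m3 = sum(adult) / len(adult)
    return (m2 - m3, m3 - m1, m1 - m2)

def select_top(larval, pupal, adult, proportion=0.20):
    """Rank candidates on the index I = w1*L + w2*P + w3*A and keep the top
    fraction (the 20% selected proportion used in the experiment)."""
    w1, w2, w3 = index_weights(larval, pupal, adult)
    scores = [(w1 * l + w2 * p + w3 * a, i)
              for i, (l, p, a) in enumerate(zip(larval, pupal, adult))]
    k = max(1, int(round(proportion * len(scores))))
    return [i for _, i in sorted(scores, reverse=True)[:k]]

if __name__ == "__main__":
    # Hypothetical weight records (mg) for five candidates.
    larval = [2.0, 2.4, 2.1, 2.5, 2.2]
    pupal  = [2.9, 2.6, 3.0, 2.7, 2.8]
    adult  = [2.4, 2.6, 2.3, 2.7, 2.5]
    print(select_top(larval, pupal, adult))  # [2]
```

One property worth noting: the three economic weights sum to zero by construction, so the index is unchanged if a constant is added to all three weight records of a candidate; only the contrasts between growth stages matter.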
NASA Astrophysics Data System (ADS)
Fu, Wei-En
2014-03-01
Proceedings of the 14th International Conference, Taipei, Taiwan, 17th-21st June 2013. Organized by: Center for Measurement Standards/Industrial Technology Research Institute; Mechanical and Systems Research Laboratories/Industrial Technology Research Institute; National Taiwan University; National Cheng Kung University; National Taiwan University of Science and Technology; National Tsing Hua University.
Greetings from the Chairman of the International Programme Committee, Tom Thomas
When Professor Ken Stout and I founded this series of conferences in the United Kingdom more than thirty years ago, we did not anticipate its longevity or its success. Since that first meeting at Leicester, the conference has often been held in England, but also in several other European countries: France, Poland and Sweden, as well as in the United States. Ken, sadly no longer with us, would be proud of what it has achieved and come to represent. Generations of researchers have presented their new ideas and innovations here, which are now embodied in many textbooks and international standards. But this conference in 2013 marks a new departure and perhaps a new future. For the first time it is being held in Asia, reflecting the historic rise of the economies of the Pacific Rim, which are adding modern technology to their long-standing traditions of ordered insight and precise craftsmanship. Many of you have travelled far to attend this meeting, and we hope you will feel your trouble has been rewarded. We have an excellent selection of papers from many of the world's experts, embodying the consolidation of tested ideas as well as the latest advances in the subject. These will be set in context by a glittering array of keynote and invited speakers.
On behalf of the International Programme Committee, I am glad to acknowledge the hard work of the members of the Local Organising Committee in putting the programme together and making all the arrangements, and to accept their hospitality. It is my privilege and pleasure to welcome you all to the 14th International Conference on Metrology and Properties of Engineering Surfaces here in Taipei.
Tom Thomas, Halmstad, 1st June 2013
Greetings from the Chairman of the Local Organizing Committee, Victor Lin
It is the great honor of the Center for Measurement Standards (CMS), the metrology group of the Industrial Technology Research Institute (ITRI), to host the 14th International Conference on Metrology and Properties of Engineering Surfaces (Met & Props 2013) from 17-21 June 2013 in Taipei, Taiwan. In collaboration with four local universities, National Taiwan University (NTU), National Cheng-Kung University (NCKU), National Taiwan University of Science and Technology (NTUST) and National Tsing-Hua University (NTHU), we have spent more than a year preparing this Conference since its approval by the International Programme Committee (IPC). With the guidance of the IPC, we were able to carry out the laborious but important process of paper selection and review from more than 100 submissions, and to maintain the tradition of gathering high-quality, state-of-the-art papers. In the end, more than 65 full papers are collected in the programme (oral and poster), and over 120 surface metrologists from 17 countries (or economies) will attend the Conference. As stated in the preface by Professor Thomas, this series of conferences was founded by Tom and the late Professor Ken Stout in the United Kingdom more than thirty years ago. I was lucky to join Ken's research group in Birmingham and to start my journey in surface metrology in 1989, with financial support from ITRI.
With the encouragement of Professor Liam Blunt and the endeavors of my colleagues, we are able to hold the Conference for the first time in emerging Asia, and to ''carry on the heritage and pave the way to the future'' (a Chinese proverb) in surface metrology. Taiwan is also known as Formosa, from the Portuguese Ilha Formosa, which means ''Beautiful Island''. Besides the inspiring scientific arrangements, I encourage you to taste Taiwan's wonderful gourmet cuisine and to explore the beauty of the sweet-potato-shaped island. I wish you a joyful, fruitful and memorable stay.
Victor TY Lin, PhD, Chairman, Local Organizing Committee, Met & Props 2013
International Programme Committee:
- Professor Mohamed El Mansori (Arts et Metiers ParisTech, France)
- Professor H Zahouani (Ecole Centrale de Lyon, France)
- Professor B-G Rosen (Halmstad University, Sweden)
- Professor Tom R Thomas (Halmstad University, Sweden)
- Professor Liam Blunt (University of Huddersfield, UK)
- Professor Richard Leach (National Physical Laboratory, UK)
- Professor Chris Brown (Worcester Polytechnic Institute, USA)
- Dr Jia-Ruey Duann (Center for Measurement Standards, ITRI, Taiwan)
International Scientific Committee:
- Professor H Zahouani (Ecole Centrale de Lyon, France)
- Dr Rolf Krüger-Sehm (Physikalisch-Technische Bundesanstalt, Germany)
- Professor Pawel Pawlus (Rzeszów University of Technology, Poland)
- Professor B-G Rosen (Halmstad University, Sweden)
- Professor Tom R Thomas (Halmstad University, Sweden)
- Professor Liam Blunt (University of Huddersfield, UK)
- Professor Derek Chetwynd (University of Warwick, UK)
- Professor Jane Jiang (University of Huddersfield, UK)
- Professor Richard Leach (National Physical Laboratory, UK)
- Professor Paul Scott (University of Huddersfield, UK)
- Dr Andrew Yacoot (National Physical Laboratory, UK)
- Professor Chris Brown (Worcester Polytechnic Institute, USA)
- Dr Chris Evans (University of North Carolina at Charlotte, USA)
- Professor Jay Raja (University of North Carolina at Charlotte, USA)
- Dr Ted Vorburger (National Institute of Standards and Technology, USA)
- Dr Andrew Baker (National Measurement Institute, Australia)
- Professor David Lee Butler (Nanyang Technological University, Singapore)
- Dr Benny Cheung (The Hong Kong Polytechnic University, China)
- Professor Yetai Fei (Hefei University of Technology, China)
- Dr Kazuya Naoi (National Metrology Institute of Japan, Japan)
- Dr Heui-Jae Pahk (SNU Precision Co. Ltd., Korea)
- Professor Jiu-Bin Tan (Harbin Institute of Technology, China)
- Ms Siew-Leng Tan (National Metrology Centre (NMC/A*STAR), Singapore)
- Mr A Tonmueanwai (National Institute of Metrology, Thailand (NIMT), Thailand)
- Professor Kazuhisa Yanagi (Nagaoka University, Japan)
Local Organizing Committee:
- Dr Victor Tzeng-Yow Lin (Center for Measurement Standards, ITRI, Taiwan)
- Professor Kuang-Chao Fan (National Taiwan University, Taiwan)
- Professor Jen-Fin Lin (ASME Fellow, National Cheng Kung University, Taiwan)
- Professor Chao-Chang Chen (National Taiwan University of Science and Technology, Taiwan)
- Professor Shih-Chieh Lin (National Tsing Hua University, Taiwan)
- Professor Liang-Chia Chen (National Taiwan University, Taiwan)
- Professor Fang-Jung Shiou (National Taiwan University of Science and Technology, Taiwan)
- Professor Chun-Hui Chung (National Taiwan University of Science and Technology, Taiwan)
- Professor Pin-Chuan Chen (National Taiwan University of Science and Technology, Taiwan)
- Dr Wen-En Fu (Center for Measurement Standards, ITRI, Taiwan)
Genetic study of Andalusia's ovine and caprine breeds.
Rodero, E; Haba, M R; Rodero, A
1997-01-12
Two breeds of Andalusian sheep, 'Grazalema Merino' and 'Lebrijan Churro', and two breeds of Andalusian goats, 'Andalusian White' and 'Andalusian Black', chosen in a previous study (Rodero et al. 1992a) as priority breeds for conservation, were studied. The systems used comprised ethnozootechnical characteristics as well as different biochemical-polymorphism variables. To differentiate farms within breeds, or breeds among themselves, different tests based on genic and genotypic frequencies were used: Wright's indices, mean heterozygosities, Wahlund's variances, the likelihood-ratio G test, etc. Cavalli-Sforza's genetic distance was also obtained. In the Andalusian Black and Grazalema Merino breeds, the Wahlund variances obtained were a result of selection, which has divided the breeds into distinct, spatially differentiated populations. The mean heterozygosities of the breeds do not differ much among themselves, but when each system is considered alone, the discrepancies between ethnic groups are substantial. Wright's F indices showed that in the Andalusian White and Grazalema Merino breeds genetic heterozygosities between the populations or herds studied can be deduced, but this is not possible in the Andalusian Black. The F(IS) values indicated that, despite the small size of the populations, inbreeding has been avoided, probably because of the entry of foreign sires. In none of the breeds is there a significant excess of heterozygosity. The genetic distances between flocks within breeds do not differ from those found between breeds.
RÉSUMÉ: We worked with different flocks of the Andalusian sheep breeds Grazalema Merino and Lebrijan Churro, and with the goat breeds Andalusian White and Andalusian Black, chosen among the Andalusian breeds as priorities for conservation in a previous study (Rodero et al. 1992a). The systems used in this work correspond to ethnozootechnical characters and to different biochemical-polymorphism variables. To differentiate flocks within breeds, or breeds among themselves, different tests based on genic and genotypic frequencies were used: Wright's indices, mean heterozygosities, Wahlund's variances, the likelihood-ratio G test, etc., as well as Cavalli-Sforza's distance. In conclusion, in the Andalusian Black and Grazalema Merino breeds the Wahlund variances obtained are a consequence of the action of selection, giving different populations with spatial differentiation. The mean heterozygosities of the breeds are similar, but when each system is considered separately, the differences between ethnic groups are important. Wright's F indices show that in the Andalusian White and Grazalema Merino breeds genetic heterozygosities between the populations or flocks analysed can be deduced; this is not possible in the Andalusian Black breed. The F(IS) values indicate that, despite the small size of the populations, inbreeding has been avoided, probably owing to the entry of external sires. In no breed is there a significant excess of heterozygosity. The genetic distances between flocks within breeds do not differ from those obtained between breeds.
RESUMEN: We worked with different herds of the Andalusian sheep breeds Merino de Grazalema and Churra Lebrijana, and with the goat breeds Blanca Serrana and Negra Serrana, chosen from among the breeds of Andalusia as priorities for conservation in a previous study (Rodero et al. 1992a). The systems used in this work correspond both to ethnozootechnical characters and to different biochemical-polymorphism variables. To differentiate herds within breeds, or herds among themselves, different tests based on genetic and genotypic frequencies were used: Wright's indices, mean heterozygosities, Wahlund's variances, the likelihood-ratio G test, etc. Cavalli-Sforza's genetic distances were also obtained. It is concluded that in the Negra Serrana and Merino de Grazalema breeds the Wahlund variances obtained are a consequence of the action of selection, which has divided the breeds into distinct, spatially differentiated populations. The mean heterozygosities of each breed do not differ much among themselves, but when each system is considered in isolation, the discrepancies between ethnic groups are marked. Wright's F indices show that, while in the Blanca Serrana and Merino de Grazalema breeds genetic heterozygosities between the populations or herds studied can be deduced, the same does not hold for the Negra Serrana breed. The F(IS) values seem to indicate that, despite the small size of the populations, inbreeding has been avoided, probably through the entry of external sires. In none of the breeds is there a significant excess of heterozygosity. The genetic distances between herds within breeds do not differ from those found between breeds.
ZUSAMMENFASSUNG: Work was carried out with different Andalusian stocks of the sheep breeds 'Grazalema Merino' and 'Lebrijan Churro' and the goat breeds 'Andalusian White' and 'Andalusian Black', selected from among the Andalusian breeds with a view to conservation. The systems used in this research correspond to characteristics relating both to ethnic traits and to the different variables of biochemical polymorphism. To distinguish stocks within breeds, or stocks among themselves, different tests based on the genetic and genotypic frequencies were used: Wright's indices, mean heterozygosities, Wahlund variances, the likelihood G test, etc. In addition, the genetic distances according to Cavalli-Sforza were calculated. It is concluded that in the Andalusian Black and Grazalema Merino the Wahlund variance is the result of selection activity that has divided the breeds into different, spatially distinct populations. The mean heterozygosities of each breed differ little from one another, but when each system is considered on its own, considerable discrepancies between the ethnic groups are found. Wright's indices reveal that in the Andalusian White and Grazalema Merino breeds genetic heterozygosities can be deduced between the populations or the stocks investigated; this is not the case in the Andalusian Black breed. The F(IS) values seem to indicate that, despite the small size of the populations, inbreeding was avoided, probably through sires coming from outside. In none of the breeds is there excessive heterozygosity. The genetic distances between the stocks do not differ from those between the breeds. 1997 Blackwell Verlag GmbH.
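The Wright F-indices and heterozygosities underlying the analysis above have standard single-locus estimators computed from genotype counts. A minimal sketch for one biallelic locus; the genotype counts in the example are hypothetical, and the study's actual estimates combine many loci and herds:

```python
def expected_het(p):
    """Expected heterozygosity for a biallelic locus with allele frequency p."""
    return 2.0 * p * (1.0 - p)

def f_statistics(herds):
    """Wright's F_IS, F_ST and F_IT for one biallelic locus.
    `herds` is a list of (n_AA, n_Aa, n_aa) genotype counts per herd."""
    sizes, freqs, h_obs = [], [], []
    for aa, ab, bb in herds:
        n = aa + ab + bb
        sizes.append(n)
        freqs.append((2 * aa + ab) / (2.0 * n))  # frequency of allele A
        h_obs.append(ab / float(n))              # observed heterozygote share
    total = sum(sizes)
    # Size-weighted averages over herds.
    hi = sum(h * n for h, n in zip(h_obs, sizes)) / total                # H_I
    hs = sum(expected_het(p) * n for p, n in zip(freqs, sizes)) / total  # H_S
    p_bar = sum(p * n for p, n in zip(freqs, sizes)) / total
    ht = expected_het(p_bar)                                             # H_T
    return 1.0 - hi / hs, 1.0 - hs / ht, 1.0 - hi / ht

if __name__ == "__main__":
    # Hypothetical counts (AA, Aa, aa) for two herds at allele
    # frequencies 0.8 and 0.2, each in Hardy-Weinberg proportions.
    fis, fst, fit = f_statistics([(32, 16, 2), (2, 16, 32)])
    print(round(fis, 3), round(fst, 3), round(fit, 3))  # 0.0 0.36 0.36
```

Because each herd is internally in Hardy-Weinberg proportions, F_IS comes out at zero (no inbreeding within herds), while the allele-frequency difference between herds produces a positive F_ST: the Wahlund effect, a heterozygote deficit relative to the pooled-population expectation.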
Light Dawns on Dark Gamma-ray Bursts
NASA Astrophysics Data System (ADS)
2010-12-01
Gamma-ray bursts are among the most energetic events in the Universe, but some appear curiously faint in visible light. The biggest study to date of these so-called dark gamma-ray bursts, using the GROND instrument on the 2.2-metre MPG/ESO telescope at La Silla in Chile, has found that these gigantic explosions don't require exotic explanations. Their faintness is now fully explained by a combination of causes, the most important of which is the presence of dust between the Earth and the explosion. Gamma-ray bursts (GRBs), fleeting events that last from less than a second to several minutes, are detected by orbiting observatories that can pick up their high-energy radiation. Thirteen years ago, however, astronomers discovered a longer-lasting stream of less energetic radiation coming from these violent outbursts, which can last for weeks or even years after the initial explosion. Astronomers call this the burst's afterglow. While all gamma-ray bursts [1] have afterglows that give off X-rays, only about half of them were found to give off visible light, with the rest remaining mysteriously dark. Some astronomers suspected that these dark afterglows could be examples of a whole new class of gamma-ray bursts, while others thought that they might all be at very great distances. Previous studies had suggested that obscuring dust between the burst and us might also explain why they were so dim. "Studying afterglows is vital to further our understanding of the objects that become gamma-ray bursts and what they tell us about star formation in the early Universe," says the study's lead author Jochen Greiner from the Max-Planck Institute for Extraterrestrial Physics in Garching bei München, Germany. NASA launched the Swift satellite at the end of 2004. From its orbit above the Earth's atmosphere it can detect gamma-ray bursts and immediately relay their positions to other observatories so that the afterglows can be studied.
In the new study, astronomers combined Swift data with new observations made using GROND [2] - a dedicated gamma-ray burst follow-up observation instrument, which is attached to the 2.2-metre MPG/ESO telescope at La Silla in Chile. In doing so, astronomers have conclusively solved the puzzle of the missing optical afterglow. What makes GROND exciting for the study of afterglows is its very fast response time - it can observe a burst within minutes of an alert coming from Swift using a special system called the Rapid Response Mode - and its ability to observe simultaneously through seven filters covering both the visible and near-infrared parts of the spectrum. By combining GROND data taken through these seven filters with Swift observations, astronomers were able to accurately determine the amount of light emitted by the afterglow at widely differing wavelengths, all the way from high energy X-rays to the near-infrared. The astronomers used this information to directly measure the amount of obscuring dust that the light passed through en route to Earth. Previously, astronomers had to rely on rough estimates of the dust content [3]. The team used a range of data, including their own measurements from GROND, in addition to observations made by other large telescopes including the ESO Very Large Telescope, to estimate the distances to nearly all of the bursts in their sample. While they found that a significant proportion of bursts are dimmed to about 60-80 percent of the original intensity by obscuring dust, this effect is exaggerated for the very distant bursts, letting the observer see only 30-50 percent of the light [4]. The astronomers conclude that most dark gamma-ray bursts are therefore simply those that have had their small amount of visible light completely stripped away before it reaches us. 
"Compared to many instruments on large telescopes, GROND is a low cost and relatively simple instrument, yet it has been able to conclusively resolve the mystery surrounding dark gamma-ray bursts," says Greiner. Notes [1] Gamma-ray bursts lasting longer than two seconds are referred to as long bursts and those with a shorter duration are known as short bursts. Long bursts, which were observed in this study, are associated with the supernova explosions of massive young stars in star-forming galaxies. Short bursts are not well understood, but are thought to originate from the merger of two compact objects such as neutron stars. [2] The Gamma-Ray burst Optical and Near-infrared Detector (GROND) was designed and built at the Max-Planck Institute for Extraterrestrial Physics in collaboration with the Tautenburg Observatory, and has been fully operational since August 2007. [3] Other studies relating to dark gamma-ray bursts have been released. Early this year, astronomers used the Subaru Telescope to observe a single gamma-ray burst, from which they hypothesised that dark gamma-ray bursts may indeed be a separate sub-class that form through a different mechanism, such as the merger of binary stars. In another study published last year using the Keck Telescope, astronomers studied the host galaxies of 14 dark GRBs, and based on the derived low redshifts they infer dust as the likely mechanism to create the dark bursts. In the new work reported here, 39 GRBs were studied, including nearly 20 dark bursts, and it is the only study in which no prior assumptions have been made and the amount of dust has been directly measured. [4] Because the afterglow light of very distant bursts is redshifted due to the expansion of the Universe, the light that left the object was originally bluer than the light we detect when it gets to Earth. 
Since the reduction of light intensity by dust is greater for blue and ultraviolet light than for red, this means that the overall dimming effect of dust is greater for the more distant gamma-ray bursts. This is why GROND's ability to observe near-infrared radiation makes such a difference. More information This research is presented in a paper to appear in the journal Astronomy & Astrophysics on 16 December 2010 The team is composed of: J. Greiner (Max-Planck-Institut für extraterrestrische Physik [MPE], Germany), T. Krühler (MPE, Universe Cluster, Technische Universität München), S. Klose (Thüringer Landessternwarte, Germany), P. Afonso (MPE), C. Clemens (MPE), R. Filgas (MPE), D.H. Hartmann (Clemson University, USA), A. Küpcü Yoldaş¸ (University of Cambridge, UK), M. Nardini (MPE), F. Olivares E. (MPE), A. Rau (MPE), A. Rossi (Thüringer Landessternwarte, Germany), P. Schady (MPE), and A. Updike (Clemson University, USA) ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory and VISTA, the world's largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. 
ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".
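The wavelength argument in note [4] can be sketched numerically. This is a toy model only: the simple 1/λ extinction law and the 1-magnitude normalization at a rest-frame 550 nm are assumptions for illustration, not the paper's fitted dust model.

```python
# Dust dims blue light more than red, and redshift moves a fixed observed
# band to bluer rest-frame wavelengths, so distant bursts are dimmed more.
def transmitted_fraction(observed_nm, z, a_550=1.0):
    rest_nm = observed_nm / (1.0 + z)   # wavelength the light had at the burst
    a_mag = a_550 * 550.0 / rest_nm     # extinction in magnitudes, rising to the blue
    return 10 ** (-0.4 * a_mag)         # convert magnitudes to a flux fraction

# Same observed band (~658 nm) and the same dust column, two redshifts:
print(transmitted_fraction(658.0, 0.5))  # nearby burst
print(transmitted_fraction(658.0, 4.0))  # distant burst: far more light is lost
```

Even with identical dust, the z = 4 burst is sampled in its rest-frame ultraviolet and so appears much darker, which is why GROND's near-infrared channels matter.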
NASA Astrophysics Data System (ADS)
Poisson, E.
2006-09-01
The motion of a charged particle interacting with its own electromagnetic field is an area of research that has a long history; this problem has never ceased to fascinate its investigators. On the one hand the theory ought to be straightforward to formulate: one has Maxwell's equations that tell the field how to behave (given the motion of the particle), and one has the Lorentz-force law that tells the particle how to move (given the field). On the other hand the theory is fundamentally ambiguous because of the field singularities that necessarily come with a point particle. While each separate sub-problem can easily be solved, to couple the field to the particle in a self-consistent treatment turns out to be tricky. I believe it is this dilemma (the theory is straightforward but tricky) that has been the main source of the endless fascination. For readers of Classical and Quantum Gravity, the fascination does not end there. For them it is also rooted in the fact that the electromagnetic self-force problem is deeply analogous to the gravitational self-force problem, which is of direct relevance to future gravitational wave observations. The motion of point particles in curved spacetime has been the topic of a recent Topical Review [1], and it was the focus of a recent Special Issue [2]. It is surprising to me that radiation reaction is a subject that continues to be poorly covered in the standard textbooks, including Jackson's bible [3]. Exceptions are Rohrlich's excellent text [4], which makes a very useful introduction to radiation reaction, and the Landau and Lifshitz classic [5], which contains what is probably the most perfect summary of the foundational ideas (presented in characteristic terseness). It is therefore with some trepidation that I received Herbert Spohn's book, which covers both the classical and quantum theories of a charged particle coupled to its own field (the presentation is limited to flat spacetime). 
Is this the text that graduate students and researchers should turn to in order to get a complete and accessible education in radiation reaction? My answer is that while the book does indeed contain a lot of useful material, it is not a very accessible source of information, and it is certainly not a student-friendly textbook. Instead, the book presents a technical account of the author's personal take on the theory, and represents a culminating summary of the author's research contributions over more than a decade. The book is written in a fairly mathematical style (the author is Professor of Mathematical Physics at the Technische Universität in Munich), and it very much emphasises mathematical rigour. This makes the book less accessible than I would wish it to be, but this is perhaps less a criticism than a statement about my taste, expectation, and attitude. The presentation of the classical theory begins with a point particle, but Spohn immediately smears the charge distribution to eliminate the vexing singularities of the retarded field. He considers both the nonrelativistic Abraham model (in which the extended particle is spherically symmetric in the laboratory frame) and the relativistic Lorentz model (in which the particle is spherical in its rest frame). In Spohn's work, the smearing of the charge distribution is entirely a mathematical procedure, and I would have wished for a more physical discussion. A physically extended body, held together against electrostatic repulsion by cohesive forces (sometimes called Poincaré stresses), would make a sound starting point for a classical theory of charged particles, and would have nicely (and physically) motivated the smearing operation adopted in the book. Spohn goes on to derive energy-momentum relations for the extended objects, and to obtain their equations of motion.
A compelling aspect of his presentation is that he formally introduces the 'adiabatic limit', the idea that the external fields acting on the charged body should have length and time scales that are long compared with the particle's internal scales (respectively the classical electrostatic radius and its associated time scale). As a consequence, the equations of motion do not involve a differentiated acceleration vector (as is the case for the Abraham-Lorentz-Dirac equations) but are proper second-order differential equations for the position vector. In effect, the correct equations of motion are obtained from the Abraham-Lorentz-Dirac equations by a reduction-of-order procedure that was first proposed (as far as I know) by Landau and Lifshitz [5]. In Spohn's work this procedure is not ad hoc, but a natural consequence of the adiabatic approximation. An aspect of the classical portion of the book that got me particularly excited is Spohn's proposal for an experimental test of the predictions of the Landau-Lifshitz equations. His proposed experiment involves a Penning trap, a device that uses a uniform magnetic field and a quadrupole electric field to trap an electron for very long times. Without radiation reaction, the motion of an electron in the trap is an epicycle that consists of a rapid (and small) cyclotron orbit superposed onto a slow (and large) magnetron orbit. Spohn shows that, according to the Landau-Lifshitz equations, radiation reaction produces a damping of the cyclotron motion. For reasonable laboratory situations this damping occurs over a time scale of the order of 0.1 second. This experiment might well be within technological reach. The presentation of the quantum theory is based on the nonrelativistic Abraham model, which upon quantization leads to the well-known Pauli-Fierz Hamiltonian of nonrelativistic quantum electrodynamics.
This theory, an approximation to the fully relativistic version of QED, has a wide domain of validity that includes many aspects of quantum optics and laser-matter interactions. As I am not an expert in this field, my ability to review this portion of Spohn's book is limited, and I will indeed restrict myself to a few remarks. I first admit that I found Spohn's presentation to be tough going. Unlike the pair of delightful books by Cohen-Tannoudji, Dupont-Roc, and Grynberg [6, 7], this is not a gentle introduction to the quantum theory of a charged particle coupled to its own electromagnetic field. Instead, Spohn proceeds rather quickly through the formulation of the theory (defining the Hamiltonian and the Hilbert space) and then presents some applications (for example, he constructs the ground states of the theory, he examines radiation processes, and he explores finite-temperature aspects). There is a lot of material in the eight chapters devoted to the quantum theory, but my insufficient preparation and the advanced nature of Spohn's presentation were significant obstacles; I was not able to draw much appreciation for this material. Among the most useful resources in Spohn's book are the historical notes and literature reviews that are inserted at the end of each chapter. I discovered a wealth of interesting articles by reading these, and I am grateful that the author made the effort to collect this information for the benefit of his readers. References [1] Poisson E 2004 Radiation reaction of point particles in curved spacetime Class. Quantum Grav. 21 R153-R232 [2] Lousto C O 2005 Special issue: Gravitational Radiation from Binary Black Holes: Advances in the Perturbative Approach, Class.
Quantum Grav. 22 S543-S868 [3] Jackson J D 1999 Classical Electrodynamics Third Edition (New York: Wiley) [4] Rohrlich F 1990 Classical Charged Particles (Redwood City, CA: Addison-Wesley) [5] Landau L D and Lifshitz E M 2000 The Classical Theory of Fields Fourth Edition (Oxford: Butterworth-Heinemann) [6] Cohen-Tannoudji C, Dupont-Roc J and Grynberg G 1997 Photons and Atoms: Introduction to Quantum Electrodynamics (New York: Wiley-Interscience) [7] Cohen-Tannoudji C, Dupont-Roc J and Grynberg G 1998 Atom-Photon Interactions: Basic Processes and Applications (New York: Wiley-Interscience)
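The reduction-of-order procedure discussed in the review can be written out schematically. This is the standard textbook form for a nonrelativistic particle of mass m and charge q, with radiation-reaction time scale τ = 2q²/3mc³ in Gaussian units; it is a sketch, not a quotation from Spohn's book:

```latex
% Schematic reduction of order, nonrelativistic, Gaussian units;
% tau = 2q^2/(3 m c^3) is the radiation-reaction time scale.
\begin{align}
  m\dot{\mathbf{v}} &= \mathbf{F}_{\text{ext}} + m\tau\,\ddot{\mathbf{v}}
    && \text{(Abraham--Lorentz--Dirac)} \\
  m\dot{\mathbf{v}} &\simeq \mathbf{F}_{\text{ext}}
    + \tau\,\frac{d\mathbf{F}_{\text{ext}}}{dt}
    && \text{(Landau--Lifshitz)}
\end{align}
```

The second line follows by substituting the leading-order motion m dv/dt ≈ F_ext into the small radiation-reaction term, which removes the differentiated acceleration and with it the runaway solutions.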
NASA Astrophysics Data System (ADS)
Massa, Enrico; Nicolaus, Arnold
2011-04-01
This issue of Metrologia collects papers on the results of an international research project aimed at the determination of the Avogadro constant, NA, by counting the atoms in a silicon crystal highly enriched with the isotope 28Si. Fifty years ago, Egidi [1] thought about realizing an atomic mass standard. In 1965, Bonse and Hart [2] operated the first x-ray interferometer, thus paving the way to the achievement of Egidi's dream, and soon Deslattes et al [3] completed the first counting of the atoms in a natural silicon crystal. The present project, outlined by Zosi [4] in 1983, began in 2004 by combining the experiences and capabilities of the BIPM, INRIM, IRMM, NIST, NPL, NMIA, NMIJ and PTB. The start signal, ratified by a memorandum of understanding, was a contract for the production of a silicon crystal highly enriched with 28Si. The enrichment process was undertaken by the Central Design Bureau of Machine Building in St Petersburg. Subsequently, a polycrystal was grown in the Institute of Chemistry of High-Purity Substances of the Russian Academy of Sciences in Nizhny Novgorod and a 28Si boule was grown and purified by the Leibniz-Institut für Kristallzüchtung in Berlin. Isotope enrichment made it possible to apply isotope dilution mass spectrometry, to determine the Avogadro constant with unprecedented accuracy, and to fulfil Egidi's dream. To put Egidi's 'fantasy' into practice, two 28Si kilogram prototypes shaped as quasi-perfect spheres were manufactured by the Australian Centre for Precision Optics; their isotopic composition, molar mass, mass, volume, density and lattice parameter were accurately determined and their surfaces were chemically and physically characterized at the atomic scale. The paper by Andreas et al reviews the work carried out; it collates all the findings and illustrates how Avogadro's constant was obtained.
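In outline, the atom-counting (x-ray crystal density) approach rests on one relation: a cubic unit cell of silicon contains eight atoms, so the Avogadro constant follows from the measured molar mass M, macroscopic density ρ and lattice parameter a of the crystal:

```latex
% Eight atoms per cubic unit cell of silicon; M = molar mass,
% rho = density (from sphere mass and volume), a = lattice parameter.
N_A = \frac{8M}{\rho a^{3}}
```

This is why the papers in the issue concentrate on exactly those quantities: molar mass by isotope dilution, volume and mass of the spheres, and the lattice parameter by x-ray interferometry.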
Impurity concentration and gradients in the enriched crystal were measured by infrared spectroscopy and taken into account; Zakel et al relate these measurements in detail. Next, Pramann et al illustrate how the molar mass of the enriched crystal was measured by exploiting isotopic enrichment and isotope dilution mass spectrometry. Valkiers et al report about remeasurement of the molar mass of a natural Si crystal, a measurement prompted by the exigency of clarifying the origin of the discrepancy between the NA value given in the present issue and the value obtained using natural Si crystals. A consistency analysis of the different isotopic-composition determinations is illustrated in the paper by Bulska et al. As reported in two papers by Massa et al, to determine the lattice parameter an x-ray interferometer was manufactured from the material between the already mentioned spheres. The measurement result was combined with lattice comparisons between different crystal samples and with the impurity gradient to extrapolate the sphere's lattice-parameter. Ferroglio et al's contribution analyzes the self-weight deformation of the x-ray interferometer. Fujimoto et al report about the lattice-perfection investigations carried out by a novel self-referencing diffractometer at the National Laboratory for High-Energy Physics (KEK) in Japan. A really great effort was made to characterize the sphere surfaces and to correct for the oxide layer and the contaminating atoms. The results of these investigations are given by Busch et al. The sphere diameter and topography were measured by optical interferometry to nanometer accuracy; the papers of Bartl et al and Kuramoto et al describe how the sphere volumes were determined. Andreas et al's paper describes the calculation of phase corrections for the diameter measurements. The results of mass comparisons against the Pt-Ir standards of the BIPM, NMIJ and PTB are given by Picard et al. 
The results reported in the present issue need to be completed. One of the necessary activities is to relate the mass of the 28Si atom to its Compton wavelength to test the mass-energy-frequency equivalence. Another effort is to monitor the stability of the Pt-Ir prototype: the technologies described in the present issue can be refined and finalized to calculate the mass variation of 1 kg 28Si spheres by monitoring the surface evolution without weighing them on a balance. The last activity is the determination of the mass of a 28Si sphere by electrical measurements using a watt balance and without any reference to the Pt-Ir prototype. In this framework, it will be necessary to demonstrate the mutual consistency and the stability of both the electrical and crystal mise en pratique of a kilogram definition based on a conventional value of the Planck constant. A related issue is to develop suitable procedures and protocols to disseminate the unit of mass from the new realizations. Since the molar Planck constant is well known via the measurement of the Rydberg constant, the accurate measurement of NA also provides an accurate and independent determination of the Planck constant, h. A comparison of the values of the Planck constant obtained via the watt-balance experiment and the NA determination tests quantum mechanics. In fact, the watt-balance value of h depends on solid state physics through the theories of Josephson and quantum Hall effects, whereas the value of h derived from NA depends on atomic physics through the energy level differences in hydrogen and deuterium, whose associated transition frequencies yield information on the Rydberg constant. 
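The link between NA and h invoked above is the molar Planck constant relation, known to high accuracy through the Rydberg constant; here A_r(e) is the relative atomic mass of the electron, M_u the molar mass constant, α the fine-structure constant and R∞ the Rydberg constant:

```latex
% Molar Planck constant: N_A h is fixed by atomic-physics quantities,
% so an accurate N_A yields an accurate, independent h (and vice versa).
N_A h = \frac{c\,A_r(\mathrm{e})\,M_u\,\alpha^{2}}{2R_{\infty}}
```

Because the right-hand side is known far more accurately than either NA or h alone, comparing the watt-balance h with the h derived from NA is a stringent consistency test, as the text explains.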
Grateful thanks are addressed to H-J Pohl for his outstanding project management in Russia, to A K Kaliteevski and his colleagues of the Central Design Bureau of Machine Building and the Institute of Chemistry of High-Purity Substances for their dedication and the punctual delivery of the enriched material, to H Riemann and his staff of the Institut für Kristallzüchtung for the crystal growth, to our directors for their advice and financial support, and to our colleagues for their daily work. Special thanks are addressed to Peter Becker, to whom this issue is dedicated on the occasion of his retirement from work at the Physikalisch-Technische Bundesanstalt. In 1974, young Peter joined the PTB's Avogadro group which, under the direction of Peter Seyfried, followed Bonse's work and improved the measurements of the lattice parameter and the Avogadro constant [5, 6]. In 2004, Peter proposed and backed this project by taking on his shoulders the risks, the management burden and the coordination of the many relevant activities. References [1] Egidi C 1963 Phantasies on a natural unity of mass Nature 200 61-2 [2] Bonse U and Hart M 1965 An x-ray interferometer Appl. Phys. Lett. 6 155-6 [3] Deslattes R D et al 1974 Determination of the Avogadro constant Phys. Rev. Lett. 33 463-6 [4] Zosi G 1983 A neo-Pythagorean approach towards an atomic mass standard Lett. Nuovo Cimento 38 577-80 [5] Becker P et al 1981 Absolute measurement of the (220) lattice plane spacing in a silicon crystal Phys. Rev. Lett. 46 1540-3 [6] Seyfried P et al 1992 A determination of the Avogadro constant Z. Phys. B 87 289-98
NASA Astrophysics Data System (ADS)
Montenegro, Rivelino V. D.
2003-05-01
Colloidal systems are present everywhere in many varieties, such as emulsions (liquid droplets dispersed in a liquid), aerosols (liquid dispersed in a gas), foams (gas in a liquid), etc. Among several new methods for the preparation of colloids, the so-called miniemulsion technique has been shown to be one of the most promising. Miniemulsions are stable emulsions consisting of droplets with a size of 50-500 nm, obtained by shearing a system containing oil, water, a surfactant, and a highly water-insoluble compound, the so-called hydrophobe.
1. In the first part of this work, dynamic crystallization and melting experiments are described which were performed in small, stable and narrowly distributed nanodroplets (confined systems) of miniemulsions. Both regular and inverse systems were examined, characterizing, first, the crystallization of hexadecane and, secondly, the crystallization of ice. It was shown for both cases that the temperature of crystallization in such droplets is significantly decreased (or the required undercooling is increased) as compared to the bulk material. This was attributed to a very effective suppression of heterogeneous nucleation. It was also found that the required undercooling depends on the nanodroplet size: with decreasing droplet size the undercooling increases.
2. It is shown that the temperature of crystallization of other n-alkanes in nanodroplets is also significantly decreased as compared to the bulk material, owing to a very effective suppression of heterogeneous nucleation. A very different behavior was detected between odd and even alkanes. In even alkanes, the confinement in small droplets changes the crystal structure from a triclinic (as seen in bulk) to an orthorhombic structure, which is attributed to finite-size effects inside the droplets. An intermediate metastable rotator phase is of less relevance for the miniemulsion droplets than in the bulk. For odd alkanes, only a strong temperature shift compared to the bulk system is observed, but no structure change: a triclinic structure is formed both in bulk and in miniemulsion droplets.
3. In the next part of the thesis it is shown how miniemulsions can be successfully applied in the development of materials with potential applications in the pharmaceutical and medical fields. The production of cross-linked gelatin nanoparticles is feasible. Starting from an inverse miniemulsion, the softness of the particles can be controlled by varying the initial concentration, the amount of cross-linking agent and the cross-linking time, among other parameters. Such particles show a thermo-reversible effect: the particles swell in water above 37 °C and shrink below this temperature. Above 37 °C the chains lose the physical cross-linking, but the particles do not lose their integrity, because of the chemical cross-linking. These particles have potential use as drug carriers, since gelatin is a natural polymer derived from collagen.
4. The cross-linked gelatin nanoparticles have been used for the biomineralization of hydroxyapatite (HAP), a biomineral which is the major constituent of our bones. The biomineralization of HAP crystals within the gelatin nanoparticles results in a hybrid material with potential use as a bone-repair material.
5. In the last part of this work it is shown that layers of conjugated semiconducting polymers can be deposited from aqueous dispersions prepared by the miniemulsion process. Dispersions of particles of different conjugated semiconducting polymers, such as a ladder-type poly(para-phenylene) and several soluble derivatives of polyfluorene, could be prepared with well-controlled particle sizes ranging between 70 and 250 nm. Layers of polymer blends were prepared with controlled lateral dimensions of phase separation on sub-micrometer scales, utilizing either a mixture of single-component nanoparticles or nanoparticles containing two polymers.
From the results of energy transfer it is demonstrated that blending two polymers in the same particle leads to a higher efficiency due to the better contact between the polymers. Such an effect is of great interest for the fabrication of opto-electronic devices such as light-emitting diodes with nanometer-size emitting points and solar cells comprising blends of electron-donating and electron-accepting polymers.
Popular-science abstract (translated from the German): Crystallization, biomimetics and semiconducting polymers in spatially confined systems. Oil and water do not mix, but emulsions can be prepared from the two liquids, in which droplets of one liquid are dispersed in the other; that is, either oil droplets in water or water droplets in oil can be produced. Everyday experience, e.g. in cooking, shows that an emulsion can be made by shaking or stirring, but that such an emulsion is not particularly stable. With the help of high shear energies, however, very small, very uniformly sized and very stable droplets of about 1/10000 mm can be obtained. Such an emulsion is called a miniemulsion. In the dissertation, miniemulsions consisting, for example, of small water droplets in an oil were investigated. It could be shown that the water in these droplets, i.e. in spatially confined systems, crystallized not at 0 °C but only at -22 °C. How can this be explained? In a bucket of water, ice normally forms at 0 °C, because the water contains some (sometimes very few) nuclei (e.g. dirt particles, a bit of lint, etc.) on which the first crystals form. Once a crystal has formed, the water in the whole bucket can quickly turn to ice. Ultrapure water, by contrast, would only crystallize at -22 °C.
If the water from the bucket is instead divided into small droplets, a very large number of droplets is obtained, namely 10^17 droplets per litre of emulsion. The few dirt particles are distributed over very few droplets; all the other droplets are ultrapure and therefore crystallize only at -22 °C. The work also showed that miniemulsions can be used to produce small gelatin particles, in effect nanoscale gummy bears. These gelatin nanoparticles swell when the temperature is raised to about 38 °C. This can be exploited, for example, to transport drugs inside the particles through the human body and release them at a desired site. It was also shown that the gelatin particles can be used to imitate nature (biomimetics): bone material can be built up in a targeted way inside the particles. The resulting gelatin-bone particles could be used to treat poorly healing or complicated bone fractures: the gelatin is degraded after a few days, and the bone material can be incorporated into the bone. LEDs are already in widespread use today; they consist of semiconductors such as silicon, and recently semiconducting polymers have also been used for this purpose. The big problem with these materials is that they are applied from solvents. The dissertation showed that the miniemulsion process can be used to produce such LEDs in an environmentally friendly way, by preparing aqueous dispersions of the polymer particles. This not only avoids the solvent but has a further advantage: the dispersion can be printed very easily, in the simplest case with a commercial inkjet printer.
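The dilution argument in the popular abstract can be made quantitative with a small Poisson sketch. The droplet number per litre comes from the text; the impurity count is an assumed, purely illustrative figure.

```python
import math

# If N_nuclei impurity particles are randomly distributed over N_drop droplets,
# the number of nuclei per droplet is Poisson-distributed, so the fraction of
# droplets containing no nucleus at all is exp(-mean).
n_drop = 1e17     # droplets per litre of miniemulsion (figure from the text)
n_nuclei = 1e9    # assumed number of dirt particles per litre (hypothetical)

mean_per_droplet = n_nuclei / n_drop
clean_fraction = math.exp(-mean_per_droplet)  # P(droplet holds no nucleus)

print(clean_fraction)  # essentially 1: nucleation-free droplets dominate
```

Essentially every droplet is ultrapure, so the ensemble must undercool to the homogeneous-nucleation limit near -22 °C, exactly as observed.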
Electrical quantum standards and their role in the SI
NASA Astrophysics Data System (ADS)
Robinson, Ian; Georgakopoulos, Dimitrios
2012-12-01
The International System of Units, SI, is poised to make a quantum change and become a measurement system based entirely on the fundamental properties of the natural world. In the next version of the SI, the Planck constant h, the elementary charge e, the Avogadro constant NA and the Boltzmann constant k will be fixed, in addition to the already fixed values of the speed of light c and the ground state hyperfine splitting in caesium-133. As a result, six out of the seven base units of the SI will be based directly on true invariants of nature. A major part of this change has been enabled by the ready availability of electrical quantum standards of exquisite precision and mechanisms for using them to make measurements outside the electrical arena. The overall effect will be to eliminate the remaining imprecise definitions of physical units associated with the use of artefact standards and aid direct SI measurements without problems of scaling. Fixing the Planck constant and the elementary charge will have the effect of incorporating the best physical realizations of electrical quantities into the SI, providing a system of units fit for the 21st century. The purpose of this special feature is to review the status of electrical quantum standards and report the latest developments in those areas and their applications to other areas of metrology. The special feature coincides with the 50th anniversary of the seminal paper of Josephson, 'Possible new effects in superconductive tunnelling' [1], which established the basic physical principle upon which the quantum voltage standards are based. Josephson voltage standards are based on the inverse Josephson effect. When a junction of two superconducting electrodes, weakly linked through a thin insulator or a normal metal, is irradiated with a radiofrequency electromagnetic field of frequency f and is biased by a dc current, then the voltage across the junction is quantized (i.e. 
small changes in either the dc current or the power of the rf irradiation, or both, do not change the voltage). The value of this quantized Josephson voltage is equal to nfh/2e, where n is the quantum step of the current-voltage characteristic curve. In this special feature there are three papers on dc Josephson voltage standards. Solve and Stock review the programme conducted by the Bureau International des Poids et Mesures (BIPM) to perform on-site comparisons of Josephson voltage standards, and give a comprehensive analysis of the possible sources of errors of such comparisons. Behr et al summarize the developments of Josephson voltage standards at Physikalisch-Technische Bundesanstalt (PTB) and their applications in dc voltage and other areas of metrology. Finally, Georgakopoulos et al report a reduction, by a factor of a thousand, in the smallest voltage that can be generated by dc Josephson voltage standards. Although dc voltage standards are well established, significant challenges exist when extending this extremely precise technology to ac. There are two approaches to producing accurate ac voltages using the inverse Josephson effect: the programmable Josephson voltage standard (PJVS) and the pulse-driven ac voltage standard. The PJVS contains an array of Josephson junctions, organized into independently biased segments. By biasing chosen, binary-related, segments on the first quantum step (positive or negative) or zero, the array can be made to behave as a quantum digital to analogue converter. The PJVS approach can produce stepwise approximated sine waves with rms values of some volts, but it suffers from parasitic capacitances and inductances distributed in the different parts of the system and, more importantly, the voltage is not quantized during the finite transition time between successive voltage levels. Hence the output frequency of PJVS-based systems is limited to a few kilohertz. 
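The PJVS principle described above, binary-weighted segments of junctions each biased on the -1, 0 or +1 quantum step, can be sketched in a few lines. This is a minimal illustration only: the 70 GHz drive frequency, the segment sizes and the digital codes are invented assumptions, not parameters of any system in this special feature.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
E = 1.602176634e-19  # elementary charge, C

F_RF = 70e9                    # assumed microwave drive frequency, Hz
V_STEP = F_RF * H / (2 * E)    # one junction on quantum step n = 1: V = f*h/(2e)

SEGMENTS = [1, 2, 4, 8, 16]    # junctions per binary-weighted segment (assumed)

def pjvs_output(code):
    """Quantized array voltage for a signed digital code in -31..+31."""
    sign = 1 if code >= 0 else -1
    magnitude = abs(code)
    volts = 0.0
    for bit, n_junctions in enumerate(SEGMENTS):
        if magnitude & (1 << bit):       # bias this segment on step +1 or -1
            volts += sign * n_junctions * V_STEP
    return volts

# Stepwise-approximated sine wave, as produced by a PJVS:
codes = [round(31 * math.sin(2 * math.pi * k / 16)) for k in range(16)]
wave = [pjvs_output(c) for c in codes]
```

Because every output level is a sum of exactly quantized junction voltages, the accuracy of each step rests only on the frequency and the fundamental constants; the transitions between steps are the unquantized part that limits the usable output frequency.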
In this special feature, Jeanneret et al review the Josephson locked synthesizer, a PJVS-based system where the effect of transients between successive steps on the output voltage is reduced. This special feature also presents two applications of PJVS-based quantum voltage standards: the evaluation of conventional ac voltage standards based on thermal converters (Budovsky et al) and the measurement of the settling time of a high resolution digital voltmeter (Henderson et al). In the pulse-driven ac voltage standard, arbitrary voltages can be produced by modulating the rf irradiation of an array of Josephson junctions by a series of high frequency pulses, usually by means of Δ-Σ modulation. The output voltage of the array of junctions is a series of quantized voltage pulses that correspond to the desired waveform after the high frequency components are removed. The pulse-driven standard can operate at much higher frequencies than the PJVS. Eliminating the effects of parasitic impedances of the, necessarily long, connecting leads therefore becomes a significant challenge. In this special feature, van den Brom and Houtzager report a voltage lead correction technique. Quantum resistance standards are based on the quantum Hall effect in which the resistance of a two-dimensional electron gas in a strong magnetic field is quantized. The value of the quantized Hall resistance is h/ie2, where i is the number of the quantum step in the resistance-magnetic field curve. Quantum Hall resistance devices can be combined in series to form a resistive voltage divider with low uncertainty in the ratio. In this special feature, Domae et al report the realization of such a resistive voltage divider on a chip. Quantum Hall resistance standards have been routinely used at dc for over two decades. However, the operation of quantum Hall devices at ac is complicated by the flow of current in capacitances around the device, which can compromise measurement of its resistance. 
Schurr et al review the status of ac quantum Hall resistance standards and their role in the SI. Ohm's law can be applied to quantum realizations of voltage, resistance and current to test their consistency. Active research into this 'metrological triangle' is underway and, at present, there is no evidence to indicate a discrepancy at any level. However, work is continuing on current sources which utilize a countable flow of electrons (the electric current produced is proportional to ef, f being the operating frequency of the device), but the work has some way to go before the question of consistency can be resolved at levels approaching 1 part in 10⁹. In this special feature, Scherer and Camarota review the state-of-the-art of metrological triangle experiments and Devoille et al report on the status of the metrological triangle experiment at the Laboratoire National de Métrologie et d'Essais (LNE), France. The availability of precise representations of the volt and the ohm based on quantum mechanics has enabled the watt balance, an apparatus which relates electrical and mechanical power, to link the kilogram to the Planck constant. This has paved the way for the proposed redefinition of the kilogram, the last artefact standard in the SI, in terms of a fixed value of the Planck constant. In the past few years a number of papers, e.g. [2, 3], have been published describing the working principles of the watt balance and the characteristics of the existing implementations of the experiment. The measurements of the principal quantities—mass, velocity, gravitational acceleration, resistance and voltage—are reasonably well documented but the ultimate precision of the apparatus depends on a number of techniques that are required to eliminate second-order effects. In this special feature, Robinson provides details of these general alignment techniques with special reference to the NPL Mark II watt balance.
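The consistency test behind the metrological triangle can be written down directly from the three quantum relations V = nfh/2e, R = h/ie² and I = ef: for a suitable choice of frequencies, Ohm's law V = IR closes exactly. The operating points below (step numbers and frequencies) are arbitrary illustrative choices, not values from any experiment in this special feature.

```python
H = 6.62607015e-34   # Planck constant, J s
E = 1.602176634e-19  # elementary charge, C

def v_josephson(n, f):
    """Josephson voltage on quantum step n at drive frequency f (Hz)."""
    return n * f * H / (2 * E)

def r_hall(i):
    """Quantized Hall resistance on plateau i."""
    return H / (i * E ** 2)

def i_set(f):
    """Single-electron-transport current: one electron per cycle at f (Hz)."""
    return E * f

n, f_j, i = 1, 70e9, 2
f_set = n * i * f_j / 2          # frequency choice that closes the triangle
V, R, I = v_josephson(n, f_j), r_hall(i), i_set(f_set)
print(V / (I * R))               # unity, up to floating-point rounding
```

Experimentally, of course, the interest lies in whether the measured ratio deviates from unity, which is why resolving the question at the level of 1 part in 10⁹ is so demanding.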
Acknowledgments
We would like to thank the authors for supporting the special feature with their excellent contributions; the guardians of the quality of a scientific paper, the referees, for their valuable comments and suggestions; Professor Wuqiang Yang and the members of the editorial board of Measurement Science and Technology for their support. Finally, we would like to thank Dr Sharon D'Souza, James Dimond and all the editorial and publication staff at Measurement Science and Technology, for their help in making the special feature a reality.
References
[1] Josephson B D 1962 Possible new effects in superconductive tunnelling Phys. Lett. 1 251-3
[2] Li S, Han B, Li Z and Lan J 2012 Precisely measuring the Planck constant by electromechanical balances Measurement 45 1-13
[3] Stock M 2011 The watt balance: determination of the Planck constant and redefinition of the kilogram Phil. Trans. R. Soc. A 369 3936-53
NASA Astrophysics Data System (ADS)
Rietdorf, Katja
2003-07-01
The aim of this PhD work was to investigate major mechanisms of excitation-secretion coupling in the salivary gland of the cockroach Periplaneta americana (L.). This salivary gland is innervated by dopaminergic and serotonergic fibres (Baumann et al., 2002). The two transmitters stimulate different processes in the gland: dopamine (DA) stimulates the P-cells of the acini and the salivary duct cells, whereas serotonin (5-HT) activates the P- and C-cells of the acini, but not the salivary duct cells. The final saliva is completely protein-free after DA stimulation; after 5-HT stimulation it contains proteins, which are secreted by the C-cells of the acini (Just & Walz, 1996). In the first part of my work I measured the electrolyte composition of the final saliva by capillary electrophoresis and measured the rates of fluid secretion, in order to answer the following questions: (1) Which transporters are involved in the production of the primary saliva and in its modification? (2) What is the role of the transport-active salivary duct cells in the modification of the primary saliva?
The electrolyte composition of the DA- and 5-HT-stimulated final saliva does not differ significantly, and in both cases the saliva is hypoosmotic to the Ringer solution used. The salivary duct cells are stimulated by DA and modify the primary saliva by a net ion reabsorption. My experiments show, however, that the duct cells, which remain unstimulated during a 5-HT stimulation of the gland, also modify the primary saliva. In the next series of experiments I investigated the effects of ouabain, an inhibitor of the Na+-K+-ATPase, and bumetanide, an inhibitor of the NKCC, on the rates of fluid secretion and on the electrolyte composition of the final saliva. I found that the activity of the Na+-K+-ATPase is important for the modification of the DA-stimulated primary saliva during its flow through the stimulated duct system. In contrast, it is not important for the modification of the 5-HT-stimulated primary saliva. Inhibition of the Na+-K+-ATPase does not affect the rates of DA-stimulated fluid secretion, but it increases the rates of 5-HT-stimulated fluid secretion. The activity of the NKCC is important for both secretory processes, ion secretion and fluid secretion: inhibition of the NKCC results in a significant drop in the rates of fluid secretion after DA and 5-HT stimulation, as well as a drop in the electrolyte concentrations in the saliva. In the second part of my work I tried to measure changes in the intracellular ion concentrations (Ca2+, Na+ and K+) in the acinar cells during a DA or 5-HT stimulation. These experiments were to be performed by ratiometric imaging. Measurements with the Ca2+-sensitive dye Fura-2 did not show any global increase in the intracellular Ca2+ concentration in the P-cells of the acini. Because of poor dye loading of the cells, a strong autofluorescence that changed during the time course of the stimulation, and changes in the cell volume, no measurements with Na+- or K+-sensitive dyes were performed.
In the third part of my work I investigated the intracellular signalling pathways that activate protein secretion after 5-HT stimulation of the gland. A modified Bradford assay was used to measure the protein content of the final saliva. A dose-response curve showed that the rate of protein secretion depends on the 5-HT concentration used to stimulate the glands. In another set of experiments I increased the intracellular concentrations of Ca2+, cAMP and/or cGMP and measured the protein content of the final saliva. An increase in the intracellular Ca2+ concentration activates only a low rate of protein secretion. An increase in the intracellular cAMP concentration activates a much higher rate of protein secretion, which is not significantly different from the 5-HT-stimulated rate. The cAMP-stimulated protein secretion can be further increased by a simultaneous rise in the intracellular Ca2+ concentration. In contrast, cGMP does not activate protein secretion. I therefore propose that a 5-HT receptor that activates adenylyl cyclase is expressed in the basolateral membrane of the protein-secreting C-cells.
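Dose-response behaviour of the kind reported above is commonly summarized with a Hill equation. As a purely illustrative sketch (the EC50, Hill coefficient and concentrations below are invented, not fitted parameters from this thesis):

```python
# Hill equation: response rises sigmoidally with agonist concentration.
# All parameter values here are invented for illustration.

def hill(conc, vmax, ec50, n):
    """Response at agonist concentration conc (same units as ec50)."""
    return vmax * conc ** n / (ec50 ** n + conc ** n)

# Relative protein secretion vs 5-HT concentration (arbitrary units):
for c in (0.01, 0.1, 1.0, 10.0):
    print(c, round(hill(c, vmax=1.0, ec50=0.1, n=1.0), 3))
```

At the EC50 the response is half-maximal by construction, which is what a fitted dose-response curve is normally used to extract.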
NASA Astrophysics Data System (ADS)
Hebden, Jeremy C.; Rinneberg, Herbert
2005-06-01
The Commission of the European Union (EU) conceived its Fifth Framework Programme (FP5) to identify the priorities for the European Union's research, technological development and demonstration activities for the period 1998-2002. By encouraging collaborative research between groups in different member countries, FP5 was intended to help solve problems the EU is facing and respond to major socio-economic challenges. The programme focused on a number of objectives and areas combining technological, industrial, economic, social and cultural aspects. A specific call was made, under its `Quality of Life and Management of Living Resources' section, for proposals which aim to explore improvements in non-invasive methods of imaging for early diagnosis and clinical evaluation of disease. Among the projects successfully funded under the FP5 programme was one entitled `Optical mammography: Imaging and characterization of breast lesions by pulsed near-infrared laser light', known by its acronym OPTIMAMM. The project involved a consortium of nine partners, comprising ten applied science and clinical research groups based in six EU countries, with overall administration and management provided by the Physikalisch-Technische Bundesanstalt, Berlin, Germany. The broad aim of the OPTIMAMM project was to combine multi-disciplinary basic (physics, engineering, mathematics, computer science) and clinical (oncology, histology) research to assess the diagnostic potential of time-domain optical and photoacoustic mammography as novel, non-invasive imaging modalities for the detection and clinical evaluation of breast lesions. Funding for the project, at a total cost of about 1.67 MEuro, began in December 2000 for a period of three years, although a zero-cost extension was granted to enable the ongoing project activities to continue until the end of May 2004. 
The importance of developing new tools for the detection and diagnosis of breast disease is evident from the very high incidence and mortality associated with it, within the EU and throughout the world. Although x-ray mammography is recognized as an effective tool for cancer screening in women over 35-40 years of age, it suffers from a significant number of false positives which often lead to unnecessary biopsy. X-ray mammography is also less effective for younger women with denser breasts, and involves the use of potentially harmful ionizing radiation. While other conventional diagnostic techniques such as ultrasound and magnetic resonance imaging (MRI) are also widely used in the diagnosis and characterization of breast disease, their roles in the detection and staging of breast tumours have so far been limited. The development of optical methods of imaging the breast is attractive partly because they are safe, but chiefly because they can reveal contrast between normal and diseased tissues that is not evident using conventional methods. The principal mechanism for contrast at near-infrared wavelengths is the characteristic absorption by haemoglobin and other dominant tissue chromophores, such as fat and water. Furthermore, the differences between the absorption of oxy-haemoglobin and deoxy-haemoglobin provide a means of determining oxygenation, and therefore of studying tissue function. The OPTIMAMM project focused specifically on the diagnostic potential of time-resolved methods. Systems which measure the flight-times of photons transmitted across highly scattering breast tissue offer the potential to provide greater spatial resolution and contrast than systems based on intensity measurements alone, and facilitate better separation between the effects of scatter and those of absorption.
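The oxygenation estimate mentioned above comes down to unmixing the absorption measured at two wavelengths into oxy- and deoxy-haemoglobin contributions via a small linear system. A minimal sketch follows; the extinction coefficients are rough invented numbers chosen only to reproduce the qualitative crossover around the haemoglobin isosbestic point, not tabulated spectroscopic data.

```python
# Two-wavelength unmixing of absorption into HbO2 and Hb concentrations.
# Rows: wavelengths (roughly 760 nm and 850 nm); columns: (HbO2, Hb).
# The coefficient values are illustrative assumptions, not real spectra.
EXTINCTION = [(0.6, 1.5),   # ~760 nm: deoxy-Hb absorbs more strongly
              (1.1, 0.8)]   # ~850 nm: oxy-Hb absorbs more strongly

def unmix(mu_a):
    """Solve EXTINCTION @ (c_hbo2, c_hb) = mu_a by Cramer's rule."""
    (a, b), (c, d) = EXTINCTION
    det = a * d - b * c
    c_hbo2 = (d * mu_a[0] - b * mu_a[1]) / det
    c_hb = (a * mu_a[1] - c * mu_a[0]) / det
    return c_hbo2, c_hb

def saturation(mu_a):
    """Tissue oxygen saturation StO2 = HbO2 / (HbO2 + Hb)."""
    c_hbo2, c_hb = unmix(mu_a)
    return c_hbo2 / (c_hbo2 + c_hb)
```

With more than two wavelengths (as in the Milan system), the same idea becomes an overdetermined least-squares problem, which also allows water and lipid to be included as additional chromophores.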
A major component of the project was a series of clinical trials performed at four European sites, in particular in Berlin (Germany) and Milan (Italy) using similar scanning instrumentation, carried out under a harmonized clinical protocol where appropriate. The clinical trials were augmented by efforts to refine semi-empirical and rigorous mathematical methods for data analysis and image reconstruction, and by technical improvements to instrumentation. Throughout the project, developments in technology and methodology were assessed through appropriate evaluation on human subjects. The nine project partners also applied different measurement techniques to a concerted effort to reliably measure the scattering and absorption properties of normal breast tissue, benign lesions and tumours at various near-infrared wavelengths in vivo non-invasively, during breast surgery, and ex vivo on tissue specimens and biopsy samples. In May 2004, on the successful completion of the OPTIMAMM project, a European workshop entitled `Applied Medical Photonics: from Tissue Characterization to Optical Mammography' was held in Berlin, jointly organized by the OPTIMAMM consortium and by another consortium representing the European network on `Medical Photonics'. At this meeting, all the OPTIMAMM partners presented summaries of their results obtained during the lifetime of the project. Subsequently, the ten collaborating research groups have prepared a series of papers which focus on the specific results of the consortium and review its overall achievements. This special issue of Physics in Medicine and Biology presents these ten papers (pages 2429-2596), which have been accepted for publication following the usual thorough peer-review process for this journal. The first four papers describe the results of the two parallel clinical assessments of scanning time-domain optical mammography, involving a combined total of about 350 patients.
These include descriptions of technical developments and the results of efforts to determine the optical properties of benign lesions and carcinomas. The first two contributions summarize the work performed by collaborating groups based in Berlin. A scanning laser pulse optical mammograph, which records craniocaudal and mediolateral projection images of the compressed breast at three near-infrared wavelengths, was used in a clinical trial on 154 patients, all undergoing subsequent histological assessment of at least one breast lesion. Part I describes an analysis of optical images by comparing them with x-ray and MRI mammograms and with results of histopathology. Part II reports on the derived optical properties of a subset of 87 carcinomas. Haemoglobin concentration was found to be systematically higher in tumours than in normal tissue, while no statistical difference was observed for blood oxygenation. A similar breast imaging system is the focus of the next two papers, presented by partners based in Milan. A clinical study involving a total of 194 patients was performed over the lifetime of the project, during which the number of discrete wavelengths employed increased from four to seven, and the spectral range was expanded to accommodate wavelengths above 900 nm. The first of this pair of papers describes the effectiveness of the system for detecting breast lesions, while the second presents an attempt to characterize the optical properties of malignant and benign lesions. Again, cancers were found to be associated with a blood content higher than that of the surrounding tissues. The fifth paper in this special issue, authored by consortium partners in London (UK), reports clinical results achieved using a 32-channel time-resolved optical tomography system. Cross-sectional images of the uncompressed human breast are generated from photon flight-time measurements using an iterative, non-linear algorithm.
Studies involving patients with a variety of breast conditions detected 17 out of 19 lesions, and results on healthy volunteers displayed heterogeneity which was repeatable over a period of months. The next paper represents a collaboration between the London researchers working on image reconstruction algorithm development, and partners based in Berlin. A paraxial scanning mechanism developed for the Berlin breast imaging system was used to acquire measurements which enable three-dimensional (3D) in vivo images to be generated. A fast reconstruction method, based on the Rytov approximation, uses data which is Fourier transformed with respect to both time and space. The seventh paper, contributed by partners based in Enschede (The Netherlands), provides an overview of their breast imaging system which measures photoacoustic signals occurring within tissue in response to illumination by short pulses of light. The overall performance of the system is discussed, and an algorithm is presented which is designed to utilize these measurements to generate 3D images of the compressed breast. The remaining three papers focus on the measurement of the optical properties of breast tissue. The first of these presents the results of an investigation of the intra- and inter-subject variability of optical properties, and of the relative concentrations of principal chromophores: water, lipid, and haemoglobin. This collaborative study, involving partners based in Lund (Sweden) and Milan, utilized in vivo time-resolved spectroscopy measurements at four near-infrared wavelengths. Differences between pre- and post-menopausal breast tissues were confirmed, and no systematic differences between contralateral breasts were evident. The next paper, authored by researchers in Rotterdam (The Netherlands), describes the use of a fibre-optic needle probe and a technique known as differential pathlength spectroscopy to derive highly localized optical properties of the breast in vivo.
Results, correlated with histological outcome, reveal that malignant tissue is characterized by a significant decrease in tissue oxygenation and a higher blood content compared with normal breast tissue. The last paper in this special issue describes a theoretical development which enables a more accurate determination of scattering properties from small biopsy samples. The theory was applied by the authors based in Heraklion (Greece) to time-resolved transmittance measurements in order to characterize the scattering of 59 samples of breast tissue. We are grateful to all the authors for their valuable contributions and their prompt responses to reviewers' comments, and we thank all the reviewers for their useful suggestions to improve the quality of the papers and their considerable efforts to meet tight deadlines. And last but not least we would like to thank the European Commission for its generous financial support.
NASA Astrophysics Data System (ADS)
Foss, John; Dewhurst, Richard; Fujii, Kenichi; Regtien, Paul
2011-06-01
Since 1991, Measurement Science and Technology has awarded a Best Paper prize. The Editorial Board of this journal believes that such a prize is an opportunity to thank authors for submitting their work, and serves as an integral part of the on-going quality review of the journal. The current breadth of topical areas that are covered by MST has made it advisable to expand the recognition of excellent publications. Hence, since 2005 the Editorial Board have presented 'Outstanding Paper Awards' in four subject categories: Fluid Mechanics; Measurement Science; Precision Measurements; and Sensors and Sensing Systems. Although the categories mirror subject sections in the journal, the Editorial Board consider articles from all categories in the selection process. This year, for example, the winning article of the Outstanding Paper Award in Sensors and Sensing Systems was an article published in the 'Novel Instrumentation' section. 2010 Award Winners—Fluid Mechanics Assessment of pressure field calculations from particle image velocimetry measurements John J Charonko, Cameron V King, Barton L Smith and Pavlos P Vlachos Department of Mechanical Engineering, Virginia Tech, Blacksburg, VA 24060, USA VT-WFU School of Biomedical Engineering & Sciences, Virginia Tech, Blacksburg, VA 24060, USA Mechanical and Aerospace Engineering Department, Utah State University, UMC4130, Logan, UT 84322, USA Measuring p(t) in the interior of a flow field is one of the most challenging measurements in our field of study. An accurate knowledge of these interior pressures is of considerable value for fundamental studies. Since the gradient of the pressure appears in the Navier-Stokes equations, knowledge of the pressure at a bounding surface, combined with operations on the measured velocity components within the flow field, can be used to infer the pressure at interior locations analytically.
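The idea of inferring interior pressure from measured velocities can be illustrated with a deliberately reduced toy version: steady, inviscid and one-dimensional, so that the momentum equation collapses to dp/dx = -rho*u*du/dx, which is differenced from the "measured" velocities and integrated inward from a boundary point of known pressure. The grid spacing, velocity profile and boundary pressure below are invented; real implementations, such as those assessed in the award-winning paper, operate on full unsteady planar PIV fields.

```python
# Toy pressure-from-velocity calculation (steady, inviscid, 1D):
# dp/dx = -rho * u * du/dx, integrated from a known boundary pressure.
# All numerical values here are invented for illustration.

RHO = 1000.0        # fluid density, kg/m^3
DX = 0.01           # grid spacing, m

# "Measured" velocities along a line of the flow field (m/s):
u = [2.0, 2.1, 2.3, 2.6, 3.0]

# Pressure gradient at interior points via central differences:
dpdx = [-RHO * u[k] * (u[k + 1] - u[k - 1]) / (2 * DX)
        for k in range(1, len(u) - 1)]

# Integrate inward from the boundary, where the pressure is assumed known:
p = [101325.0]                     # Pa, assumed boundary value
for g in dpdx:
    p.append(p[-1] + g * DX)       # simple rectangle-rule integration
```

As the flow accelerates the recovered pressure falls, consistent with Bernoulli; the paper's contribution concerns how measurement noise propagates through exactly this kind of differencing and integration, and how POD smoothing mitigates it.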
Bringing this long-recognized possibility to operational status has been greatly aided by the advent of particle image velocimetry (PIV), wherein 'instantaneous' planes of the velocity field are obtained. (The planar information, of course, falls short of the volumetric data that would be required for a complete measurement strategy.) In this paper [1] the authors first provide a valuable review of the literature in this area before presenting their original contribution. Operating with the constraint of incomplete information, the authors have significantly advanced this aspect of fluid mechanics measurements by: (i) performing error analysis evaluations of the extant methodologies, and (ii) introducing the use of POD (proper orthogonal decomposition) techniques to smooth the PIV data. Three globally unsteady flow fields are investigated in this paper. Two of the subject flows are simulations where the pressure can be compared with the inferred value using simulated PIV results (with the addition of typical measurement uncertainties), and the third test case is the unsteady flow in a planar diffuser. This paper is a benchmark contribution on the path to accurately inferring pressure values in the interior of a flow field. 2010 Award Winner—Measurement Science Achieving high effective Q-factors in ultra-high vacuum dynamic force microscopy Jannis Lübbe, Lutz Tröger, Stefan Torbrügge, Ralf Bechstein, Christoph Richter, Angelika Kühnle and Michael Reichling Fachbereich Physik, Universität Osnabrück, Barbarastraße 7, 49076 Osnabrück, Germany Nanoworld Services GmbH, Schottkystraße 10, 91058 Erlangen, Germany Institut für Physikalische Chemie, Johannes Gutenberg-Universität Mainz, Jakob-Welder Weg 11, 55099 Mainz, Germany This paper [2] presents a detailed methodology for achieving high effective Q-factors in a scanning force microscope.
Whilst this instrument, operated in the non-contact mode (NC-AFM), has become a standard tool for atomic scale surface characterization, the paper deals specifically with its operation in ultra-high vacuum (UHV). Performance of the system is measured by the effective Q-factor. Recognizing that many factors influence the Q-factor, the authors conduct a careful study comparing measurements with simulations based on the size of the cantilever. Furthermore, they introduce a methodology to investigate in detail how the effective Q-factor depends on the fixation technique of the cantilever, and describe a strategy for avoiding fixation loss. By taking the necessary care in mounting the cantilever, they show that it is possible to routinely obtain an effective Q-factor in NC-AFM measurements in high or ultra-high vacuum that is within a 20% margin of the intrinsic Q-factor. As a result, the Q-factor that can realistically be expected in an NC-AFM system sets a practical limit on the minimum detectable force gradient, of the order of 10⁻⁶ N m⁻¹ for room-temperature measurements. Several features in the format of the article help to make this an excellent paper. Following an informative abstract, the introduction sets the background for the novelty of the paper, before a detailed account in the following sections. The results section is detailed and convincing, and leads to a set of concise quantitative conclusions. There is also a good set of 33 references at the end of the paper. This paper was rated as excellent by the external referees in the initial refereeing process. It is one of 28 papers nominated this year for the best paper award in measurement science, and it gained the most votes from those shortlisted. It is bound to make an impact in the field of atomic force microscopy.
2010 Award Winner—Precision Measurement Interferometric determination of the topographies of absolute sphere radii using the sphere interferometer of PTB Guido Bartl, Michael Krystek, Arnold Nicolaus and Walter Giardini Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, D-38116 Braunschweig, Germany National Measurement Institute, Bradfield Road, West Lindfield, NSW 2070, Australia This paper [3] describes a new method to reconstruct the absolute shape of a sphere using a stitching technique. In general, accurate and absolute measurements of spherical shapes are of primary importance in lens fabrication, calibration of three-dimensional coordinate machines and volume determination of density standard spheres. Particularly in the most recent research activities conducted for a redefinition of the kilogram, accurate volume measurements of silicon spheres have been needed for a determination of the Avogadro constant by the x-ray crystal density method. For this purpose, the authors developed a new principle to reconstruct a whole spherical surface from a limited number of segments distributed on the surface of the sphere. In a conventional stitching technique, it has been difficult to completely separate the real shape of the sphere from the misalignment effect in optical interferometry. In order to cope with this problem, the authors developed a new mathematical model describing the offset, tilt and defocus in a Fizeau interferometer. In the model, those misalignment effects are expressed in a matrix form based on Givens rotations, and the real shape of the sphere is successfully deduced from the matrix solution. The authors also validated the correctness of the new method by comparing the results with a conventional roundness measurement. This paper has been downloaded more than one hundred times since its publication. The selection committee members for the Precision Measurement Award (Dr K Fujii, Prof. 
X Chen, Dr A Yacoot, Dr P Williams and Dr T Eom) selected this paper from a strongly competitive list of ten candidates for its sophisticated idea and originality. Considering its impact on topographic evaluation of spherical surfaces and also on the determination of the fundamental physical constants, the paper was felt to be a good winner of the MST Precision Measurement Award for 2010. 2010 Award Winners—Sensors and Sensing Systems Noncontact modulated laser calorimetry in a dc magnetic field for stable and supercooled liquid silicon Hidekazu Kobatake, Hiroyuki Fukuyama, Takao Tsukada and Satoshi Awaji Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Aoba, Sendai, 980-8577, Japan Department of Chemical Engineering, Tohoku University, Aramaki Aoba, Aoba-ku, Sendai, 980-8577, Japan Institute for Materials Research (IMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan An accurate value of the thermal conductivity of supercooled liquid silicon proves very helpful for improving the numerical heat flow models used for the production of high-quality silicon crystals. Measuring this important parameter is a challenge, since the accuracy is frustrated by convection and by imprecise non-contact temperature measurements, resulting in a wide variation of the values reported in the literature so far. By improving the non-contact temperature measurement on a magnetically levitated drop of silicon, the authors succeeded in a substantial reduction of the uncertainty in the measured values of the heat capacity and the thermal conductivity, including their temperature dependence. Modulation of the heating power and an accurate measurement of the amplitude and phase of the resulting temperature variation constitute the major contribution to a higher accuracy.
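The modulation principle just described can be illustrated with a toy lock-in demodulation: the amplitude and phase of the small temperature oscillation are recovered by multiplying with reference quadratures at the modulation frequency. All numbers here are invented for the sketch, not the authors' setup:

```python
import numpy as np

# Toy lock-in demodulation of a modulated-heating temperature response.
# Hypothetical signal: 1700 K baseline, 0.05 K oscillation at 5 Hz with
# phase -0.6 rad, plus measurement noise.
fs, f_mod, dur = 1000.0, 5.0, 10.0            # sample rate [Hz], modulation [Hz], duration [s]
t = np.arange(0.0, dur, 1.0 / fs)             # exactly 50 modulation periods
A_true, phi_true = 0.05, -0.6
rng = np.random.default_rng(1)
T = 1700.0 + A_true * np.sin(2 * np.pi * f_mod * t + phi_true) \
    + 0.01 * rng.standard_normal(t.size)

# Averaging products over an integer number of periods rejects the DC level
# and leaves the in-phase (I) and quadrature (Q) components of the response.
I = 2.0 * np.mean(T * np.sin(2 * np.pi * f_mod * t))
Q = 2.0 * np.mean(T * np.cos(2 * np.pi * f_mod * t))
A_est = np.hypot(I, Q)                        # recovered amplitude [K]
phi_est = np.arctan2(Q, I)                    # recovered phase [rad]
```

In a modulation calorimeter these two quantities, measured as functions of the modulation frequency, are what carry the heat capacity and thermal conductivity information.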
In their winning paper [4] the authors present a clear overview of their measurement setup and the experimental procedure and conditions, and give an extensive analysis of the parameters that play a part in the overall uncertainty. Measured values are compared with those obtained by other methods and reported in relevant literature. Moreover, the improvements of the instrumentation implemented by the authors contribute to a better understanding of the heat transport processes in liquid silicon. The four chairmen would like to thank the authors for choosing to publish their work in Measurement Science and Technology, and hope that other researchers enjoy reading these works and feel encouraged to submit their own best work to the journal. References [1] Charonko J J, King C V, Smith B L and Vlachos P P 2010 Assessment of pressure field calculations from particle image velocimetry measurements Meas. Sci. Technol. 21 105401 (15pp) [2] Lübbe J, Tröger L, Torbrügge S, Bechstein R, Richter C, Kühnle A and Reichling M 2010 Achieving high effective Q-factors in ultra-high vacuum dynamic force microscopy Meas. Sci. Technol. 21 125501 (9pp) [3] Bartl G, Krystek M, Nicolaus A and Giardini W 2010 Interferometric determination of the topographies of absolute sphere radii using the sphere interferometer of PTB Meas. Sci. Technol. 21 115101 (8pp) [4] Kobatake H, Fukuyama H, Tsukada T and Awaji S 2010 Noncontact modulated laser calorimetry in a dc magnetic field for stable and supercooled liquid silicon Meas. Sci. Technol. 21 025901 (9pp)
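The pressure-inference idea behind [1] can be reduced to its simplest form: for steady incompressible flow, the inviscid momentum equation gives the pressure gradient directly from velocity derivatives, which can then be integrated spatially. A minimal sketch, checked against an analytic stagnation-point flow (the paper itself treats unsteady, noisy PIV data):

```python
import numpy as np

# Pressure gradient from a velocity field via the steady, inviscid momentum
# equation: dp/dx = -rho * (u du/dx + v du/dy). Verified on the analytic
# stagnation-point flow u = a*x, v = -a*y, for which dp/dx = -rho*a^2*x.
rho, a = 1000.0, 2.0
x = np.linspace(-0.5, 0.5, 101)
X, Y = np.meshgrid(x, x, indexing="ij")       # axis 0 is x, axis 1 is y
u, v = a * X, -a * Y

dx = x[1] - x[0]
du_dx, du_dy = np.gradient(u, dx, dx)          # finite-difference derivatives
dp_dx = -rho * (u * du_dx + v * du_dy)         # x-component of grad(p)
```

For this linear velocity field the finite differences are exact; with real PIV data the derivatives amplify measurement noise, which is precisely why the error analyses and POD smoothing of [1] matter.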
Development of Land-Use Scenarios for Landscape-Ecological Questions
NASA Astrophysics Data System (ADS)
Fritsch, Uta
2002-04-01
The landscapes of Central Europe are the result of a long history of human land use with its diverse, partly competing demands. Owing to predominantly intensive use, the direct and indirect effects of land use have in many cases led to environmental problems. The discipline of landscape ecology has taken on the task of developing concepts for a sustainable use of the landscape. An important question in this context is the assessment of the possible consequences of land-use changes. Mathematical models are frequently employed to analyse the relevant processes in the landscape; they make it possible to study the landscape under current conditions or with respect to altered boundary conditions. A hypothetical change in land use, referred to as a land-use scenario, represents a substantial modification of these boundary conditions, because land use decisively influences the natural processes of the landscape. While the driving forces of such a change are predominantly governed by socio-economic and political decisions, the exact localization of the land-use changes is guided by the natural conditions of the area and partly follows recognizable rules. On the basis of these premises it is possible to develop spatially explicit land-use scenarios that can serve as input data for modelling various landscape-ecological questions, for example for investigating the influence of land use on the water balance, erosion risk or habitat quality. Within this dissertation, the grid-based deterministic allocation model luck (Land Use Change Scenario Kit) was developed for the explicit localization of land-use changes.
It is based on the spatial data commonly used in landscape ecology, such as land use, soil and topography, and follows the guiding principles of landscape planning when deriving scenarios. The model rests on the hypothesis that the land-use pattern can be described as a function of its landscape-ecological factors. In the model, the change potential of a land-use unit results from a combination of an assessment of the relative suitability of the site for the respective land use and a consideration of the site properties of the surrounding neighbours. The land-use change is implemented iteratively in order to reproduce the gradual process of landscape change. As a case study for the application of such spatially explicit land-use scenarios serves the question of the extent to which land-use changes influence flood generation. To estimate the influence on flood generation for each of the land-use categories (built-up, agricultural and near-natural areas), the land-use model luck offers an exemplary submodel of land-use change for each category: 1) Expansion of settlement area: this submodel rests on the assumption that settlements spread only in the direct neighbourhood of existing built-up areas and preferentially along development axes. Steep slopes are an obstacle to expansion at potential sites. 2) Set-aside of marginal arable land: following the hypothesis that the set-aside of arable land is guided by the potential yield of the sites, all arable sites are assessed accordingly in this submodel and the areas with the lowest productivity are set aside. Under homogeneous area properties, the set-aside areas are distributed randomly over the arable land.
3) Establishment of protected areas along riverbanks and floodplains: based on the thesis that sensitive areas whose protection can benefit the performance of the landscape are located along rivers, this submodel designates riverbank and floodplain areas worthy of protection on land currently under agricultural use. The size of the protected area is guided by the morphology of the surrounding landscape. The three submodels were validated with respect to their underlying hypotheses using many different approaches. The result of this intensive analysis shows a satisfactory suitability for each submodel. The modelling of land-use changes was carried out in three mesoscale river catchments with areas between 100 and 500 km² that differ markedly in their land use. Particular care was taken in the selection of the areas to ensure that one of them is under intensive agricultural use, one is densely settled and one is predominantly forested. With regard to their relevance for the present question, scenarios were derived from existing land-use trends for (1) the settlement area projected for the year 2010, (2) the possible consequences of the EU-wide Agenda 2000 agreement and (3) the 2001 amendment of the German Federal Nature Conservation Act. Each scenario was applied to the three study areas with the help of the model. For settlement expansion, realistic land-use patterns were generated in all three areas. Limitations arise in the search for marginal set-aside areas: here, under homogeneous area properties, the random distribution of set-aside areas led to an unrealistic result. The quality of the designation of protected areas is decisively tied to the current land use of the floodplain and the morphology of the terrain.
The best results are achieved when the riverbank and floodplain areas are predominantly under current arable use and the river course has incised itself into the relief. For each land-use trend, the hydrological effects are described exemplarily on the basis of a historical flood; no blanket statement on the influence of land use can, however, be derived from them. The study demonstrates the significance of the land-use pattern for the natural processes in the landscape and underlines the necessity of spatially explicit modelling for landscape-ecological questions at the mesoscale. Today's landscapes in Central Europe are the result of a long history of land use, which is characterised by many different demands. The immediate and long-term consequences of predominantly intensive land use have led to environmental problems in many cases. Therefore it is necessary to develop strategies for the maintenance of landscape efficiency which take into account the different claims of utilisation. In this context the estimation of the possible impacts of land-use changes represents an important problem. For the analysis of the relevant processes within the landscape, it is common to apply mathematical models. Such models enable the investigation of the landscape under current conditions or with regard to modified boundary conditions. A hypothetical alteration of land use, termed a land-use scenario, represents a substantial modification of the boundary conditions, because land use exerts a strong influence on the natural processes of the landscape. While the driving forces are predominantly governed by socio-economic and political decisions, the exact location of land-use changes within the landscape mainly depends on the natural conditions and partly follows recognizable rules.
Given these premises it is possible to develop land-use scenarios, which can serve as input data for the modelling of different questions of landscape ecology such as the influence of land use on the water balance, the danger of erosion or the quality of habitats. In the context of this thesis the grid-based deterministic allocation model luck (Land Use Change Scenario Kit) was developed for the allocation of land-use changes. It is based upon the types of spatial data commonly used in landscape ecology, such as information on land use, soils and topography. The derivation of scenarios follows the approaches of landscape planning. The model is based upon the hypothesis that the land-use pattern can be described as a function of its landscape-ecological factors. The potential of a site to become subject to land-use change results from a combination of its local qualities and the site characteristics of its neighbourhood. Land-use change is realised iteratively in order to simulate the gradual process of change in the landscape. The influence of land-use changes on flood generation serves as a case study to demonstrate the need for spatially explicit land-use scenarios. For each land-use category (built-up areas, agriculturally used areas and natural or semi-natural land) the model luck offers a submodel for investigating the effect of land-use changes on flood generation: 1) Expansion of settlement area: this submodel is based upon the assumption that settlements spread only in the neighbourhood of existing built-up areas and preferentially along infrastructural axes of development. Steep slopes inhibit spreading at potential locations. 2) Set-aside of marginal yield sites under agricultural use: the set-aside of arable land is based on the hypothesis that the selection of arable land to be set aside depends on the potential yield efficiency of the locations.
Within this submodel all fields under agricultural use are assessed to that effect and the ones with the least productive efficiency are selected as set-aside locations. In the case of homogeneous area qualities the set-aside locations are selected randomly. 3) Establishment of protected areas in waterside and riparian areas: this submodel takes into consideration that the protection of sensitive areas along the river courses may have positive consequences for the efficiency of the landscape. It therefore establishes protection zones on waterside and riparian sites currently under agricultural use that might be of value for nature conservation. The size of the protection area depends on the morphology of the surrounding landscape. The three submodels were validated with respect to the implied hypotheses with the help of many different approaches. The result of this intensive analysis shows a satisfactory suitability for each of the submodels. The simulation of land-use changes was carried out for three mesoscale river catchments with areas between 100 and 500 km². Special attention was paid to the fact that these areas should be markedly different in their land use: one study area is predominantly under intensive agricultural use, one is densely populated and the third is largely covered by forest. With regard to their relevance to the question at hand, scenarios were derived from existing land-use trends: for the settlement area projected for the year 2010, for the possible consequences of the EU-wide Agenda 2000 agreement, and for the amended federal conservation law of 2001, which promotes the enlargement of protected areas. Each scenario was applied to the three study areas utilizing the model luck. For the expansion of settlement areas, realistic land-use patterns were generated in all three study areas. Limitations arose only in the context of the search for marginal yield fields.
Here, the random distribution of areas to be set aside under homogeneous conditions led to unrealistic results. The quality of the establishment of protected areas in waterside and riparian areas is substantially tied to the current land use and the morphology of the area. The best results for this submodel are achieved if the waterside and riparian areas are mainly arable land and if the river has incised its course into the terrain. The hydrological consequences are described exemplarily for each land-use trend with a historical flood event. The interpretation of the hydrographs does not allow global statements about the influence of land use. The study demonstrates the significance of the land-use pattern for the natural processes in the landscape and underlines the necessity of spatially explicit modelling for landscape-ecological questions at the mesoscale.
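The allocation logic described above (a score combining site suitability with the state of the neighbourhood, applied iteratively) can be sketched in a few lines. This is a hypothetical toy, not the luck implementation, and all parameters are invented:

```python
import numpy as np

# Toy iterative land-use allocation: cells compete via a score that combines
# site suitability with the fraction of neighbours already converted,
# mimicking the suitability-plus-neighbourhood idea described above.
# Edges wrap around (toroidal) purely for brevity.
def allocate(landuse, suitability, target, demand, w_neigh=0.5, iters=10):
    landuse = landuse.copy()
    per_iter = max(1, demand // iters)
    converted = 0
    while converted < demand:
        mask = (landuse == target).astype(float)
        # fraction of the 8 neighbours already in the target class
        neigh = sum(np.roll(np.roll(mask, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)) / 8.0
        score = suitability + w_neigh * neigh
        score[landuse == target] = -np.inf      # converted cells drop out
        n = min(per_iter, demand - converted)
        best = np.argpartition(score.ravel(), -n)[-n:]
        landuse.ravel()[best] = target          # convert the n best cells
        converted += n
    return landuse

rng = np.random.default_rng(0)
grid = np.zeros((20, 20), dtype=int)
grid[10, 10] = 1                                # a settlement seed cell
out = allocate(grid, rng.random((20, 20)), target=1, demand=30)
```

The iterative structure matters: because the neighbourhood term is recomputed each round, newly converted cells attract further conversion, reproducing the gradual, clustered growth the thesis describes.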
NASA Astrophysics Data System (ADS)
Foss, John; Dewhurst, Richard; Fujii, Kenichi; Regtien, Paul
2009-05-01
From 1991 to 2004, Measurement Science and Technology had awarded a Best Paper prize. The Editorial Board of this journal believed that such a prize was an opportunity to thank authors for submitting their work, and that it served as an integral part of the on-going quality review of the journal. The current breadth of topical areas that are covered by MST has made it advisable to expand the recognition of excellent publications. Hence, since 2005 the Editorial Board Members have presented 'Outstanding Paper Awards' in four subject categories: Measurement Science; Fluid Mechanics; Precision Measurements; and Sensors and Sensing Systems. 2008 Award Winners—Measurement Science Noise level estimation in weakly nonlinear slowly time-varying systems J R M Aerts, J Lataire, R Pintelon and J J J Dirckx Laboratory of Biomedical Physics, Universiteit Antwerpen, Groenenborgerlaan 171, 2020 Antwerp, Belgium and Department of Fundamental Electricity and Instrumentation (ELEC), Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium This paper [1] examines new methods to perform noise estimation in weakly nonlinear time-varying systems. In a clear presentation that describes the problem, the paper concentrates on weakly nonlinear phenomena in the acoustic regime. However, both the concepts and theory developed have wide applicability in other fields within measurement science wherever there is a time-varying nonlinear response. The theory uses two methods to estimate noise. The first is called the background frequency method, and the second is a periodic difference method. Both methods have their advantages, and disadvantages, which the authors highlight in a balanced account. They also spend some effort in validating the two approaches. Just as importantly, applications of the theory are presented as two experimental case histories. The first is a study of a vibrating membrane from a high quality microphone. 
This is an example of a time-invariant system, and the results show a frequency spectrum of the vibration velocity in the typical range of 2.0 kHz to 8.0 kHz. Secondly, as an example of a time-variant system, a membrane forming the middle ear of a gerbil is examined, in a similar frequency range. The paper shows that this time variation can be misinterpreted as an elevated noise floor, with the classical noise estimation method giving an incorrect result. In contrast, these new methods help to retrieve the correct noise information from experimental measurements, and to derive meaningful signal-to-noise ratios. Several features in the format of the article help to make this an excellent paper. Following an informative abstract, it has a concise introduction followed by a detailed formulation of the problem to be addressed. This helps to place the important theory of the paper into context in the next section—one in which an appendix is used for derivation of a useful algorithm. A following section has a detailed account of experiments that are used to demonstrate the effect of nonlinear time-varying systems. They show how the resultant noise measurements can be analysed using the paper's theory. A good range of velocity-frequency spectra are used to emphasize the concepts and techniques presented. The conclusions are concise, and the paper is rounded off with a solid reference list of some 20 papers describing related work, with most papers cited from 2004 onwards. This paper was rated as excellent by the external referees, and has been downloaded several hundred times since its publication. It is one of about 20 papers nominated this year for the best paper award in measurement science, and it gained the most votes from those shortlisted. With a generic appeal across the broad spectrum of measurement science, it also has potential relevance to applications in the life sciences. 
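The periodic-difference idea can be illustrated with a toy example: subtracting the response one period earlier cancels the deterministic (periodic) part exactly, leaving noise with twice the single-measurement variance. This is a simplification of the paper's estimator, with invented signal parameters:

```python
import numpy as np

# Toy periodic-difference noise estimation for a time-invariant system:
# a periodic signal plus white noise of (assumed) standard deviation 0.1.
fs, f0, n_per = 10000, 100, 40        # sample rate [Hz], signal [Hz], periods
t = np.arange(n_per * fs // f0) / fs
rng = np.random.default_rng(1)
sigma = 0.1
x = np.sin(2 * np.pi * f0 * t) + sigma * rng.standard_normal(t.size)

P = fs // f0                          # samples per signal period
d = x[P:] - x[:-P]                    # the periodic part cancels exactly
sigma_est = d.std() / np.sqrt(2.0)    # var(difference) = 2 * var(noise)
```

For a genuinely time-varying system the cancellation is no longer exact, which is the situation the paper's more elaborate methods are designed to handle.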
2008 Award Winners—Fluid Mechanics Time-resolved entropy measurements using a fast response entropy probe Michel Mansour, Ndaona Chokani, Anestis I Kalfas and Reza S Abhari Laboratory for Energy Conversion, Department of Mechanical and Process Engineering, ETH Zurich, Zurich, Switzerland and Department of Mechanical Engineering, Aristotle University of Thessaloniki, Greece The paper [2] describes the development of a probe for measuring time-resolved entropy fluctuations. The technique was developed for use in turbomachinery applications where knowledge of the unsteady entropy in the wake of a rotating blade row can provide significant insight into the loss (dissipation) generation mechanisms. There is no direct method to measure entropy; hence the authors use an approach where the stagnation pressure and stagnation temperature are measured on the same probe. The stagnation, or 'total', pressure has been the subject of several of the authors' papers in the last few years. The more significant challenge was the time-resolved measurement of total temperature. The high Mach numbers present in the flows of interest lead to significant temperature gradients in the probe itself which contaminate this type of measurement. The paper describes a novel probe design along with a detailed heat transfer calculation and uncertainty analysis. The fluid mechanics members of the MST Editorial and Advisory Boards—T Fansler, J Foss, M Koochesfahani, I Marusic, S Morris, K Okamoto, F Scarano, E Tomita and M Wernet—considered all papers in the Fluid Mechanics section and the others that related to fluid mechanics. In the final analysis, the strengths of this paper strongly recommended it for the Award. 
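Combining the two stagnation quantities into an entropy change uses the ideal-gas relation Δs = c_p ln(T₀/T₀,ref) − R ln(p₀/p₀,ref); a minimal sketch with standard air constants and invented probe readings:

```python
import math

# Entropy change from measured stagnation temperature and pressure,
# relative to a reference state (ideal-gas relation). The gas constants
# are standard air values; the readings below are purely illustrative.
CP, R_GAS = 1005.0, 287.0             # J/(kg K) for air

def delta_s(T0, p0, T0_ref, p0_ref):
    return CP * math.log(T0 / T0_ref) - R_GAS * math.log(p0 / p0_ref)

# A 5% stagnation-pressure deficit at unchanged T0, as in a blade wake,
# shows up as a positive entropy change (a loss):
ds = delta_s(T0=300.0, p0=95.0e3, T0_ref=300.0, p0_ref=100.0e3)
```

This is why a fast-response probe carrying both sensors is so valuable: resolving p₀ and T₀ simultaneously in time resolves the loss generation itself.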
2008 Award Winners—Precision Measurement Self-calibration of divided circles on the basis of a prime factor algorithm R Probst Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, D-38116 Braunschweig, Germany This paper [3] describes a new method for the self-calibration of divided circles, which may be widely used for angle calibration of circular scales, precision polygons, indexing tables and angle encoders. In the field of angle metrology, two self-calibration methods have been proposed recently for high-precision angle encoders. One is called the equal-division-averaged (EDA) method proposed by Kajitani and Masuda in 1989, in which a divided circle of an angle encoder is compared to a second divided circle in several averaged distributions. The other method, proposed by Ernst in 1994, uses a divided circle with a dual exponential division number. Both methods are used for angle standards without any external reference. However, both also take a rather long time to complete a set of measurements. In this paper, the author proposed a new algorithm called the prime factor division (PFD) method based on the discrete Fourier transform (DFT). This algorithm uses the circular division number N expressed as a product N = R × S, where the factors R and S are mutually prime integers. For the self-calibration of a divided circle, N difference measurements between R angle positions in a regular distribution and one reference angle position determined by S are evaluated by a two-dimensional DFT, yielding the N absolute division errors. The factor R is preferably chosen to be small, down to a minimum of R = 2, whereas the factor S may be as large as appropriate for the division number N of interest. In the case of a cross-calibration between two divided circles, the PFD method reduces the number of measurements from N² to (R + 1) × N, resulting in a significant shortening of the time needed for the cross-calibration between two divided circles.
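The prime-factor structure rests on the Good-Thomas DFT decomposition: for N = R × S with gcd(R, S) = 1, a length-N DFT is equivalent to an R-by-S two-dimensional DFT under Chinese-remainder index maps. A numpy check for a toy size N = 15 (the paper's division numbers are of course much larger):

```python
import numpy as np

# Good-Thomas (prime-factor) DFT decomposition: for N = R*S, gcd(R,S) = 1,
# DFT_N(x)[(k1*S + k2*R) % N] == FFT2D(y)[k1, k2], where
# y[n1, n2] = x[(n1*S*t1 + n2*R*t2) % N] is the CRT input re-indexing.
R, S = 3, 5
N = R * S
t1 = pow(S, -1, R)                        # S^-1 (mod R)
t2 = pow(R, -1, S)                        # R^-1 (mod S)

rng = np.random.default_rng(2)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

y = np.empty((R, S), dtype=complex)
for n1 in range(R):
    for n2 in range(S):
        y[n1, n2] = x[(n1 * S * t1 + n2 * R * t2) % N]

X = np.fft.fft(x)
Y = np.fft.fft2(y)
ok = all(np.isclose(Y[k1, k2], X[(k1 * S + k2 * R) % N])
         for k1 in range(R) for k2 in range(S))
```

The two-dimensional factorization is what lets the PFD method evaluate all N division errors from far fewer measurements than a full N² cross-calibration.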
The author tested the new algorithm by calibrating a regular precision polygon and a gearwheel, and evaluated the performance by selecting the appropriate combinations of (R + 1) × N difference measurements from a complete set of N² cross-calibration measurements. He showed that the complex DFT algorithm is a key tool for the treatment of the division errors and the calibration of divided circles, especially by use of the PFD method. This paper was rated as excellent by the nomination committee members on precision measurement: Dr K Fujii (chairman), Prof. X Chen, Dr A Yacoot, Dr P Williams and Prof. Z Wei, in a strongly competitive list of 15 candidates, and has been downloaded several hundred times since its publication. Considering its impact on practical calibration for angle metrology and also its sophisticated idea on the algorithm, the paper was nominated for the MST Precision Measurement Award from the papers published in 2008. 2008 Award Winners—Sensors and Sensing Systems A minimally invasive in-fiber Bragg grating sensor for intervertebral disc pressure measurements Christopher R Dennison, Peter M Wild, David R Wilson and Peter A Cripton Department of Mechanical Engineering, University of Victoria, PO Box 3055, Victoria, British Columbia, V8W 3P6, Canada, Division of Orthopaedic Engineering Research, Department of Orthopaedics, University of British Columbia, Canada and Department of Mechanical Engineering, University of British Columbia, Canada A major cause of back pain is the degeneration of intervertebral discs (IVDs). The pressure distribution in IVDs appears to be a good indicator of the degeneration state. To measure this pressure a very small sensor is required. The authors of this paper [4] have developed such a sensor, based on an in-fibre Bragg grating (FBG).
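An FBG reflects at the Bragg wavelength λ_B = 2 n_eff Λ, and pressure is read out from the shift of that wavelength through a calibrated sensitivity. A minimal sketch of the read-out arithmetic; every number below is hypothetical, not the calibration of the sensor in [4]:

```python
# Bragg read-out sketch. The reflected wavelength is lambda_B = 2*n_eff*Lambda;
# pressure follows from the measured wavelength shift via an assumed,
# hypothetical sensitivity of the packaged probe.
n_eff  = 1.447                 # effective refractive index of the fibre core
period = 535.0e-9              # grating period Lambda [m]
lam_B  = 2.0 * n_eff * period  # Bragg wavelength [m], ~1548 nm here

S_p   = 5.0e-12                # assumed pressure sensitivity [m/MPa]
d_lam = 2.0e-12                # measured wavelength shift [m]
p     = d_lam / S_p            # inferred pressure [MPa]
```

The engineering challenge the paper addresses is precisely S_p: a bare fibre converts pressure into very little strain, so the packaging must amplify the response while keeping the probe small enough for in vivo use.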
Compared to a bare fibre, this sensor has a substantially higher sensitivity, and compared to the currently used strain gauge sensors the new sensor has much smaller dimensions, allowing in vivo pressure measurement. The paper starts with a clear introduction in which the problem of present IVD pressure sensors is described and the need for smaller sensors is motivated. In subsequent sections the construction of the probe is described in detail, and a finite-element model is used to predict the sensor behaviour. Experimental results are reported in detail, including uncertainty margins. Comparative experiments between the strain gauge based sensors and the new FBG sensor are presented as well, demonstrating the superiority of the latter. The paper received high quality ratings by the reviewers, and the appreciation by the readers of MST is confirmed by a high number of downloads. The reduction of sensor diameter (from a few mm down to 400 μm) and the higher flexibility are important features of this application. The use of a silicone sealant in a steel hypodermic tube around the fibre, resulting in a higher sensitivity compared to a bare fibre, is a significant achievement of this research. Its sensitivity can further be increased by a reduction of the fibre diameter: this is the subject of another excellent article [5] by the authors, which appeared in the December issue of 2008. The Outstanding Paper Awards, comprising a cash honorarium and certificate, will be presented to the authors of the winning papers at suitable venues in the near future. The Editorial Board would like to congratulate the winning authors and would like to encourage all researchers to think of Measurement Science and Technology as the home for your best submissions. References [1] Aerts J R M, Lataire J, Pintelon R and Dirckx J J J 2008 Noise level estimation in weakly nonlinear slowly time-varying systems Meas. Sci. Technol. 
19 105101 (14pp) [2] Mansour M, Chokani N, Kalfas A I and Abhari R S 2008 Time-resolved entropy measurements using a fast response entropy probe Meas. Sci. Technol. 19 115401 (10pp) [3] Probst R 2008 Self-calibration of divided circles on the basis of a prime factor algorithm Meas. Sci. Technol. 19 015101 (11pp) [4] Dennison C R, Wild P M, Wilson D R and Cripton P A 2008 A minimally invasive in-fiber Bragg grating sensor for intervertebral disc pressure measurements Meas. Sci. Technol. 19 085201 (12pp) [5] Dennison C R and Wild P M 2008 Enhanced sensitivity of an in-fibre Bragg grating pressure sensor achieved through fibre diameter reduction Meas. Sci. Technol. 19 125301 (11pp)
NASA Astrophysics Data System (ADS)
Foss, John; Dewhurst, Richard; Fujii, Kenichi; Regtien, Paul
2010-06-01
Since 1991, Measurement Science and Technology has awarded a Best Paper prize. The Editorial Board of this journal believes that such a prize is an opportunity to thank authors for submitting their work, and serves as an integral part of the on-going quality review of the journal. The current breadth of topical areas that are covered by MST has made it advisable to expand the recognition of excellent publications. Hence, since 2005 the Editorial Board have presented 'Outstanding Paper Awards' in four subject categories: Fluid Mechanics; Measurement Science; Precision Measurements; and Sensors and Sensing Systems. This year also saw the introduction of a new category—Optical and Laser-based Techniques. 2009 Award Winners—Fluid Mechanics Digital particle image velocimetry (DPIV) robust phase correlation Adric Eckstein and Pavlos P Vlachos Department of Mechanical Engineering, Virginia Tech, Blacksburg, VA, USA This paper [1] represents a valuable improvement to the phase-only correlation technique first proposed by Wernet in this journal in 2005 (Wernet M 2005 Symmetric phase-only filtering: a new paradigm for DPIV data processing Meas. Sci. Technol. 16 601-18) for particle-image-velocimetry (PIV) measurements of fluid flow. The authors establish a sound theoretical foundation and clearly describe the working principle of their robust phase correlation method. The methodology for assessing performance is excellent. Detailed results on several internationally recognized PIV test cases are presented. The robust phase correlation method is of general applicability and therefore can be expected to have substantial impact in this very active area of fluid-mechanics measurements.
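The phase-only principle behind this family of methods can be demonstrated on a synthetic image pair: normalizing the cross-spectrum to unit magnitude whitens it, so the correlation plane collapses to a sharp peak at the displacement. A toy of the principle only; the paper's robust phase correlation adds an optimized spectral filter on top:

```python
import numpy as np

# Phase-only correlation on a synthetic "exposure" pair with a known shift.
rng = np.random.default_rng(3)
a = rng.random((64, 64))                      # first exposure
dy, dx = 5, 3
b = np.roll(a, (dy, dx), axis=(0, 1))         # second exposure, shifted

Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
cross = Fa * np.conj(Fb)
r = np.fft.ifft2(cross / np.abs(cross)).real  # phase-only correlation plane
peak = np.unravel_index(np.argmax(r), r.shape)
shift = tuple((-p) % 64 for p in peak)        # recovered (dy, dx)
```

With a circular shift the recovery is exact; the point of the robust variant is to keep the peak sharp and unbiased when real particle images violate these idealizations.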
2009 Award Winner—Precision Measurement A nanonewton force facility and a novel method for measurements of the air and vacuum permittivity at zero frequencies V Nesterov Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, D-38116 Braunschweig, Germany This paper [2] describes a new method for measuring the air and vacuum permittivity by observing a nanonewton force on a disk-pendulum. The frequency dependence of the vacuum permittivity, ε₀, has been one of the fundamental questions in physics. Although it is confidently believed that there is no frequency dependence in ε₀, very few experiments have been conducted to verify the properties of ε₀. Especially in the region near zero frequencies, no experiment has been done in vacuum. The precise measurement of the vacuum permittivity is therefore of fundamental interest to test the correctness of Maxwell's equations in an electrostatic field. In this paper, the author developed a new principle using a disk-pendulum, where the displacement of the disk-pendulum in the presence of an electrostatic field is detected by optical interferometry. The nanonewton forces generated by the electrostatic field have been measured to evaluate the air and vacuum permittivity at zero frequencies, achieving a relative standard uncertainty of 10⁻⁵ in the measurement of ε₀. The measurement system developed here also opens a new way towards a force standard in the nanonewton region, investigation of the Casimir effect, direct measurement of laser power, etc. This paper has been downloaded more than 200 times since its publication. The selection committee members on precision measurement selected this paper from a strongly competitive list of eleven candidates for its sophisticated idea, originality in apparatus, and excellent result with uncertainty evaluation.
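The underlying relation is the parallel-plate electrostatic force F = ε₀AU²/(2d²): with the geometry and voltage known, a measured nanonewton-scale force yields ε₀. A minimal arithmetic sketch with illustrative numbers only (not the values of the PTB facility):

```python
# Electrostatic parallel-plate force F = eps0 * A * U^2 / (2 * d^2),
# inverted to infer the permittivity from a measured force.
# All quantities are synthetic and purely illustrative.
A_el   = 1.0e-4        # electrode area [m^2]
U      = 5.0           # applied voltage [V]
d      = 1.0e-3        # electrode gap [m]
F_meas = 1.10675e-8    # "measured" force [N], ~11 nN

eps0_est = 2.0 * F_meas * d**2 / (A_el * U**2)   # permittivity [F/m]
```

The experimental difficulty sits entirely in measuring such an ~11 nN force with interferometric displacement detection, which is what the disk-pendulum facility provides.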
Considering its impact on fundamental physics and also on precision force measurement in the nanonewton region, the paper was nominated for the MST Precision Measurement Award from the papers published in 2009. 2009 Award Winners—Sensors and Sensing Systems High-resolution and compact refractometer for salinity measurements D Malardé, Z Y Wu, P Grosso, J-L de Bougrenet de la Tocnaye and M Le Menn Optics Department, UMR CNRS 6082 FOTON, TELECOM Bretagne, CS 83818, 29238 Brest Cedex 3, France Laboratoire de Métrologie et de Chimie Océanographique, SHOM, 13 rue du Châtelier, BP 30316, 29603 Brest Cedex, France The absolute salinity of seawater is an important component in the determination of the ocean's thermodynamic properties. This paper [3] presents a measurement system for very accurate determination of the salinity. The method is based on measuring the refractive index of seawater, a parameter that is related to the salinity. Although the method is essentially simple (a prism and a position-sensitive detector, PSD, for direct measurement of the angle of refraction), the refraction also depends on several further parameters (temperature, pressure, wavelength). Moreover, obtaining accurate salinity values requires a high resolution in refractive index, of the order of 10⁻⁷, under all possible conditions. Starting with a list of notations and symbols, the paper follows with an overview of existing methods for salinity measurements and their performances. Next, the principle based on refractometry is introduced, and a theoretical model is presented, comprising a large number of parameters that influence the properties of the refractometer. From this analysis follows an optimization of the geometry of the device. The sensitivities to salinity and other water parameters are estimated from theory and verified experimentally. Salinity measurements are compared with those using standard salinometers, and show an uncertainty of less than a few 10⁻³ psu (practical salinity units).
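The required index resolution of order 10⁻⁷ can be connected to the salinity target with a one-line budget. The sensitivity value used here is an assumed order-of-magnitude figure for seawater, not a number taken from the paper.

```python
# Assumed sensitivity of the seawater refractive index to salinity, in
# refractive-index units per psu (order-of-magnitude placeholder only).
DN_DS = 2e-4

def required_index_resolution(target_salinity_resolution_psu):
    """Index resolution needed to resolve a given salinity step."""
    return DN_DS * target_salinity_resolution_psu

# Resolving millipsu salinity steps then demands an index resolution of
# order 1e-7, consistent with the requirement discussed above.
dn = required_index_resolution(1e-3)
```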
The obtained resolution is about 4 × 10⁻⁷, limited by the readout electronics of the PSD. It is demonstrated that this can be further improved to achieve the required resolution below 10⁻⁷. The authors make clear that their prototype, about 12 cm in length, is an important step towards a fully in situ oceanographic salinometer, but that further testing is required to confirm reliability. Indeed, the paper does not report on long-term effects of, for instance, contamination of the optical parts. Hopefully this will be investigated and published in the next stage of the research. 2009 Award Winners—Measurement Science Analysis and mitigation of measurement uncertainties in the traceability chain for the calibration of photovoltaic devices Harald Müllejans, Willem Zaaiman and Roberto Galleano European Commission, Joint Research Centre, Institute for Energy, Renewable Energy Unit, European Solar Test Installation, Via E Fermi 2749, I-21027 Ispra (Va), Italy The energy output and efficiency of different types of photovoltaic (PV) devices are essential characteristics of photovoltaics. They can only be assessed from cell performance under standard test conditions, with traceability back to international standards. As this paper [4] points out in its introduction, the reliability of the measurements and their uncertainty are of crucial importance to manufacturers, operators and investors. Several features in the format of the article help to make this an excellent paper. Following an informative abstract, the paper has a concise introduction followed by a detailed formulation of the problem to be addressed. After a discussion of present-day international codes related to the measurement and performance of PVs, the paper goes on to describe one of the most difficult issues in traceability, that of deriving an irradiance quantity which is normally measured by a reference PV device.
This might appear straightforward, but the paper highlights the difficulties of measuring the irradiance of sunlight (either simulated or natural) during the evaluation of the electrical performance of PV devices. The calibration of any PV device essentially consists of the measurement of the I-V characteristics, by sweeping the device from its short-circuit condition (SC) to its open-circuit condition (OC) when exposed to (pulsed or continuous) simulated or natural sunlight. In this paper, all measurements were carried out at the European Solar Test Installation at Ispra in Italy. After describing the measurement methods, the authors discuss at length the calculations of measurement uncertainty. This leads to a rigorous discussion of relative uncertainties from measurement parameters. Uncertainties arise from a range of parameters, including small contributions from electrical, temperature and fill-factor parameters; the greatest, however, arise from uncertainties both in the optical source and in the reference cell. As a consequence, the authors make a convincing case both for identifying the sources of uncertainty and for reducing overall uncertainties from ±2.6% to below ±2%. The conclusions are concise, and the paper is rounded off with a list of 34 references, with several papers cited from 2005 onwards. This paper was rated as excellent by external referees, and has been downloaded several hundred times since its publication. It is one of 30 papers nominated this year in the Measurement Science category, and gained the most votes from those short-listed. The discussion on measurement uncertainty sets a fine example to all researchers involved in experimental measurement science.
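A budget like the one discussed above is typically combined GUM-style: the root sum of squares of independent relative components, expanded with a coverage factor. The component names and values below are invented placeholders to show the mechanics, not the authors' figures.

```python
import math

# Hypothetical relative standard uncertainties, in per cent. The names
# mirror typical contributions in a PV calibration chain, but the
# numbers are illustrative only.
components_pct = {
    "reference cell calibration": 1.5,
    "irradiance nonuniformity of the light source": 1.0,
    "temperature correction": 0.3,
    "electrical I-V measurement": 0.2,
}

# Root sum of squares for independent contributions ...
combined_pct = math.sqrt(sum(u**2 for u in components_pct.values()))
# ... expanded with coverage factor k = 2 (approx. 95 % confidence).
expanded_pct = 2.0 * combined_pct
```

With these placeholder numbers the dominant terms are, as in the paper, the reference device and the optical source; shrinking either has far more effect on the combined value than polishing the already small electrical contributions.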
2009 Award Winners—Optical and Laser-based Techniques Measurement of tiny droplets using a newly developed optical fibre probe micro-fabricated by a femtosecond pulse laser T Saito, K Matsuda, Y Ozawa, S Oishi and S Aoshima Shizuoka University, 3-5-1 Johoku, Naka-ku Hamamatsu, Shizuoka 432-8561, Japan CRL, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku Hamamatsu, Shizuoka 434-8601, Japan This Award goes to a paper [5] describing a fascinating and novel optical fibre probe applied to measurements in gas-liquid two-phase flows. Optical probes have already been widely applied to two-phase flows. These include four-tip optical probes (F-TOP) for millimetre-size bubbles/droplets, and single-tip optical fibre probes (S-TOP) for sub-millimetre bubbles/droplets. This new paper describes how micro-machining a groove into the side of the silica fibre using a femtosecond laser system leads to a new optical fibre probe (Fs-TOP), capable of measuring the velocity and size of droplets down to about 50 µm in diameter. A useful introduction describes the importance of the characterization required in gas-liquid two-phase flows. Measurements are required in a variety of research fields, e.g. sprays, automotive engines, fine chemistry and nuclear power plants. Characterization of bubbles/droplets at high density has been desired for both laboratory and industrial use. Towards this goal, this paper describes the use of optical fibre probes modified with a side-wall groove. The authors point out that whereas the S-TOP is made very easily, with its tip simply micro-ground into a wedge shape, the Fs-TOP is much more difficult to fabricate: in addition to a wedge tip, fabrication involves a femtosecond laser system with nano-order six-axis automatic optical stages and confocal optics. Hence it is important to select intelligently between these two types of probe depending on the measurement objective.
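The measuring principle shared by such probes reduces to simple timing arithmetic. The sketch below is a generic two-sensing-point model with hypothetical numbers, not the paper's actual signal processing for the grooved Fs-TOP.

```python
def droplet_velocity_and_chord(sensing_spacing_m, transit_time_s, residence_time_s):
    """A probe with two sensing points a known distance apart sees the
    same gas-liquid interface twice: the velocity follows from the
    transit time between the two signals, and the droplet chord length
    from the velocity multiplied by the droplet's residence time."""
    velocity = sensing_spacing_m / transit_time_s
    chord = velocity * residence_time_s
    return velocity, chord

# Hypothetical example: 100 um sensing-point spacing, 50 us transit and
# 25 us residence give 2 m/s and a 50 um chord, i.e. the droplet scale
# quoted above.
v, chord = droplet_velocity_and_chord(100e-6, 50e-6, 25e-6)
```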
Moreover, as with any new type of probe, it is first necessary to validate its performance against probes with accepted performance characteristics. After describing the experimental arrangement, the authors discuss at length the performance of the new Fs-TOP and, for comparison, that of the S-TOP. Additionally, results from the Fs-TOP are compared with those from visualization of the droplets by a high-speed video camera, and show satisfactory agreement. After a lengthy presentation of results and discussion, the authors produce a concise set of conclusions, including a summary of the small perturbation in droplet size arising from the presence of the probe. The paper was one of 22 papers nominated this year for the Best Paper Award in Optical and Laser-based Techniques. It crosses the disciplines of optical measurement science and measurements required for the characterization of two-phase fluid flow. Exploiting new technological facilities in micromachining, the paper is an excellent example of the new opportunities that may arise in measurement. References [1] Eckstein A and Vlachos P P 2009 Digital particle image velocimetry (DPIV) robust phase correlation Meas. Sci. Technol. 20 055401 (14pp) [2] Nesterov V 2009 A nanonewton force facility and a novel method for measurements of the air and vacuum permittivity at zero frequencies Meas. Sci. Technol. 20 084012 (6pp) [3] Malardé D, Wu Z Y, Grosso P, de Bougrenet de la Tocnaye J-L and Le Menn M 2009 High-resolution and compact refractometer for salinity measurements Meas. Sci. Technol. 20 015204 (8pp) [4] Müllejans H, Zaaiman W and Galleano R 2009 Analysis and mitigation of measurement uncertainties in the traceability chain for the calibration of photovoltaic devices Meas. Sci. Technol. 20 075101 (12pp) [5] Saito T, Matsuda K, Ozawa Y, Oishi S and Aoshima S 2009 Measurement of tiny droplets using a newly developed optical fibre probe micro-fabricated by a femtosecond pulse laser Meas. Sci. Technol.
20 114002 (12pp)
Large-scale hydrological modelling in the semi-arid north-east of Brazil
NASA Astrophysics Data System (ADS)
Güntner, Andreas
2002-07-01
Semi-arid areas are, owing to their climatic setting, characterized by small water resources. An increasing water demand as a consequence of population growth and economic development, together with a decreasing water availability in the course of possible climate change, may aggravate the water scarcity that in many of these areas already exists under present-day conditions. Understanding the mechanisms and feedbacks of complex natural and human systems, together with the quantitative assessment of future changes in volume, timing and quality of water resources, is a prerequisite for developing sustainable water-management measures that enhance the adaptive capacity of these regions. For this task, dynamic integrated models, containing a hydrological model as one component, are indispensable tools. The main objective of this study is to develop a hydrological model for the quantification of water availability, in view of environmental change, over a large geographic domain of semi-arid environments. The study area is the Federal State of Ceará (150 000 km²) in the semi-arid north-east of Brazil. Mean annual precipitation in this area is 850 mm, falling in a rainy season of about five months. Because the area is mainly characterized by crystalline bedrock and shallow soils, surface water provides the largest part of the water supply. The area has recurrently been affected by droughts, which caused serious economic losses and social impacts such as migration from the rural regions. The hydrological model Wasa (Model of Water Availability in Semi-Arid Environments) developed in this study is a deterministic, spatially distributed model composed of conceptual, process-based approaches. Water availability (river discharge, storage volumes in reservoirs, soil moisture) is determined at daily resolution. Sub-basins, grid cells or administrative units (municipalities) can be chosen as spatial target units.
The administrative units enable the coupling of Wasa, within the framework of an integrated model, to modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture in particular the effect on soil moisture and runoff generation of the structured variability of terrain, soil and vegetation characteristics along toposequences. Lateral hydrological processes at the hillslope scale, such as the reinfiltration of surface runoff, which are of particular importance in semi-arid environments, can thus also be represented, in simplified form, within the large-scale model. Depending on the resolution of the available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but through the distribution of sub-scale units and through statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa that account for specific features of semi-arid hydrology are: (1) A two-layer evapotranspiration model comprises the energy transfer at the soil surface (including soil evaporation), which is important in view of the mainly sparse vegetation cover. Additionally, vegetation parameters are differentiated in space and time depending on the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach is applied that accounts for different reservoir size classes and their interaction via the river network. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa.
(5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied to generate rainfall time series of higher temporal resolution. All model parameters of Wasa can be derived from physiographic information on the study area; thus, model calibration is, in principle, not required. Applications of Wasa to historical time series generally result in a good model performance when the simulated river discharge and reservoir storage volumes are compared with observed data for river basins of various sizes. The mean water balance as well as the high interannual and intra-annual variability are reasonably represented by the model. The limitations of the modelling concept appear most markedly for sub-basins with a runoff component from deep groundwater bodies, whose dynamics cannot be satisfactorily represented without calibration. Further results of the model applications are: (1) Lateral processes of redistribution of runoff and soil moisture at the hillslope scale, in particular the reinfiltration of surface runoff, lead to markedly smaller discharge volumes at the basin scale than the simple sum of the runoff of the individual sub-areas. These processes should therefore also be captured in large-scale models. Their different relevance under different conditions is demonstrated by a larger percentage decrease of discharge volumes in dry than in wet years. (2) Precipitation characteristics have a major impact on the hydrological response of semi-arid environments. In particular, rainfall intensities underestimated in the rainfall input, owing to the coarse temporal resolution of the model and to interpolation effects, and the consequently underestimated runoff volumes, have to be compensated in the model. A scaling factor in the infiltration module or the use of disaggregated hourly rainfall data gives good results in this respect.
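Component (5) can be illustrated with a toy multiplicative random cascade. This is the generic textbook construction, not the specific cascade fitted to tropical convective rainfall in the thesis: a daily total is split recursively with random weights, so mass is conserved while intermittent sub-daily intensities emerge.

```python
import numpy as np

def cascade_disaggregate(daily_total_mm, levels, rng):
    """Split a daily rainfall total into 2**levels sub-intervals by
    recursive random two-way splitting; the sum of the pieces always
    equals the daily total."""
    amounts = np.array([daily_total_mm])
    for _ in range(levels):
        w = rng.uniform(0.0, 1.0, size=amounts.size)
        # Interleave the two halves so neighbours stay adjacent in time.
        amounts = np.column_stack([amounts * w, amounts * (1.0 - w)]).ravel()
    return amounts

# e.g. 24 mm/day split into eight 3-hour amounts (levels = 3)
subdaily = cascade_disaggregate(24.0, 3, np.random.default_rng(42))
```

A fitted cascade would draw the weights from a distribution calibrated to observed sub-daily records rather than uniformly, which is what adapting the model to convective rainfall amounts to.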
The simulation results of Wasa are characterized by large uncertainties. These are due, on the one hand, to the uncertainty of whether the model structure adequately represents the relevant hydrological processes and, on the other hand, to uncertainties of input data and parameters, particularly in view of the low data availability. Of major importance are: (1) The uncertainty of the rainfall data in their spatial and temporal pattern has, because of the strongly non-linear hydrological response, a large impact on the simulation results. (2) The uncertainty of soil parameters generally contributes more to model uncertainty than the uncertainty of vegetation or topographic parameters. (3) The effect of the uncertainty of individual model components or parameters usually differs between years with rainfall volumes above and below the average, because the individual hydrological processes differ in relevance in the two cases. Thus, the uncertainty of individual model components or parameters matters differently for the uncertainty of scenario simulations with increasing or decreasing precipitation trends. (4) The most important factor of uncertainty for scenarios of water availability in the study area is the uncertainty in the results of the global climate models on which the regional climate scenarios are based: for the given data, both a marked increase and a marked decrease in precipitation must be considered possible. Results of model simulations for climate scenarios until the year 2050 show that a possible future change in precipitation volumes causes a percentage change in runoff volumes larger by a factor of two to three. In the case of a decreasing precipitation trend, the efficiency of new reservoirs for securing water availability tends to decrease in the study area, because of the interaction of the large number of reservoirs in retaining the overall decreasing runoff volumes.
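The amplification found for the climate scenarios can be expressed as a simple precipitation-runoff elasticity. The factor 2.5 below is just an illustrative mid-range value of the two-to-three range given above:

```python
def runoff_change_pct(precip_change_pct, elasticity=2.5):
    """First-order scenario arithmetic: the relative runoff change is the
    relative precipitation change amplified by the elasticity factor."""
    return elasticity * precip_change_pct

# A 10 % decrease in precipitation then maps to a ~25 % decrease in runoff.
change = runoff_change_pct(-10.0)
```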
--- Note: For this dissertation the author received the science prize for the promotion of young researchers, awarded by the Universitätsgesellschaft Potsdam e.V. for the best doctoral thesis of the year 2002/2003 at the University of Potsdam.
NASA Astrophysics Data System (ADS)
Diedrich, Cajus G.
2010-05-01
Nine Middle Triassic palaeogeographical maps covering the uppermost Upper Bunter, Lower to Middle Muschelkalk and Upper Muschelkalk to Lower Keuper time frame (Diedrich 2008b) show the marine ingression and regression cycle of the Middle Triassic Germanic Basin (Diedrich 2010c). Reptiles and their footprints in particular are used for bathymetric and palaeoenvironmental interpretations. The Germanic Basin, an analogue of the Arabian Gulf (Knaust 1997) situated north of the Tethys, received marine and finally terrestrially influenced sediments in a time frame (after Kozur and Bachmann 2008) from 247.2 My (Myophoria Fm, Aegean, Lower Anisian) to 237.9 My (Grabfeld Fm, Longobardian, Lower Ladinian). Over this duration of 9.3 My the Germanic Basin was filled mainly with marine carbonates and, at the end, with siliciclastics, under the influence of the northern Tethys through the Silesian, Carpathian and later the Burgundian Gates, which connected the Germanic Basin to the northern Tethys. With the marine ingression from the east via the Silesian Gate (Poland), an intertidal-flat to sabkha facies belt, tens to hundreds of kilometres wide, surrounded first only the central and then also the western Germanic Basin (Winterswijk, Netherlands). These intertidal zones were used mainly by two different small reptiles as their primary habitat, which left millions of small to medium-sized footprints of the ichnogenera Rhynchosauroides and Procolophonichnium (Diedrich 2005, 2008a). Larger terrestrial, beach- and sabkha-adapted reptiles were Tanystropheus antiquus and unknown archosaurs, which are recorded only by their footprints. At the beginning of the ingression, in the uppermost Bunter, a shallow-marine invertebrate fauna and coastal reptiles appeared in the Germanic Basin, which must have originated mainly from the northern Tethys. In particular, all marine reptiles immigrated from the Tethys, as is proven not least by assemblages of Tethyan ceratite cephalopod species (cf. Diedrich 2008a).
The coastal intertidal zones appeared with mud-cracked biolaminites and sabkha dolomites ('Biolaminite and Sabkha facies') and expanded further west and south within the Lower Muschelkalk Winterswijk Fm (Aegean/Bithynian boundary), Osnabrück Fm, and Jena Fm (Bithynian to Pelsonian) (Diedrich and Trostheide 2007, Diedrich 2008a). The intertidal zones changed their extent several times during the Lower Muschelkalk, owing to the less eustatically and more tectonically controlled, very shallow-relief cratonic basin morphology, and were more stable in the western part of the flat carbonate-ramp basin (Winterswijk, Netherlands) and in coastal zones in general. The centre of the Germanic Basin (Rüdersdorf to Gogolin, Germany/Poland) remained all that time under very shallow carbonate sand-bar (oolitic, Terebratula or shell-bioclastic facies) or shallow subtidal ('Wellenkalk facies') conditions, while extended seagrass meadows in shallow carbonate facies types are indirectly proven by invertebrate communities, especially snails. Those meadows attracted especially the placodontids, the 'Triassic sea cows' feeding on such algae (Diedrich 2010a), which immigrated with Paraplacodus, Placodus and Cyamodus already during the first Lower Muschelkalk ingression sequence. Other reptiles, such as the nothosaurs Nothosaurus (small species) and Cymatosaurus and the pachypleurosaurids Dactylosaurus, Neusticosaurus and Serpianosaurus, must also have originated from the Tethys and were smaller marine reptiles adapted to shallow-marine and even lagoonal conditions, swimming paraxially. This 'Lower Muschelkalk' time was tectonically highly active, as represented by several seismite layers (slumps, sigmoidally shocked layers, etc.) (cf. Schwarz 1975, Rüffer 1996, Knaust 2000, Diedrich 2008a), which also reached the intertidal beach zones, possibly even as tsunamite hazard events (Diedrich 2008b, 2009b).
Such tsunamis, or rapid floodings due to storm events, must have had hazardous impacts on marine reptiles and fishes as well as on the beach-inhabiting terrestrial reptiles, which could have been killed in large numbers; this explains the presence of many skeletons, bonebeds and footprint preservations in the biolaminite and lagoonal facies of the Germanic Basin. At a high seismic peak during the Middle Muschelkalk Karlstadt Fm (Pelsonian/Illyrian boundary), up to 19 tectonically shocked biolaminite layers in the intertidal zones (locality Bernburg, central Germany; Diedrich 2009b) prove the beginning and rise of Alpine tectonics (fold-belt structure: Müller et al. 1964), but also the opening of the Carpathian Gate (graben structure: Szulc 1998); the epicentres were estimated from two main slickenside directions, which can be found throughout the 'Lower Muschelkalk' sediments of the Germanic Basin (Szulc 1998, Föhlisch 2007, Diedrich 2009b). Around the Pelsonian/Illyrian boundary the intertidal zones became so extensive that reptiles left millions of tracks all over these biolaminite facies types, 'intertidal flat bridges' allowing them to migrate and disperse from east (Bohemian Island) to west (Rhenish Massif, London-Brabant Massif). Chirotherid archosaur trackmakers therefore left their quite abundant Chirotherium, Isochirotherium and Brachychirotherium trackways no longer in the typical Bunter red-sandstone facies but in the new environments, the intertidal biolaminites, as well documented at Bernburg (central Germany; Diedrich 2009b) and also on other Middle Triassic coasts east of the Massif Central (Demathieu 1985) or in the Alps (e.g. Avanzini 2002). The only surviving marine reptiles were smaller lagoon-adapted pachypleurosaurs, such as the common Anarosaurus, and smaller-sized Nothosaurus.
The placodontids disappeared with the loss of the macroalgae-meadow palaeoenvironment and seem to have migrated towards the Carpathian Gate and the northern Tethys, where those habitats were still present. The dramatic habitat change, with loss of terrestrial territory and extension of marginal-marine beach zones, also seems to be a reason for the beginning of the global rise of the dinosaurs. Within the Middle Muschelkalk Heilbronn and Diemel Formations, a massive sea-level fall caused a new extension of intertidal zones and sabkhas, but also halite and gypsum evaporites ('Dolomite-evaporite facies') in the basin centre, including the southern branch of the Germanic Basin (region Tübingen/Stuttgart, southwestern Germany). The 'Middle Muschelkalk' period, dominated by shallow relief and lagoonal to intertidal conditions, changed drastically again in a new tectonically active 'Upper Muschelkalk' time, with a strong 'ingression' of the northern Tethys into the Germanic Basin during the Illyrian (Bad Sulza Fm, Trochitenkalk Fm). A shallow-marine Germanic Basin filled with shallow-water carbonates developed again, but this time with different consequences for the former coastal zones, in which intertidal biolaminites and sabkhas disappeared as a result of steeper coastal morphologies. Whereas during the first ingression a shallow-marine reptile fauna was present (nothosaur-pachypleurosaur taphocoenosis, Lower Bad Sulza Fm; Diedrich in prep.), with the main transgression within the Upper Bad Sulza Fm the fauna changed to a placodontid-pistosaur taphocoenosis with more open-marine adapted forms (Diedrich in prep.). At that time crinoid bioherms also developed massively all over the central and southern Germanic Basin in front of the coasts at the steeper coast margins (which were still not high-angled), as a 'crinoid belt' (e.g. Aigner and Bachmann 1991) responsible for massive crinoidal limestones (= 'Trochitenkalk facies').
In this period "Triassic seacows" again seem to have populated the entire Germanic Basin, where seagrass meadow areas are once more documented by benthic invertebrate palaeocommunities (Diedrich 2009a, 2010a). The marine macroplants must have built extended meadows on the shallow marine, oxygen-rich seafloor of the "Tonplatten facies", on which many different invertebrates settled infaunally or epifaunally. This tectonically controlled deepening continued with the Meißner Fm and equivalent formations and their ceratite cephalopod-rich "Tonplatten facies", whereas the "maximum flooding" (if the term can be used here in a cratonic and tectonically controlled basin; cf. the definition of marine cycles in Aigner and Bachmann 1991) occurred in the compressus biozone (ceratite biozone, middle Meißner Fm, Anisian/Ladinian boundary, cf. Diedrich 2009a). The highstand is underlined by fully marine-adapted reptiles such as nothosaurs (Nothosaurus mirabilis, Simosaurus gaillardoti), pistosaurs (Pistosaurus longaevus) and especially the open-marine ichthyosaurs (Shastasaurus, Mixosaurus, Ophthalmosaurus), which support full marine and highest water-level conditions. The "regression", or better termed here "basin uplift", started in the upper Meißner Fm with reduced carbonate sedimentation, which was slowly overtaken by terrestrial sediments already within the Warburg/Erfurt Formations (Fassanian/Longobardian boundary, Lower Ladinian). The freshwater and clay-mineral influence caused a reduction of marine benthic community biodiversity and the development of brackish lagoons, in which some invertebrate faunas and, dominantly, small marine pachypleurosaur reptiles lived. At that time all placodontid reptiles disappeared, which must have been a chain reaction following the loss of the macroalgae and the environmental changes.
A change between terrestrial influence and periodic marine influence is documented in repeatedly intercalated massive dolomites (Alberti Bed, Anthraconit Bed and others) and clay layers of the Lower Keuper Erfurt and especially the Grabfeld Fm (Longobardian). In this final period the Lower Keuper Germanic Basin was less and less marine influenced, and the limnic-influenced coasts were finally dominated by large amphibians such as Mastodonsaurus, Gerrhothorax or Plagiosuchus, which were found especially at southern and central German sites (Schoch and Wild 1999, Diedrich 2010b), including the famous southern German "Grenzbonebed" (Fassanian/Longobardian boundary) (Reif 1982, Hagdorn 1990). This bonebed already contains a strongly reduced marine reptile fauna with pachypleurosaurs and giant lagoon-adapted nothosaurs (N. giganteus, S. gaillardoti) and a few marine hypersaline-adapted shells such as Costatoria costata (cf. Hagdorn et al. 2009). The absence of ceratite cephalopods and the rare presence of nautilids are the last proofs of the periodically restricted lagoon situations, comparable in facies and reptile fauna to the Northern Tethys lagoon of Monte San Giorgio, Switzerland/Italy (e.g. De Zanche and Farabegoli 1988, Furrer 1995), to which the Germanic Basin was connected through the Burgundian Gate, France. The marine influence and marine sediment fill of the Germanic Basin finally stopped at the beginning of the Middle Keuper (lower Upper Triassic), diachronously earlier in northern Germany (Warburg/Erfurt Fm, cf. Kozur and Bachmann 2008, Diedrich 2010b) than in southern Germany (cf. Hagdorn et al. 2009), indicating a periodic marine influence from the Northern Tethys through the Burgundian Gate. At the final tectonic stage (last seismites in the Grabfeld Fm, Longobardian: cf.
Bachmann and Aref 2005), no intertidal flats or biolaminites developed anymore in the low-relief morphology of the Germanic Basin, which can be explained by the carbonate reduction, strong terrigenous clay input, and brackish-lagoonal conditions, under which cyanobacterial mats of the low-relief intertidal zones could not develop.
References
Aigner, T. and Bachmann, G.H. 1991. Sequence stratigraphy of the German Muschelkalk. In: Hagdorn, H. and Seilacher, A. (Eds.): Muschelkalk. Schöntaler Symposium. 15-18. Goldschneck-Verlag, Stuttgart.
Avanzini, M. 2002. Dinosauromorph tracks from the Middle Triassic (Anisian) of the Southern Alps (Valle di Non, Italy). Bollettino della Società Paleontologica Italiana, 41 (1), 37-40.
Bachmann, G.H. and Aref, M.A.M. 2005. A seismite in Triassic gypsum deposits (Grabfeld Formation, Ladinian), Southwest Germany. Sedimentary Geology, 180, 75-89.
De Zanche, V. and Farabegoli, E. 1988. Anisian paleogeographic evolution in the Central-Western Southern Alps. Memorie di Scienze Geologiche, 40, 399-411.
Demathieu, G.R. 1985. Trace fossil assemblages in Middle Triassic marginal marine deposits, eastern border of the Massif Central, France. Society of Economic Paleontologists and Mineralogists, Special Publications, 35, 53-66.
Diedrich, C. 2005. Actuopalaeontological trackway experiments with Iguana on intertidal flat carbonates of the Arabian Gulf - a comparison to fossil Rhynchosauroides tracks of Triassic carbonate tidal flat megatracksites in the European Germanic Basin. Senckenbergiana maritima, 35 (2), 203-220.
Diedrich, C. 2008a. Millions of reptile tracks - Early to Middle Triassic carbonate tidal flat migration bridges of Central Europe. Palaeogeography, Palaeoclimatology, Palaeoecology, 259, 410-423.
Diedrich, C. 2008b. Palaeogeographic evolution of the marine Middle Triassic marine Germanic Basin changements - with emphasis on the carbonate tidal flat and shallow marine habitats of reptiles in Central Pangaea.
Global and Planetary Change, 65 (2009), 27-55.
Diedrich, C. 2009a. The vertebrates of the Anisian/Ladinian boundary (Middle Triassic) from Bissendorf (NW Germany) and their contribution to the anatomy, palaeoecology, and palaeobiogeography of the Germanic Basin reptiles. Palaeogeography, Palaeoclimatology, Palaeoecology, 273, 1-16.
Diedrich, C. 2009b. Die Saurierspuren-Grabung im basalen Mittleren Muschelkalk (Anis, Mitteltrias) von Bernburg (Sachsen-Anhalt). Archäologie in Sachsen-Anhalt, Sonderband 2009, 1-62.
Diedrich, C. 2010a. Palaeoecology of Placodus gigas (Reptilia) and other placodontids - macroalgae feeders of the Middle Triassic in the Germanic Basin of Central Europe and comparison to convergently developed Sirenia. Palaeogeography, Palaeoclimatology, Palaeoecology (in review).
Diedrich, C. 2010b. The vertebrate fauna of the Lower Ladinian (Middle Triassic) from Lamerden (Germany) and contribution to the palaeoecology, anatomy and palaeogeography of the Germanic Basin reptiles. Palaeogeography, Palaeoclimatology, Palaeoecology (in review).
Diedrich, C. 2010c. The palaeogeographic reconstructions of the Middle Triassic tectonically controlled carbonatic Germanic Basin of Central Europe - a northern Tethys-connected cratonic marine basin - coastal basin margin mappings by the use of reptile footprint-rich intertidal and sabkha environments. Abstract, Fifth International Conference on the Geology of the Tethys Realm, Quena-Luxor, Egypt, 3-5.
Diedrich, C. (in prep.). The shallow marine fish and sauropterygian reptile vertebrate fauna of the Germanic Basin from the atavus/pulcher bonebeds in the Bad Sulza Fm (Illyrian, Middle Triassic) of Bad Sulza (Central Germany).
Diedrich, C. and Trostheide, F. 2007. Auf den Spuren der terresten Muschelkalksaurier und aquatischen Sauropterygier vom obersten Röt bis zum Mittleren Muschelkalk (Unter-/Mitteltrias) von Sachsen-Anhalt. Abhandlungen und Berichte für Naturkunde, 30, 5-56.
Föhlisch, K. 2007.
Überlieferungen seismischer Aktivität im Unteren Muschelkalk. Beiträge zur Geologie Thüringens, N.F. 14, 55-83.
Furrer, H. 1995. The Kalkschieferzone (Upper Meride Limestone, Ladinian) near Meride (Canton Ticino, Southern Switzerland) and the evolution of a Middle Triassic intraplatform basin. Eclogae geologicae Helvetiae, 88 (3), 827-852.
Hagdorn, H. 1990. Das Muschelkalk/Keuper-Bonebed von Crailsheim. In: Weidert, W.K. (Ed.), Klassische Fundstellen der Paläontologie, Band 2. 78-88. Goldschneck-Verlag, Stuttgart.
Hagdorn, H., Nitsch, E., Aigner, T. and Simon, T. 2009. Field guide, 6th International Triassic Field Workshop (Pan-European Correlation of the Triassic): Triassic of Southwest Germany. September 7-11, 2009, www.stratigraphie.de/perm-trias_workshops.html, 1-72.
Knaust, D. 1997. Die Karbonatrampe am SE-Rand des Persischen Golfes (Vereinigte Arabische Emirate) - rezentes Analogon für den Unteren Muschelkalk der Germanischen Trias? Greifswalder Geowissenschaftliche Beiträge, 5, 101-123.
Knaust, D. 2000. Signatures of tectonically controlled sedimentation in Lower Muschelkalk carbonates (Middle Triassic) of the Germanic Basin. Zentralblatt für Geologie und Paläontologie, Teil I, 1998 (9-10), 893-924.
Kozur, H.W. and Bachmann, G.H. 2008. Updated correlation of the Germanic Triassic with the Tethyan scale and assigned numeric ages. Berichte der Geologischen Bundesanstalt Wien, 76, 53-58.
Müller, W. et al. 1964. Vulkanogene Lagen aus der Grenzbitumenzone (Mittlere Trias) des Monte San Giorgio in den Tessiner Kalkalpen. Eclogae geologicae Helvetiae, 57 (2), 431-450.
Reif, W.E. 1982. Muschelkalk/Keuper bone-beds (Middle Triassic, SW-Germany) - storm condensation in a regressive cycle. In: Einsele, G. and Seilacher, A. (Eds.), Cyclic and Event Stratification. 299-325. Springer-Verlag, Berlin-Heidelberg-New York.
Rüffer, T. 1996. Seismite im Unteren Muschelkalk westlich von Halle (Saale). Hallesches Jahrbuch für Geowissenschaften, B 18, 119-130.
Schoch, R. and Wild, R. 1999.
Die Wirbeltiere des Muschelkalks unter besonderer Berücksichtigung Süddeutschlands. In: Hauschke, N. and Wilde, V. (Eds.), Trias - eine ganz andere Welt. Europa im frühen Erdmittelalter. 331-342. Pfeil-Verlag, München.
Schwarz, U. 1975. Sedimentary structures and facies analysis of shallow marine carbonates (Lower Muschelkalk, Middle Triassic, SW-Germany). Contributions to Sedimentology, 3, 1-100.
Szulc, J. 1998. Anisian-Carnian evolution of the Germanic Basin and its eustatic, tectonic and climate controls. Zentralblatt für Geologie und Paläontologie, Teil I, 1998 (7-8), 813-852.
NARRATIVE: A short history of my life in science
NASA Astrophysics Data System (ADS)
Manson, Joseph R.
2010-08-01
I was certainly surprised, and felt extremely honored, when Salvador Miret-Artés suggested that he would like to organize this festschrift. Before that day I never anticipated that such an honor would come to me. I would like to thank Salvador for the large amount of time and work he has expended in organizing this special issue, the Editors of Journal of Physics: Condensed Matter for making it possible, and also the contributing authors for their efforts. My family home was outside of Petersburg, Virginia in Dinwiddie County in an area that was, during my youth, largely occupied by small farms. This is a region rich in American history and our earliest ancestors on both sides of the family settled in this area, beginning in the decade after the first Virginia settlement in Jamestown. My father was an engineer and my mother was a former school teacher, and their parents were small business owners. From earliest memories I recall being interested in finding out how things worked and especially learning about the wonders of nature. These interests were fostered by my parents who encouraged such investigations during long walks, visits to friends and relatives, and trips to museums. However, my earliest memory of wanting to become a scientist is associated with a Christmas gift of a chemistry set when I was about ten years old. I was absolutely fascinated by the amazing results that could be achieved with simple chemical reactions and realized then that I wanted to do something in life that would be associated with science. The gift of that small chemistry set developed over the next few years into a serious interest in chemistry, and throughout my junior high-school years I spent nearly all the money I earned doing odd jobs for neighbors on small laboratory equipment and chemical supplies, eventually taking over our old abandoned chicken house and turning it into a small chemistry lab. 
I remember being somewhat frustrated at the limits, mainly financial, that kept the scale of my chemical experiments to simple things such as growing crystals of all available salts, making interesting colors and dyes, and a whole variety of pyrotechnics. The fireworks and small explosives were largely carried out without the knowledge of my parents, and it was surely fortunate that my lab was well away from the house because fires nearly got out of hand a couple of times. Interest in becoming a chemist continued into my high-school years until I took a traditional course in elementary physics. This course was a little out of the ordinary because it was taught by the industrial shop teacher, Mr John M Leete, a man who had an interest in science but very little scientific training or knowledge. He had been given this course because there was nobody else available to teach it, and the way he chose to handle his assignment was to gather the eight or so students around a circular table and spend each hour of class time reading a book together and trying to understand it. This turned out to be an interesting and effective way to learn, with Mr Leete probably learning just as much as the students. The experience of this course made quite an impression, not only because of the fascination of the subject matter, but also because of what it demonstrated about the process of teaching and learning. It was at this time that I realized that physics was the science that I wanted to pursue. I finished high school at the beginning of 1961, and after working in a local tobacco factory for a short period I enrolled as an undergraduate at the University of Richmond, a college with a very beautiful campus on the outskirts of Richmond and relatively close to home in Petersburg. Another advantage of living in Richmond was that I could continue playing in the Richmond Symphony Orchestra, eventually becoming its principal bassoonist.
Music was an interest that developed in high school, which was when I first became a member of the Richmond Symphony, and later in college I earned a modest income playing there as well as in the Norfolk Symphony and several other ensembles. After four years in Richmond and graduating with a degree in physics and mathematics I enrolled as a graduate student in the Physics Department of the University of Virginia. At Virginia I went to work with the well-known Professor Nicholas Cabrera who was then also the department Chair. After a little more than a year, Cabrera took a temporary leave of absence to take a position in Mexico, but this temporary departure later became permanent when he answered a call to return to Spain to establish and lead the Department of Physics at the new Universidad Autonoma de Madrid. Before leaving Virginia, however, Cabrera hired a new Assistant Professor who had already made quite a name for himself, and this was Vittorio Celli. Celli and I immediately hit it off and I continued my studies with him. He is the person to whom I owe the greatest debt as a teacher and mentor who instilled the highest standards of scientific research. We began work on some problems in superconductivity and surface magnetism which I found very interesting, but one day we had a meeting with Professor Sam Fisher of the Department of Aeronautical Engineering at Virginia that totally changed the direction of our research. Sam and his graduate students had carried out a series of experiments on the scattering of high-quality jet beams of helium atoms off of clean, cleaved lithium fluoride surfaces under high-vacuum conditions. Essentially, Sam was repeating experiments originally performed by Otto Stern and coworkers in Frankfurt and Hamburg in the late 1920s, but he had not realized the fact that he was seeing diffraction patterns. 
We immediately recognized that he was measuring diffraction, and not long thereafter we recognized that his measurements should be sufficiently precise to measure energy transfers due to single quantum excitations of vibrations at the surface. Both Vittorio and I quickly dropped the research that we had been doing and started to think about developing theories that could exploit the possibilities of gaining information about the microscopic structure and dynamics of surfaces using, as a tool, the methods of atom scattering. With Cabrera and Frank Goodman, who was Guest Professor in the Aeronautical Engineering Department, we developed these ideas into theories that have turned out to be useful for describing both diffraction and single-phonon inelastic scattering in atom-surface collisions. This experience of developing theoretical explanations for that interesting new data of Fisher's group left me with a great appreciation for experimental physics and is probably the reason that a large part of my work ever since has been oriented toward trying to develop theoretical methods to aid in explaining experimental results. A major change in life, and one very much for the better, occurred during my graduate school years, and this was marriage to Lucy Schenkman. We met through our mutual interest in music when I was an undergraduate in Richmond. Our marriage has been a very rewarding experience. We are extremely proud of our two children, who have now gone on into their marriages and careers, and have given us four wonderful grandchildren. After receiving my PhD in 1969 I took a position as Assistant Professor of Physics at Clemson University and have remained there ever since. During those early years at Clemson our teaching loads were rather high, but there was still time to carry out a research program and mentor graduate students. 
Clemson had established a generous sabbatical program and in 1977 I took my first sabbatical at the Centre d'Études Nucléaires de Saclay (now Le Centre CEA de Saclay), working with Jean Lapujoulade and Georges Armand. Armand by that time was doing work in various areas of the theory of surface physics and Lapujoulade was leader of a group that had built one of the earliest pieces of apparatus for scattering jet beams of helium atoms from surfaces under clean high-vacuum conditions. This sabbatical was the beginning of a very fruitful and extremely pleasant collaboration working on a large variety of problems on the structure and dynamics of metal surfaces as elucidated by helium atom scattering, a collaboration that lasted until the retirements of Armand and Lapujoulade in the early 1990s. Nearly every summer and two additional sabbaticals in 1984 and 1992 were spent in this very stimulating research atmosphere located in the scientific complex on the southern edge of Paris. This period included collaborations and interactions with many of the colleagues, visitors and students at Saclay including C S Jayanthi, Abdelkader Kara, Luc Barbier, Hans-Joachim Ernst, Francoise Fabre and Bernard Salanon. Directly after this first sabbatical I spent a summer under the auspices of an Oak Ridge Associated Universities grant working with Rufus Ritchie in what was then the Health Sciences Division of the Oak Ridge National Laboratory (ORNL). Ritchie was a senior scientist at ORNL as well as a professor at the Physics Department of the University of Tennessee at nearby Knoxville. This one summer at ORNL also developed into a long and productive, as well as extremely pleasant, scientific collaboration with Ritchie that lasted into the 1990s. During the academic years at Clemson I was able to take off to ORNL for week-long periods several times a year. 
Our work involved calculating energy transfers in the interactions of electrons and other charged particles with surfaces, problems involving interactions of atomic and charged particles with surface plasmons and theories for developing interaction potentials between particles or between particles and surfaces. This work also had the great advantage of developing interactions and scientific collaborations with many others at ORNL, including Thomas Ferrell and Bruce Warmack. One of the advantages of being a scientist is the opportunity to travel and meet colleagues and people from all over the world. Sometimes the importance of the social aspects of science is not recognized, but they play an important role both in the satisfaction of being a scientist and also in shaping the problems and research topics that a given scientist works on, and sometimes in unexpected ways. As an example, these first visits to Paris and Oak Ridge brought me into contact with many scientists, several of whom I would eventually collaborate with throughout my career. Among these were Salvador Miret-Artés and Pedro Echenique. In the early 1980s, during one of my summer visits to Saclay, it was Jean Lapujoulade who brought me a preprint of a manuscript by Salvador Miret-Artés, and asked me to look at it to see if it made any sense. Indeed, the ideas in that paper did make sense. In fact, Miret-Artés had pointed out a very special type of resonance effect that should be readily observable in He atom scattering from corrugated surfaces, an effect that was obvious when one thought about it a little, but that nobody up to then had ever noticed. This paper eventually led to a visit by Miret-Artés to Saclay where we met, and over the years this meeting developed into a friendship and a long and valuable collaboration that continues to this day. My collaboration with Pedro Echenique was initiated through contact with Rufus Ritchie.
From the beginning Ritchie would often mention the name of this very energetic and brilliant Basque with whom he had worked while on sabbatical at the University of Cambridge in England and with whom he had a continuing collaboration. Furthermore, several of the problems the two of us worked on were ones that Ritchie had also discussed with Echenique. The collaboration between Echenique and Ritchie continued to be prolific, and within a few years of that initial ORNL visit I was also collaborating with Echenique, and that collaboration continues. In the summer of 1985 I began a collaboration with Professor J Peter Toennies of the Max-Planck-Institut für Strömungsforschung in Göttingen, Germany (now the Max-Planck-Institut für Dynamik und Selbstorganisation). Toennies was already, at that time, a major figure in the areas of physics and chemistry that use molecular and atomic beams. This was just a few years after he, with graduate student Bruce Doak, had succeeded in the first measurements of surface-specific phonons using He atom scattering and, in particular, had obtained complete dispersion relations for Rayleigh modes. This was precisely the type of experiment that Celli, Cabrera and I had suggested over a decade earlier, so our research interests were an excellent match. Our work that summer with graduate student Christof Wöll and postdoc Angela Lahee developed experimental and theoretical methods for measuring the presence of isolated atomic or molecular adsorbates on surfaces. This initial visit led to a long and productive period of research on many aspects of He atom scattering from surfaces, and almost every summer from then through 1997 was spent in the very pleasant and historic city of Göttingen, which still has visible Roman ruins and many old German buildings dating from the 1500s. This period was marked by interactions and collaborations with many of the graduate students, postdocs and visitors to the Toennies lab.
Many of these collaborations continue to some extent even today, and include work with Andrew Graham, John Ellis, Frank Hofmann, Massimo Bertino, Robert Grisenti, Alexi Glebov, Wieland Schöllkopf, Walter Silvestri and Horst-Günter Rubahn. It was also during this period that I developed a long friendship and scientific collaboration with Jim Skofronick and Sanford Safron of the Department of Physics at Florida State University. Both were frequent visitors to the Toennies laboratory, and our collaboration extended far beyond our overlapping stays there. Among the fondest memories of visits to Göttingen are the many long walks and bicycle rides taken with these colleagues and many others through the streets of Göttingen and the surrounding countryside of Niedersachsen. The long series of visits to Göttingen was interrupted by three summers beginning in 1988 at the Institut für Grenzflächenforschung und Vakuumphysik at the Forschungszentrum Jülich in Germany, working with George Comsa and his group. I had met Comsa in the late 1970s at a scientific meeting in France and we had continued our scientific correspondence ever since, which eventually led to the invitation to visit his lab for an extended stay. Among the very large range of surface-related experiments being carried out in the Comsa group were machines, operated by Bene Poelsema and Rudolf David and the then graduate students Klaus Kern and Peter Zeppenfeld, devoted to He atom scattering from metal and adsorbate-covered surfaces. Once again, it was a great privilege to carry out scientific research in such a stimulating environment. In 1998, with a three-month summer visit, I began a collaboration with Professor Karl-Heinz Rieder at the Institut für Experimentalphysik of the Freie Universität Berlin in Germany.
Rieder was a pioneer in the field of surface scattering experiments using helium and other rare gas atomic beams as projectiles and after he moved to the Freie Universität from the IBM Zürich laboratories he continued this work as well as becoming a world leader in the field of single molecule manipulation on surfaces using scanning tunneling microscopy (STM). This collaboration resulted in visits to Berlin every summer through 2006 during which we collaborated on several projects involving both atom-surface scattering and STM. The work during this period included interesting collaborative work with many members of the Rieder group including Ludwig Bartels, Daniel Farías, Gerhard Meyer and Saw Hla. It was a great experience to be able to pursue science in such favorable surroundings, and to have, in addition, all the cultural advantages of the city of Berlin with its concerts, opera and museums. Although I have been fortunate to be able to take my family to live in many beautiful and interesting places such as Göttingen, Jülich and Berlin, it has always been Paris and France that we remembered with the greatest fondness, probably because it was the first of our travel ventures and because our children were very young when we first went and they did a large part of their growing up there. So it was a real pleasure to have the opportunity to return to Paris for an extended summer visit in 2007, this time not at Saclay, but at the Université de Paris-Sud in its Laboratoire des Collisions Atomiques et Moléculaires (LCAM). Several times in the preceding years I had discussed and corresponded with Hocine Khemliche of that laboratory about their experiments on the scattering of ions and neutral atoms and molecules from surfaces. 
They had just published a paper that had attracted considerable attention on the diffraction of very fast atoms from the corrugated surface of lithium fluoride, and our discussions eventually led to an invitation to come and work with him and his colleague, Philippe Roncin. It was a real pleasure for my wife and me to return to the Paris area and to re-experience some of the things that we had done there two decades earlier, to observe the changes in culture and lifestyles that had occurred since, and to experience a whole new set of adventures. The following summer of 2008 I was Guest Professor at the Donostia International Physics Center in San Sebastian, Spain. This institute was created and is directed by Pedro Echenique and it was a great honor to be able to work in that environment and enjoy the company and discussions with all the prominent scientists there, the many guests as well as the permanent staff. In the spring and summer of 2009 I was Guest Professor in the Institut für Experimentalphysik of the Technische Universität Graz in Austria, the institute directed by Professor Wolfgang Ernst. The story of how this particular collaboration developed is interesting because it illustrates again how important personal contacts and social interactions are in the progress of science. In 2007 I was contacted by Bodil Holst, then of the TU-Graz and now at the Technical University of Bergen in Norway, about the possibility of having one of her very bright graduate students visit Clemson for a few months as a part of his work in analyzing some He atom scattering data that they had taken on silica glass surfaces. A decade earlier Holst had been a postdoctoral research associate at the Toennies laboratory in Göttingen during several of the summers I was also there and, although we never worked directly together at that time, we had many interesting discussions about science and other subjects.
The student, Wolfram Steurer, arrived in Clemson in the middle of the fall semester with his belongings in a bag that was smaller than the guitar he carried strapped over his back. He immediately created for himself a place in our department, and by the time he left three months later we had developed some rudimentary ideas of how to analyze the data he had, and we also suspected that this data might reveal some new aspects of the dynamics of glass surfaces that had not been realized before. Wolfram delved into this problem with a vengeance, and shortly afterwards we began a series of papers involving Holst, Ernst and Steurer and the Graz graduate students, Andreas Apfolter and Mattias Koch, as well as Elin Søndergård of the French CNRS laboratory located at the St Gobain Corporation's research facilities in Paris. It was this initial contact with Holst a decade earlier that eventually led to a productive period of research on glass surfaces, something that I would have never predicted beforehand, but a collaboration that shows every sign of continuing into the future to produce new and interesting results. So, this brings the history of my career in science nearly to the present. In 2008 I decided, after several years of contemplation, to retire from official service at Clemson and become an emeritus faculty member. However, I am still actively continuing research, retain an office in the department, keep regular contact with my university peers, and remain active in promoting and mentoring the careers of some of my colleagues. Above all, however, I remain active in many of the collaborations that I have developed over the years, some of which are described here. As this brief story should indicate, I have enjoyed a very rich life in science, and much of this enjoyment is due to the people I have been associated with. 
One could not have asked for a finer group of friends, colleagues, students and collaborators, and I fully anticipate that my association with them and my contributions to science will continue into the foreseeable future.
NASA Astrophysics Data System (ADS)
Williams, H. Thomas
2015-12-01
After a quarter century of discoveries that rattled the foundations of classical mechanics and electrodynamics, the year 1926 saw the publication of two works intended to provide a theoretical structure to support new quantum explanations of the subatomic world. Heisenberg's matrix mechanics and Schrödinger's wave mechanics provided compatible but mathematically disparate ways of unifying the discoveries of Planck, Einstein, Bohr and many others. Efforts began immediately to prove the equivalence of these two structures, culminating successfully in John von Neumann's 1932 volume Mathematical Foundations of Quantum Mechanics. This forms the springboard for the current effort. We begin with a presentation of a minimal set of von Neumann postulates while introducing language and notation to facilitate subsequent discussion of quantum calculations based in finite-dimensional Hilbert spaces. Chapters that follow address two-state quantum systems (with spin one-half as the primary example), entanglement of multiple two-state systems, quantum angular momentum theory and quantum approaches to statistical mechanics. A concluding chapter gives an overview of issues associated with quantum mechanics in continuous infinite-dimensional Hilbert spaces.