Stopping-power and mass energy-absorption coefficient ratios for Solid Water.
Ho, A K; Paliwal, B R
1986-01-01
The AAPM Task Group 21 protocol provides tables of ratios of average restricted stopping powers and ratios of mean energy-absorption coefficients for different materials. These values were based on the work of Cunningham and Schulz. We have calculated these quantities for Solid Water (manufactured by RMI), using the same x-ray spectra and method as that used by Cunningham and Schulz. These values should be useful to people who are using Solid Water for high-energy photon calibration.
Sham, P C; Zerbin-Rüdin, E; Kendler, K S
1995-01-01
Nearly all previous evidence of the familial transmission of age at onset of schizophrenia has been in siblings and twins. In his paper, Bruno Schulz examined the age at onset distribution of schizophrenia in affected parent and offspring pairs, using a systematic series of ascertained cases (n = 106), as well as a second series of chronic in-patients (n = 36). The parent-offspring correlation in age at onset, for cases with a definite diagnosis in the systematically ascertained series, was estimated at 0.346 (95% confidence interval 0.134, 0.528). Schulz did not test for differences between the two series and between males and females, but our reanalysis, using correlational methods and a mixed linear model, did not detect any significant differences. These results are consistent with previous findings that age at onset of schizophrenia is influenced by familial factors which may be genetic.
A strong shock tube problem calculated by different numerical schemes
NASA Astrophysics Data System (ADS)
Lee, Wen Ho; Clancy, Sean P.
1996-05-01
Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) the MESA code, (2) the UNICORN code, (3) Schulz hydro, and (4) a modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. The MESA and UNICORN codes are both second order and use different monotonic advection methods to avoid the Gibbs phenomenon. Code (3) uses typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10^9 and a density ratio of 10^3 in an ideal gas. For the non-mass-matched case, Schulz hydro performs better than the TVD scheme; with mass matching, there is no difference between them. The MESA and UNICORN results are nearly the same. However, the computed positions of features such as the contact discontinuity (i.e., the material interface) are not as accurate as those of the Lagrangian methods.
Passings to note: Paul Michael Packman, MD; S. Charles Schulz, MD.
Black, Donald W
2018-02-01
One of the keys to the success of Annals of Clinical Psychiatry has always been the tireless efforts of our dedicated Editorial Board. We recently lost 2 longtime Editorial Board members, Drs. Paul Michael Packman and S. Charles Schulz. Both will be greatly missed.
Clinical Judgment in Science: Reply
ERIC Educational Resources Information Center
Westen, Drew; Weinberger, Joel
2005-01-01
This paper presents replies to comments published by M. S. Schulz and R. J. Waldinger, J. M. Wood and M. T. Nezworski, and H. N. Garb and W. M. Grove on the original article by D. Westen and J. Weinberger. Schulz and Waldinger (2005) make the important point that just as researchers can capitalize on the knowledge of experienced clinical observers…
Worldwide Report, Telecommunications Policy, Research, and Development
1985-12-06
11 Nov 85) 81 FEDERAL REPUBLIC OF GERMANY Commission To Investigate How To Improve Domestic Technology (R. Schulze; Duesseldorf VDI NACHRICHTEN...from four terminal interfaces to generate the international standard PCM-32 link (which has a data rate of 2048 kbps). Four PCM 32 links (from 128...INVESTIGATE HOW TO IMPROVE DOMESTIC TECHNOLOGY Duesseldorf VDI NACHRICHTEN in German 3 May 85 p 2 [Article by R. Schulze: "Communications
Patty, Philipus J; Frisken, Barbara J
2006-04-01
We compare results for the number-weighted mean radius and polydispersity obtained either by directly fitting number distributions to dynamic light-scattering data or by converting results obtained by fitting intensity-weighted distributions. We find that results from fits using number distributions are angle independent and that converting intensity-weighted distributions is not always reliable, especially when the polydispersity of the sample is large. We compare the results of fitting symmetric and asymmetric distributions, as represented by Gaussian and Schulz distributions, respectively, to data for extruded vesicles and find that the Schulz distribution provides a better estimate of the size distribution for these samples.
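For reference, the Schulz distribution named above is a Gamma-type size distribution whose width parameter z fixes the polydispersity (standard deviation over mean) at 1/sqrt(z+1). A minimal numerical sketch; the mean radius and z value are illustrative choices, not figures from the paper:

```python
import math

def schulz_pdf(r, r_mean, z):
    """Schulz (Gamma-type) size distribution with mean r_mean and width parameter z."""
    a = z + 1.0
    return (a / r_mean) ** a * r ** z * math.exp(-a * r / r_mean) / math.gamma(a)

def moments(r_mean, z, n=100000, r_max_factor=10.0):
    # crude rectangle-rule integration of the 0th, 1st and 2nd moments
    r_max = r_max_factor * r_mean
    h = r_max / n
    m0 = m1 = m2 = 0.0
    for i in range(1, n):
        r = i * h
        f = schulz_pdf(r, r_mean, z)
        m0 += f * h
        m1 += r * f * h
        m2 += r * r * f * h
    mean = m1 / m0
    poly = math.sqrt(m2 / m0 - mean ** 2) / mean  # relative standard deviation
    return mean, poly

# z = 24 should give polydispersity 1/sqrt(25) = 0.2 for any mean radius
mean, poly = moments(r_mean=50.0, z=24)
```

Recovering the analytic mean and polydispersity from the numerical moments is a quick sanity check before fitting such a distribution to scattering data.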
Equilibrium polymerization on the equivalent-neighbor lattice
NASA Technical Reports Server (NTRS)
Kaufman, Miron
1989-01-01
The equilibrium polymerization problem is solved exactly on the equivalent-neighbor lattice. The Flory-Huggins (Flory, 1986) entropy of mixing is exact for this lattice. The discrete version of the n-vector model, in the limit as n approaches 0, is verified to be equivalent to the equal-reactivity polymerization process in the whole parameter space, including the polymerized phase. The polymerization processes for polymers satisfying the Schulz (1939) distribution exhibit nonuniversal critical behavior. A close analogy is found between the polymerization problem with Schulz index r and the Bose-Einstein ideal gas in d = -2r dimensions, with critical polymerization corresponding to Bose-Einstein condensation.
Stuffed Snoopy wearing cap and sporting a Space Shuttle emblem
2000-02-22
JSC2000-01580 (22 February 2000) --- Snoopy, who has had a long history with the astronauts and Houston's Mission Control Center, showed up in the Shuttle Flight Control Room on one of the consoles during the STS-99 mission. The NASA Astronaut personal safety award -- called the Silver Snoopy -- is given for outstanding performance by NASA employees or NASA contractors who contribute to flight safety or mission success. Snoopy is a product of the imagination of the late cartoonist Charles Schulz. Schulz died on Saturday, Feb. 12, 2000, the second day of the 11-day SRTM mission and on the eve of his final color strip appearing in Sunday newspapers on February 13, 2000.
Colloidal Stability in Asymmetric Electrolytes: Modifications of the Schulze-Hardy Rule.
Trefalt, Gregor; Szilagyi, Istvan; Téllez, Gabriel; Borkovec, Michal
2017-02-21
The Schulze-Hardy rule suggests a strong dependence of the critical coagulation concentration (CCC) on the ionic valence. This rule is addressed theoretically and confronted with recent experimental results. The commonly presented derivation of this rule assumes symmetric electrolytes and highly charged particles. Both assumptions are incorrect. Symmetric electrolytes containing multivalent ions are hardly soluble, and experiments are normally carried out with the well-soluble salts of asymmetric electrolytes containing monovalent and multivalent ions. In this situation, however, the behavior differs completely depending on whether the multivalent ions represent the counterions or the co-ions. When these ions represent the counterions, meaning that the multivalent ions have the opposite sign to the charge of the particle, they adsorb strongly to the particles. Thereby, they progressively reduce the magnitude of the surface charge with increasing valence. In fact, this dependence of the charge density on the counterion valence is mainly responsible for the decrease of the CCC with the valence. In the co-ion case, where the multivalent ions have the same sign as the charge of the particle, the multivalent ions are repelled from the particles, and the surfaces remain highly charged. In this case, the inverse Schulze-Hardy rule normally applies, whereby the CCC varies inversely with the co-ion valence.
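As a numerical illustration of the contrast drawn above: the classical Schulze-Hardy result for highly charged surfaces with multivalent counterions predicts CCC ∝ z^-6, while the inverse rule for multivalent co-ions gives CCC ∝ 1/z. A sketch; the monovalent reference CCC is an arbitrary normalization, not data from the paper:

```python
def ccc_counterion(z, ccc_mono=1.0):
    # classical Schulze-Hardy scaling (highly charged surface, multivalent counterions)
    return ccc_mono / z ** 6

def ccc_coion(z, ccc_mono=1.0):
    # inverse Schulze-Hardy scaling (multivalent co-ions): CCC falls only as 1/z
    return ccc_mono / z

for z in (1, 2, 3):
    print(f"z={z}: counterion CCC={ccc_counterion(z):.4g}, co-ion CCC={ccc_coion(z):.4g}")
```

Even this toy comparison shows why the two cases must be distinguished: by z = 3 the classical scaling has dropped the CCC by nearly three orders of magnitude, the co-ion scaling by only a factor of three.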
New records of spider wasps (Hymenoptera, Pompilidae) from Colombia
Castro-Huertas, Valentina; Pitts, James P.; Rodriguez, Juanita; Cecilia Waichert; Fernández, Fernando
2014-01-01
Abstract New records of genera and species of spider wasps (Hymenoptera: Pompilidae) from Colombia are provided. Agenioideus, Cryptocheilus, Evagetes, Mystacagenia, and Xerochares are newly recorded genera from Colombia. Nineteen species are first recorded from Colombia: Aimatocare vitrea (Fox); Ageniella azteca (Cameron); Ageniella curtipinus (Cameron); Ageniella fallax (Arlé); Ageniella hirsuta Banks; Ageniella pilifrons (Cameron); Ageniella pretiosa Banks; Ageniella sanguinolenta (Smith); Ageniella zeteki (Banks); Agenioideus birkmanni (Banks); Aporus (Aporus) cuzco Evans; Aporus (Cosmiaporus) diverticulus (Fox); Aporus (Notoplaniceps) canescens Smith; Euplaniceps exilis (Banks); Euplaniceps herbertii (Fox); Irenangelus clarus Evans; Mystacagenia bellula Evans; Phanochilus nobilitatus (Smith) and Xerochares expulsus Schulz. The following species and genera have their occurrence ranges expanded within South America: Ageniella azteca (Cameron); Ageniella zeteki (Banks); Agenioideus birkmanni (Banks); Xerochares expulsus Schulz; Cryptocheilus Panzer; and Xerochares Evans. PMID:25349495
Zodrow, E.L.; Mastalerz, Maria
2009-01-01
Fossilized cuticles, though rare in the roof rocks of coal seams in the younger part of the Pennsylvanian Sydney Coalfield, Nova Scotia, represent nearly all of the major plant groups. Selected for investigation, by methods of Fourier transform infrared spectroscopy (FTIR) and elemental analysis, are fossilized cuticles (FCs) and cuticles extracted from compressions by Schulze's process (CCs) of Alethopteris ambigua. These investigations are supplemented by FTIR analysis of FCs and CCs of Cordaites principalis, and a cuticle-fossilized medullosalean(?) axis. The purpose of this study is threefold: (1) to try to determine biochemical discriminators between FCs and CCs of the same species using semi-quantitative FTIR techniques; (2) to assess the effects that chemical treatments, particularly Schulze's process, have on functional groups; and, most importantly, (3) to study the primary origin of FCs. Results are equivocal in respect to (1); (2) after Schulze's treatment aliphatic moieties tend to be reduced relative to oxygenated groups, and some aliphatic chains may be shortened; and (3) a primary chemical model is proposed. The model is based on a variety of geological observations, including stratal distribution, clay and pyrite mineralogies associated with FCs and compressions, and regional geological structure. The model presupposes compression-cuticle fossilization under anoxic conditions for late authigenic deposition of sub-micron-sized pyrite on the compressions. Rock joints subsequently provided conduits for oxygen-enriched ground-water circulation to initiate in situ pyritic oxidation that produced sulfuric acid for macerating compressions, with resultant loss of vitrinite, but with preservation of cuticles as FCs. The timing of the process remains undetermined, though it is assumed to be late to post-diagenetic.
Although FCs represent a pathway of organic matter transformation (pomd) distinct from other plant-fossilization processes, global applicability of the chemical models remains to be tested. CCs and FCs are inferred endpoints on a spectrum of pomd which complicates assessing origin of in-between transformations (partially macerated cuticles). FCs index highly acidic levels that existed locally in the roof rocks.
Genetics Home Reference: CLN2 disease
... Z, Mole SE, Noher de Halac I, Pearce DA, Poupetova H, Schulz A, Specchio N, Xin W, ... Jul 25. Citation on PubMed Getty AL, Pearce DA. Interactions of the proteins of neuronal ceroid lipofuscinosis: ...
Ich spreche Deutsch: A User's Report
ERIC Educational Resources Information Center
Glassar, Sheila
1969-01-01
The textbook under discussion, "Ich spreche Deutsch" by Heinz Griesbach and Dora Schulz (London-Harlow: Longmans-Hueber, 1966), is intended to be a one-year introduction to German, particularly for less academic pupils and students. (FWB)
1987-06-01
MATERIAL 12 PERSONAL AUTHOR(S) Schulz, Frederick F. 13a TYPE OF REPORT 13b TIME COVERED 14. DATE OF REPORT (Year, Month, Day) 15 PAGE COUNT Master's Thesis...Determination of the complex propagation constant, γ = jk, required finding the roots of Eq. (2.85) such that tanh[γL] - [Zs/Zo]^(1/2) = 1 (3.5)...Assuming the total pressure drop across the test sample was independent of the compressed thickness, the extracted value of DC flow resistance per
Reiswig, Henry M; Araya, Juan Francisco
2014-12-02
All records of the 15 hexactinellid sponge species known to occur off Chile are reviewed, including the first record in the Southeastern Pacific of the genus Caulophacus Schulze, 1885, with the new species Caulophacus chilense sp. n. collected as bycatch in the deep water fisheries of the Patagonian toothfish Dissostichus eleginoides Smitt, 1898 off Caldera (27ºS), Region of Atacama, northern Chile. All Chilean hexactinellid species occur in bathyal to abyssal depths (from 256 up to 4142 m); nine of them are reported for the Sala y Gomez and Nazca Ridges, with one species each in the Juan Fernandez Archipelago and Easter Island. The Chilean hexactinellid fauna is still largely unknown, representing only 2.5% of the known extant hexactinellid species. Further studies and deep water sampling are essential to assess their ecology and distribution, particularly in northern Chile.
Zodrow, E.L.; D'Angelo, J. A.; Mastalerz, Maria; Keefe, D.
2009-01-01
Cuticles have been macerated from suitably preserved compressed fossil foliage by Schulze's process for the past 150 years, whereas the physical-biochemical relationship between the "coalified layer" and the preserved cuticle as a unit has hardly been investigated, although the two provide complementary information. This relationship is conceptualized by an analogue model of the anatomy of an extant leaf: "vitrinite (mesophyll) + cuticle (biomacropolymer) = compression". Alkaline solutions from Schulze's process, as a proxy for the vitrinite, are studied by means of liquid- and solid-state Fourier transform infrared spectroscopy (FTIR). In addition, cuticle-free coalified layers and fossilized cuticles of seed ferns, mainly from Canada, Spain and Argentina and of Late Pennsylvanian-Late Triassic age, are included in the study sample. The infrared data of the cuticles and the alkaline solutions differ, which is primarily contingent on the mesophyll + biomacropolymer characteristics. The compression records two pathways of organic matter transformation. One is the vitrinized component, which reflects the diagenetic-post-diagenetic coalification history parallel with the evolution of the associated coal seam. The other is the cuticle, which reflects the sum total of the evolutionary pathway of the biomacropolymer and its monomeric or polymeric fragmentation, though factors promoting preservation include entombing clay minerals and lower pH conditions. Caution is advised when interpreting liquid-state-based FTIR data, as some IR signals may have resulted from the interaction of Schulze's process with the cuticular biochemistry. A biochemical-study course for taphonomy is suggested, as fossilized cuticles, cuticle-free coalified layers, and compressions are responses to shared physicogeochemical factors. © 2009 Elsevier B.V. All rights reserved.
Healthy Pre-Pregnancy Diet and Exercise May Reduce Risk of Gestational Diabetes
... 21, 2014 All NICHD Spotlights Zhang, C., Liu, S., Solomon, C. G., & Hu, F. B. (2006). Dietary fiber ... 2241. PMID: 19940226 Zhang, C., Schulze, M. B., Solomon, C. G., & Hu, F. B. (2006). A prospective ...
ERIC Educational Resources Information Center
Ross, Myron H., Ed.
Papers included are as follows: "An Overview" (Ross); "The Outlook for Social Security in the Wake of the 1983 Amendments" (Munnell); "The Economics of Aging: Doomsday or Shangrila?" (Schulz); "Retirement Incentives--the Carrot and the Stick. (Why No One Works beyond 65 Anymore)" (Quinn); "Inflation and…
Toward Lifelong Visual Localization and Mapping
2013-06-01
lab, Michael Benjamin, Rob Truax, Aisha Walcott, David Rosen, Mark VanMiddlesworth, Ross Finman, Elias Mueggler, and Dehann Fourie. They really made...S. Thrun, M. Beetz, M. Bennewitz, W. Burgard, AB Cremers , F. Dellaert, D. Fox, D. Hahnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz. Proba
An Assessment of Three Procedures to Teach Echoic Responding
ERIC Educational Resources Information Center
Cividini-Motta, Catia; Scharrer, Nicole; Ahearn, William H.
2017-01-01
The research literature has revealed mixed outcomes on various procedures for increasing vocalizations and echoic responding in persons with disabilities (Miguel, Carr, & Michael "The Analysis of Verbal Behavior," 18, 3-13, 2002; Stock, Schulze, & Mirenda "The Analysis of Verbal Behavior," 24, 123-133, 2008). We…
Internet Based Robot Control Using CORBA Based Communications
2009-12-01
Proceedings of the IADIS International Conference WWW/Internet, ICWI 2002, pp. 485–490. [5] Flanagan, David, Farley, Jim, Crawford, William, and...Conference on Robotics and Automation, ICRA'00, pp. 2019–2024. [7] Schulz, D., Burgard, W., Cremers, A., Fox, D., and Thrun, S. (2000), Web interfaces
Influence of the heterogeneity on the hydraulic conductivity of a real aquifer
NASA Astrophysics Data System (ADS)
Carmine, Fallico; Aldo Pedro, Ferrante; Chiara, Vita Maria; Bartolo Samuele, De
2010-05-01
Many factors influence the flux in porous media and therefore the values of the representative parameters of the aquifer, such as the hydraulic conductivity (k). Many studies have shown that this parameter increases with the portion of the aquifer tested. The main cause of this behaviour is the heterogeneity of the aquifer (Sánchez-Vila et al., 1996). It was also verified that the scale dependence of hydraulic conductivity does not depend on the specific method of measurement (Schulze-Makuch and Cherkauer, 1998). An experimental approach to study this phenomenon is based on sets of measurements carried out at different scales. However, one should consider that at the smaller scales k can be determined by direct measurements, performed in the laboratory on samples of different dimensions, while at the larger scales the measurement of hydraulic conductivity requires indirect methods (Johnson and Sen, 1988; Katz and Thompson, 1986; Bernabé and Revil, 1995). In this study the confined aquifer of the Montalto Uffugo test field was examined. This aquifer has the geological characteristics of a recently formed valley, with conglomeratic and sandy alluvial deposits; specifically, the layer of sands and conglomerates, with a significant percentage of silt at various levels, lies about 55-60 m below the ground surface, where there is a heavy clay formation. Moreover, in the test field, for the considered confined aquifer, there are one completely penetrating well, five partially penetrating wells and two completely penetrating piezometers. Along two vertical lines a series of cylindrical samples (6.4 cm in diameter and 15 cm in height) was extracted, and for each of them the k value was measured in the laboratory by direct methods based on the use of flux cells. Indirect methods were also used; in fact, a series of slug tests was carried out, determining the corresponding k values and the radius of influence (R).
Moreover, another series of pumping tests was carried out, again determining the corresponding k values and the radius of influence; in fact, changing the pumping rate also varies R. For the different sets of k values, obtained by the different measurement methods, a statistical analysis was performed, determining the meaningful statistical parameters. All the obtained k values were examined, furnishing a scaling law of k for the considered aquifer. The equation describing this experimental trend is a power law, in accordance with Schulze-Makuch and Cherkauer (1998). These results, obtained for the Montalto Uffugo test field, show that the hydraulic conductivity grows with the radius of influence, i.e. with the volume of the aquifer involved in the measurement. Moreover, the threshold value to which k tends as R grows was determined. References: Bernabé, Y. and Revil, A. 1995. Pore-scale heterogeneity, energy dissipation and the transport properties of rocks. Geophys. Res. Lett. 22: 1529-1532. Johnson, D.L. and Sen, P.N. 1988. Dependence of the conductivity of a porous medium on electrolytic conductivity. Phys. Rev. B Condens. Matter 37: 3502-3510. Katz, A.J. and Thompson, A.H. 1986. Quantitative prediction of permeability in porous rock. Phys. Rev. B Condens. Matter 34: 8179-8181. Sánchez-Vila, X., Carrera, J. and Girardi, J.P. 1996. Scale effects in transmissivity. J. Hydrol. 183: 1-22. Schulze-Makuch, D. and Cherkauer, D.S. 1998. Variations in hydraulic conductivity with scale of measurement during aquifer tests in heterogeneous, porous, carbonate rocks. Hydrogeol. J. 6: 204-215.
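A power-law scaling of the kind described above, k = a·R^b, can be recovered from paired (k, R) measurements by a least-squares fit in log-log space. A minimal sketch on synthetic data; the coefficients are illustrative, not the field values from Montalto Uffugo:

```python
import math

def fit_power_law(radii, ks):
    """Least-squares fit of k = a * R**b, done linearly in log-log space."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# synthetic conductivities following k = 1e-5 * R**0.5 (illustrative only)
radii = [0.1, 1.0, 10.0, 100.0]   # radii of influence, m
ks = [1e-5 * r ** 0.5 for r in radii]
a, b = fit_power_law(radii, ks)
```

On noisy field data the same fit yields the exponent b of the experimental trend, and its residuals indicate where k levels off toward the threshold value mentioned above.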
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-06
... Shenandoah salamander. We will also review the following threatened species: Knieskern's beaked-rush, small...; 57 FR 54722. Shenandoah salamander Plethodon Endangered........ U.S.A.; VA........ August 18, 1989..., 330 Cummings Street, Abingdon, VA 24210. Shenandoah salamander....... Cindy Schulz, (804) U.S. Fish...
Climate change. Managing forests after Kyoto.
Schulze, D E; Wirth, C; Heimann, M
2000-09-22
The Kyoto protocol aims to reduce carbon emissions into the atmosphere. Part of the strategy is the active management of terrestrial carbon sinks, principally through afforestation and reforestation. In their Perspective, Schulze et al. argue that the preservation of old-growth forests may have a larger positive effect on the carbon cycle than promotion of regrowth.
CARDIOVASCULAR RESPONSES TO ULTRAFINE CARBON PARTICLE EXPOSURES IN RATS
V. Harder1, B. Lentner1, A. Ziesenis1, E. Karg1, L. Ruprecht1, U. Kodavanti2, A. Stampfl3, J. Heyder1, H. Schulz1
GSF- Institute for Inhalation Biology1, I...
Multicultural Women's History: A Curriculum Unit for the Elementary Grades.
ERIC Educational Resources Information Center
Tomin, Barbara; Burgoa, Carol
This guide offers a curriculum unit for elementary schools to help increase student awareness of multicultural women's history. The unit contains five short biographies of women from different ethnic backgrounds. The women featured are Mary Shadd Cary, Frances Willard, Tye Leung Schulze, Felisa Rincon de Gautier, and Ada Deer. Vocabulary exercises…
Explaining Hong Kong Students' International Achievement in Civic Learning
ERIC Educational Resources Information Center
Kennedy, Kerry J.; Lijuan, Li
2016-01-01
This study identifies predictors of Hong Kong students' civic learning. It has adopted a cross-sectional quantitative design using secondary data from the 2009 International Civics and Citizenship Education Study (ICCS 2009; Schulz et al., 2010). Multi-level analysis reveals that most of the variance in student achievement can be accounted for by…
Theoretical Analyses of the Functional Regions of the Heavy Chain of Botulinum Neurotoxin
1994-01-01
SE, Schulz GM, Hallett M. Effects of botulinum toxin injections on speech in adductor spasmodic dysphonia. Neurology 1988;38:1220-1225. 3. Jankovic...hemifacial spasm. Mov Disord 1987;4:237-254. 5. Brin MF, Blitzer A, Fahn S, Gould W, Lovelace RE. Adductor laryngeal dystonia (spastic dysphonia): treatment
Assistive Device Use in Visually Impaired Older Adults: Role of Control Beliefs
ERIC Educational Resources Information Center
Becker, Stefanie; Wahl, Hans-Werner; Schilling, Oliver; Burmedi, David
2005-01-01
Purpose: We investigate whether psychological control, conceptually framed within the life-span theory of control by Heckhausen and Schulz, drives assistive device use in visually impaired elders. In particular, we expect the two primary control modes differentiated in the life-span theory of control (i.e., selective primary and compensatory…
Three-Level Systems as Amplifiers and Attenuators: A Thermodynamic Analysis
NASA Astrophysics Data System (ADS)
Boukobza, E.; Tannor, D. J.
2007-06-01
Thermodynamics of a three-level maser was studied in the pioneering work of Scovil and Schulz-DuBois [Phys. Rev. Lett. 2, 262 (1959)]. In this Letter we consider the same three-level model, but we give a full thermodynamic analysis based on Hamiltonian and dissipative Lindblad superoperators. The first law of thermodynamics is obtained using a recently developed alternative [Phys. Rev. A 74, 063823 (2006)] to Alicki's definitions for heat flux and power [J. Phys. A 12, L103 (1979)]. Using a novel variation on Spohn's entropy production function [J. Math. Phys. (N.Y.) 19, 1227 (1978)], we obtain Carnot's efficiency inequality and the Scovil Schulz-DuBois maser efficiency formula when the three-level system is operated as a heat engine (amplifier). Finally, we show that the three-level system has two other modes of operation, a refrigerator mode and a squanderer mode, both of which attenuate the electric field.
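The Scovil Schulz-DuBois formula referenced here gives the maser efficiency as the ratio of signal to pump photon energies, η = ν_s/ν_p, and the population-inversion condition bounds it by the Carnot efficiency 1 - T_c/T_h. A minimal numerical check; the frequencies and temperatures are illustrative values, not from the Letter:

```python
def maser_efficiency(nu_signal, nu_pump):
    # Scovil Schulz-DuBois efficiency: one signal photon emitted per pump photon absorbed
    return nu_signal / nu_pump

def carnot_efficiency(t_cold, t_hot):
    return 1.0 - t_cold / t_hot

# illustrative operating point obeying the inversion condition nu_s/nu_p <= 1 - Tc/Th
nu_pump = 1.0e12          # pump transition frequency, Hz
t_hot, t_cold = 300.0, 77.0   # bath temperatures, K
nu_signal = 0.9 * carnot_efficiency(t_cold, t_hot) * nu_pump  # safely below threshold
eta = maser_efficiency(nu_signal, nu_pump)
```

At the inversion threshold ν_s/ν_p = 1 - T_c/T_h the two efficiencies coincide, which is the sense in which the maser saturates the Carnot bound.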
Astronaut Thomas Stafford and Snoopy
NASA Technical Reports Server (NTRS)
1969-01-01
Astronaut Thomas P. Stafford, commander of the Apollo 10 lunar orbit mission, takes time out from his preflight training activities to have his picture made with Snoopy, the character from Charles Schulz's syndicated comic strip, 'Peanuts'. During the Apollo 10 lunar orbit operations the Lunar Module will be called Snoopy when it is separated from the Command/Service Modules.
Preliminary Study for the Modeling of an Artificial Icing Cloud.
1983-08-01
C.E. and Schulz, R.J., "Analytical Study of Icing Simulation for Turbine Engines in Altitude Test Cells," Arnold Engineering Development Center...Dept. SAMSO-TR-79-31, May 1979. 7. Keenan, J.H. and Keyes, F.G., "Thermodynamic Properties of Steam," John Wiley and Sons, Inc., N.Y., 1961. 8. Pelton
Electron Impact Cross Sections for Molecular Lasers
1984-04-27
range communication and surveillance, isotope separation, and controlled thermonuclear fusion. Among all kinds of lasers, the gaseous discharge...shape resonance of Π symmetry (reviewed by Schulz, 1976). Like vibrational excitation from the ground state, such processes from nuclear-excited states as...energy range, specifically in the 1-4 eV resonant region. A. Vibrational Excitation of Nuclear-Excited N2. For vibrational excitation by
2009-12-01
Baldwin. 1976. Amblyomma americanum: area control with granules or concentrated sprays of diazinon, propoxur, and chlorpyrifos. J. Econ. Entomol. 69...Hung, A. J. Krivenko, Jr., J. J. Schulze, and T. M. Jordan. 2001b. Effects of an application of granular carbaryl on non-target forest floor
2011-09-25
The PhoEnix aircraft takes off during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
A hot air balloon passes over the campus of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The campus of the 2011 Green Flight Challenge, sponsored by Google, is seen in this aerial view at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Wednesday, Sept. 28, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
The Pipistrel-USA, Taurus G4 aircraft takes off during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
Hot air balloons pass over the campus of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
The e-Genius aircraft takes off during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
The PhoEnix aircraft takes off for the start of the speed competition during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
e-Genius Aircraft Pilot Klaus Ohlmann poses for a photograph during the 2011 Green Flight Challenge, sponsored by Google, held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
The e-Genius aircraft crew wait as their aircraft is inspected during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
Support personnel prepare noise level measuring equipment along the runway for the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
e-Genius Aircraft Pilot Eric Raymond poses for a photograph during the 2011 Green Flight Challenge, sponsored by Google, held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
PhoEnix Aircraft Co-Pilot Jeff Shingleton poses for a photograph during the 2011 Green Flight Challenge, sponsored by Google, held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
PhoEnix Aircraft Pilot Jim Lee poses for a photograph during the 2011 Green Flight Challenge, sponsored by Google, held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
EcoEagle Aircraft Pilot Mikhael Ponso poses for a photograph during the 2011 Green Flight Challenge, sponsored by Google, held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
The Embry-Riddle Aeronautical University, EcoEagle aircraft takes off during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
Various team members applaud as aircraft return from the speed competition during the 2011 Green Flight Challenge, sponsored by Google, held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
The e-Genius aircraft takes off for the start of the speed competition during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
Sid Siddiqi, seated, and other support personnel prepare noise level measuring equipment for the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Rouster, Paul; Pavlovic, Marko; Szilagyi, Istvan
2017-07-13
Ion-specific effects on the colloidal stability of titania nanosheets (TNS) were investigated in aqueous suspensions. The charge of the particles was varied through the pH of the solutions; the influence of mono- and multivalent anions on charging and aggregation behavior could therefore be studied with the anions present either as counterions or as co-ions. Aggregation in the presence of inorganic salts was mainly driven by interparticle forces of electrostatic origin; however, chemical interactions between more complex ions and the surface led to additional attractive forces. Anion adsorption significantly changed the surface charge properties and hence the resistance of the TNS to salt-induced aggregation. Based on their ability to destabilize the dispersions, the monovalent ions could be ordered according to the Hofmeister series in acidic solutions, where they act as counterions. The behavior of the biphosphate anion was atypical, however: its adsorption induced charge reversal of the particles. The multivalent anions destabilized the oppositely charged TNS more effectively, and the aggregation processes followed the Schulze-Hardy rule. Only weak or negligible interactions were observed between the anions and the particles in alkaline suspensions, where the TNS carried negative charge.
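For context, the Schulze-Hardy rule cited in the abstract above has a standard quantitative form: in classical DLVO theory the critical coagulation concentration (CCC) falls off with the sixth power of the counterion valence z. This is the textbook statement of the rule, not a result taken from this study:

```latex
% Schulze--Hardy rule: CCC scales with inverse sixth power of counterion valence
\mathrm{CCC} \propto \frac{1}{z^{6}},
\qquad
\mathrm{CCC}(z{=}1) : \mathrm{CCC}(z{=}2) : \mathrm{CCC}(z{=}3)
\approx 1 : \tfrac{1}{64} : \tfrac{1}{729}
```

This steep valence dependence is why the multivalent anions in the study destabilize the oppositely charged TNS at much lower concentrations than monovalent ones.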
Strzemiecka, Beata; Kołodziejek, Joanna; Kasperkowiak, Małgorzata; Voelkel, Adam
2013-01-04
Inverse gas chromatography (IGC) at infinite dilution was applied to evaluate the surface properties of sorbents and the effect of carrier gas humidity. The sorbents were stored at different environmental humidities: 29%, 40%, and 80%. The dispersive components of the surface free energy of the zeolites and perlite were determined by the Schulz-Lavielle method, whereas their tendency to undergo specific interactions was estimated based on the electron donor-acceptor approach presented by Flour and Papirer. The surface parameters were used to monitor changes in properties caused by the humidity of the storage environment as well as by the relative humidity (RH) of the carrier gas. Increasing the humidity of the storage environment decreased the sorbents' surface activity and increased their ability to undergo specific interactions. Copyright © 2012 Elsevier B.V. All rights reserved.
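The Schulz-Lavielle approach named above determines the dispersive surface free energy from n-alkane retention data. In the standard IGC relation (a general sketch of the method, not an equation from this abstract), the net retention volume of a homologous series of n-alkane probes is plotted so that the slope yields the solid's dispersive component:

```latex
% Schultz/Lavielle IGC relation for the dispersive surface free energy
RT \ln V_N \;=\; 2\,N_A\, a\,\sqrt{\gamma_s^{d}}\,\sqrt{\gamma_l^{d}} \;+\; C
```

Here $V_N$ is the net retention volume, $N_A$ Avogadro's number, $a$ the cross-sectional area of the adsorbed probe molecule, $\gamma_l^{d}$ the dispersive surface tension of the probe, and $\gamma_s^{d}$ the sought dispersive surface free energy of the sorbent; plotting $RT \ln V_N$ against $a\sqrt{\gamma_l^{d}}$ for the alkane series gives $\gamma_s^{d}$ from the slope.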
Quantifying Entrepreneurial Networks: Data Collection in Addis Ababa, Ethiopia
2013-06-10
to meet with the country manager for Schulze Global Investments (SGI), an emerging markets private equity firm. I had been introduced to Ms...member of the mirt team who has developed an interest in photography into a thriving business focusing on assisting firms with marketing and...bring in revenue. The branding business has now taken off and he has four employees and his clients include Pepsi . I also had the opportunity to eat
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition checkered flag for the PhoEnix aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition checkered flag for the EcoEagle aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The e-Genius aircraft prepares to take off for the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
A Pipistrel-USA team member wipes down the Taurus G4 aircraft prior to competition as part of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The PhoEnix aircraft prepares to take off for the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
Pipistrel-USA Taurus G4 Aircraft Pilot Robin Reid poses for a photograph during the 2011 Green Flight Challenge, sponsored by Google, held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
Pipistrel-USA Taurus G4 Aircraft Pilot David Morss poses for a photograph during the 2011 Green Flight Challenge, sponsored by Google, held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The Pipistrel-USA, Taurus G4 aircraft prepares to take off for the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The e-Genius aircraft is pulled out to the runway for the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The EcoEagle, left, and the PhoEnix aircraft are seen on the campus of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Wednesday, Sept. 28, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition start flag for the EcoEagle aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
Media and ground crew look at aircraft as they participate in the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
Team members of the e-Genius aircraft prepare their plane prior to competition as part of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition checkered flag for the e-Genius aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
The Embry-Riddle Aeronautical University, EcoEagle is seen as it passes a Grumman Albatross during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Beginning of the End: The Leadership of SS Obersturmbannfuehrer Jochen Peiper
2004-06-17
of Rudolf Lehmann and Ralf Tiemann on the Leibstandarte are based on the Bundesarchiven (National Archives) in Koblenz and Freiburg in Germany...Hans Schmidt, Paul Hausser, Richard Schulze-Kossens, Rudolf Lehmann, and Ralf Tiemann. Some German sources are at odds with American sources, but that...or a constitution. Among the men who set up the first military training for the Waffen-SS were Felix Steiner and Cassius Freiherr (Baron) von
Effects of tectonics and large scale climatic changes on the evolutionary history of Hyalomma ticks.
Sands, Arthur F; Apanaskevich, Dmitry A; Matthee, Sonja; Horak, Ivan G; Harrison, Alan; Karim, Shahid; Mohammad, Mohammad K; Mumcuoglu, Kosta Y; Rajakaruna, Rupika S; Santos-Silva, Maria M; Matthee, Conrad A
2017-09-01
Hyalomma Koch, 1844 are ixodid ticks that infest mammals, birds and reptiles; 27 recognized species occur across the Afrotropical, Palearctic and Oriental regions. Despite their medical and veterinary importance, the evolutionary history of the group is enigmatic. To investigate various taxonomic hypotheses based on morphology, and also some of the mechanisms involved in the diversification of the genus, we sequenced and analysed data derived from two mtDNA fragments, three nuclear DNA genes and 47 morphological characters. Bayesian and Parsimony analyses based on the combined data (2242 characters for 84 taxa) provided maximum resolution and strongly supported the monophyly of Hyalomma and the subgenus Euhyalomma Filippova, 1984 (including H. punt Hoogstraal, Kaiser and Pedersen, 1969). A predicted close evolutionary association was found between the morphologically similar H. dromedarii Koch, 1844, H. somalicum Tonelli Rondelli, 1935, H. impeltatum Schulze and Schlottke, 1929 and H. punt, and together they form a sister lineage to H. asiaticum Schulze and Schlottke, 1929, H. schulzei Olenev, 1931 and H. scupense Schulze, 1919. Congruent with morphological suggestions, H. anatolicum Koch, 1844, H. excavatum Koch, 1844 and H. lusitanicum Koch, 1844 form a clade, as do H. glabrum Delpy, 1949, H. marginatum Koch, 1844, H. turanicum Pomerantzev, 1946 and H. rufipes Koch, 1844. Wide-scale continental sampling revealed cryptic divergences within African H. truncatum Koch, 1844 and H. rufipes and suggested that the taxonomy of these lineages is in need of revision. The most basal lineages in Hyalomma represent taxa currently confined to Eurasia, and molecular clock estimates suggest that members of the genus started to diverge approximately 36.25 million years ago (Mya). This early diversification event coincides well with the collision of the Indian and Eurasian Plates, an event that was also characterized by large-scale faunal turnover in the region.
Using S-DIVA, we also propose that the closure of the Tethyan seaway allowed the genus to first enter Africa approximately 17.73 Mya. In concert, our data support the notion that tectonic events and large-scale global changes in the environment contributed significantly to producing the rich species diversity currently found in the genus Hyalomma. Copyright © 2017 Elsevier Inc. All rights reserved.
Flow Quality for Turbine Engine Loads Simulator (TELS) Facility
1980-06-01
2.2 GAS INGESTION A mathematical simulation of the turbojet engine and jet deflector was formulated to estimate the severity of the recirculating...3. Swain. R. L. and Mitchell, J. G. "’Smlulatlon of Turbine Engine Operational Loads." Journal of Aircraft Vol. 15, No. 6, June 1978• 4. Ryan, J...3 AEDC-TR-79-83 ~...~ i ,i g - Flow Quality for Turbine Engine Loads Simulator (TELS) Facility R..I. Schulz ARO, Inc. June 1980
KernelADASYN: Kernel Based Adaptive Synthetic Data Generation for Imbalanced Learning
2015-08-17
eases [35], Indian liver patient dataset (ILPD) [36], Parkinsons dataset [37], Vertebral Column dataset [38], breast cancer dataset [39], breast tissue...Both the data set of Breast Cancer and Breast Tissue aim to predict the patient is normal or abnormal according to the measurements. The data set SPECT...9, pp. 1263–1284, 2009. [3] M. Elter, R. Schulz-Wendtland, and T. Wittenberg, “The prediction of breast cancer biopsy outcomes using two cad
U.S. National Security and Military Strategies A Selected Bibliography
1999-08-01
Strategy Research Project. Carlisle Barracks: U.S. Army War College, May 1998. 51pp. (AD-A345-628) Kennedy, Claudia J. The Age of Revolutions. (The...Olson, eds. Managing Contemporary Conflict: Pillars of Success. Boulder: Westview Press, 1996. 269pp. (U240 .M15 1996) Marcella , Gabriel, comp...1 vol. (U413 .D6M16 1998) Marcella , Gabriel, and Donald E. Schulz. Colombia’s Three Wars: U.S. Strategy at the Cross- roads. Carlisle Barracks
2016-01-01
DC) product following cutaneous exposure to VX was affected by the DC procedure. Fur-clipped, male, unanesthetized guinea pigs were used as subjects...RSDL) Following Cutaneous VX Exposure in Guinea Pigs Irwin Koplovitz Susan Schulz Julia Morgan Robert Reed Edward Clarkson C. Gary Hurst...Decontamination Procedures Using Reactive Skin 5a. CONTRACT NUMBER Decontamination Lotion (RSDL) Following Cutaneous VX Exposure in Guinea Pigs 5b
2011-09-27
The e-Genius aircraft is pulled out to the runway for the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition checkered flag for the Taurus G4 aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
The Pipistrel-USA Taurus G4 aircraft is pushed back to the weigh-in hangar as they start the day's 2011 Green Flight Challenge competition, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The e-Genius pilots talk with a fellow team member prior to their takeoff for the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
CAFE Foundation Weights crew member Ron Stout, left, and Weights Chief Wayne Cook, weigh-in the e-Genius aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
The Pipistrel-USA, Taurus G4 aircraft approaches for landing as a Grumman Albatross plane is seen in the foreground during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn directs the e-Genius aircraft to the start of the speed competition during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition start flag for the Pipistrel-USA, Taurus G4 aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The Pipistrel-USA team looks up at aircraft as they participate in the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The Embry-Riddle Aeronautical University, EcoEagle prepares to take off as a demonstration aircraft for the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn directs the EcoEagle aircraft to the start of the speed competition during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
CAFE Foundation Weights Chief Wayne Cook, left, talks with the e-Genius aircraft crew about their weigh-in during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The checkered flag is waved as the PhoEnix aircraft crosses the finish line of the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
The Pipistrel-USA, Taurus G4 aircraft is prepared to be rolled out of the weigh-in hangar during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The Pipistrel-USA, Taurus G4 aircraft is seen as it participates in the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Replicas of Snoopy and Charlie Brown decorate top of console in MCC
NASA Technical Reports Server (NTRS)
1969-01-01
Replicas of Snoopy and Charlie Brown, the two characters from Charles Schulz's syndicated comic strip 'Peanuts', decorate the top of a console in the Mission Operations Control Room in the Mission Control Center, bldg 30, on the first day of the Apollo 10 lunar orbit mission. During the Apollo 10 lunar orbit operations the Lunar Module will be called Snoopy when it is separated from the Command/Service Modules. The code words for the Command Module will be Charlie Brown.
2011-09-25
Brien A. Seeley M.D., President of the Comparative Aircraft Flight Efficiency (CAFE) Foundation, briefs pilots and ground crew prior to competition as part of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
Brien A. Seeley M.D., President of Comparative Aircraft Flight Efficiency (CAFE) Foundation, right, briefs pilots and ground crew prior to competition as part of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2007-09-01
CONCEPTS, REAL-TIME IMPLEMENTATION AND MEASUREMENTS TOWARDS 3GPP-LTE T. Haustein , J. Eichinger, W. Zirwas, E. Schulz Nokia Siemens...BER (bottom) in an office scenario while the UE is moved from one room to another. REFERENCES [1] V. Jungnickel, A. Forck, T. Haustein , C. Juchems...2.12.2006 [3] T. Haustein , A. Forck, H. Gäbler, V. Jungnickel and S. Schif- fermüller, „Real-Time Experiments on Channel Adaptive Transmis- sion in
2007-12-01
and Security 6. AUTHOR( S ) David V. Schulz 5. FUNDING NUMBERS 7. PERFORMING ORGANIZATION NAME( S ) AND ADDRESS(ES) Naval Postgraduate School...Monterey, CA 93943-5000 8. PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING /MONITORING AGENCY NAME( S ) AND ADDRESS(ES) N/A 10. SPONSORING...responding agencies. In fact, the slow Katrina response was attributed to “coordination difficulties” between the military, law enforcement, and
2014-04-01
families. Mil Behav Health 2013; 1(1): 22-30. 4 Aday, Lu Ann ; Andersen R. A framework for the study of access to medical care. Health Serv Res 1974...identifying and re -shaping negative and destructive thoughts), and support (Belle et al., 2006; Gottman, Gottman, & Atkins, 2011, Schulz et al...Research, Translation and Practice. University of Michigan Geriatrics Retreat, Human Research across the Translational Spectrum: From the Lab to the
2011-09-27
The PhoEnix, lower left, EcoEagle, 2nd from left, Taurus G4, and e-Genius aircraft, top right, are seen on the campus of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Wednesday, Sept. 28, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
Wayne Cook, Weights Chief, inspects the Pipistrel-USA Taurus G4 as it rests on a scale built into the floor of the hangar during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
Phoenix Air team members reattach the wings to their PhoEnix aircraft after pulling it out of the weigh-in hangar as they start the day's 2011 Green Flight Challenge competition, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
Team members of Pipistrel-USA prepare to have their Taurus G4 aircraft wings weighed using a scale built into the floor of the hangar during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation safety volunteers Meg Hurt, left, and Gail Vann wait on the runway for the arrival of the next aircraft to take part in the speed competition during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The e-Genius, left, Taurus G4, 2nd from left, EcoEagle, and PhoEnix aircraft, top right, are seen on the campus of the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Wednesday, Sept. 28, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
CAFE Foundation Hangar Boss Mike Fenn waves the checkered flag as aircraft pass the finish line of the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
BKCASE(TM) Body of Knowledge and Curriculum to Advance Systems Engineering
2010-04-28
Lawson, Lawson Konsult AB, Sweden Johann Amsenga, Eclipse RDC, South Africa Alex Lee, Defence Science and Technology Agency, Singapore Erik Aslaksen...Engineering Division, US Tim Ferris, University of South Australia and INCOSE, Australia Jean-Claude Roussel, EADS, France Kevin Forsberg, Center for...Systems Management and INCOSE, US Sven-Olaf Schulze, Berner & Mattner Systemtechnik GmbH, Germany Richard Freeman, Air Force Center for Systems
2000-04-01
Center, Washington DC. 2. Koplovitz, I., S. Schulz, M. Shutz, et al. 1997. Memantine effects on soman-induced seizures and seizure-related brain dam...neuronal culture as a model for soman-induced neurotoxicity and effectiveness of memantine as a neuroprotective drug. Arch. Toxicol. 69:384-390...for soman induced neurotoxicity and effectiveness of memantine as a neuroprotective drug. Drug Dev. Rev. 30:45-53. 27. Bredlow, J. D., G. F
Sulphur bacteria mediated formation of Palaeoproterozoic phosphorites
NASA Astrophysics Data System (ADS)
Joosu, Lauri; Lepland, Aivo; Kirsimäe, Kalle
2014-05-01
Modern phosphorite formation is typically associated with high productivity in upwelling areas where apatite (Ca-phosphate) precipitation is mediated by sulphur-oxidising bacteria [1]. They inhabit the oxic/anoxic interface within the upper few centimetres of the sediment column, accumulating phosphate in their cells under oxic conditions and releasing it rapidly when conditions become anoxic. Sulphur bacteria are known to live in close association with a consortium of anaerobic methane-oxidising archaea and syntrophic sulphate-reducing bacteria. The Palaeoproterozoic, c. 2.0 Ga Zaonega Formation in Karelia, Russia, contains several P-rich intervals in the upper part of a 1500 m thick succession of organic-rich sedimentary rocks interlayered with mafic tuffs and lavas. Apatite in these P-rich intervals forms impure laminae, lenses and round-oval nodules whose diameters typically range from 300 to 1000 μm. Individual apatite particles in P-rich laminae and nodules commonly occur as cylinders that are 1-8 μm long and have diameters of 0.5-4 μm. Cross-sections of the best preserved cylindrical apatite particles reveal a thin outer rim, whereas the internal parts consist of small anhedral elongated crystallites intergrown with carbonaceous material. During recrystallization the outer rim thickens towards the interior and cylinders may attain hexagonal crystal habit, but their size and shape remain largely unchanged [2]. The sizes of Zaonega nodules are similar to those of giant sulphide-oxidising bacteria known from modern and ancient settings [3, 4]. Individual apatite cylinders and aggregates have shapes and sizes similar to the methanotrophic archaea that inhabit microbial mats in modern seep/vent areas, where they operate in close association with sulphur-oxidising microbial communities [5]. Seep/vent influence during the Zaonega phosphogenesis is indicated by a variable, though positive, Eu anomaly, expected in a magmatically active sedimentary environment experiencing several lava flows.
Moreover, P-rich intervals in the Zaonega Formation are found in organic-rich sediments exhibiting strongly negative δ13Corg values (-37 to -34 per mil), which are interpreted to reflect methanotrophic biomass. We conclude that modern-style phosphogenesis, mediated by sulphide-oxidising bacteria living in consortium with methanotrophs, was established at least 2 Ga ago. [1] Schulz and Schulz (2005) Science 307, 416-418. [2] Lepland, Joosu, Kirsimäe, Prave, Romashkin, Črne, Martin, Fallick, Somelar, Üpraus, Mänd, Roberts, van Zuilen, Wirth, Schreiber (2014) Nature Geoscience 7, 20-24. [3] Bailey, Joye, Kalanetra, Flood and Corsetti (2007) Nature 445, 198-201. [4] Schulz, Brinkhoff, Ferdelman, Mariné, Teske and Jørgensen (1999) Science 284, 493-495. [5] Knittel, Lösekann, Boetius, Kort and Amann (2005) Applied and Environmental Microbiology 71, 467-479.
Biogenic hydrogen peroxide as a possible adaptation of life on Mars: the search for biosignatures
NASA Astrophysics Data System (ADS)
Houtkooper, J. M.; Schulze-Makuch, D.
2007-08-01
The hypothesis that putative Martian organisms incorporate H2O2 into their intracellular liquids (Houtkooper and Schulze-Makuch, 2007) has significant implications, as it explains the Viking observations quite well; it provides a functional adaptation to Martian environmental conditions; and it is feasible as an adaptation based on the biochemistry of terrestrial organisms. It would explain many of the puzzling Viking observations such as (1) the lack of organics detected by GC-MS, (2) the lack of detected oxidant(s) to support a chemical explanation, (3) evolution of O2 upon wetting (GEx experiment), (4) limited organic synthesis reactions (PR experiment), and (5) the gas release observations made (LR experiment). An intracellular liquid containing a high concentration of H2O2 has advantages such as providing a low freezing point, a source of oxygen, and hygroscopicity, allowing an organism to obtain water vapor from the Martian atmosphere or from the adsorbed layers of water molecules on mineral grains. Perhaps surprisingly, H2O2 is used by many terrestrial organisms for diverse purposes, e.g., metabolism (Acetobacter peroxidans), as a defense mechanism (bombardier beetle), and also to mediate diverse physiological responses such as cell proliferation, differentiation, and migration. The detection of H2O2-containing organisms may well suffer from the same problems as the Viking experiments: because of the excess oxidative contents, as derived from the GEx experiment, the organisms may decompose completely into H2O, CO2, O2 and N2. This can happen when exposed to an excess of water vapor (through hyperhydration), too high a temperature, or a combination of both. Therefore, the addition of too much water vapor may be fatal. Moreover, employing pyrolysis in order to detect organic molecules may result in the organisms autooxidizing completely.
Although the instrument suite aboard the Phoenix Lander offers some interesting possibilities (Schulze-Makuch and Houtkooper, 2007), it may be interesting to consider possible biosignatures in the environment, based on what might be inferred from the putative organisms. The metabolism is constrained by the necessity to produce an excess of H2O2 from the environment and the release of possible metabolites. The production of H2O2 can be realized from the available constituents in the atmosphere: CO2, H2O and O2. The last two components are present as a small fraction of the Martian atmosphere, but so is CO2 in Earth's atmosphere. Possible overall metabolic pathways are: CO2 + 3 H2O → CH2O + 2 H2O2 (1); CO2 + 6 H2O → CH4 + 4 H2O2 (2); CO2 + H2O → CO + H2O2 (3); O2 + 2 H2O → 2 H2O2 (4). These pathways produce the following metabolites: (1) either organic macromolecules, which stay in or at the organisms, or formaldehyde, released into the atmosphere; (2) methane, released into the atmosphere; (3) carbon monoxide, released into the atmosphere; and (4) only H2O2; this last pathway is energetically the least costly way to produce H2O2, despite O2 being a minor component (0.1%) of the atmosphere. As the amount of biomass in the Martian soil could be as high as 1300 ppm (Houtkooper and Schulze-Makuch, 2007), it might be possible to detect seasonal variations in the metabolic end products of formaldehyde, CH4, CO and O2. Even more interesting would be the observation of local variations in the surface boundary layer, in which diurnal rhythms might be revealed. Since a metabolic pathway would be involved, an additional signature could be the isotopic ratios of carbon and oxygen. Continuous monitoring of the composition, especially of the minor constituents of the atmosphere, should therefore have a high priority in future missions to Mars, especially lander missions. Other signatures to watch for are the spectral absorption of pigments providing UV shielding and photosynthesis.
References: Houtkooper, J.M., and Schulze-Makuch, D. (2007) A Possible Biogenic Origin for Hydrogen Peroxide on Mars: The Viking Results Reinterpreted. In press at Int. J. of Astrobiology. Schulze-Makuch, D., and Houtkooper, J.M. (2007) Martian Extremophiles? The H2O-H2O2 Hypothesis and Its Implications for the Mars Phoenix Mission. LPSC XXXVIII, abstract #1171.
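The four metabolic pathways proposed above can be checked for stoichiometric consistency mechanically. The sketch below is an illustration only (the parser and function names are my own, not from the paper); it confirms that each reaction conserves every element.

```python
# Verify element balance of the four proposed H2O2-producing pathways.
# Formula parser and helper names are illustrative, not from the paper.
import re
from collections import Counter

def parse(formula):
    """Count atoms in a simple formula such as 'H2O2' or 'CH2O'."""
    atoms = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[elem] += int(n) if n else 1
    return atoms

def balance(reactants, products):
    """Return True if every element is conserved across the reaction."""
    lhs, rhs = Counter(), Counter()
    for coeff, f in reactants:
        for e, n in parse(f).items():
            lhs[e] += coeff * n
    for coeff, f in products:
        for e, n in parse(f).items():
            rhs[e] += coeff * n
    return lhs == rhs

pathways = [
    ([(1, "CO2"), (3, "H2O")], [(1, "CH2O"), (2, "H2O2")]),  # (1)
    ([(1, "CO2"), (6, "H2O")], [(1, "CH4"), (4, "H2O2")]),   # (2)
    ([(1, "CO2"), (1, "H2O")], [(1, "CO"), (1, "H2O2")]),    # (3)
    ([(1, "O2"), (2, "H2O")], [(2, "H2O2")]),                # (4)
]
for i, (r, p) in enumerate(pathways, 1):
    print(f"pathway ({i}) balanced: {balance(r, p)}")  # all True
```

Running this shows all four reactions balance, so the pathways as written are at least chemically self-consistent.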
2011-09-27
Pipistrel-USA Pilots Robin Reid, left, and David Morss, talk on their cell phones shortly after participating in the miles per gallon (MPG) flight in their Taurus G4 aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-25
Pipistrel-USA Pilot David Morss, left, CAFE Foundation Weights Chief Wayne Cook, 2nd from left, and Weight crew member Ron Stout look on as Pipistrel-USA Pilot Robin Reid is weighed-in during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Monday, Sept. 26, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Division III Working Group on Planetary System Nomenclature
NASA Astrophysics Data System (ADS)
Schulz, Rita; Aksnes, K.; Blue, J.; Bowell, E.; Burba, G. A.; Consolmagno, G.; Courtin, R.; Lopes, R.; Marov, M. Ya.; Marsden, B. G.; Robinson, M. S.; Shevchenko, V. V.; Smith, B. A.
2010-05-01
The meeting was attended by 5 members of the WG (E. Bowell, G. Consolmagno, R. Courtin, R. Lopes, R. Schulz), one Task Group member (J. Watanabe), and several guests from the CSBN and CBAT. It was decided at the beginning of the meeting that the attending members of the WGPSN would discuss matters, provide their opinion or vote, and then ask the other 8 formal members to do the same via email. As a consequence, the following discussed items were agreed by majority vote of the WG members.
2011-09-28
CAFE Foundation volunteer Oliver Dyer-Bennet, left, CAFE Foundation Hangar Boss Mike Fenn, center, and CAFE Foundation volunteer Justin Dyer-Bennett scan the sky for aircraft during the speed competition portion of the 2011 Green Flight Challenge, sponsored by Google, being held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Replicas of Snoopy and Charlie Brown decorate top of console in MCC
1969-05-18
S69-34314 (18 May 1969) --- Replicas of Snoopy and Charlie Brown, the two characters from Charles Schulz's syndicated comic strip, "Peanuts," decorate the top of a console in the Mission Operations Control Room in the Mission Control Center, Building 30, on the first day of the Apollo 10 lunar orbit mission. During lunar orbit operations, the Lunar Module will be called "Snoopy" when it is separated from the Command and Service Modules. The code words for the Command Module will be "Charlie Brown".
2011-09-27
CAFE Foundation Security Chief and Event Manager Bruno Mombrinie, left, talks with CAFE Foundation eCharging Chief Alan Soule as flight crews prepare for the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Gibbons, Richard A.; Dixon, Stephen N.; Pocock, David H.
1973-01-01
A specimen of intestinal glycoprotein isolated from the pig and two samples of dextran, all of which are polydisperse (that is, the preparations may be regarded as consisting of a continuous distribution of molecular weights), have been examined in the ultracentrifuge under meniscus-depletion conditions at equilibrium. They are compared with each other and with a glycoprotein from Cysticercus tenuicollis cyst fluid which is almost monodisperse. The quantity c^(-1/3) (c = concentration) is plotted against ξ (the reduced radius); this plot is linear when the molecular-weight distribution approximates to the `most probable', i.e. when Mn:Mw:Mz:M(z+1)... is as 1:2:3:4, etc. The use of this plot, and related procedures, to evaluate qualitatively and semi-quantitatively molecular-weight distribution functions, where they can be realistically approximated by Schulz distributions, is discussed. The theoretical basis is given in an Appendix. PMID:4778265
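The 1:2:3:4 ratio of the moment-average molecular weights quoted above can be reproduced numerically. This sketch assumes (my modeling choice, not the paper's) that a "most probable" distribution is an exponential number distribution n(M) ∝ exp(-M/M0); the scale M0 and step sizes are arbitrary illustrative values.

```python
# Moment-average molecular weights of a 'most probable' distribution,
# modeled here as an exponential number distribution n(M) = exp(-M/M0).
# M0 and the quadrature parameters are illustrative assumptions.
import math

M0 = 50_000.0                 # scale parameter, g/mol (arbitrary)
dM, Mmax = 25.0, 2_000_000.0  # midpoint-rule step and cutoff

# Raw moments  integral of M^k * n(M) dM  for k = 0..4.
moments = [0.0] * 5
M = dM / 2
while M < Mmax:
    n = math.exp(-M / M0)
    for k in range(5):
        moments[k] += M**k * n * dM
    M += dM

Mn = moments[1] / moments[0]   # number average
Mw = moments[2] / moments[1]   # weight average
Mz = moments[3] / moments[2]   # z average
Mz1 = moments[4] / moments[3]  # (z+1) average

# Ratios come out as 1:2:3:4, matching the abstract's condition.
print(round(Mn / M0, 2), round(Mw / M0, 2), round(Mz / M0, 2), round(Mz1 / M0, 2))
```

The closed-form result is the same: for an exponential number distribution the k-th raw moment is k!·M0^(k+1), so successive moment ratios are M0, 2M0, 3M0, 4M0.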
Rediscovering the Schulze-Hardy rule in competitive adsorption to an air-water interface.
Stenger, Patrick C; Isbell, Stephen G; St Hillaire, Debra; Zasadzinski, Joseph A
2009-09-01
The ratio of divalent to monovalent ion concentration necessary for lung surfactant monolayers and multilayers to displace the surface-active protein albumin at an air-water interface scales as 2^(-6), the same concentration dependence as the critical flocculation concentration (CFC) for colloids with a high surface potential. Confirming this analogy between competitive adsorption and colloid stability, polymer-induced depletion attraction and electrostatic potentials are additive in their effects; the range of the depletion attraction, twice the polymer radius of gyration, must be greater than the Debye length to have an effect on adsorption.
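The Schulze-Hardy scaling invoked above is a one-line calculation: under the high-surface-potential limit the CFC varies as z^(-6), so the divalent/monovalent concentration ratio is 2^(-6) = 1/64. A minimal sketch (the helper name is mine):

```python
# Schulze-Hardy scaling: CFC proportional to z**-6 for high surface potential.
def cfc_ratio(z, z_ref=1):
    """CFC(z) / CFC(z_ref) under the z^-6 Schulze-Hardy scaling."""
    return (z / z_ref) ** -6

print(cfc_ratio(2))  # divalent vs monovalent: 2**-6 = 0.015625
print(cfc_ratio(3))  # trivalent vs monovalent: 1/729
```

So a divalent counterion is effective at roughly 1/64 the concentration of a monovalent one, which is the 2^(-6) dependence the abstract reports for competitive adsorption.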
1994-04-01
Physics, Vol. 72, No. 12, 1992, pp. 5535-5538. [12] Ragan, D.R., Gustavsen, R., and Schiferl, D., "Calibration of the Ruby R1 and R2 Fluorescence Shifts... Schiferl, D., "Pressure and Temperature Dependence of Laser-Induced Fluorescence of Sm:YAG to 100 kbar and 700°C and an Empirical Model," Journal of...1964). 18. H. D'Amour, D. Schiferl, W. Denner, H. Schulz and 20. J. W. McCauley and G. V. Gibbs, Z. Kristallogr. 135, W. B. Holzapfel, J. Appl. Phys. 49
BKCASE (trademark): Body of Knowledge and Curriculum to Advance Systems Engineering
2010-10-01
Association Francaise d'Ingenierie Systeme, France Tim Ferris University of South Australia, Australia Kevin Forsberg Center for Systems Management US 7... Lee Defence Science and Technology Agency, Singapore Ray Madachy Naval Postgraduate School, US James Martin Aerospace Corporation, US, Greg...Pyster Stevens Institute of Technology, US Garry Roedler Lockheed Martin, US Jean-Claude Roussel EADS, France 9/2010 Sven-Olaf Schulze Berner & Mattner
Zhioua, Elyes; Ginsberg, Howard S.; Humber, Richard A.; LeBrun, Roger A.
1999-01-01
Free-living larval, nymphal, and adult Ixodes scapularis Say were collected from scattered locales in southern New England and New York to determine infection rates with entomopathogenic fungi. Infection rates of larvae, nymphs, males, and females were 0% (571), 0% (272), 0% (57), and 4.3% (47), respectively. Two entomopathogenic fungi were isolated from field-collected I. scapularis females from Fire Island, NY. Isolates were identified as Verticillium lecanii (Zimmermann) Viegas and Verticillium sp. (a member of the Verticillium lecanii species complex). Ixodes scapularis Say is the principal vector of Borrelia burgdorferi Johnson, Schmid, Hyde, Steigerwalt & Brenner (Burgdorfer et al. 1982, Johnson et al. 1984), the etiologic agent of Lyme disease in the northeastern and upper-midwestern United States. Control of I. scapularis is based on chemical treatment (Mather et al. 1987b; Schulze et al. 1987, 1991), environmental management (Wilson et al. 1988, Schulze et al. 1995), and habitat modification (Wilson 1986). These methods have shown variable success, and some potentially have negative environmental effects (Wilson and Deblinger 1993, Ginsberg 1994). Studies concerning natural predators, parasitoids, and pathogens of I. scapularis are rare. The use of ground-dwelling birds as tick predators has had only limited success (Duffy et al. 1992). Nymphal I. scapularis are often infected with the parasitic wasp Ixodiphagus hookeri (Howard) (Mather et al. 1987a, Hu et al. 1993, Stafford et al. 1996, Hu and Hyland 1997), but this wasp does not effectively control I. scapularis populations (Stafford et al. 1996). The entomopathogenic nematodes Steinernema carpocapsae (Weiser) and S. glaseri (Steiner) are pathogenic only to engorged female I. scapularis, and thus have limited applicability (Zhioua et al. 1995). In contrast, the entomogenous fungus Metarhizium anisopliae (Metschnikoff) Sorokin is highly pathogenic to all stages of I.
scapularis, unfed as well as engorged, and thus has considerable potential as a microbial control agent (Zhioua et al. 1997). European studies have suggested that entomopathogenic fungi might serve as natural controls of Ixodes ricinus L. populations (Samsinakova et al. 1974, Eilenberg et al. 1991, Kalsbeek et al. 1995). In the current study, we describe the isolation of entomopathogenic fungi from field-collected I. scapularis.
Wang, Xiaohong; Gan, Lu; Jochum, Klaus P; Schröder, Heinz C; Müller, Werner E G
2011-01-01
The depth of the ocean is plentifully populated with a highly diverse fauna and flora, from where the Challenger expedition (1873-1876) treasured up a rich collection of vitreous sponges [Hexactinellida]. They have been described by Schulze and represent the phylogenetically oldest class of siliceous sponges [phylum Porifera]; they are eye-catching because of their distinct body plan, which relies on a filigree skeleton. It is constructed by an array of morphologically determined elements, the spicules. Later, during the German Deep Sea Expedition "Valdivia" (1898-1899), Schulze could describe the largest siliceous hexactinellid sponge on Earth, the up to 3 m high Monorhaphis chuni, which develops the equally largest bio-silica structures, the giant basal spicules (3 m × 10 mm). With such spicules as a model, basic knowledge on the morphology, formation, and development of the skeletal elements could be elaborated. Spicules are formed by a proteinaceous scaffold which mediates the formation of siliceous lamellae in which the proteins are encased. Up to eight hundred 5 to 10 μm thick lamellae can be concentrically arranged around an axial canal. The silica matrix is composed of almost pure silicon and oxygen, providing it with unusual optophysical properties that are superior to those of man-made waveguides. Experiments indicated that the spicules function in vivo as a nonocular photoreception system. In addition, the spicules have exceptional mechanical properties, combining mechanical stability with strength and stiffness. Like demosponges the hexactinellids synthesize their silica enzymatically, via the enzyme silicatein. All these basic insights will surely contribute also to a further applied utilization and exploration of bio-silica in material/medical science.
Giant siliceous spicules from the deep-sea glass sponge Monorhaphis chuni.
Wang, Xiaohong; Schröder, Heinz C; Müller, Werner E G
2009-01-01
Only 13 years after realizing, during the repair of a telegraph cable pulled out from the deep sea, that the depth of the ocean is plentifully populated with a highly diverse fauna and flora, the Challenger expedition (1873-1876) treasured up a rich collection of vitreous sponges (Hexactinellida). They had been described by Schulze and represent the phylogenetically oldest class of siliceous sponges (phylum Porifera); they are eye-catching because of their distinct body plan, which relies on a filigree skeleton. It is constructed by an array of morphologically determined elements, the spicules. Soon after, during the German Deep Sea Expedition "Valdivia" (1898-1899), Schulze could describe the largest siliceous hexactinellid sponge on Earth, the up to 3 m high Monorhaphis chuni, which develops the equally largest bio-silica structure, the giant basal spicules (3 m × 10 mm). Using these spicules as a model, basic knowledge on the morphology, formation, and development of the skeletal elements could be achieved. They are formed by a proteinaceous scaffold (composed of a 27-kDa protein), which mediates the formation of the siliceous lamellae into which the proteins are encased. Up to 800 lamellae, each 5-10 μm thick, are concentrically arranged around the axial canal. The silica matrix is composed of almost pure silicon oxide, providing it with unusual optophysical properties that are superior to those of man-made waveguides. Experiments suggest that the spicules function in vivo as a nonocular photoreception system. In addition, the spicules have exceptional mechanical properties, combining mechanical stability with strength and stiffness. Like demosponges, the hexactinellids synthesize their silica enzymatically, via the enzyme silicatein (a 27-kDa protein). It is suggested that these basic insights will contribute to a further applied utilization and exploration of silica in bio-material/biomedical science.
DRACO development for 3D simulations
NASA Astrophysics Data System (ADS)
Fatenejad, Milad; Moses, Gregory
2006-10-01
The DRACO (r-z) Lagrangian radiation-hydrodynamics laser fusion simulation code is being extended to model 3D hydrodynamics in (x-y-z) coordinates with hexahedral cells on a structured grid. The equation of motion is solved with a Lagrangian update with optional rezoning. The fluid equations are solved using an explicit scheme based on (Schulz, 1964), while the SALE-3D algorithm (Amsden, 1981) is used as a template for computing cell volumes and other quantities. A second-order rezoner has been added which uses linear interpolation of the underlying continuous functions to preserve accuracy (Van Leer, 1976). Artificial restoring-force terms and smoothing algorithms are used to avoid grid distortion in high-aspect-ratio cells. These include alternate node couplers along with a rotational restoring force based on the Tensor Code (Maenchen, 1964). Electron and ion thermal conduction is modeled using an extension of Kershaw's method (Kershaw, 1981) to 3D geometry. Test problem simulations will be presented to demonstrate the applicability of this new version of DRACO to the study of fluid instabilities in three dimensions.
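The second-order, accuracy-preserving remap cited above (Van Leer, 1976) is typically built on slope-limited linear reconstruction of cell averages. The sketch below illustrates that general technique in 1D with a minmod limiter; it is an illustration of the idea, not DRACO's actual implementation, and all names are mine.

```python
# Slope-limited linear reconstruction (van Leer / minmod style), the
# building block of second-order remap schemes. Illustrative sketch only.
def minmod(a, b):
    """Return the smaller-magnitude slope when signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Cell-centred limited slopes for a 1D list of cell averages.

    Boundary cells keep zero slope; interior slopes are limited so the
    reconstruction introduces no new extrema (monotonicity preserving).
    """
    s = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        s[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return s

u = [1.0, 1.0, 2.0, 4.0, 4.0]
print(limited_slopes(u))  # slopes vanish on plateaus, limited in between
```

Within each cell the remapped quantity is then taken as u[i] + s[i]·(x − x_i)/dx, which is second-order accurate in smooth regions while the limiter suppresses the overshoots that an unlimited linear fit would create at steep gradients.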
Chandra Discovers the X-ray Signature of a Powerful Wind from a Galactic Microquasar
NASA Astrophysics Data System (ADS)
2000-11-01
NASA's Chandra X-ray Observatory has detected, for the first time in X rays, a stellar fingerprint known as a P Cygni profile--the distinctive spectral signature of a powerful wind produced by an object in space. The discovery reveals a 4.5-million-mile-per-hour wind coming from a highly compact pair of stars in our galaxy, report researchers from Penn State and the Massachusetts Institute of Technology in a paper they will present on 8 November 2000 during a meeting of the High-Energy Astrophysics Division of the American Astronomical Society in Honolulu, Hawaii. The paper also has been accepted for publication in The Astrophysical Journal Letters. "To our knowledge, these are the first P Cygni profiles reported in X rays," say researchers Niel Brandt, assistant professor of astronomy and astrophysics at Penn State, and Norbert S. Schulz, research scientist at the Massachusetts Institute of Technology. The team made the discovery during their first observation of a binary-star system with the Chandra X-ray Observatory, which was launched into space in July 1999. The system, known as Circinus X-1, is located about 20,000 light years from Earth in the constellation Circinus near the Southern Cross. It contains a super-dense neutron star in orbit around a normal fusion-burning star like our Sun. Although Circinus X-1 was discovered in 1971, many properties of this system remain mysterious because Circinus X-1 lies in the galactic plane where obscuring dust and gas have blocked its effective study in many wavelengths. The P Cygni spectral profile, previously detected primarily at ultraviolet and optical wavelengths but never before in X rays, is the textbook tool astronomers rely on for probing stellar winds. The profile looks like the outline of a roller coaster, with one really big hill and valley in the middle, on a data plot with velocity on one axis and the flow rate of photons per second on the other. 
It is named after the famous star P Cygni, in which such profiles have been observed for over one hundred years. "When you see a P Cygni profile, you immediately know the object you are observing is producing a powerful outflow," Brandt says. Chandra is the first X-ray observatory capable of capturing data of sufficiently high resolution to reveal an X-ray P Cygni profile. Brandt and Schulz say their discovery occurred because they were able to use Chandra continuously for one-third of a day to observe Circinus X-1, plus its signal in X rays is generally very bright, partly because it is relatively nearby in our own Galaxy. P Cygni lines at ultraviolet or optical wavelengths had not been previously seen from Circinus X-1 because a large amount of dust in the galactic plane lies between Earth and this system, and this dust is an efficient absorber of ultraviolet and optical light. However, the energetic X rays created by Circinus X-1 could easily penetrate through the obscuring dust and gas--similar to the way medical X-rays on Earth can penetrate through people's bodies. "We were hoping to detect some kind of X-ray line emission from the accreting neutron star in Circinus X-1, but it caught us totally by surprise to observe a complex emission structure like a P Cygni profile in high-energy X rays," Schulz says. "This detection clearly marks a new area in X-ray astrophysics, where we will be able to study dynamical structures in the universe like we currently do at ultraviolet or optical wavelengths." Brandt and Schulz used two of Chandra's instruments, known together as the High-Energy Transmission Grating Spectrometer (HETGS), to detect the X rays and produce a high-resolution X-ray spectrum of Circinus X-1. This spectrum is analogous to the rainbow we can see at optical wavelengths. "Chandra's X-ray spectrum is 50 times more detailed than previous X-ray observatories could obtain," Schulz says.
First, the super-fine transmission gratings acted like a prism to separate the X-rays into discrete energy bands. Then, the Advanced CCD Imaging Spectrometer (ACIS) was used as a camera to record the X-ray spectral data, which computers processed and plotted onto a graph, revealing the P Cygni signature. Specific elements, such as silicon or iron, emit specific X-ray wavelengths, revealing their presence in the emitting material to astronomers. Before the observation with Chandra, astronomers knew the force of gravity in an X-ray binary system strips material off the surface of the normal star and then pulls this material toward the surface of the super-dense neutron star, forming a relatively flat spiraling cloud of gas called an accretion disk. The detailed Chandra data revealed, in addition, that the radiation and rotational forces in the Circinus X-1 disk are blasting some of the inward-spiraling gas back out into space in a powerful wind, which creates the P Cygni lines in the object's spectrum. P Cygni profiles carry much diagnostic information that is hard to obtain in other ways--such as how fast the wind is moving, how much material it contains, how dense it is, and its chemical composition. "The wind coming out of Circinus X-1 is composed of gas that contains highly ionized atoms of silicon, neon, iron, magnesium, and sulfur, and its peak observed velocity is about 4.5 million miles per hour--so fast it would cross the entire radius of the Earth in about three seconds," Brandt reports. The astronomers used Doppler techniques that detect positive velocities from material moving away from Earth, with signals shifted toward the red end of the spectrum, and negative velocities from material that is coming toward Earth, with signals shifted toward the blue end of the spectrum. 
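The velocity measurement described above rests on the standard non-relativistic Doppler relation v ≈ c·Δλ/λ. A minimal numerical sketch (the wavelength shift used here is a made-up illustration, not the actual Circinus X-1 measurement; only the 4.5-million-mph and three-second figures come from the text above):

```python
# Sketch of the Doppler relation used to read velocities off a line profile.
# Positive velocity = redshifted (receding); negative = blueshifted (approaching).
C_MPH = 6.706e8  # speed of light in miles per hour

def doppler_velocity_mph(lambda_observed, lambda_rest):
    """Non-relativistic Doppler velocity from a wavelength shift."""
    return C_MPH * (lambda_observed - lambda_rest) / lambda_rest

# An illustrative fractional wavelength shift of +0.67% corresponds to
# roughly +4.5 million miles per hour:
v = doppler_velocity_mph(1.0067, 1.0000)
print(f"{v / 1e6:.1f} million mph")

# Sanity check on the press release's comparison: at 4.5 million mph,
# crossing one Earth radius (~3960 miles) takes about three seconds.
earth_radius_miles = 3960
seconds = earth_radius_miles / (4.5e6 / 3600)
print(f"{seconds:.1f} s")
```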
"We learned these two stars clearly interact dramatically with each other while this wind is blowing outward at high velocity, which appears to be causing certain properties of the wind to change over time," Schulz says. The researchers produced a time-lapse movie of one of their spectra, which is available on the World Wide Web, along with other information about the discovery, at http://www.astro.psu.edu/users/niel/cirx1/cirx1.html. [Image caption: A binary-star system 20,000 light years from Earth in the constellation of Circinus. Animation showing the strong variability over time of one of the P Cygni spectral lines seen by Chandra from Circinus X-1. Credit: Niel Brandt and Norbert Schulz.] Atoms irradiated with energetic X-rays can emit as well as absorb them at specific wavelengths. Whether astronomers observe emission or absorption depends on the state and environment of the irradiated atoms, so these processes carry vital information about the emitting and absorbing material. Regarding the time-lapse movie, Schulz commented, "You can see this profile flipping up and down between a strong emission line on the red side and a strong absorption line on the blue side. We don't yet fully understand what this means, but it does indicate the dynamic nature of this system. We see indications that sometimes either the emitting or the absorbing region gets obscured by matter so thick that not even X rays can penetrate it." The researchers say one reason their discovery of Circinus X-1's high-velocity wind is important is that this small two-star system now shows striking similarities to a type of luminous active galaxy known as a broad-absorption-line quasar. Broad-absorption-line quasars are galaxies containing violent centers powered by supermassive black holes.
"This type of galaxy has an accretion disk circling its black hole plus very powerful winds created when radiation pushes material off of the disk and out into space," Brandt says. "The disk winds from broad-absorption-line quasars create P Cygni lines in the spectra of these objects. Circinus X-1, with the newly detected X-ray P Cygni profiles, appears in many ways to be a microscopic version of a broad-absorption-line quasar." "Although a typical AGN has a roughly ten-million-solar-mass black hole at its center while the Circinus X-1 system has a neutron star only slightly more massive than our Sun, both systems must obey the same laws of physics," Brandt says. "Gas is gas and gravity is gravity and that's all there is to it--you put gas and gravity together and they make a disk and often, apparently, a disk-generated wind." The researchers hope X-ray P Cygni profiles will be found to be a fairly common property of X-ray binaries containing neutron stars and black holes. "If we can find X-ray P Cygni profiles in more systems, we can learn a great deal about the geometry and the dynamics of the winds these systems emit," Schulz says. "Due to the penetrating nature of X rays, X-ray P Cygni lines have the significant advantage that they can be used to probe winds even from systems that are heavily obscured by dust along the line of sight." The High-Energy Transmission Grating Spectrometer was built by the Massachusetts Institute of Technology with Bruno Rossi Professor Claude Canizares as Principal Investigator. The ACIS X-ray camera was conceived and developed for NASA by Penn State and the Massachusetts Institute of Technology under the leadership of Gordon Garmire, Evan Pugh Professor of Astronomy and Astrophysics at Penn State. The observation of Circinus X-1 was part of the first round of Chandra's guest observer program. The guest observer program is a competitive one open to the world science community.
NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program. TRW Inc., Redondo Beach, California, is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, Massachusetts. Circinus X-1 Handout Constellation Circinus To follow Chandra's progress, visit the Chandra site at: http://chandra.harvard.edu AND http://chandra.nasa.gov This research was supported by the Chandra X-ray Center, the Alfred P. Sloan Foundation, and the Smithsonian Astrophysical Observatory. This is a joint press release from Penn State and the Massachusetts Institute of Technology Digital images and movies are available on the World Wide Web at http://www.astro.psu.edu/users/niel/cirx1/cirx1.html Science Contacts: Niel Brandt: 814-865-3509 Norbert S. Schulz: 617-258-5767 Barbara K. Kennedy (PIO at Penn State): 814-863-4682 Deborah Halber (PIO at MIT): 617-253-2700 or 617-258-9276
Projection-operator calculations of the lowest e(-)-He resonance
NASA Technical Reports Server (NTRS)
Berk, A.; Bhatia, A. K.; Junker, B. R.; Temkin, A.
1986-01-01
The 1s(2s)² ²S Schulz resonance of He(-) is investigated theoretically, applying the full projection-operator formalism developed by Temkin and Bhatia (1985) in a Rayleigh-Ritz variational calculation. The technique is described in detail, and results for five different approximations of the He target state are presented in a table. Good convergence is obtained, but it is found that even the best calculated value of the resonance is about 130 meV higher than the experimentally measured value of 19.367 ± 0.007 eV (Brunt et al., 1977), a discrepancy attributed to the contribution of the shift in the Feshbach formalism.
2015-08-01
[Figure residue, garbled in extraction: Figure 3D, dose-response curves plotted against antagonist concentration (0.01-100,000 nM). Ator = Atorvastatin, a statin used for lowering cholesterol; ICI = ICI 182,780, an antagonist for the estrogen receptor. Reference fragments: "…inhibitors with atorvastatin in human cancer cells. J Med Chem 55:4990-5002"; "23. Thoma R, Schulz-Gasch T, D'Arcy B et al (2004) Insight into steroid…"]
BKCASE (trademark): Body of Knowledge and Curriculum to Advance Systems Engineering
2010-09-01
[BKCASE contributor list, garbled in extraction; recoverable entries: Alain Faisandier, Association Française d'Ingénierie Système, France; Tim Ferris, University of South Australia, Australia; … Lawson, Lawson Konsult AB, Sweden; Yeaw Lip "Alex" Lee, Defence Science and Technology Agency, Singapore; Ray Madachy, Naval Postgraduate School, US; James …; Sven-Olaf Schulze, Berner & Mattner, Germany; Seiko Shiraska, KEIO University, Japan; Hillary Sillitto, Thales Group, UK; John Snoderly, Defense …]
Literak, Ivan; Kocianova, Elena; Dusbabek, Frantisek; Martinu, Jana; Podzemny, Petr; Sychra, Oldrich
2007-11-01
In winter months during 2003-2006, wild birds were captured and examined for ticks and chiggers at two sites near Brno, Czech Republic. In total, 1,362 birds, mostly passerines, were examined. The tick Ixodes arboricola Schulze et Schlottke, 1929 was found on 47 (3%) birds of six species. Ixodes ricinus Linnaeus, 1758 was found on 11 (1%) birds of five species. Larvae of chiggers Ascoschoengastia latyshevi (Schluger 1955) were found on 13 (1%) birds of six species. I. arboricola and A. latyshevi associated with hole-nesting birds can appear on birds rather frequently even during winter months. I. ricinus occurs on birds in winter sporadically.
2014-06-01
[Figure residue, garbled in extraction: Figure 3D, % E2 activity plotted against concentration (0.01-100,000 nM); ICI = ICI 182,780, an anti-estrogen (antagonist); Ator = Atorvastatin, a statin used for lowering cholesterol; Figure 5, % live cells for RO and other compounds. Reference fragments: "…inhibitors with atorvastatin in human cancer cells. J Med Chem 55:4990-5002"; "23. Thoma R, Schulz-Gasch T, D'Arcy B et al (2004) Insight into steroid…"]
Resonant transfer excitation in collisions of F6+ and Mg9+ with H2
NASA Astrophysics Data System (ADS)
Bernstein, E. M.; Kamal, A.; Zaharakis, K. E.; Clark, M. W.; Tanis, J. A.; Ferguson, S. M.; Badnell, N. R.
1991-10-01
Experimental and theoretical investigations of resonant transfer excitation (RTE) for F6+ + H2 and Mg9+ + H2 collisions have been made. For both collision systems good agreement is obtained between the measured cross sections for K-shell x-ray emission coincident with electron capture and theoretical RTE calculations. For F6+ the present calculations are about 10% lower than previous results of Bhalla and Karim [Phys. Rev. A 39, 6060 (1989); 41, 4097(E) (1990)]; the measured cross sections are a factor of 2.3 larger than earlier measurements of Schulz et al. [Phys. Rev. A 38, 5454 (1988)]. The previous disagreement between experiment and theory for F6+ is removed.
Conical Current Sheets in a Source-Surface Model of the Heliosphere
NASA Astrophysics Data System (ADS)
Schulz, M.
2007-12-01
Different methods of modeling the coronal and heliospheric magnetic field are conveniently visualized and intercompared by applying them to ideally axisymmetric field models. Thus, for example, a dipolar B field with its moment parallel to the Sun's rotation axis leads to a flat heliospheric current sheet. More general solar B fields (still axisymmetric about the solar rotation axis for simplicity) typically lead to cone-shaped current sheets beyond the source surface (and presumably also in MHD models). As in the dipolar case [Schulz et al., Solar Phys., 60, 83-104, 1978], such conical current sheets can be made realistically thin by taking the source surface to be non-spherical in a way that reflects the underlying structure of the Sun's main B field. A source surface that seems to work well in this respect [Schulz, Ann. Geophysicae, 15, 1379-1387, 1997] is a surface of constant F = (1/r)^k B, where B is the scalar strength of the Sun's main magnetic field and k (~ 1.4) is a shape parameter. This construction tends to flatten the source surface in regions where B is relatively weak. Thus, for example, the source surface for a dipolar B field is shaped somewhat like a Rugby football, whereas the source surface for an axisymmetric quadrupolar B field is similarly elongated but somewhat flattened (as if stuffed into a cone) at mid-latitudes. A linear combination of co-axial dipolar and quadrupolar B fields generates a somewhat pear-shaped (but still convex) source surface. If the region surrounded by the source surface is regarded as current-free, then the source surface itself should be (as nearly as possible) an equipotential surface for the corresponding magnetic scalar potential (expanded, for example, in spherical harmonics).
The solar wind should then flow not quite radially, but rather in a straight line along the outward normal to the source surface, and the heliospheric B field should follow a corresponding generalization of Parker's spiral [Levine et al., Solar Phys., 77, 363-392, 1982]. In particular, heliospheric current sheets (of which there are two if the underlying solar B field is mainly quadrupolar) should emanate from neutral lines on the corresponding source surface. However, because the source surface is relatively flattened in regions where such neutral lines tend to appear, the radial component of the heliospheric B field at r ~ 1 AU and beyond is much more nearly latitude-independent in absolute value than one would expect from models based on a spherical source surface.
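For a purely dipolar main field, the shape of the constant-F source surface described above can be worked out in closed form, since the textbook dipole field strength varies as |B| ∝ r⁻³·sqrt(1 + 3cos²θ). A minimal sketch (the dipole field-strength law is an assumption used for illustration; normalization is arbitrary, with the equatorial radius set to 1):

```python
import math

# Setting F = r^(-k) * |B| = const with |B| ∝ r^(-3) * sqrt(1 + 3 cos^2(theta))
# gives the source-surface shape r(theta) ∝ (1 + 3 cos^2(theta))^(1 / (2 (k+3))).
def source_surface_radius(theta, k=1.4):
    """Radius of the constant-F surface at colatitude theta, equator = 1."""
    return (1.0 + 3.0 * math.cos(theta) ** 2) ** (1.0 / (2.0 * (k + 3.0)))

r_pole = source_surface_radius(0.0)
r_equator = source_surface_radius(math.pi / 2)

# For k = 1.4 the surface is ~17% taller along the poles than at the equator,
# i.e. elongated like the "Rugby football" the abstract describes.
print(f"pole/equator ratio: {r_pole / r_equator:.3f}")
```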
Possible Niches for Extant Life on Titan in Light of Cassini/Huygens Results
NASA Astrophysics Data System (ADS)
Grinspoon, D. H.; Bullock, M. A.; Spencer, J. R.; Schulze-Makuch, D.
2005-08-01
Results from the first year of the Cassini mission show that Titan has an active surface with few impact craters and abundant hints of cryovolcanism, tectonism, aeolian and fluvial activity (Porco et al., 2005; Elachi et al., 2005). Methane clouds and surface characteristics strongly imply the presence of an active global methane cycle analogous to Earth's hydrological cycle. Astrobiological interest in Titan has previously focused on possible prebiological chemical evolution on a moon with a thick nitrogen atmosphere and rich organic chemistry (Raulin and Owen, 2002). Yet the emerging new picture of Titan has raised prospects for the possibility of extant life. Several key requirements for life appear to be present, including liquid reservoirs, organic molecules and ample energy sources. One promising location may be hot springs in contact with hydrocarbon reservoirs. Hydrogenation of photochemically produced acetylene could provide metabolic energy for near-surface organisms and also replenish atmospheric methane (Schulze-Makuch and Grinspoon, 2005). The energy released could be used by organisms to drive endothermic reactions, or go into heating their surroundings, helping to create their own liquid microenvironments. In environments which are energy-rich but liquid-poor, like the near-surface of Titan, natural selection may favor organisms that use their "waste heat" to melt their own watering holes. Downward transport of high-energy photochemical compounds could provide an energy supply for near-surface organisms which could be used, in part, to maintain the liquid environments conducive to life. We will present the results of thermal modeling designed to test the feasibility of biothermal melting on Titan. C. Porco and the Cassini Imaging Team (2005) Nature 434, 159-168; C. Elachi et al. (2005) Science 308, 970-974; F. Raulin and T. Owen (2002) Space Sci. Rev. 104, 377-394; D. Schulze-Makuch and D. H. Grinspoon (2005) Astrobiology, in press.
Dynamics of the Earth's Radiation Belts and Inner Magnetosphere
NASA Astrophysics Data System (ADS)
Schultz, Colin
2013-12-01
Trapped by Earth's magnetic field far above the planet's surface, the energetic particles that fill the radiation belts are a sign of the Sun's influence and a threat to our technological future. In the AGU monograph Dynamics of the Earth's Radiation Belts and Inner Magnetosphere, editors Danny Summers, Ian R. Mann, Daniel N. Baker, and Michael Schulz explore the inner workings of the magnetosphere. The book reviews current knowledge of the magnetosphere and recent research results and sets the stage for the work currently being done by NASA's Van Allen Probes (formerly known as the Radiation Belt Storm Probes). In this interview, Eos talks to Summers about magnetospheric research, whistler mode waves, solar storms, and the effects of the radiation belts on Earth.
Efficiency at maximum power of a laser quantum heat engine enhanced by noise-induced coherence
NASA Astrophysics Data System (ADS)
Dorfman, Konstantin E.; Xu, Dazhi; Cao, Jianshu
2018-04-01
Quantum coherence has been demonstrated in various systems including organic solar cells and solid state devices. In this article, we report the lower and upper bounds for the performance of quantum heat engines determined by the efficiency at maximum power. Our prediction based on the canonical three-level Scovil and Schulz-Dubois maser model strongly depends on the ratio of system-bath couplings for the hot and cold baths and recovers the theoretical bounds established previously for the Carnot engine. Further, introducing a fourth level to the maser model can enhance the maximal power and its efficiency, thus demonstrating the importance of quantum coherence in the thermodynamics and operation of the heat engines beyond the classical limit.
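For orientation, the classical efficiency-at-maximum-power benchmarks usually quoted in this context can be computed in a few lines. This is a hedged sketch: the η_C/2 and η_C/(2 − η_C) bounds and the Curzon-Ahlborn value are standard results from the low-dissipation literature and are assumptions here, not quantities taken from this abstract.

```python
import math

# Classical efficiency-at-maximum-power benchmarks (illustrative values only).
def carnot(t_cold, t_hot):
    """Carnot efficiency, eta_C = 1 - Tc/Th."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn(t_cold, t_hot):
    """Curzon-Ahlborn efficiency at maximum power, 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

t_cold, t_hot = 300.0, 600.0   # example bath temperatures in kelvin
eta_c = carnot(t_cold, t_hot)
lower = eta_c / 2.0            # low-dissipation lower bound
upper = eta_c / (2.0 - eta_c)  # low-dissipation upper bound
eta_ca = curzon_ahlborn(t_cold, t_hot)

# The Curzon-Ahlborn value always lies between the two bounds:
assert lower <= eta_ca <= upper
print(f"eta_C={eta_c:.3f}, bounds=({lower:.3f}, {upper:.3f}), eta_CA={eta_ca:.3f}")
```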
Energetic Electron Populations in the Magnetosphere During Geomagnetic Storms and Substorms
NASA Technical Reports Server (NTRS)
McKenzie, David L.; Anderson, Phillip C.
2002-01-01
This report summarizes the scientific work performed by the Aerospace Corporation under NASA Grant NAG5-10278, 'Energetic Electron Populations in the Magnetosphere during Geomagnetic Storms and Substorms.' The period of performance for the Grant was March 1, 2001 to February 28, 2002. The following is a summary of the Statement of Work for this Grant. Use data from the PIXIE instrument on the Polar spacecraft from September 1998 onward to derive the statistical relationship between particle precipitation patterns and various geomagnetic activity indices. We are particularly interested in the occurrence of substorms during storm main phase and the efficacy of storms and substorms in injecting ring-current particles. We will compare stormtime simulations of the diffuse aurora using the models of Chen and Schulz with stormtime PIXIE measurements.
Characteristics of sedimentary organic matter in coastal and depositional areas in the Baltic Sea
NASA Astrophysics Data System (ADS)
Winogradow, A.; Pempkowiak, J.
2018-05-01
As organic matter (OM) is readily mineralized to carbon dioxide (Smith and Hollibaugh, 1993; Emerson and Hedges, 2002; Szymczycha et al., 2017) it has a direct link to the carbon dioxide abundance in seawater and an indirect influence on the carbon dioxide concentration in the atmosphere (Emerson and Hedges, 2002; Schulz and Zabel, 2006). OM is a quantitatively minor yet important component of seawater. OM in seawater can originate from internal sources (marine, or planktonic, or autochthonous OM) or external sources (terrestrial, or allochthonous OM) (Maksymowska et al., 2000; Emerson and Hedges, 2002; Turnewitsch et al., 2007; Arndt et al., 2013). It is commonly divided into two fractions: dissolved (DOM) and particulate (POM). Organic carbon (OC) is, most often, used as a measure of OM.
Weak- versus strong-disorder superfluid—Bose glass transition in one dimension
NASA Astrophysics Data System (ADS)
Doggen, Elmer V. H.; Lemarié, Gabriel; Capponi, Sylvain; Laflorencie, Nicolas
2017-11-01
Using large-scale simulations based on matrix product state and quantum Monte Carlo techniques, we study the superfluid to Bose glass transition for one-dimensional attractive hard-core bosons at zero temperature, across the full regime from weak to strong disorder. As a function of interaction and disorder strength, we identify a Berezinskii-Kosterlitz-Thouless critical line with two different regimes. At small attraction where critical disorder is weak compared to the bandwidth, the critical Luttinger parameter Kc takes its universal Giamarchi-Schulz value Kc = 3/2. Conversely, a nonuniversal Kc > 3/2 emerges for stronger attraction where weak-link physics is relevant. In this strong-disorder regime, the transition is characterized by self-similar power-law-distributed weak links with a continuously varying characteristic exponent α.
Understanding retirement: the promise of life-span developmental frameworks.
Löckenhoff, Corinna E
2012-09-01
The impending retirement of large population cohorts creates a pressing need for practical interventions to optimize outcomes at the individual and societal level. This necessitates comprehensive theoretical models that acknowledge the multi-layered nature of the retirement process and shed light on the dynamic mechanisms that drive longitudinal patterns of adjustment. The present commentary highlights ways in which contemporary life-span developmental frameworks can inform retirement research, drawing on the specific examples of Bronfenbrenner's Ecological Model, Baltes and Baltes' Selective Optimization with Compensation Framework, Schulz and Heckhausen's Motivational Theory of Life-Span Development, and Carstensen's Socioemotional Selectivity Theory. Ultimately, a life-span developmental perspective on retirement offers not only new interpretations of known phenomena but may also help to identify novel directions for future research as well as promising pathways for interventions.
NASA Astrophysics Data System (ADS)
Isbaner, Sebastian; Hähnel, Dirk; Gregor, Ingo; Enderlein, Jörg
2017-02-01
Confocal Spinning Disk Systems are widely used for 3D cell imaging because they offer the advantage of optical sectioning at high framerates and are easy to use. However, as in confocal microscopy, the imaging resolution is diffraction limited, which can be theoretically improved by a factor of 2 using the principle of Image Scanning Microscopy (ISM) [1]. ISM with a Confocal Spinning Disk setup (CSDISM) has been shown to improve contrast as well as lateral resolution (FWHM) from 201 ± 20 nm to 130 ± 10 nm at 488 nm excitation. A minimum total acquisition time of one second per ISM image makes this method highly suitable for 3D live cell imaging [2]. Here, we present a multicolor implementation of CSDISM for the popular Micro-Manager Open Source Microscopy platform. Since changes in the optical path are not necessary, this will allow any researcher to easily upgrade their standard Confocal Spinning Disk system at remarkably low cost (~5000 USD) with an ISM superresolution option. [1]. Müller, C.B. and Enderlein, J. Image Scanning Microscopy. Physical Review Letters 104, (2010). [2]. Schulz, O. et al. Resolution doubling in fluorescence microscopy with confocal spinning-disk image scanning microscopy. Proceedings of the National Academy of Sciences of the United States of America 110, 21000-5 (2013).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rongle Zhang; Jie Chang; Yuanyuan Xu
A new kinetic model of the Fischer-Tropsch synthesis (FTS) is proposed to describe the non-Anderson-Schulz-Flory (ASF) product distribution. The model is based on the double-polymerization-monomers hypothesis, in which the surface C2* species acts as a chain-growth monomer in the light-product range, while the C1* species acts as a chain-growth monomer in the heavy-product range. The detailed kinetic model, of the Langmuir-Hinshelwood-Hougen-Watson type and based on the elementary reactions, is derived for FTS and the water-gas-shift reaction. Kinetic model candidates are evaluated by minimization of multiresponse objective functions with a genetic algorithm approach. The model of hydrocarbon product distribution is consistent with experimental data (…)
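The ideal ASF distribution that the abstract contrasts against can be written down in a few lines. This is a minimal sketch of the textbook single-alpha distribution and a two-alpha superposition, a common way to represent non-ASF behavior; it is not the paper's LHHW kinetic model, and the alpha values are arbitrary illustrations, not fitted parameters.

```python
import math

# Ideal ASF weight-fraction distribution for carbon number n and chain-growth
# probability alpha: w_n = n * (1 - alpha)^2 * alpha^(n - 1).
def asf_weight_fraction(n, alpha):
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

# Two-alpha superposition: a light-product regime (alpha1) plus a
# heavy-product regime (alpha2), weighted by beta.
def two_alpha_weight_fraction(n, alpha1, alpha2, beta):
    return (beta * asf_weight_fraction(n, alpha1)
            + (1.0 - beta) * asf_weight_fraction(n, alpha2))

# For a single alpha, log(w_n / n) vs n is a straight line with slope log(alpha):
slopes = [math.log(asf_weight_fraction(n + 1, 0.7) / (n + 1))
          - math.log(asf_weight_fraction(n, 0.7) / n) for n in range(1, 10)]
print(all(abs(s - math.log(0.7)) < 1e-12 for s in slopes))  # True

# The two-alpha mixture deviates from that line, enhancing the heavy tail:
print(two_alpha_weight_fraction(30, 0.7, 0.9, 0.8) > asf_weight_fraction(30, 0.7))
```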
NASA Technical Reports Server (NTRS)
Rame, Enrique; Wilkinson, Allen; Elliot, Alan; Young, Carolyn
2009-01-01
We have done a complete flowability characterization of the lunar soil simulant JSC-1a, following closely the ASTM-6773 standard for the Schulze ring shear test. The measurements, which involve pre-shearing the material before each yield point, show JSC-1a to be cohesionless, with an angle of internal friction near 40 deg. We also measured yield loci after consolidating the material on a vibration table, which show it to have significant cohesion (approximately 1 kPa) and an angle of internal friction of about 60 deg. Hopper designs based on each type of flowability test differ significantly. These differences highlight the need to discern the condition of the lunar soil in the specific process where flowability is an issue. We close with a list (not necessarily comprehensive) of engineering rules of thumb that apply to powder flow in hoppers.
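A yield locus of the kind measured in a ring shear test is often summarized by the linear Mohr-Coulomb relation τ = c + σ·tan(φ). The sketch below evaluates it for the two JSC-1a conditions quoted above; the linear form and the 5 kPa example normal stress are illustrative assumptions, not values from the paper.

```python
import math

# Mohr-Coulomb shear strength: tau = cohesion + normal_stress * tan(phi).
def shear_strength_pa(sigma_pa, cohesion_pa, friction_angle_deg):
    return cohesion_pa + sigma_pa * math.tan(math.radians(friction_angle_deg))

sigma = 5000.0  # example normal stress, 5 kPa

# Pre-sheared state: cohesionless, friction angle near 40 deg.
pre_sheared = shear_strength_pa(sigma, 0.0, 40.0)

# Vibration-consolidated state: ~1 kPa cohesion, friction angle ~60 deg.
vibrated = shear_strength_pa(sigma, 1000.0, 60.0)

# The consolidated material is much stronger at the same normal stress,
# which is why the two tests lead to very different hopper designs.
print(f"pre-sheared: {pre_sheared:.0f} Pa, vibrated: {vibrated:.0f} Pa")
```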
Resonant transfer excitation in collisions of F6+ and Mg9+ with H2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, E.M.; Kamal, A.; Zaharakis, K.E.
1991-10-01
Experimental and theoretical investigations of resonant transfer excitation (RTE) for F6+ + H2 and Mg9+ + H2 collisions have been made. For both collision systems good agreement is obtained between the measured cross sections for K-shell x-ray emission coincident with electron capture and theoretical RTE calculations. For F6+ the present calculations are about 10% lower than previous results of Bhalla and Karim [Phys. Rev. A 39, 6060 (1989); 41, 4097(E) (1990)]; the measured cross sections are a factor of 2.3 larger than earlier measurements of Schulz et al. [Phys. Rev. A 38, 5454 (1988)]. The previous disagreement between experiment and theory for F6+ is removed.
Diviani, Nicola; Camerini, Anne-Linda; Reinholz, Danuta; Galfetti, Alessandra; Schulz, Peter J
2012-01-01
Objectives Although public health offices have a detailed record of the vaccination coverage among adolescents in Switzerland, little is known about the factors that determine the decisions of parents to get their children vaccinated. Based on Schulz & Nakamoto's Extended Health Empowerment Model, the present study aims at surveying parents of adolescents in Ticino (Switzerland) to get insights into the role of health literacy, health empowerment, information search behaviour and potential confounding variables that influence whether adolescents are not at all vaccinated, undervaccinated or fully covered against measles, mumps and rubella (MMR). Methods and analysis A survey including concepts of the Extended Health Empowerment Model will be administered to all families with adolescents attending the third year of middle school in Ticino. Subsequently, survey responses will be matched with actual data on MMR vaccination coverage of adolescents collected from the Cantonal Office of Public Health in Ticino. Discussion The results of this study will allow one to draw more comprehensive conclusions about the factors that play a role in parents' decisions regarding the vaccination of their children. At the same time, the study will provide useful insights into the main issues to be considered when addressing parents (on an interpersonal as well as a mass communication level) regarding the vaccination of their children. PMID:23166139
NASA Astrophysics Data System (ADS)
Schulz, M.
2008-05-01
Different methods of modeling the coronal and heliospheric magnetic field are conveniently visualized and intercompared by applying them to ideally axisymmetric field models. Thus, for example, a dipolar main B field with its moment parallel to the Sun's rotation axis leads to a flat heliospheric current sheet. More general solar main B fields (still axisymmetric about the solar rotation axis for simplicity) typically lead to cone-shaped current sheets beyond the source surface (and presumably also in MHD models). As in the dipolar case [Schulz et al., Solar Phys., 60, 83-104, 1978], such conical current sheets can be made realistically thin by taking the source surface to be non-spherical in a way that reflects the underlying structure of the Sun's main B field. A source surface that seems to work well in this respect [Schulz, Ann. Geophysicae, 15, 1379-1387, 1997] is a surface of constant F = (1/r)^k B, where B is the scalar strength of the Sun's main magnetic field and k (~ 1.4) is a shape parameter. This construction tends to flatten the source surface in regions where B is relatively weak. Thus, for example, the source surface for a dipolar B field is shaped somewhat like a Rugby football, whereas the source surface for an axisymmetric quadrupolar B field is similarly elongated but somewhat flattened (as if stuffed into a pair of co-axial cones) at mid-latitudes. A linear combination of co-axial dipolar and quadrupolar B fields generates a somewhat apple-shaped source surface. If the region surrounded by the source surface is regarded as current-free, then the source surface itself should be (as nearly as possible) an equipotential surface for the corresponding magnetic scalar potential (expanded, for example, in spherical harmonics). More generally, the mean-square tangential component of the coronal magnetic field over the source surface should be minimized with respect to any adjustable parameters of the field model.
The solar wind should then flow not quite radially, but rather in a straight line along the outward normal to the source surface, and the heliospheric B field should follow a corresponding generalization of Parker's spiral [Levine et al., Solar Phys., 77, 363-392, 1982]. In this work the above program is implemented for a Sun with an axisymmetric but purely quadrupolar main magnetic field. Two heliospheric current sheets emanate from circular neutral lines at mid-latitudes on the corresponding source surface. However, because the source surface is relatively flattened in regions where these neutral lines appear, the radial component of the heliospheric B field at r ~ 1 AU and beyond is much more nearly latitude-independent in absolute value than one would expect from a model based on a spherical source surface.
NASA Technical Reports Server (NTRS)
Wells, Douglas P.
2011-01-01
The Green Flight Challenge is one of the National Aeronautics and Space Administration's Centennial Challenges, designed to push technology and make passenger aircraft more efficient. Airliners currently average around 50 passenger-miles per gallon, and this competition will push teams to greater than 200 passenger-miles per gallon. The aircraft must also fly at least 100 miles per hour for 200 miles. The total prize money for this competition is $1.65 million. The Green Flight Challenge will be run by the Comparative Aircraft Flight Efficiency (CAFE) Foundation September 25 to October 1, 2011, at Charles M. Schulz Sonoma County Airport in California. Thirteen custom aircraft were developed with electric, bio-diesel, and other bio-fuel engines. The aircraft use various technologies to improve aerodynamic, propulsion, and structural efficiency. This paper will explore the feasibility of the rule set, competitor vehicles, design approaches, and technologies used.
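The competition metric above is simple arithmetic: passenger-miles per gallon is occupants times distance divided by fuel (or energy-equivalent) burned. A minimal sketch, where the two-occupant case is an illustrative assumption rather than a rule quoted in the text:

```python
# Passenger-miles per gallon: occupants * distance / fuel burned.
def passenger_mpg(passengers, miles, gallons_equivalent):
    return passengers * miles / gallons_equivalent

# To score 200 passenger-MPG over the 200-mile course with 2 occupants,
# an aircraft may burn at most 2 gallons (or the energy equivalent):
assert passenger_mpg(2, 200, 2.0) == 200.0

# At the 100 mph minimum speed, the 200-mile course takes at most 2 hours.
print(200 / 100)  # 2.0
```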
Heckhausen, J; Schulz, R
1999-07-01
This reply to S. J. Gould's (1999) critique of J. Heckhausen and R. Schulz's (1995) life-span theory of control addresses four issues: (1) the universal claim that primary control holds functional primacy over secondary control, (2) the status of secondary control as a confederate to primary control, (3) empirical evidence and paradigms for investigating universality and cultural variations, and (4) the capacity of the human control system to manage both gains and losses in control throughout the life span and aging-related decline in particular. Theoretical perspectives and empirical evidence from evolutionary, comparative, developmental, and cultural psychology are presented to support the authors' view that primary control striving holds functional primacy throughout the life span and across cultural and historical settings. Recommendations for empirically investigating the variations in the way primary control striving is expressed in different cultures are outlined.
Bespyatova, L A; Bugmyrin, S V
2015-01-01
Changes in the population density of two hard tick species, Ixodes (Exopalpiger) trianguliceps Birula, 1895 and Ixodes persulcatus Schulze, 1930, were examined in 1998-2001 and 2003-2004 near Gomselga Village (Kondopoga District, 62° 04' N, 33° 55' E) in central Karelia. Data on the abundance of ixodid ticks and the species composition of their hosts were obtained in 4 forest sites at different stages of post-felling regeneration (secondary succession), i.e. 7-14, 12-19, 25-32, and 80-87 years after logging. I. persulcatus dominated, comprising 73% of the total number of ticks in the samples. Regeneration of the forest resulted in fluctuations of the population density of the two examined tick species: I. (Exopalpiger) trianguliceps (larvae 2.8-5.3; nymphs 1.5-2.2; adults 0-0.09) and I. persulcatus (larvae 4.3-10.6; nymphs 0.6-4.2).
Fluctuations of Wigner-type random matrices associated with symmetric spaces of class DIII and CI
NASA Astrophysics Data System (ADS)
Stolz, Michael
2018-02-01
Wigner-type randomizations of the tangent spaces of classical symmetric spaces can be thought of as ordinary Wigner matrices on which additional symmetries have been imposed. In particular, they fall within the scope of a framework, due to Schenker and Schulz-Baldes, for the study of fluctuations of Wigner matrices with additional dependencies among their entries. In this contribution, we complement the results of these authors by explicit calculations of the asymptotic covariances for symmetry classes DIII and CI and thus obtain explicit CLTs for these classes. On the technical level, the present work is an exercise in controlling the cumulative effect of systematically occurring sign factors in an involved sum of products by setting up a suitable combinatorial model for the summands. This aspect may be of independent interest. Research supported by Deutsche Forschungsgemeinschaft (DFG) via SFB 878.
Phylogenetic perspectives on noise-induced fear and annoyance
NASA Astrophysics Data System (ADS)
Bowles, Ann
2003-04-01
Negative human responses to noise are typically interpreted in terms of human psychological, cognitive, or social processes. However, it may be useful to frame hypotheses about human responses in terms of evolutionary history, during which negative responses have been part of a suite of adaptations to a variable sound environment. By comparing the responses of a range of nonhuman animals to various types of noise, it is possible to develop hypotheses about the ecology of human responses. Examples of noise-related phenomena that could usefully be explained from this perspective include the Schulz curve, noise-induced physical stress, acute fear responses induced by transient noise, and the relationship between temperament and noise-induced annoyance. Responses of animals from a range of taxa will be described and their behavior interpreted in terms of their life-history strategies. From this perspective, some testable hypotheses about noise-induced fear and annoyance will be suggested.
Solar heavy ion Heinrich fluence spectrum at low earth orbit.
Croley, D R; Spitale, G C
1998-01-01
Solar heavy ions from the JPL Solar Heavy Ion Model have been transported into low earth orbit using the Schulz cutoff criterion for L-shell access by ions of a specific charge-to-mass ratio. The NASA Brouwer orbit generator was used to obtain L values along the orbit at 60-second time intervals. Heavy ion fluences for ions with 2 ≤ Z ≤ 92 have been determined for the LET range 1 to 130 MeV-cm²/mg behind 60, 120, or 250 mils of aluminum over a period of 24 hours in a 425 km circular orbit inclined 51 degrees. The ion fluence is time dependent in the sense that the position of the spacecraft in the orbit at the flare onset time fixes the relationship between particle flux and spacecraft passage through high L values where particles have access to the spacecraft.
OverPlotter: A Utility for Herschel Data Processing
NASA Astrophysics Data System (ADS)
Zhang, L.; Mei, Y.; Schulz, B.
2008-08-01
The OverPlotter utility is a GUI tool written in Java to support interactive data processing (DP) and analysis for the Herschel Space Observatory within the framework of the Herschel Common Science System (HCSS) (Wieprecht et al. 2004). The tool expands upon the capabilities of the TableViewer (Zhang & Schulz 2005), now also providing the means to overlay several X/Y scatter plots within the same display area. These layers can be scaled and panned, either individually or together as one graph. Visual comparison of data with different origins and units becomes much easier. The number of available layers is not limited, except by computer memory and performance. Presentation images can be easily created by adding annotations, labeling layers, and setting colors. The tool will be especially helpful in the early phases of Herschel data analysis, when quick access to the contents of data products is important.
visCOS: An R-package to evaluate model performance of hydrological models
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten
2016-04-01
The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be they strengths or deficiencies. Visualizations are usually devised for specific projects, and these efforts are often not distributed to a broader community (e.g. via open-source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R package for evaluating model performance by visualizing and exploring different aspects of hydrological time series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be used in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function for the time series, an online calculation of the objective functions for variable time windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections.
Many hydrologists use long-term water-balances as a pivotal tool in model evaluation. They allow inferences about different systematic model-shortcomings and are an efficient way for communicating these in practice (Schulz et al., 2015). The evaluation and construction of such water balances is implemented with the presented package. During the (manual) calibration of a model or in the scope of model development, many model runs and iterations are necessary. Thus, users are often interested in comparing different model results in a visual way in order to learn about the model and to analyse parameter-changes on the output. A method to illuminate these differences and the evolution of changes is also included. References: • Gupta, H.V.; Wagener, T.; Liu, Y. (2008): Reconciling theory with observations: elements of a diagnostic approach to model evaluation, Hydrol. Process. 22, doi: 10.1002/hyp.6989. • Klemeš, V. (1986): Operational testing of hydrological simulation models, Hydrolog. Sci. J., doi: 10.1080/02626668609491024. • Kling, H.; Stanzel, P.; Fuchs, M.; and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956. • Schulz, K., Herrnegger, M., Wesemann, J., Klotz, D. Senoner, T. (2015): Kalibrierung COSERO - Mur für Pro Vis, Verbund Trading GmbH (Abteilung STG), final report, Institute of Water Management, Hydrology and Hydraulic Engineering, University of Natural Resources and Applied Life Sciences, Vienna, Austria, 217pp. • Zambrano-Bigiarini, M; Bellin, A. (2010): Comparing Goodness-of-fit Measures for Calibration of Models Focused on Extreme Events. European Geosciences Union (EGU), Geophysical Research Abstracts 14, EGU2012-11549-1.
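The two efficiency criteria named above have simple closed forms. As a minimal sketch (in Python rather than R, with a made-up observation series; this is not code from the visCOS package):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of simulation error
    to the variance of the observations (1 = perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency (Gupta et al. 2009 form): combines correlation,
    variability ratio, and bias ratio into one score (1 = perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
print(nse(obs, sim), kge(obs, sim))
```

Simulating the observed mean everywhere gives NSE = 0, which is why NSE > 0 is commonly read as "better than the mean benchmark".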
Photoemission Spectroscopy of Delta- Plutonium: Experimental Review
NASA Astrophysics Data System (ADS)
Tobin, J. G.
2002-03-01
The electronic structure of Plutonium, particularly delta-Plutonium, remains ill defined and without direct experimental verification. Recently, we have embarked upon a program of study of alpha- and delta-Plutonium, using synchrotron radiation from the Advanced Light Source in Berkeley, CA, USA [1]. This work is set within the context of Plutonium Aging [2] and the complexities of Plutonium Science [3]. The resonant photoemission of delta-Plutonium is in partial agreement with an atomic, localized model of resonant photoemission, which would be consistent with a correlated electronic structure. The results of our synchrotron-based studies will be compared with those of recent laboratory-based works [4,5,6]. The talk will conclude with a brief discussion of our plans for the future, such as the performance of spin-resolving and dichroic photoemission measurements of Plutonium [7] and the development of single-crystal ultrathin films of Plutonium. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48. 1. J. Terry, R.K. Schulze, J.D. Farr, T. Zocco, K. Heinzelman, E. Rotenberg, D.K. Shuh, G. van der Laan, D.A. Arena, and J.G. Tobin, “5f Resonant Photoemission from Plutonium”, UCRL-JC-140782, Surf. Sci. Lett., accepted October 2001. 2. B.D. Wirth, A.J. Schwartz, M.J. Fluss, M.J. Caturla, M.A. Wall, and W.G. Wolfer, MRS Bulletin 26, 679 (2001). 3. S.S. Hecker, MRS Bulletin 26, 667 (2001). 4. T. Gouder, L. Havela, F. Wastin, and J. Rebizant, Europhys. Lett. 55, 705 (2001); MRS Bulletin 26, 684 (2001); Phys. Rev. Lett. 84, 3378 (2000). 5. A.J. Arko, J.J. Joyce, L. Morales, J. Wills, J. Lashley, F. Wastin, and J. Rebizant, Phys. Rev. B 62, 1773 (2000). 6. L.E. Cox, O. Eriksson, and B.R. Cooper, Phys. Rev. B 46, 13571 (1992). 7. J. Tobin, D.A. Arena, B. Chung, P. Roussel, J. Terry, R.K. Schulze, J.D. Farr, T. Zocco, K. Heinzelman, E. Rotenberg, and D.K. Shuh, “Photoelectron Spectroscopy of Plutonium at the Advanced Light Source”, UCRL-JC-145703, J. Nucl. Sci. Tech. / Proc. of Actinides 2001, submitted November 2001.
Pleistocene Indian Monsoon rainfall variability dominated by obliquity
NASA Astrophysics Data System (ADS)
Gebregiorgis, D.; Hathorne, E. C.; Giosan, L.; Collett, T. S.; Nuernberg, D.; Frank, M.
2015-12-01
The past variability of the Indian Monsoon is mostly known from records of wind strength over the Arabian Sea, while Quaternary proxy records of Indian monsoon precipitation are still lacking. Here we utilize scanning x-ray fluorescence (XRF) data from a sediment core obtained by the IODP vessel JOIDES Resolution in the Andaman Sea (Site 17) to investigate changes in sediment supply from the peak monsoon precipitation regions to the core site. We use Ti/Ca and K/Rb ratios to trace changes in terrigenous flux and weathering regime, respectively, while Zr/Rb ratios suggest grain size variations. The age model of Site 17 is based on correlation of benthic C. wuellerstorfi/C. mundulus δ18O data to the LR04 global benthic δ18O stack at a resolution of ~3 kyr (Lisiecki and Raymo, 2005) for the last 2 Myr. In its youngest part, the age model is supported by five 14C ages on planktic foraminifera and the youngest Toba ash layer (Ali et al., 2015), resulting in a nearly constant sedimentation rate of ~6.5 cm/kyr. Frequency analysis of the 4 mm resolution Ti/Ca, K/Rb, and Zr/Rb time series using the REDFIT program (Schulz and Mudelsee, 2002) reveals the three main Milankovitch orbital cycles above the 90% confidence level. Depth-domain spectral analysis reveals the presence of significant cyclicity at wavelengths of 28.5 and 2.8 m, corresponding to the ~400 kyr and ~41 kyr cycles, respectively, during the last 2 Myr. These records suggest that Indian monsoon rainfall has varied in the obliquity and eccentricity bands, the latter in particular after the mid-Pleistocene transition (MPT), while strong precession forcing is lacking in this super-high-resolution record. Northern summer insolation and Southern Hemisphere latent heat export are out of phase during precessional cycles, but in phase in the obliquity band, which indicates that Indian monsoon precipitation has likely been sensitive to both NH pull and SH push mechanisms (Clemens and Prell, 2003).
References: Ali, S., et al., 2015. Geochem. Geophys. Geosyst., 16, 505-521. Clemens, S.C., and Prell, W.L., 2003. Marine Geology, 201(1), 35-51. Lisiecki, L.E., and Raymo, M.E., 2005. Paleoceanography, 20, PA1003. Schulz, M., and Mudelsee, M., 2002. Computers & Geosciences, 28, 421-426.
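REDFIT itself is a published Fortran program; its core step, a Lomb-Scargle periodogram for unevenly spaced series, can be sketched in Python on synthetic data (an assumed 41 kyr cycle with noise, not the actual Site 17 XRF series):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled "record" with a 41 kyr cycle plus noise
# (illustration only; not the Site 17 data).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2000.0, 400))   # sample ages in kyr
y = np.sin(2.0 * np.pi * t / 41.0) + 0.3 * rng.standard_normal(t.size)
y -= y.mean()                                # lombscargle expects zero-mean input

periods = np.linspace(20.0, 500.0, 2000)     # candidate periods in kyr
omega = 2.0 * np.pi / periods                # angular frequencies (rad per kyr)
power = lombscargle(t, y, omega)

peak_period = periods[np.argmax(power)]
print(f"strongest periodicity near {peak_period:.1f} kyr")
```

REDFIT additionally bias-corrects the spectrum against a red-noise (AR1) background and attaches confidence levels, which this sketch omits.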
Polytype stability and defects in differently doped bulk SiC
NASA Astrophysics Data System (ADS)
Schmitt, Erwin; Straubinger, Thomas; Rasp, Michael; Vogel, Michael; Wohlfart, Andreas
2008-03-01
In this work, we present recent results on the development and production of n-type 4H bulk material. From previous studies it is evident that inclusions of foreign polytypes can act as the origin of severe structural imperfections [N. Schulze, D.L. Barret, G. Pensl, S. Rohmfeld, M. Hundhausen, Mater. Sci. Eng. B 61-62 (1999) 44; D. Hofmann, E. Schmitt, M. Bickermann, M. Kölbl, P.J. Wellmann, A. Winnacker, Mater. Sci. Eng. B 61-62 (1999) 48], accompanied by defects like micropipes, stacking faults and dislocations. For that reason, we have carried out investigations to sustain polytype stability throughout the entire process, including nucleation and subsequent growth. Assisted by numerical calculations, the influence of growth conditions, especially with respect to the thermal field, Si/C ratio and doping, was examined. Several methods for the evaluation of material properties were applied to determine the quality as precisely as possible, e.g. KOH defect etching, optical microscopy, electron microscopy, X-ray diffraction and resistivity mapping. The key experience we gained was that moderate growth conditions with reduced temperature gradients are only one prerequisite for the reduction of defect density. The stoichiometry in the gas phase and its modulation by nitrogen doping also have to be taken into account and must be adjusted to the prevailing growth regime. We finally identified an optimized process that initiated a considerable improvement of material quality. Best values for 3″ 4H wafers show that EPD < 5×10³ cm⁻² and MPD < 0.1 cm⁻² can be achieved.
New solutions for climate network visualization
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Buschmann, Stefan; Donges, Jonathan F.; Marwan, Norbert
2016-04-01
An increasing amount of climate and climate impact research deals with geo-referenced networks, including energy, trade, supply-chain, disease dissemination and climatic tele-connection networks. At the same time, the size and complexity of these networks increases, resulting in networks of more than a hundred thousand or even millions of edges, which are often temporally evolving, have additional data at nodes and edges, and can consist of multiple layers, even in real 3D. This poses challenges to both the static representation and the interactive exploration of these networks, first of all avoiding edge clutter ("edge spaghetti") and allowing interactivity even for unfiltered networks. Within this presentation, we illustrate potential solutions to these challenges. We give a glimpse of a questionnaire performed with climate and complex-system scientists regarding their network visualization requirements, and of a review of available state-of-the-art visualization techniques and tools for this purpose (see also Nocke et al., 2015). In the main part, we present alternative visualization solutions for several use cases (global, regional, and multi-layered climate networks), including alternative geographic projections, edge bundling, and 3D network support (based on the CGV and GTX tools), and implementation details to reach interactive frame rates. References: Nocke, T., S. Buschmann, J. F. Donges, N. Marwan, H.-J. Schulz, and C. Tominski: Review: Visual analytics of climate networks, Nonlinear Processes in Geophysics, 22, 545-570, doi:10.5194/npg-22-545-2015, 2015
Estimates of surface humidity and latent heat fluxes over oceans from SSM/I data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, S.H.; Atlas, R.M.; Shie, C.L.
1995-08-01
Monthly averages of daily latent heat fluxes over the oceans for February and August 1988 are estimated using a stability-dependent bulk scheme. Daily fluxes are computed from daily SSM/I (Special Sensor Microwave/Imager) wind speeds and EOF-retrieved SSM/I surface humidity, National Meteorological Center sea surface temperatures, and European Centre for Medium-Range Weather Forecasts analyzed 2-m temperatures. Daily surface specific humidity (Q) is estimated from the SSM/I total precipitable water (W) and that of a 500-m bottom layer (W_B) using an EOF (empirical orthogonal function) method. This method has six W-based categories of EOFs (independent of geographical location) and is developed using 23 177 FGGE IIb humidity soundings over the global oceans. For 1200 FGGE IIb humidity soundings, the accuracy of the EOF-retrieved Q is 0.75 g kg⁻¹ for the case without errors in W and W_B and increases to 1.16 g kg⁻¹ for the case with errors in W and W_B. Compared to 342 collocated radiosonde observations, the EOF-retrieved SSM/I Q has an accuracy of 1.7 g kg⁻¹. The method improves upon the humidity retrieval of Liu and is competitive with that of Schulz et al. The SSM/I surface humidity and latent heat fluxes of these two months agree reasonably well with those of COADS (Comprehensive Ocean-Atmosphere Data Set). Compared to the COADS, the sea-air humidity difference of SSM/I has a positive bias of approximately 1-3 g kg⁻¹ (an overestimation of flux); over the wintertime eastern equatorial Pacific Ocean, it has a negative bias of about 1-2 g kg⁻¹ (an underestimation of flux). The results further suggest that the two monthly flux estimates, computed from daily and monthly mean data, do not differ significantly over the oceans. 35 refs., 12 figs., 4 tabs.
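The EOF method referred to above can be illustrated, in outline only, with a plain SVD on synthetic profiles (the W-based category handling and FGGE training data of the actual retrieval are not reproduced here):

```python
import numpy as np

# Synthetic "humidity profile" matrix: rows are soundings, columns are levels
# (a hypothetical stand-in for the FGGE IIb training data).
rng = np.random.default_rng(42)
profiles = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 10))

anom = profiles - profiles.mean(axis=0)      # anomalies about the mean profile
u, s, vt = np.linalg.svd(anom, full_matrices=False)

eofs = vt                                     # rows of vt: the EOF patterns
pcs = u * s                                   # principal-component amplitudes
var_frac = s ** 2 / np.sum(s ** 2)            # variance explained per mode

# Truncated reconstruction keeping only the k leading modes:
k = 3
recon = pcs[:, :k] @ eofs[:k, :] + profiles.mean(axis=0)
print("variance captured by 3 modes:", var_frac[:3].sum())
```

In a retrieval setting, the surface value of a truncated reconstruction like `recon` plays the role of the estimated Q.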
Morozova, Olga V.; Bakhvalova, Valentina N.; Morozov, Igor V.
2007-01-01
The tick-borne encephalitis virus (TBEV) strains were isolated from unfed adult Ixodes persulcatus Schulze ticks in the Novosibirsk region (South-Western Siberia, Russia) from 1980 to 2006. The variable fragment of the TBEV 3'-untranslated region (3'UTR) was amplified with primers corresponding to conserved flanking areas. The RT-PCR product lengths varied from 100 to 400 bp. Comparative analysis of the 3'UTR nucleotide sequences revealed a few groups of TBEV strains within the Siberian genetic subtype, with significant intra-group homology and essential differences between groups. No correlation was found between the lengths of the 3'UTR fragments and hemagglutination (HA) titers for subsequent passages of the TBEV strains. However, for the viral strains with a shorter 3'UTR (less than 200 nucleotides), the incubation period in suckling mice was longer than 5 days. This might result from decreased RNA synthesis or reduced neuroinvasiveness. PMID:23675045
Chan, Leighton; Heinemann, Allen W; Roberts, Jason
2014-01-01
Note from the AJOT Editor-in-Chief: Since 2010, the American Journal of Occupational Therapy (AJOT) has adopted reporting standards based on the Consolidated Standards of Reporting Trials (CONSORT) Statement and American Psychological Association (APA) guidelines in an effort to publish transparent clinical research that can be easily evaluated for methodological and analytical rigor (APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008; Moher, Schulz, & Altman, 2001). AJOT has now joined 28 other major rehabilitation and disability journals in a collaborative initiative to enhance clinical research reporting standards through adoption of the EQUATOR Network reporting guidelines, described below. Authors will now be required to use these guidelines in the preparation of manuscripts that will be submitted to AJOT. Reviewers will also use these guidelines to evaluate the quality and rigor of all AJOT submissions. By adopting these standards we hope to further enhance the quality and clinical applicability of articles to our readers. Copyright © 2014 by the American Occupational Therapy Association, Inc.
Sallen, Jeffrey; Hirschmann, Florian; Herrmann, Christian
2018-01-01
The demands of a career in competitive sports can lead to chronic stress perception among athletes if there is a non-conformity of requirements and available coping resources. The Trier Inventory for Chronic Stress (TICS) (Schulz et al., 2004) is said to be thoroughly validated. Nevertheless, it has not yet been subjected to a confirmatory factor analysis. The present study aims (1) to evaluate the factorial validity of the TICS within the context of competitive sports and (2) to adapt a short version (TICS-36). The total sample consisted of 564 athletes (age in years: M = 19.1, SD = 3.70). The factor structure of the original TICS did not adequately fit the present data, whereas the short version presented a satisfactory fit. The results indicate that the TICS-36 is an economical instrument for gathering interpretable information about chronic stress. For assessment in competitive sports with TICS-36, we generated overall and gender-specific norm values. PMID:29593611
Realizing a Rasch measurement through instructionally-sequenced domains of test items.
NASA Astrophysics Data System (ADS)
Schulz, E. Matthew
2016-11-01
This paper presents results from a project in which instructionally sequenced domains were defined for the purpose of constructing measures that conform to an ideal in Guttman scaling and Rasch measurement. A fundamental idea in these measurement systems is that every person higher on the measurement scale can do everything that lower-level persons can do, plus at least one more thing. This idea has had limited application in educational measurement due to the stochastic nature of item response data and the sheer number of items needed to obtain reliable measures. However, it has been shown by Schulz, Lee, and Mullen [1] that this ideal can be realized at a higher level of abstraction: when items within a content strand are aggregated into a small number of domains that are ordered in instructional timing and difficulty. The present paper shows how this was done, and the results, in an achievement-level-setting project for the 2007 Grade 12 NAEP Economics Assessment.
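For readers unfamiliar with the Rasch model, the item response function is a logistic in the difference between person ability and item difficulty; a minimal sketch with hypothetical domain difficulties (not the NAEP values) is:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of success = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical domain difficulties, ordered by instructional timing.
difficulties = [-1.0, 0.0, 1.0, 2.0]

# As ability theta rises, expected success on every domain rises, and easier
# (earlier) domains are always more likely mastered than harder (later) ones,
# which is the probabilistic counterpart of the Guttman ideal.
for theta in (-1.0, 0.5, 2.5):
    probs = [round(rasch_prob(theta, b), 2) for b in difficulties]
    print(f"theta={theta:+.1f}: {probs}")
```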
Wang, Haiqiang; Woodward, Clifford E; Forsman, Jan
2014-05-21
We analyze a system consisting of two spherical particles immersed in a polydisperse polymer solution under theta conditions. An exact theory is developed to describe the potential of mean force between the spheres for the case where the polymer molecular weight dispersity is described by the Schulz-Flory distribution. Exact results can be derived for the protein regime, where the sphere radius (R_s) is small compared to the average radius of gyration of the polymer (R_g). Numerical results are relatively easily obtained in the cases where the sphere radius is increased. We find that even when q = R_g/R_s ⪆ 10, the use of a monopole expansion for the polymer end-point distribution about the spheres is sufficient. For even larger spheres, q ≈ 1, accuracy is maintained by including a dipolar correction. The implications of these findings for generating a full many-body effective interaction for a collection of N spheres embedded in the polymer solution are discussed.
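The Schulz-Flory ("most probable") distribution mentioned above has a simple discrete form; a short numerical sketch (with an illustrative extent of reaction p, unrelated to the paper's parameters):

```python
import numpy as np

# Schulz-Flory ("most probable") distribution of chain length n for an
# illustrative extent of reaction p (not a parameter from the paper).
p = 0.95
n = np.arange(1, 2001)                  # chain lengths; tail beyond 2000 is negligible
x = (1.0 - p) * p ** (n - 1)            # number-fraction distribution
w = n * (1.0 - p) ** 2 * p ** (n - 1)   # weight-fraction distribution

Mn = np.sum(n * x)                      # number-average length, analytically 1/(1-p)
Mw = np.sum(n * w)                      # weight-average length, analytically (1+p)/(1-p)
print(Mn, Mw, Mw / Mn)                  # dispersity Mw/Mn approaches 1 + p
```

For p close to 1, the dispersity Mw/Mn tends to 2, the familiar limit for step-growth polymers.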
The Cu-Li-Sn Phase Diagram: Isopleths, Liquidus Projection and Reaction Scheme
Flandorfer, Hans
2016-01-01
The Cu-Li-Sn phase diagram was constructed based on XRD and DTA data for 60 different alloy compositions. Eight ternary phases and 14 binary solid phases form 44 invariant ternary reactions, which are illustrated by a Scheil-Schulz reaction scheme and a liquidus projection. Phase equilibria as a function of concentration and temperature are shown along nine isopleths. This report, together with an earlier publication of our group, provides for the first time comprehensive investigations of the phase equilibria and the respective phase diagrams. Most of the phase equilibria could be established based on our experimental results; only in the Li-rich part, where many binary and ternary compounds are present, did estimates have to be made, and these are all indicated by dashed lines. A stable ternary miscibility gap was found, which had been predicted by modelling of the liquid ternary phase in a recent work. The phase diagrams are a crucial input for material databases and thermodynamic optimizations regarding new anode materials for high-power Li-ion batteries. PMID:27788175
Holocene Millennial Time Scale Hydrological Changes In Central-east Africa
NASA Astrophysics Data System (ADS)
Jolly, D.; Bonnefille, R.; Beaufort, L.
The Holocene hydrological changes of a tropical swamp are reconstructed using a high-resolution pollen record (ca 50 yrs) from the Kuruyange valley (Burundi, Africa, 3°35'S, 29°41'E), at 2000 m elevation. The sequence was dated by 10 radiocarbon dates, allowing reconstruction between ca 12 500 and 1000 cal yr B.P. In the Kuruyange swamp, peat accumulated rapidly at a sedimentation rate varying from 0.73 (prior to 6200 cal yr B.P.) to 1.51 mm/yr (during the late Holocene). A pollen index of the water table, based on a ratio of aquatic versus non-aquatic plants, has been used in order to test the hypothesis of hydrological constraints on the swampy ecosystem. Eight arid phases are evidenced by index minima at 12 200, 11 200, 9900, 8600, 6500, 5000, 3400, and 1600 cal yr B.P. The good agreement between this index and independent data such as (i) low-resolution East African lake-level reconstructions (Gillespie et al., 1983) and (ii) δ18O analyses from the Arabian Sea (Sirocko et al., 1993) suggests the water table level responds to the monsoon dynamics. The index varies periodically with a combination of 1/1515, 1/880 and 1/431 yr⁻¹ frequencies, revealed by time series analyses (Blackman-Tukey and Maximum Entropy). The extrapolation of the composite curve based on these 3 periodicities shows that two major climatic events defined in the high latitudes, between 1000 and 660 cal yr B.P. (Medieval Warm Period) and between 500 and 100 cal yr B.P. (Little Ice Age), are recorded in our data and show respectively high and low stands of the water table. Our results support some previous pollen-derived climate estimates in Ethiopia by Bonnefille and Umer (1994).
Moreover, the "1500 year" cycle registered in our data from the tropics, already evidenced at higher latitudes (Wijmstra et al., 1984; Bond et al., 1997; Schulz et al., 1999; Bond et al., 2001), supports the hypothesis of strong teleconnections between tropical/subtropical and polar climates during the deglaciation (Sirocko et al., 1996) and the Holocene. References: Bond et al., Science, 278, 1257 (1997). Bond et al., Science, 294, 2130 (2001). Bonnefille & Umer, Palaeogeography, Palaeoclimatology, Palaeoecology, 109, 331 (1994). Gillespie et al., Nature, 306, 680 (1983). Schulz et al., Geophysical Research Letters, 26, 3385 (1999). Sirocko et al., Nature, 364, 322 (1993). Sirocko et al., Science, 272, 526 (1996). Wijmstra et al., Acta Botanica Neerlandica, 33, 547 (1984).
Platelet-rich-plasmapheresis for minimising peri-operative allogeneic blood transfusion.
Carless, Paul A; Rubens, Fraser D; Anthony, Danielle M; O'Connell, Dianne; Henry, David A
2011-03-16
Concerns regarding the safety of transfused blood have generated considerable enthusiasm for the use of technologies intended to reduce the use of allogeneic blood (blood from an unrelated donor). Platelet-rich plasmapheresis (PRP) offers an alternative approach to blood conservation. To examine the evidence for the efficacy of PRP in reducing peri-operative allogeneic red blood cell (RBC) transfusion, and the evidence for any effect on clinical outcomes such as mortality and re-operation rates. We identified studies by searching MEDLINE (1950 to 2009), EMBASE (1980 to 2009), The Cochrane Library (Issue 1, 2009), the Internet (to March 2009) and the reference lists of published articles, reports, and reviews. Controlled parallel group trials in which adult patients, scheduled for non-urgent surgery, were randomised to PRP or to a control group that did not receive the intervention. Primary outcomes measured were: the number of patients exposed to allogeneic RBC transfusion, and the amount of RBC transfused. Other outcomes measured were: the number of patients exposed to allogeneic platelet transfusions, fresh frozen plasma, and cryoprecipitate, blood loss, re-operation for bleeding, post-operative complications (thrombosis), mortality, and length of hospital stay. Treatment effects were pooled using a random-effects model. Trial quality was assessed using criteria proposed by Schulz et al. (Schulz 1995). Twenty-two trials of PRP were identified that reported data for the number of patients exposed to allogeneic RBC transfusion. These trials evaluated a total of 1589 patients. The relative risk (RR) of exposure to allogeneic blood transfusion in those patients randomised to PRP was 0.73 (95%CI 0.59 to 0.90), equating to a relative risk reduction (RRR) of 27% and a risk difference (RD) of 19% (95%CI 10% to 29%). However, significant heterogeneity of treatment effect was observed (p < 0.00001; I² = 79%).
When the four trials by Boldt are excluded, the RR is 0.76 (95% CI 0.62 to 0.93). On average, PRP did not significantly reduce the total volume of RBC transfused (weighted mean difference [WMD] -0.69, 95%CI -1.93 to 0.56 units). Trials provided inadequate data regarding the impact of PRP on morbidity, mortality, and hospital length of stay. Trials were generally small and of poor methodological quality. Although the results suggest that PRP is effective in reducing allogeneic RBC transfusion in adult patients undergoing elective surgery, there was considerable heterogeneity of treatment effects and the trials were of poor methodological quality. The available studies provided inadequate data for firm conclusions to be drawn regarding the impact of PRP on clinically important endpoints.
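The summary statistics quoted above (RR, RRR, RD) follow directly from event counts in the two arms; a small sketch with hypothetical counts chosen to give RR ≈ 0.73 (these are not the pooled counts from the 22 trials):

```python
def risk_measures(events_t, n_t, events_c, n_c):
    """Relative risk, relative risk reduction, and risk difference from the
    event counts of a treatment arm and a control arm."""
    risk_t = events_t / n_t
    risk_c = events_c / n_c
    rr = risk_t / risk_c          # relative risk (RR)
    rrr = 1.0 - rr                # relative risk reduction (RRR)
    rd = risk_c - risk_t          # absolute risk difference (RD)
    return rr, rrr, rd

# Hypothetical counts: 365/800 transfused with PRP vs 500/800 in control.
rr, rrr, rd = risk_measures(365, 800, 500, 800)
print(rr, rrr, rd)
```

Pooling across trials, the random-effects step in the review, additionally weights each trial's log-RR by its inverse variance, which this sketch does not attempt.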
Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D.; Senn, Pascal
2013-01-01
Objective To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair-Schulz-Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. Results Higher frame rates (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or the full-screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users if visual cues are additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Conclusion Webcams have the potential to improve telecommunication of hearing-impaired individuals. PMID:23359119
SU-F-E-07: Web-Based Training for Radiosurgery: Methods and Metrics for Global Reach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulz, R; Thomas, E; Popple, R
Purpose: Webinars have become an evolving tool with greater or lesser success in reaching health care providers (HCPs). This study seeks to assess best practices and metrics for success in webinar deployment for optimal global reach. Methods: Webinars have been developed and launched to reach practicing health care providers in the field of radiation oncology and radiosurgery. One such webinar was launched in early February 2016. "Multiple Brain Metastases & Volumetric Modulated Arc Radiosurgery: Refining the Single-Isocenter Technique to Benefit Surgeons and Patients", presented by Drs. Fiveash and Thomas from UAB, was submitted to and accredited by the Institute for Medical Education (IME) as qualifying for CME, as well as by the MDCB for educational credit for dosimetrists, in order to encourage participation. MedicalPhysicsWeb was chosen as the platform to inform attendees about the webinar. Further, IME accredited the activity for 1 AMA PRA Category 1 credit for physicians and medical physicists. The program was qualified by the ABR as meeting the criteria for self-assessment towards fulfilling MOC requirements. Free SAM credits were underwritten by an educational grant from Varian Medical Systems. Results: The webinar attracted 992 pre-registrants from 66 countries. Outside the US and Canada, 11 were from the Americas, 32 from Europe, and 9 from the Middle East and Africa; Australasia and the Indian subcontinent represented the remaining 14 countries. Pre-registrants included 423 medical physicists, 225 medical dosimetrists, 24 radiation therapists, 66 radiation oncologists, and others. Conclusion: The effectiveness of CME and SAM-CME programs such as this can be gauged by the high rate of respondents who state an intention to change practice habits, a primary goal of continuing medical education and self-assessment.
This webinar succeeded in being the most successful webinar on MedicalPhysicsWeb as measured by pre-registration, participation, and participation-to-pre-registration ratio. R.A. Schulz is an employee of Varian Medical Systems.
The Nuts and Bolts of Low-level Laser (Light) Therapy
Chung, Hoon; Dai, Tianhong; Sharma, Sulbha K.; Huang, Ying-Ying; Carroll, James D.; Hamblin, Michael R.
2011-01-01
Soon after the discovery of lasers in the 1960s it was realized that laser therapy had the potential to improve wound healing and reduce pain, inflammation and swelling. In recent years the field sometimes known as photobiomodulation has broadened to include light-emitting diodes and other light sources, and the range of wavelengths used now includes many in the red and near infrared. The term “low level laser therapy” or LLLT has become widely recognized and implies the existence of the biphasic dose response or the Arndt-Schulz curve. This review will cover the mechanisms of action of LLLT at a cellular and at a tissular level and will summarize the various light sources and principles of dosimetry that are employed in clinical practice. The range of diseases, injuries, and conditions that can be benefited by LLLT will be summarized with an emphasis on those that have reported randomized controlled clinical trials. Serious life-threatening diseases such as stroke, heart attack, spinal cord injury, and traumatic brain injury may soon be amenable to LLLT therapy. PMID:22045511
Sayah, Mohamed Yassine; Chabir, Rachida; Benyahia, Hamid; Rodi Kandri, Youssef; Ouazzani Chahdi, Fouad; Touzani, Hanan; Errachidi, Faouzi
2016-01-01
Orange (Citrus sinensis) and grapefruit (Citrus paradisi) peels were used as a source of pectin, which was extracted under different conditions. The peels were used in two states: fresh and residual (after essential oil extraction). An organic acid (citric acid) and a mineral acid (sulfuric acid) were used in the pectin extraction. The aim of this study was to evaluate the effect of extraction conditions on pectin yield, degree of esterification (DE), and molecular weight (Mw). Results showed that the pectin yield was higher using the residual peels. Moreover, both peels yielded a high-methoxyl pectin with DE > 50%. The molecular weight was calculated using the Mark-Houwink-Sakurada equation, which describes its relationship with intrinsic viscosity. The latter was determined using four equations: the Huggins, Kraemer, Schulz-Blaschke, and Martin equations. The molecular weight varied from 1.538 × 10^5 to 2.47 × 10^5 g/mol for grapefruit pectin and from 1.639 × 10^5 to 2.471 × 10^5 g/mol for orange pectin. PMID:27644093
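The viscometric route described above can be sketched as follows: a Huggins extrapolation of a dilution series gives the intrinsic viscosity, and the Mark-Houwink-Sakurada relation [η] = K·Mw^a is then inverted for Mw. The dilution data and the constants K and a below are hypothetical placeholders, not measured pectin values:

```python
import statistics

def intrinsic_viscosity_huggins(conc, eta_sp):
    """Linear fit of eta_sp/c vs c (Huggins: eta_sp/c = [eta] + kH*[eta]^2*c);
    the intercept at c = 0 is the intrinsic viscosity [eta]."""
    y = [e / c for e, c in zip(eta_sp, conc)]
    xbar, ybar = statistics.fmean(conc), statistics.fmean(y)
    slope = (sum((x - xbar) * (yi - ybar) for x, yi in zip(conc, y))
             / sum((x - xbar) ** 2 for x in conc))
    return ybar - slope * xbar

def mark_houwink_mw(eta_intrinsic, K, a):
    """Invert [eta] = K * Mw**a (Mark-Houwink-Sakurada)."""
    return (eta_intrinsic / K) ** (1.0 / a)

# Synthetic dilution series with [eta] = 400 mL/g and kH = 0.35;
# K and a are placeholder constants for illustration only.
conc = [0.0005, 0.001, 0.0015, 0.002]                # g/mL
eta_sp = [400 * c + 0.35 * 400**2 * c**2 for c in conc]
eta = intrinsic_viscosity_huggins(conc, eta_sp)
Mw = mark_houwink_mw(eta, K=9.55e-2, a=0.73)
print(f"[eta] = {eta:.0f} mL/g, Mw = {Mw:.3g} g/mol")
```

With real data the choice among the Huggins, Kraemer, Schulz-Blaschke, and Martin extrapolations changes only how [η] is estimated; the Mw inversion step is the same.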
Menzer, Melissa M; Torney-Purta, Judith
2012-10-01
The purpose of this study was to examine two aspects of context for peer aggression: national individualism and distributions of socioeconomic status in the school. School administrators for each school reported on their perceptions of the frequency of bullying and violence in their school. The sample comprised 990 school principals/headmasters from nationally representative samples of schools in 15 countries surveyed as part of the larger IEA Civic Education Study (Torney-Purta, Lehmann, Oswald, & Schulz, 2001). A national context of individualism was associated with violence but not bullying. Schools with high socioeconomic diversity had more bullying than homogeneously low or high socioeconomic status schools. In addition, diverse schools had more violence than affluent schools. Results suggest that bullying and violence should be investigated as separate constructs. Furthermore, contexts, such as national culture and school socioeconomic diversity, are important in understanding the prevalence of bullying and violence in schools internationally. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
g_contacts: Fast contact search in bio-molecular ensemble data
NASA Astrophysics Data System (ADS)
Blau, Christian; Grubmüller, Helmut
2013-12-01
Short-range interatomic interactions govern many bio-molecular processes. Therefore, identifying close interaction partners in ensemble data is an essential task in structural biology and computational biophysics. A contact search can be cast as a typical range search problem for which efficient algorithms have been developed. However, none of those has yet been adapted to the context of macromolecular ensembles, particularly in a molecular dynamics (MD) framework. Here a set-decomposition algorithm is implemented which detects all contacting atoms or residues in at most O(N log N) run-time, in contrast to the O(N^2) complexity of a brute-force approach. Catalogue identifier: AEQA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 8945. No. of bytes in distributed program, including test data, etc.: 981604. Distribution format: tar.gz. Programming language: C99. Computer: PC. Operating system: Linux. RAM: approximately the size of one input frame. Classification: 3, 4.14. External routines: Gromacs 4.6 [1]. Nature of problem: finding atoms or residues that are closer to one another than a given cut-off. Solution method: excluding distant atoms from distance calculations by decomposing the given set of atoms into disjoint subsets. Running time: ≤ O(N log N). References: [1] S. Pronk, S. Pall, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R. Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess and Erik Lindahl, Gromacs 4.5: a high-throughput and highly parallel open source molecular simulation toolkit, Bioinformatics 29 (7) (2013).
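The key idea above (excluding distant atoms from distance calculations) can be illustrated with a standard cell-list scheme, which bins atoms into cut-off-sized cells and compares only neighboring cells. This is a sketch of the general technique, not the g_contacts implementation:

```python
from collections import defaultdict
from itertools import product

def contacts_cell_list(coords, cutoff):
    """All pairs (i, j), i < j, closer than cutoff, via spatial binning.
    Only the 27 neighboring cells are searched, avoiding O(N^2) work."""
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(coords):
        cells[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)
    pairs = set()
    cut2 = cutoff * cutoff
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:
                        xi, yi, zi = coords[i]
                        xj, yj, zj = coords[j]
                        d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
                        if d2 < cut2:
                            pairs.add((i, j))
    return sorted(pairs)

# Two close atom pairs, far from each other
atoms = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (5.0, 5.0, 5.0), (5.2, 5.0, 5.0)]
print(contacts_cell_list(atoms, cutoff=0.5))  # [(0, 1), (2, 3)]
```

Binning is O(N), and with roughly uniform density each atom is compared against a bounded number of neighbors, which is how such schemes stay well below the brute-force O(N^2) cost.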
Diviani, Nicola; Dima, Alexandra Lelia; Schulz, Peter Johannes
2017-04-11
The eHealth Literacy Scale (eHEALS) is a tool to assess consumers' comfort and skills in using information technologies for health. Although evidence exists of reliability and construct validity of the scale, less agreement exists on structural validity. The aim of this study was to validate the Italian version of the eHealth Literacy Scale (I-eHEALS) in a community sample with a focus on its structural validity, by applying psychometric techniques that account for item difficulty. Two Web-based surveys were conducted among a total of 296 people living in the Italian-speaking region of Switzerland (Ticino). After examining the latent variables underlying the observed variables of the Italian scale via principal component analysis (PCA), fit indices for two alternative models were calculated using confirmatory factor analysis (CFA). The scale structure was examined via parametric and nonparametric item response theory (IRT) analyses accounting for differences between items regarding the proportion of answers indicating high ability. Convergent validity was assessed by correlations with theoretically related constructs. CFA showed a suboptimal model fit for both models. IRT analyses confirmed all items measure a single dimension as intended. Reliability and construct validity of the final scale were also confirmed. The contrasting results of factor analysis (FA) and IRT analyses highlight the importance of considering differences in item difficulty when examining health literacy scales. The findings support the reliability and validity of the translated scale and its use for assessing Italian-speaking consumers' eHealth literacy. ©Nicola Diviani, Alexandra Lelia Dima, Peter Johannes Schulz. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 11.04.2017.
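One of the reliability checks mentioned above can be sketched on synthetic data: Cronbach's alpha for a set of items driven by a single latent trait. The data below are simulated purely for illustration, not the I-eHEALS responses:

```python
import random
import statistics

def cronbach_alpha(items):
    """items: list of respondent rows, each a list of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items[0])
    cols = list(zip(*items))
    item_var = sum(statistics.variance(c) for c in cols)
    total_var = statistics.variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_var / total_var)

# Synthetic 1-5 Likert data: one latent trait drives 8 items (hypothetical)
random.seed(7)
rows = []
for _ in range(300):
    trait = random.gauss(0, 1)
    rows.append([min(5, max(1, round(3 + trait + random.gauss(0, 0.8))))
                 for _ in range(8)])
alpha = cronbach_alpha(rows)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Alpha summarizes internal consistency only; as the abstract notes, it says nothing about item difficulty, which is why the IRT analyses were needed.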
Koller, Roger; Guignard, Jérémie; Caversaccio, Marco; Kompis, Martin; Senn, Pascal
2017-01-01
Background Telecommunication is limited or even impossible for more than one-third of all cochlear implant (CI) users. Objective We therefore sought to study the impact of voice quality on speech perception with voice over Internet protocol (VoIP) under real and adverse network conditions. Methods Telephone speech perception was assessed in 19 CI users (15-69 years, average 42 years), using the German HSM (Hochmair-Schulz-Moser) sentence test, comparing Skype and conventional telephone (public switched telephone network, PSTN) transmission using a personal computer (PC) and a digital enhanced cordless telecommunications (DECT) telephone dual device. Five different Internet transmission quality modes and four accessories (PC speakers, headphones, 3.5 mm jack audio cable, and induction loop) were compared. As a secondary outcome, the subjectively perceived voice quality was assessed using the mean opinion score (MOS). Results Telephone speech perception was significantly better (median 91.6%, P<.001) with Skype than with PSTN (median 42.5%) under optimal conditions. Skype calls under adverse network conditions (data packet loss > 15%) were not superior to conventional telephony. In addition, there were no significant differences between the tested accessories (P>.05) using a PC. Coupling a Skype DECT phone device with an audio cable to the CI, however, resulted in higher speech perception (median 65%) and subjective MOS scores (3.2) than using PSTN (median 7.5%, P<.001). Conclusions Skype calls significantly improve speech perception for CI users compared with conventional telephony under real network conditions. Listening accessories do not further improve the listening experience. Current Skype DECT telephone devices do not fully offer technical advantages in voice quality. PMID:28438727
[Here the world is burning: the 70th anniversary of the death of neurologist Dr. John Rittmeister].
Teller, Ch
2013-09-01
John Rittmeister was a German neurologist (1898-1943) who was executed in Berlin-Plötzensee because of his decision to support organized political resistance against National Socialism. He grew up in a socially and materially privileged environment, and following his final school examinations (Abitur) in 1917 he volunteered for war duties despite limited physical capabilities and was posted as a private to the war front in the Italian Alps and the Champagne district. There he gained his first social experiences outside his original surroundings. After the war he studied medicine and, following the final state examinations and graduation, he progressed to specialist training as a neurologist in Munich. At this time he came into contact with C.G. Jung. During a study period in London in 1929 he worked for several weeks as a resident at Toynbee Hall, a university institution in Whitechapel, and experienced the methods of community work used there, known as the settlement movement. He continued his specialist activities in the neurological clinic in Zürich founded by C. von Monakow. Following the experiences in London he broke with C.G. Jung and turned to Sigmund Freud and therapeutic analysis under Gustav Bally. In 1937 he returned to Germany. In 1939 he became director of the Policlinic of the German Institute for Psychological Research and Psychotherapy. Probably also because of his own war experiences, in 1941/1942 he participated in drafting a flyer for the Schulze-Boysen/Harnack group against the war, and after 8 months in prison he was executed in Berlin on 13 May 1943.
Fadda, Marta; Galimberti, Elisa; Fiordelli, Maddalena; Schulz, Peter Johannes
2018-03-07
There is mixed evidence on the effectiveness of vaccination-related interventions. A major limitation of most intervention studies is that they do not apply randomized controlled trials (RCTs), the method that, over the last 2 decades, has increasingly been considered the only method to provide proof of the effectiveness of an intervention and, consequently, the most important instrument in deciding whether to adopt an intervention or not. This study, however, holds that methods other than RCTs can also produce meaningful results. The aim of this study was to evaluate 2 mobile phone-based interventions aimed at increasing parents' knowledge of the measles-mumps-rubella (MMR) vaccination (through elements of gamification) and their psychological empowerment (through the use of narratives), respectively. The 2 interventions were part of an RCT. We conducted 2 studies with the RCT participants: a Web-based survey aimed at assessing their rating of the tool regarding a number of qualities such as usability and usefulness (N=140), and qualitative telephone interviews to explore participants' experiences with the app (N=60). The results of the survey showed that participants receiving the knowledge intervention (alone or together with the empowerment intervention) liked the app significantly better than the group that only received the empowerment intervention (F(2,137)=15.335; P<.001). Parents who were exposed to the empowerment intervention complained that they did not receive useful information but were only invited to make an informed, autonomous MMR vaccination decision. The results suggest that efforts to empower patients should always be accompanied by the provision of factual information.
Using a narrative format that promotes parents' identification can be an appropriate strategy, but it should be combined with the presentation of multiple points of view and with factual information regarding, for instance, the risks and benefits of the vaccination. International Standard Randomized Controlled Trial Number 30768813; http://www.isrctn.com/ ISRCTN30768813 (Archived by WebCite at http://www.webcitation.org/6xOQSJ3w8). ©Marta Fadda, Elisa Galimberti, Maddalena Fiordelli, Peter Johannes Schulz. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 07.03.2018.
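The F statistic reported above comes from a one-way comparison of the three trial arms. A sketch of such a test on hypothetical rating samples (not the study's data), using only the standard library:

```python
import random
import statistics

def one_way_anova_F(*groups):
    """F = MS_between / MS_within for a one-way layout."""
    all_vals = [x for g in groups for x in g]
    grand = statistics.fmean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.fmean(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical app-rating samples for three arms (group sizes chosen
# so that the error degrees of freedom are 137, as in the abstract)
random.seed(42)
knowledge = [random.gauss(4.0, 0.8) for _ in range(47)]
empowerment = [random.gauss(3.2, 0.8) for _ in range(46)]
combined = [random.gauss(4.1, 0.8) for _ in range(47)]

F = one_way_anova_F(knowledge, empowerment, combined)
print(f"F(2,137) = {F:.2f}")
```

The resulting F is then compared against the F(2,137) distribution to obtain the p-value; values this large correspond to P<.001.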
Pits and Channels of Hebrus Valles
2017-01-26
The drainages in this image are part of Hebrus Valles, an outflow channel system likely formed by catastrophic floods. Hebrus Valles is located in the plains of the northern lowlands, just west of the Elysium volcanic region. Individual channels range from several hundred meters to several kilometers wide and form multi-threaded (anastomosing) patterns. Separating the channels are streamlined forms, whose tails point downstream and indicate that channel flow is to the north. The channels seemingly terminate in an elongated pit that is approximately 1875 meters long and 1125 meters wide. Using the shadow that the wall has cast on the floor of the pit, we can estimate that the pit is nearly 500 meters deep. The pit, which formed after the channels, exposes a bouldery layer below the dusty surface mantle and is underlain by sediments. Boulders several meters in diameter litter the slopes down into the pit. Pits such as these are of interest as possible candidate landing sites for human exploration because they might retain subsurface water ice (Schulze-Makuch et al. 2016, 6th Mars Polar Conf.) that could be utilized by future long-term human settlements. http://photojournal.jpl.nasa.gov/catalog/PIA11704
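The shadow-based depth estimate mentioned above is simple trigonometry: for a steep-walled pit, depth ≈ shadow length × tan(solar elevation). A sketch with illustrative numbers, since the actual image geometry is not given here:

```python
import math

def depth_from_shadow(shadow_length_m, sun_elevation_deg):
    """Depth of a steep-walled pit from the shadow its rim casts on the
    floor: depth = shadow length * tan(sun elevation above horizon)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# Illustrative values only, not the measured image geometry
print(f"{depth_from_shadow(700, 35):.0f} m")  # → 490 m
```

The shadow length itself is measured in image pixels and converted to meters with the camera's known ground sampling distance.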
NASA Astrophysics Data System (ADS)
Wang, Xiao-hong; Zhang, Xue-hua; Schröder, Heinz C.; Müller, Werner E. G.
2009-09-01
Like all sponges (phylum Porifera), the glass sponges (Hexactinellida) are provided with an elaborate and distinct body plan, which relies on a filigree skeleton. It is constructed by an array of morphologically determined elements, the spicules. Schulze described the largest siliceous hexactinellid sponge on Earth, the up to 3 m high Monorhaphis chuni, collected during the German Deep Sea Expedition "Valdivia" (1898-1899). This species develops an equally large bio-silica structure, the giant basal spicule (3 m × 10 mm). Using these spicules as a model, one can obtain the basic knowledge on the morphology, formation, and development of silica skeletal elements. The silica matrix is composed of almost pure silica, endowing it with unusual optophysical properties, which are superior to those of man-made waveguides. Experiments suggest that the spicules function in vivo as a nonocular photoreception system. The spicules are also provided with exceptional mechanical properties. Like demosponges, the hexactinellids synthesize their silica enzymatically via the enzyme silicatein (27 kDa protein). This enzyme is located in/embedded in the silica layers. This knowledge will surely contribute to a further utilization and exploration of silica in biomaterial/biomedical science.
The Development of Sleep Medicine: A Historical Sketch
Schulz, Hartmut; Salzarulo, Piero
2016-01-01
For centuries the scope of sleep disorders in medical writings was limited to those disturbances which were either perceived by the sleeper him- or herself as troublesome, such as insomnia, or which were recognized by an observer as strange behavioral acts during sleep, such as sleepwalking or sleep terrors. Other sleep disorders, which are caused by malfunction of a physiological system during sleep, such as sleep-related respiratory disorders, were widely unknown or ignored before sleep monitoring techniques became available, mainly in the second half of the 20th century. Finally, circadian sleep-wake disorders were recognized as a group of disturbances in their own right only when chronobiology and sleep research began to interact extensively in the last two decades of the 20th century. Sleep medicine as a medical specialty with its own diagnostic procedures and therapeutic strategies could be established only when key findings in neurophysiology and basic sleep research allowed a breakthrough in the understanding of the sleeping brain, mainly since the second half of the last century. Citation: Schulz H, Salzarulo P. The development of sleep medicine: a historical sketch. J Clin Sleep Med 2016;12(7):1041–1052. PMID:27250813
Filtration of Pathogenic Parasites Using Surfactant-Modified Zeolites
NASA Astrophysics Data System (ADS)
Lehner, T.; Schulze-Makuch, D.; Bowman, R.
2003-12-01
Migration of pathogenic microorganisms, specifically Cryptosporidium parvum and Giardia lamblia, in groundwater due to sewage effluent and mismanaged wastewater has become an increased concern for human health in many regions. Cryptosporidiosis and giardiasis produce moderate to severe intestinal illness lasting many weeks and are a serious threat for immunodeficient persons. Previous studies by Schulze-Makuch et al. (2002) indicated that surfactant-modified zeolites (SMZ) removed all of the bacteria and most of the viruses in laboratory experiments. This study focuses on the efficiency of SMZ in preventing the migration of protozoan spores in groundwater. Adsorption of the spores involves interactions between the surface properties of the spores and the SMZ. The efficiency of removal is tested under simulated natural conditions. Laboratory experiments are conducted in a plexiglass model aquifer, and pathogen removal is measured by taking water samples from strategically placed piezometers in the model. Since C. parvum and G. lamblia are hazardous to humans and move primarily in spore state through groundwater, polystyrene microspheres of similar sizes and Bacillus subtilis, a sporulating bacterium, are used as analogues for the protozoa. Preliminary results show a significant decrease in the concentration of the B. subtilis spores down-gradient of the barrier.
Testing the H2O2-H2O hypothesis for life on Mars with the TEGA instrument on the Phoenix lander.
Schulze-Makuch, Dirk; Turse, Carol; Houtkooper, Joop M; McKay, Christopher P
2008-04-01
In the time since the Viking life-detection experiments were conducted on Mars, many missions have enhanced our knowledge about the environmental conditions on the Red Planet. However, the martian surface chemistry and the Viking lander results remain puzzling. Nonbiological explanations that favor a strong inorganic oxidant are currently favored (e.g., Mancinelli, 1989; Plumb et al., 1989; Quinn and Zent, 1999; Klein, 1999; Yen et al., 2000), but problems remain regarding the lifetime, source, and abundance of that oxidant to account for the Viking observations (Zent and McKay, 1994). Alternatively, a hypothesis that favors the biological origin of a strong oxidizer has recently been advanced (Houtkooper and Schulze-Makuch, 2007). Here, we report on laboratory experiments that simulate the experiments to be conducted by the Thermal and Evolved Gas Analyzer (TEGA) instrument of the Phoenix lander, which is to descend on Mars in May 2008. Our experiments provide a baseline for an unbiased test for chemical versus biological responses, which can be applied at the time the Phoenix lander transmits its first results from the martian surface.
Biphasic Dose Response in Low Level Light Therapy – An Update
Huang, Ying-Ying; Sharma, Sulbha K; Carroll, James; Hamblin, Michael R
2011-01-01
Low-level laser (light) therapy (LLLT) has been known since 1967 but still remains controversial due to incomplete understanding of the basic mechanisms and the selection of inappropriate dosimetric parameters that led to negative studies. The biphasic dose response, or Arndt-Schulz curve, in LLLT has been shown in both in vitro studies and animal experiments. This review provides an update to our previous (Huang et al. 2009) coverage of this topic. In vitro mediators of LLLT such as adenosine triphosphate (ATP) and mitochondrial membrane potential show biphasic patterns, while others such as mitochondrial reactive oxygen species show a triphasic dose response with two distinct peaks. The Janus nature of reactive oxygen species (ROS), which may act as a beneficial signaling molecule at low concentrations and a harmful cytotoxic agent at high concentrations, may partly explain the observed responses in vivo. Transcranial LLLT for traumatic brain injury (TBI) in mice shows a distinct biphasic pattern, with peaks in beneficial neurological effects observed when the number of treatments is varied and when the energy density of an individual treatment is varied. Further understanding of the extent to which biphasic dose responses apply in LLLT will be necessary to optimize clinical treatments. PMID:22461763
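A biphasic (Arndt-Schulz-type) dose response can be captured by many phenomenological forms. One simple choice, used here purely for illustration and not taken from the review, is a rise-and-decay curve that stimulates at low dose, peaks at an optimal dose, and declines at high dose:

```python
import math

def biphasic_response(dose, a=1.0, b=0.5):
    """A simple hormetic curve: response = a * dose * exp(-b * dose).
    Rises at low dose, peaks at dose = 1/b, declines at high dose."""
    return a * dose * math.exp(-b * dose)

# With b = 0.5 the optimum sits at dose 2; doses above it do worse
for d in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"dose {d:>4}: response {biphasic_response(d):.3f}")
```

The clinical implication sketched by such a curve is the one the review draws: doubling the dose past the optimum can reduce, not increase, the benefit.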
Relative ages of globular clusters
NASA Astrophysics Data System (ADS)
Miller Bertolami, M.; Forte, J. C.
The work of Rossemberg et al. (1999) studies the relative ages of galactic globular clusters through the analysis of certain morphological parameters of the color-magnitude diagrams of those clusters. The present work focuses on three points. First, it analyzes the consistency of the results obtained by Rossemberg et al. (1999) when using observations in the Washington photometric system, more precisely the C and T1 magnitudes instead of the V and I magnitudes used by those authors. Second, given the availability of integrated colors, metallicity, and (relative) age for 21 of the clusters used in that work, it analyzes the consistency of these results with the dependences of integrated color on age and metallicity that follow from the theoretical integrated-light models of Worthey (1994), Schulz (2002), and Lee et al. (2002). Finally, a brief comparison is carried out between the morphology of the color-magnitude diagrams of the globular clusters and the isochrones used, in order to identify some of the possible causes of the differences observed in the previous items.
Instabilities of coupled Cu2O5 ladders
NASA Astrophysics Data System (ADS)
Schuetz, Florian; Marston, Brad
2008-03-01
The spin-ladder compound Sr14-xCaxCu24O41 has a complex phase diagram including charge-density-wave order as well as unconventional superconductivity under high pressure. Due to its quasi-one-dimensional nature [S. Lee, J. B. Marston, J. O. Fjaerestad, Phys. Rev. B 72, 075126], fundamental questions about the high-Tc cuprates might be more easily addressed in this context. However, due to the spatial proximity of neighboring ladders, inter-ladder Coulomb repulsion as well as hopping between ladders might still be important. Using the functional renormalization group [M. Salmhofer and C. Honerkamp, Prog. Theor. Phys. 105, 1 (2001)] and an analysis of generalized susceptibilities [D. Zanchi and H. J. Schulz, Phys. Rev. B 61, 13609 (2000); C. J. Halboth and W. Metzner, Phys. Rev. Lett. 85, 5162 (2000)], we study a model of coupled Cu2O5 ladders [K. Wohlfeld, A. M. Oles, and G. A. Sawatzky, Phys. Rev. B 75, 180501(R) (2007)]. We investigate instabilities towards charge, spin, and pairing order as a function of hole doping, inter-ladder hopping, and interaction strength, starting from experimentally relevant hopping parameters [T. F. A. Müller et al., Phys. Rev. B 57, R12655 (1998)].
Cytological lesions in the midgut of Tribolium confusum larvae exposed to gamma radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jafri, R.H.; Ismail, M.
1977-01-01
The major cytological lesions in Tribolium confusum after irradiation were displayed by the midgut epithelium. At 24 hr following exposure to 5.3 kR, the regenerative cells, called nidi, appeared numerous. They gradually disappeared with increases in dosage and time, in accordance with the Arndt-Schulz law. The columnar epithelial cells and their nuclei appeared swollen and vacuolated on the fifth and twelfth day following exposure to 5.3 kR. They appeared disorganized and shed into the lumen of the midgut on the twelfth and fifth day following 50- and 70-kR irradiation, respectively. The basement membrane and the muscularis appeared loose on the fifth and twelfth day following 70-kR irradiation. It was observed that once the catabolic activity, i.e., histolysis, was initiated in the midgut, it continued to accelerate with increasing dose and time. Thus, the late effects at low doses, 5.3 and 10 kR, appeared as immediate effects at high doses, 50 and 70 kR. The differentiated cells, i.e., columnar epithelial cells, appeared radioresistant compared to undifferentiated cells, i.e., regenerative cells, which appeared radiosensitive, in accordance with the principle of Bergonié and Tribondeau.
Model-based analysis of keratin intermediate filament assembly
NASA Astrophysics Data System (ADS)
Martin, Ines; Leitner, Anke; Walther, Paul; Herrmann, Harald; Marti, Othmar
2015-09-01
The cytoskeleton of epithelial cells consists of three types of filament systems: microtubules, actin filaments and intermediate filaments (IFs). Here, we took a closer look at type I and type II IF proteins, i.e. keratins. They are hallmark constituents of epithelial cells and are responsible for the generation of stiffness, the cellular response to mechanical stimuli and the integrity of entire cell layers. Thereby, keratin networks constitute an important instrument for cells to adapt to their environment. In particular, we applied models to characterize the assembly of keratin K8 and K18 into elongated filaments as a means for network formation. For this purpose, we measured the length of in vitro assembled keratin K8/K18 filaments by transmission electron microscopy at different time points. We evaluated the experimental data of the longitudinal annealing reaction using two models from polymer chemistry: the Schulz-Zimm model and the condensation polymerization model. In both scenarios one has to make assumptions about the reaction process. We compare how well the models fit the measured data and thus determine which assumptions fit best. Based on mathematical modelling of experimental filament assembly data we define basic mechanistic properties of the elongation reaction process.
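The Schulz-Zimm distribution is mathematically a gamma distribution, so the filament-length fit described above can be sketched with a method-of-moments estimate of its shape and scale. The lengths below are simulated stand-ins for the electron-microscopy measurements:

```python
import random
import statistics

# Draw synthetic filament lengths from a Schulz-Zimm (gamma) distribution
# with hypothetical parameters, then recover them by method of moments.
random.seed(1)
shape_true, mean_true = 4.0, 200.0   # nm; hypothetical values
lengths = [random.gammavariate(shape_true, mean_true / shape_true)
           for _ in range(5000)]

mean_l = statistics.fmean(lengths)
var_l = statistics.variance(lengths)
shape_est = mean_l ** 2 / var_l      # gamma: mean = k*theta, var = k*theta^2
scale_est = var_l / mean_l
print(f"shape ~ {shape_est:.2f}, mean length ~ {shape_est * scale_est:.0f} nm")
```

Comparing how well a Schulz-Zimm fit and a condensation-polymerization fit reproduce measured length histograms at successive time points is the kind of model discrimination the study performs.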
NASA Astrophysics Data System (ADS)
Schulz, Hans-Martin; Bernard, Sylvain; Horsfield, Brian; Krüger, Martin; Littke, Ralf; di Primio, Rolando
2013-04-01
The Early Toarcian Posidonia Shale is a proven hydrocarbon source rock which was deposited in a shallow epicontinental basin. In southern Germany, Tethyan warm-water influences from the south led to carbonate sedimentation, whereas cold-water influxes from the north controlled siliciclastic sedimentation in the northwestern parts of Germany and the Netherlands. Restricted sea-floor circulation and organic matter preservation are considered to be the consequence of an oceanic anoxic event. In contrast, non-marine conditions led to sedimentation of coarser grained sediments under progressively terrestrial conditions in northeastern Germany. The present-day distribution of Posidonia Shale in northern Germany is restricted to the centres of rift basins that formed in the Late Jurassic (e.g., Lower Saxony Basin and Dogger Troughs like the West and East Holstein Troughs) as a result of erosion on the basin margins and bounding highs. The source rock characteristics are in part dependent on grain size, as the Posidonia Shale in eastern Germany is referred to as a mixed to non-source rock facies. In the study area, the TOC content and the organic matter quality vary vertically and laterally, likely as a consequence of a rising sea level during the Toarcian. Here we present and compare data from complete Posidonia Shale sections, investigating these variations and highlighting the variability of the Posidonia Shale depositional system. During all phases of burial, gas was generated in the Posidonia Shale. Low sedimentation rates led to diffusion of early diagenetically formed biogenic methane. Isochronously formed diagenetic carbonates tightened the matrix and increased brittleness. Thermogenic gas generation occurred in wide areas of Lower Saxony as well as in Schleswig-Holstein. Biogenic methane can still be formed today in Posidonia Shale at shallow depth in areas which were covered by Pleistocene glaciers. Submicrometric interparticle pores predominate in immature samples.
At thermal maturities beyond the oil window, intra-mineral and intra-organic pores develop. In such overmature samples, nanopores occur within pyrobitumen masses. Important for gas storage and transport, they likely result from exsolution of gaseous hydrocarbons. References: Bernard, S., Wirth, R., Schreiber, A., Bowen, L., Aplin, A.C., Mathia, E.J., Schulz, H.-M., & Horsfield, B.: FIB-SEM and TEM investigations of an organic-rich shale maturation series (Lower Toarcian Posidonia Shale): Nanoscale pore system and fluid-rock interactions. AAPG Bulletin Special Issue "Electron Microscopy of Shale Hydrocarbon Reservoirs" (in press). Bernard, S., Horsfield, B., Schulz, H.-M., Wirth, R., Schreiber, A., & Sherwood, N., 2012. Geochemical evolution of organic-rich shales with increasing maturity: A STXM and TEM study of the Posidonia Shale (Lower Toarcian, northern Germany). Marine and Petroleum Geology 31 (1), 70-89. Lott, G.K., Wong, T.E., Dusar, M., Andsbjerg, J., Mönnig, E., Feldman-Olszewska, A., & Verreussel, R.M.C.H., 2010. Jurassic. In: Doornenbal, J.C. and Stevenson, A.G. (editors): Petroleum Geological Atlas of the Southern Permian Basin Area. EAGE Publications b.v. (Houten): 175-193.
Lima, Leandro; Sinaimeri, Blerina; Sacomoto, Gustavo; Lopez-Maestre, Helene; Marchet, Camille; Miele, Vincent; Sagot, Marie-France; Lacroix, Vincent
2017-01-01
The main challenge in de novo genome assembly of DNA-seq data is certainly to deal with repeats that are longer than the reads. In de novo transcriptome assembly of RNA-seq reads, on the other hand, this problem has been underestimated so far. Even though we have fewer and shorter repeated sequences in transcriptomics, they do create ambiguities and confuse assemblers if not addressed properly. Most transcriptome assemblers of short reads are based on de Bruijn graphs (DBG) and have no clear and explicit model for repeats in RNA-seq data, relying instead on heuristics to deal with them. The results of this work are threefold. First, we introduce a formal model for representing high copy-number and low-divergence repeats in RNA-seq data and exploit its properties to infer a combinatorial characteristic of repeat-associated subgraphs. We show that the problem of identifying such subgraphs in a DBG is NP-complete. Second, we show that in the specific case of local assembly of alternative splicing (AS) events, we can implicitly avoid such subgraphs, and we present an efficient algorithm to enumerate AS events that are not included in repeats. Using simulated data, we show that this strategy is significantly more sensitive and precise than the previous version of KisSplice (Sacomoto et al. in WABI, pp 99-111, 1), Trinity (Grabherr et al. in Nat Biotechnol 29(7):644-652, 2), and Oases (Schulz et al. in Bioinformatics 28(8):1086-1092, 3), for the specific task of calling AS events. Third, we turn our focus to full-length transcriptome assembly, and we show that exploring the topology of DBGs can improve de novo transcriptome evaluation methods. Based on the observation that repeats create complicated regions in a DBG, and when assemblers try to traverse these regions, they can infer erroneous transcripts, we propose a measure to flag transcripts traversing such troublesome regions, thereby giving a confidence level for each transcript. 
The originality of our work when compared to other transcriptome evaluation methods is that we use only the topology of the DBG, and neither read nor coverage information. We show that our simple method gives better results than Rsem-Eval (Li et al. in Genome Biol 15(12):553, 4) and TransRate (Smith-Unna et al. in Genome Res 26(8):1134-1144, 5) on both real and simulated datasets for detecting chimeras, and is therefore able to capture assembly errors missed by these methods.
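As an illustration of the data structure these assemblers share, the following Python sketch builds a toy de Bruijn graph from reads. It is a minimal pedagogical version, not KisSplice's implementation: real assemblers additionally handle reverse complements, sequencing errors, and graph compaction, and the reads below are invented for illustration.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers.

    Returns the adjacency map and the k-mer multiplicities (a crude
    coverage proxy). Toy sketch only -- see the caveats in the text.
    """
    edges = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            edges[read[i:i + k]] += 1
    graph = defaultdict(set)
    for kmer in edges:
        graph[kmer[:-1]].add(kmer[1:])  # prefix node -> suffix node
    return graph, edges

# A shared subsequence ("GTCGA") in two different contexts creates a
# node with two predecessors -- the repeat-style ambiguity assemblers
# must resolve.
reads = ["ACGTCGA", "TTGTCGA", "GTCGAAC"]
graph, edges = de_bruijn_graph(reads, k=4)
# node "GTC" is reachable from both "CGT" (read 1) and "TGT" (read 2)
```

Branching nodes like "GTC" above are exactly where heuristics (or the formal repeat model discussed in the abstract) come into play.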
Bartels, Paul J; Fontoura, Paulo; Nelson, Diane R
2015-05-05
Five marine arthrotardigrade species are recorded from Moorea, Society Islands, French Polynesia. Four were collected from coral sand; two, Dipodarctus anaholiensis Pollock, 1995 and Florarctus kwoni Chang & Rho, 1997, are new records for the region, and two, Halechiniscus perfectus Schulz, 1955 and Styraconyx kristenseni kristenseni Renaud-Mornant, 1981, have been previously reported. The fifth, a new species Styraconyx turbinarium sp. nov., is described and was collected from the drifting brown alga Turbinaria ornata. The new species is characterized by the presence of peduncles on all digits, an elongate primary clava, and the lateral cirrus A arising from a common pedestal and enveloped by a common membrane extending almost to the claval tip. The new species differs from the most similar species, Styraconyx tyrrhenus D'Addabbo Gallo, Morone De Lucia & de Zio Grimaldi, 1989, by having longer and differently shaped primary clavae which are elongated in the new species and club-shaped in S. tyrrhenus. By having a dorsal cuticle that is coarsely punctated but without folds or other ornamentations, the new species can be easily distinguished from S. craticulus (Pollock, 1983), a species with similar primary clavae, but with cuticular dorsal folds ornamented with a grid-like pattern.
Extraction of ice absorptions in comet spectra, and application to VIRTIS/Rosetta
NASA Astrophysics Data System (ADS)
Erard, Stéphane; Despan, Daniela; Leyrat, Cédric; Drossart, Pierre; Capaccioni, Fabrizio; Filacchione, Gianrico
2014-05-01
Detection of ice spectral features can be difficult on comet surfaces, due to the mixing with dark opaque materials, as shown by Deep Impact and Epoxi observations. We study here the possible use of high-level spectral detection techniques in this context. A method based on wavelet decomposition and a multiscale vision model, partly derived from image analysis techniques, was presented recently (Erard, 2013). It is here used to extract shallow features from spectra in reflected light, up to ~3 µm. The outcome of the analysis is a description of the bands detected, and a quantitative and reliable confidence parameter. The bands can be described either by the most appropriate wavelet scale only (for rapid analyses) or after reconstruction from all scales involved (for more precise measurements). An interesting side effect is the ability to separate even narrow features from random noise, as well as to identify low-frequency variations, i.e., wide and shallow bands. Tests are performed on laboratory analogue spectra and available observational data. The technique is expected to provide detection of ice in the early stages of Rosetta observations of 67P this year, from VIRTIS data (Coradini et al., 2009). Strategies are devised to quickly analyze large datasets, e.g., by applying the extraction technique to components first identified by an ACI (Erard et al., 2011). The exact position of the bands can be diagnostic of surface temperature, in particular at 1.6 µm (e.g., Fink & Larson, 1975) and 3.6 µm (Filacchione et al., 2013), and may complement estimates retrieved from the onset of thermal emission longward of 3.5 µm. Erard, S. (2013) 8th EPSC, EPSC2013-520. Coradini et al. (2009), Rosetta book, Schulz et al., Eds. Erard, S. et al. (2011) Planet. & Space Sci. 59, 1842-1852. Fink, U. & Larson, H. (1975) Icarus 24, 411-420. Filacchione et al. (2013) AGU Fall Meeting Abstracts A7.
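The core idea of separating narrow bands from continuum by scale can be sketched with the classic "à trous" (starlet) wavelet transform, a decomposition commonly used in multiscale vision models. This is a minimal numpy sketch, not the actual VIRTIS pipeline of Erard (2013), and the synthetic spectrum below is invented for illustration.

```python
import numpy as np

def atrous_planes(signal, n_scales=5):
    """'A trous' (starlet) decomposition with a B3-spline kernel.

    Each plane holds structure at one scale: narrow absorption bands
    concentrate in the fine planes, wide shallow bands and the
    continuum in the coarse ones. The planes plus the final smooth
    array reproduce the input exactly (telescoping sum).
    """
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    smooth = np.asarray(signal, dtype=float)
    planes = []
    for j in range(n_scales):
        step = 2 ** j
        k = np.zeros(4 * step + 1)   # dilate kernel by inserting holes
        k[::step] = kernel
        smoother = np.convolve(np.pad(smooth, 2 * step, mode="edge"),
                               k, mode="valid")
        planes.append(smooth - smoother)  # detail at scale j
        smooth = smoother
    return planes, smooth

# Synthetic reflectance spectrum: smooth continuum plus a narrow band
x = np.linspace(1.0, 3.0, 512)           # wavelength, µm (illustrative)
spec = 0.1 + 0.02 * x - 0.03 * np.exp(-((x - 1.6) / 0.02) ** 2)
planes, continuum = atrous_planes(spec)
```

Thresholding each plane against its noise level then yields the band detections and the confidence parameter described in the abstract.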
Effect of aggregate graining compositions on skid resistance of Exposed Aggregate Concrete pavement
NASA Astrophysics Data System (ADS)
Wasilewska, Marta; Gardziejczyk, Wladysław; Gierasimiuk, Pawel
2018-05-01
The paper presents the evaluation of skid resistance of EAC (Exposed Aggregate Concrete) pavements which differ in aggregate graining compositions. The tests were carried out on concrete mixes with a maximum aggregate size of 8 mm. Three types of coarse aggregates were selected depending on their resistance to polishing, which was determined on the basis of the PSV (Polished Stone Value). Basalt (PSV 48), gabbro (PSV 50) and trachybasalt (PSV 52) aggregates were chosen. For each type of aggregate, three graining compositions were designed, which differed in the content of coarse aggregate > 4 mm. Their content for each series was as follows: A - 38%, B - 50% and C - 68%. Evaluation of the skid resistance was performed using the FAP (Friction After Polishing) test equipment, also known as the Wehner/Schulze machine. This laboratory method enables comparison of the skid resistance of different types of wearing course under specified conditions simulating polishing processes. In addition, macrotexture measurements were made on the surface of each specimen using the Elatexure laser profile. Analysis of variance showed that at significance level α = 0.05, aggregate graining compositions as well as the PSV have a significant influence on the obtained values of the friction coefficient μm of the tested EAC pavements. The highest values of μm were obtained for EAC with the lowest amount of coarse aggregates (compositions A). In these cases the resistance to polishing of the aggregate does not significantly affect the friction coefficients. This is related to the large areas of cement mortar between the exposed coarse grains. Based on the analysis of microscope images, it was observed that the coarse aggregates were not sufficiently exposed. It was shown that the PSV significantly affected the coefficient of friction in the case of compositions B and C. This is caused by large areas of exposed coarse aggregate.
The best parameters were achieved for the EAC pavements with graining composition B and C and trachybasalt aggregate.
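For readers wanting to reproduce this kind of significance test, a one-way ANOVA F statistic can be computed from scratch. The sketch below uses hypothetical friction coefficients for the three graining compositions, not the paper's measurements; comparing F against the critical value at α = 0.05 then decides significance.

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group vs within-group
    mean squares. Illustrative from-scratch implementation."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical μm values for graining compositions A, B, C
mu_A = [0.44, 0.46, 0.45, 0.47]
mu_B = [0.40, 0.41, 0.39, 0.42]
mu_C = [0.37, 0.38, 0.36, 0.38]
F = one_way_anova_f([mu_A, mu_B, mu_C])
# a large F (here >> 1) indicates composition affects friction
```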
NASA Astrophysics Data System (ADS)
Ferrante, Aldo Pedro; Fallico, Carmine; Rios, Ana C.; Fernanda Rivera, Maria; Santillan, Patricio; Salazar, Mario
2013-04-01
The contamination of large areas and the corresponding aquifers often requires recovery operations that are generally complex and very expensive. These interventions necessarily require prior characterization of the aquifers to be reclaimed, and in particular knowledge of the relevant hydrodispersive parameters. Determining these parameters requires the implementation of tracer tests at the specific site (Sauty, 1978). To reduce the cost and time that such tests require, tracer tests can be performed on undisturbed soil samples representative of the whole aquifer. These laboratory tests are much less expensive and require less time, but the results are certainly less reliable than those obtained by field tests, for several reasons including the particular scale of investigation. In any case, the hydrodispersive parameter values obtained by laboratory tests can provide useful information on the aquifer under consideration, allowing initial verification of the transmission and propagation of pollutants. For this purpose, tracer tests with a short-duration input were carried out in the Soil Physics Laboratory of the Department of Soil Protection (University of Calabria) on a series of sandy soil samples with six different lengths, repeating each test with three different water flow velocities (5 m/d, 10 m/d, and 15 m/d) (Feyen et al., 1998). The lengths of the samples taken into account were 15 cm, 24 cm, 30 cm, 45 cm, 60 cm, and 75 cm, while the solution used for each test consisted of 100 ml of water and NaCl at a concentration of 10 g/L. A particle size analysis of the porous medium showed it to consist primarily of sand, with a total porosity of 0.33.
Each soil sample was placed in a flow cell into which the tracer was injected from the bottom upwards; the variation of the outgoing concentration over time was measured with a conductivity meter, yielding the respective breakthrough curve. The flow was induced and regulated by a peristaltic pump. The results obtained are consistent with those obtained by other researchers for analogous soil types; moreover, the existence of a scaling law for the hydrodispersive parameters considered, i.e., the longitudinal dispersivity (αL) and the longitudinal dispersion coefficient (DL), was also verified (Neuman, 1990; Schulze-Makuch, 2005). References: Feyen J. et al., 1998. "Modelling Water Flow and Solute Transport in Heterogeneous Soils: A Review of Recent Approaches", Silsoe Research Institute. Neuman S.P., 1990. "Universal Scaling of Hydraulic Conductivities and Dispersivities in Geologic Media", Water Resources Research, vol. 26. Schulze-Makuch D., 2005. "Longitudinal Dispersivity Data and Implications for Scaling Behavior", Ground Water, vol. 43. Sauty J.P., 1978. "Identification des paramètres du transport hydrodispersif dans les aquifères par interprétation de traçages en ecoulement cylindrique convergent ou divergent", Journal of Hydrology, no. 39.
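A breakthrough curve of the kind measured here can be modeled with the standard analytical solution of the one-dimensional advection-dispersion equation for an instantaneous tracer pulse in a semi-infinite column. The sketch below uses illustrative, not measured, parameter values, and shows how the longitudinal dispersivity enters through DL = αL·v.

```python
import numpy as np

def pulse_breakthrough(t, L, v, alpha_L, m=1.0):
    """Concentration (per unit injected mass m) at distance L and
    time t for a Dirac tracer input: standard 1-D ADE solution,
    with the dispersion coefficient D_L = alpha_L * v."""
    D_L = alpha_L * v
    return (m / np.sqrt(4.0 * np.pi * D_L * t)
            * np.exp(-(L - v * t) ** 2 / (4.0 * D_L * t)))

# Illustrative values: 30 cm column, v = 5 m/d, alpha_L = 5 mm
t = np.linspace(0.01, 0.4, 400)                       # days
curve = pulse_breakthrough(t, L=0.30, v=5.0, alpha_L=0.005)
t_peak = t[np.argmax(curve)]   # peak arrives near L / v = 0.06 d
```

Fitting this curve to the measured conductivity-derived concentrations gives αL and DL for each sample length, from which the scaling law can be examined.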
Aridity induces super-optimal investment in leaf venation by Eucalyptus and Corymbia
NASA Astrophysics Data System (ADS)
Drake, Paul L.; de Boer, Hugo J.; Price, Charles A.; Veneklaas, Erik J.
2016-04-01
The close relationship between leaf water status and stomatal conductance implies that the hydraulic architecture of leaves poses an important constraint on carbon uptake, specifically in arid environments with high evaporative demands. However, it remains uncertain how morphological, hydraulic and photosynthetic traits are coordinated to achieve optimal leaf functioning in arid environments. Zwieniecki and Boyce (2014) proposed a generic framework on the hydraulic architecture of leaves based on the argument that water is optimally distributed when the lateral distance between neighboring water transport veins (dx) is approximately equal to the distance from these veins to the epidermis (dy), expressed as dx:dy ≈1. Many derived angiosperms realize this optimal hydraulic architecture by closely coordinating leaf vein density with leaf thickness and the lateral position of veins inside the leaf. Zwieniecki and Boyce (2014) further suggested that over-investment in veins (dx:dy <1) provides no functional benefit owing to the minor additional increases in leaf gas exchange that may be achieved by reducing dx beyond dy. Although this framework is valid for derived angiosperms adapted to temperate and moist (sub)tropical environments, we hypothesize that super-investment in leaf venation (resulting in dx:dy<<1) may provide a specific gas exchange advantage in arid environments that select for thick and amphistomatous leaf morphologies. The relatively long dy inherent to these leaf morphologies imposes hydraulic constraints on productivity that may (partially) be offset by reducing dx beyond dy. To test our hypothesis we assembled the leaf hydraulic, morphological and photosynthetic traits of 65 species (401 individuals) within the widely distributed and closely related genera Eucalyptus and Corymbia along a 2000-km-long aridity gradient in Western Australia (see Schulze et al., 2006). 
We inferred the potential functional benefit of reducing dx beyond dy using a semi-empirical model that links leaf morphology and hydraulics to photosynthesis. Our results reveal that Eucalyptus and Corymbia evolved extremely high vein densities in addition to thick amphistomatous leaf morphologies along the natural aridity gradient resulting in dx:dy ratios ranging between 0.8 and 0.08. We propose that as the thickness of amphistomatous leaves increases, the effect of reducing dx beyond dy is to offset the reduction in photosynthesis that would result from the theoretical optimal architecture of dx:dy ≈1. Our model quantified the resulting relative gain in photosynthesis at 10% to 15%, which could provide a crucial gas exchange advantage. We conclude that aridity confounds selection for leaf traits associated with a long leaf lifespan and thermal capacitance as well as those supporting higher rates of leaf water transport and photosynthesis. References Schulze, E.-D., Turner, N. C., Nicolle, D. and Schumacher, J.: Species differences in carbon isotope ratios, specific leaf area and nitrogen concentrations in leaves of Eucalyptus growing in a common garden compared with along an aridity gradient, Physiol. Plant., 127(3), 434-444, 2006. Zwieniecki, M. A. and Boyce, C. K.: Evolution of a unique anatomical precision in angiosperm leaf venation lifts constraints on vascular plant ecology, Proc. R. Soc. B Biol. Sci., 281(1779), 2014.
Itz, Marlena L; Schweinberger, Stefan R; Schulz, Claudia; Kaufmann, Jürgen M
2014-11-15
Spatially caricatured faces were recently shown to benefit face learning (Schulz et al., 2012a). Moreover, spatial information may be particularly important for encoding unfamiliar faces, but less so for recognizing familiar faces (Kaufmann et al., 2013). To directly test the possibility of a major role of reflectance information for the recognition of familiar faces, we compared effects of selective photorealistic caricaturing in either shape or reflectance on face learning and recognition. Participants learned 3D-photographed faces across different viewpoints, and different images were presented at learning and test. At test, performance benefits for both types of caricatures were modulated by familiarity: Benefits for learned faces were substantially larger for reflectance caricatures, whereas benefits for novel faces were numerically larger for shape caricatures. ERPs confirmed a consistent reduction of the occipitotemporal P200 (200-240 ms) by shape caricaturing, whereas the most prominent effect of reflectance caricaturing was seen in an enhanced posterior N250 (240-400 ms), a component that has been related to the activation of acquired face representations. Our results suggest that performance benefits for face learning caused by distinctive spatial versus reflectance information are mediated by different neural processes with different timing and support a prominent role of reflectance for the recognition of learned faces. Copyright © 2014 Elsevier Inc. All rights reserved.
[EXOSKELETON ABNORMALITIES IN TAIGA TICK FEMALES FROM POPULATIONS OF THE ASIATIC PART OF RUSSIA].
Nikitin, A Ya; Morozov, I M
2016-01-01
Studies of the phenotypic structure of Ixodes persulcatus (Schulze, 1930) populations in relation to their exoskeleton abnormalities are important in both theoretical and practical respects. The data on the species' population structure in the Asiatic part of Russia are fragmentary. The goal of the study was to describe taiga tick population structure based on the pattern of females' exoskeleton abnormalities revealed in the Asiatic part of Russia. A total of 3872 I. persulcatus females from 16 geographically remote sites of the Far Eastern, Siberian, and Ural Federal Districts (FEFD, SFD, and UFD, respectively) were studied. It was demonstrated that all the populations possessed specimens with exoskeleton abnormalities. The «shagreen skin» abnormality was dominant in all these areas. At the same time, the percentage of abnormalities among the specimens collected to the north of 55°N is considerably higher (63.4 ± 3.39 %) than that of samples from the SFD southward territories (33.1 ± 3.43 %). The frequency of abnormalities is in turn lower (24.4 ± 1.93 %) in females from territories with moderate monsoon and moderate continental climate (FEFD) than in specimens from SFD and UFD areas with sharply continental climate. Thus, such polymorphism of the females' exoskeleton structure may reflect the natural phenogeographical variability of the character rather than the result of anthropogenic impact.
Effect of technological advances on cochlear implant performance in adults.
Lenarz, Minoo; Joseph, Gert; Sönmez, Hasibe; Büchner, Andreas; Lenarz, Thomas
2011-12-01
To evaluate the effect of technological advances in the past 20 years on the hearing performance of a large cohort of adult cochlear implant (CI) patients. Individual, retrospective, cohort study. According to technological developments in electrode design and speech-processing strategies, we defined five virtual intervals on the time scale between 1984 and 2008. A cohort of 1,005 postlingually deafened adults was selected for this study, and their hearing performance with a CI was evaluated retrospectively according to these five technological intervals. The test battery was composed of four standard German speech tests: the Freiburger monosyllabic test, a speech tracking test, the Hochmair-Schulz-Moser (HSM) sentence test in quiet, and the HSM sentence test in 10 dB noise. The direct comparison of speech perception in postlingually deafened adults who were implanted during different technological periods reveals an obvious improvement in speech perception in patients who benefited from the recent electrode designs and speech-processing strategies. The major influence of technological advances on CI performance appears to be on speech perception in noise. Better speech perception in noisy surroundings is strong evidence of the success of new electrode designs and speech-processing strategies. Standard (internationally comparable) speech tests in noise should become an obligatory part of the postoperative test battery for adult CI patients. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Krøigård, Anne Bruun; Clemmensen, Ole; Gjørup, Hans; Hertz, Jens Michael; Bygum, Anette
2016-03-10
Odonto-onycho-dermal dysplasia (OODD) is a rare form of ectodermal dysplasia characterized by severe oligodontia, onychodysplasia, palmoplantar hyperkeratosis, dry skin, hypotrichosis, and hyperhidrosis of the palms and soles. Biallelic mutations in the WNT10A gene result in highly variable ectodermal dysplasia phenotypes, ranging from isolated tooth agenesis to OODD and Schöpf-Schulz-Passarge syndrome (SSPS). We identified a female patient, with consanguineous parents, who was clinically diagnosed with OODD. Genetic testing showed that she was homozygous for a previously reported pathogenic mutation in the WNT10A gene, c.321C > A, p.Cys107*. The skin and nail abnormalities had for many years been interpreted as psoriasis and treated accordingly. A thorough clinical examination revealed hypotrichosis and hyperhidrosis of the soles, and dental examination revealed agenesis of the permanent teeth except for the two maxillary central incisors. Skin biopsies from the hyperkeratotic palms and soles showed the characteristic changes of eccrine syringofibroadenomatosis, which has been described in patients with ectodermal dysplasias. Together with a family history of tooth anomalies, this led to the clinical suspicion of a hereditary ectodermal dysplasia. This case illustrates the challenges of diagnosing ectodermal dysplasias like OODD and highlights the relevance of interdisciplinary cooperation in the diagnosis of rare conditions.
Maternal dietary antigens and the immune response in the offspring of the guinea-pig.
Telemo, E; Jakobsson, I; Weström, B R; Folkesson, H
1987-01-01
Guinea-pig dams and their litters were raised on either a cow's milk protein-containing diet (MCD) or a milk-free diet (MFD). At 8 weeks of age all litters were challenged i.p. with 50 micrograms milk whey-protein concentrate (V67) and 100 mg Al(OH)3 in saline. The immune response was estimated 2 weeks later as the serum IgG antibody titres against V67, beta-lactoglobulin (beta-LG) and alpha-lactalbumin (alpha-LA) using an enzyme-linked immunosorbent assay (ELISA) and the tracheal Schulze-Dale response to these antigens. Feeding milk protein antigen to dams from birth and during pregnancy induced antigen-specific hyporesponsiveness (tolerance) in their offspring, despite no direct contact between the offspring and the milk proteins. Tolerance seems to be induced by the antigen itself, since withdrawal of the MCD 10 days before delivery reduced tolerance in the offspring. No tolerance was produced in the offspring of dams fed the antigen from 3 months of age (adult). beta-LG appears to be a major antigen in milk whey while alpha-LA is a minor one, since there was almost no antibody or tracheal response to alpha-LA in any of the animals tested. The results indicate that maternal antigen experience and antigens present during pregnancy are important for the subsequent immune response to these antigens in the offspring. PMID:3653926
Sudden gains in exposure therapy for obsessive-compulsive disorder.
Collins, Lindsey M; Coles, Meredith E
2017-06-01
Prior research in the treatment of depression and anxiety has demonstrated that a sudden reduction in symptoms between two consecutive sessions (a sudden gain) is related to lower post-treatment symptom severity (e.g. Hofmann, Schulz, Meuret, Moscovitch, & Suvak, 2006; Tang & DeRubeis, 1999). However, only one study has examined sudden gains in the treatment of obsessive-compulsive disorder (OCD). In that study, one-third of the patients with OCD experienced a sudden gain (Aderka et al., 2012). Further, patients who had a sudden gain had lower clinician-rated OCD symptom severity post-treatment (Aderka et al., 2012). In a replication, the current study examined the frequency, characteristics, and clinical impact of sudden gains in 27 OCD patients during exposure and response prevention (ERP) therapy. Fifty-two percent of patients experienced a sudden gain. The magnitude of a sudden gain represented, on average, 61.4% of total symptom reduction. Following treatment, individuals who had experienced a sudden gain were rated as less severe on the clinical global impression scale, but they did not experience a greater reduction in OCD symptoms (pre- to post-treatment) than those without a sudden gain. None of the pre-treatment characteristics tested were found to significantly predict whether a patient would have a sudden gain. Additional research examining predictors of, and patterns of, change in OCD symptoms is warranted. Copyright © 2017 Elsevier Ltd. All rights reserved.
Conceptual processing in music as revealed by N400 effects on words and musical targets.
Daltrozzo, Jérôme; Schön, Daniele
2009-10-01
The cognitive processing of concepts, that is, abstract general ideas, has been mostly studied with language. However, other domains, such as music, can also convey concepts. Koelsch et al. [Koelsch, S., Kasper, E., Sammler, D., Schulze, K., Gunter, T., & Friederici, A. D. Music, language and meaning: Brain signatures of semantic processing. Nature Neuroscience, 7, 302-307, 2004] showed that 10 sec of music can influence the semantic processing of words. However, the length of the musical excerpts did not allow the authors to study the effect of words on musical targets. In this study, we decided to replicate Koelsch et al. findings using 1-sec musical excerpts (Experiment 1). This allowed us to study the reverse influence, namely, of a linguistic context on conceptual processing of musical excerpts (Experiment 2). In both experiments, we recorded behavioral and electrophysiological responses while participants were presented 50 related and 50 unrelated pairs (context/target). Experiments 1 and 2 showed a larger N400 component of the event-related brain potentials to targets following a conceptually unrelated compared to a related context. The presence of an N400 effect with musical targets suggests that music may convey concepts. The relevance of these results for the comprehension of music as a structured set of conceptual units and for the domain specificity of the mechanisms underlying N400 effects are discussed.
Nowak, Magdalena
2010-05-11
The problem of the unnatural transfer of exotic ticks (Acari: Ixodida) on reptiles (Reptilia) imported to Poland is presented. In the period from 2003 to 2007, 382 specimens of reptiles belonging to the following genera were investigated: Testudo, Iguana, Varanus, Gongylophis, Python, Spalerosophis, Psammophis. The reptiles most infested with ticks are imported to Poland from Ghana in Africa, and are the commonly bred terrarium reptiles: Varanus exanthematicus and Python regius. As a result of the investigations, the transfer of exotic ticks on reptiles to Poland was confirmed. There were 2104 specimens of the genera Amblyomma and Hyalomma. The following species were found: Amblyomma exornatum Koch, 1844, Amblyomma flavomaculatum (Lucas, 1846), Amblyomma latum Koch, 1844, Amblyomma nuttalli Donitz, 1909, Amblyomma quadricavum (Schulze, 1941), Amblyomma transversale (Lucas, 1844), Amblyomma varanense (Supino, 1897), Amblyomma sp. Koch, 1844, Hyalomma aegyptium (Linnaeus, 1758). All the species of ticks of genus Amblyomma revealed have been discovered in Poland for the first time. During the research, 13 cases of anomalies of morphological structure were confirmed in the ticks A. flavomaculatum, A. latum and H. aegyptium. The expanding phenomenon of the import of exotic reptiles in Poland and Central Europe is important for parasitological and epidemiological considerations, and therefore requires monitoring and wide-ranging prophylactic activities to prevent the inflow of exotic parasites to Poland. (c) 2010 Elsevier B.V. All rights reserved.
Fischer-Tropsch synthesis in near-critical n-hexane: Pressure-tuning effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bochniak, D.J.; Subramaniam, B.
For Fe-catalyzed Fischer-Tropsch (FT) synthesis with near-critical n-hexane (Pc = 29.7 bar; Tc = 233.7 °C) as the reaction medium, isothermal pressure tuning from 1.2-2.4 Pc (for n-hexane) at the reaction temperature (240 °C) significantly changes syngas conversion and product selectivity. For fixed feed rates of syngas (H2/CO = 0.5; 50 std. cm3/g catalyst) and n-hexane (1 mL/min), syngas conversion attains a steady state at all pressures, increasing roughly threefold in this pressure range. Effective rate constants, estimated assuming a first-order dependence of syngas conversion on hydrogen, reveal that the catalyst effectiveness increases with pressure, implying the alleviation of pore-diffusion limitations. Pore accessibilities increase at higher pressures because the extraction of heavier hydrocarbons from the catalyst pores is enhanced by the liquid-like densities, yet better-than-liquid transport properties, of n-hexane. This explanation is consistent with the single-α (= 0.78) Anderson-Schulz-Flory product distribution, the constant chain-termination probability, and the higher primary product (1-olefin) selectivities (≈80%) observed at the higher pressures. Results indicate that the pressure tunability of the density and transport properties of near-critical reaction media offers a powerful tool to optimize catalyst activity and product selectivity during FT reactions on supported catalysts.
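The Anderson-Schulz-Flory distribution mentioned above is easy to evaluate numerically: with chain-growth probability α, the mass fraction of chains of length n is Wn = n(1-α)²α^(n-1), and the chain-termination probability is the constant 1-α. A short sketch, using the α = 0.78 reported in the abstract:

```python
import numpy as np

def asf_mass_fractions(alpha, n_max=100):
    """Anderson-Schulz-Flory mass-fraction distribution W_n for
    carbon numbers 1..n_max, given chain-growth probability alpha.
    The fractions sum to 1 in the limit n_max -> infinity."""
    n = np.arange(1, n_max + 1)
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

w = asf_mass_fractions(0.78)
n_peak = int(np.argmax(w)) + 1   # carbon number with the largest mass fraction
```

The peak carbon number is close to -1/ln α (about 4 for α = 0.78), which is how a single measured α constrains the whole product slate.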
NASA Astrophysics Data System (ADS)
Peters, Stefan T. M.; Münker, Carsten; Becker, Harry; Schulz, Toni
2015-11-01
In their study on W isotope compositions of iron meteorites from the IVB and IIAB groups, Cook et al. (2014) suggested that the 180W isotope anomalies that have previously been reported for irons (Schulz et al., 2013; Peters et al., 2014) were largely caused by cosmic ray induced neutron capture and spallation reactions, and not by radioactive decay of 184Os as was argued by Peters et al. (2014). Cook et al. (2014) proposed a new decay constant value for 184Os (λ184Os = 3.15 ± 0.81 × 10-14 a-1) that is ca. a factor of 2 lower than previously suggested. However, based on a careful inspection of the Cook et al. (2014) model, we show here that cosmogenic W isotope heterogeneities have a negligible effect on the calculated decay constant from the dataset presented by Peters et al. (2014), as most iron meteorites examined in this study have cosmic ray exposure ages <100 Myrs. The bias between the two studies rather arises from the fact that Cook et al. (2014) estimated 184Os/184W ratios of their iron meteorite aliquots from literature Os and W concentration data, whereas in the Peters et al. (2014) study W isotope and Os-W concentration measurements were performed on the same digestion aliquot.
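For orientation, the proposed decay constant translates into a half-life via t½ = ln 2 / λ. This is a back-of-the-envelope check, assuming λ is in a⁻¹ as quoted in the abstract:

```python
import math

# Half-life implied by the Cook et al. (2014) value quoted above
lam = 3.15e-14                 # decay constant, a^-1
t_half = math.log(2) / lam     # half-life in years, ~2.2e13 a
```

A half-life of roughly 2 × 10¹³ years explains why the 184Os-184W system is so difficult to calibrate and why small cosmogenic corrections matter.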
NASA Astrophysics Data System (ADS)
Wasilewska, Marta
2017-10-01
This paper presents a comparison of the skid resistance of wearing courses made of SMA (Stone Mastic Asphalt) mixtures which differ in the resistance to polishing of their coarse aggregate. Dolomite, limestone, granite and trachybasalt were taken for investigation. The SMA mixtures have the same nominal aggregate size (11 mm) and very similar aggregate particle-size distributions in the mineral mixtures. The tested SMA11 mixtures were designed according to EN 13108-5 and Polish National Specification WT-2: 2014. Evaluation of the skid resistance was performed using the FAP (Friction After Polishing) test equipment, also known as the Wehner/Schulze machine. This laboratory method enables comparison of the skid resistance of different types of mixtures under specified conditions simulating polishing processes. Tests were performed both on specimens made of each coarse aggregate and on the SMA11 mixtures containing these aggregates. Measurement of the friction coefficient μm was conducted before and during the polishing process, up to 180 000 passes of the polishing head. Comparison of the results showed differences in sensitivity to polishing among the mixtures, which depend on the petrographic properties of the rock used to produce the aggregate. Limestone and dolomite tend to have a fairly uniform texture with low hardness, which makes these rock types susceptible to rapid polishing. This resulted in lower coefficients of friction for the SMA11 mixtures with limestone and dolomite in comparison with the other test mixtures. These significant differences were already registered at the beginning of the polishing process. The limestone aggregate had a lower value of μm before the process started than the trachybasalt and granite aggregates after its completion. Despite the differences in structure and mineralogical composition between the granite and trachybasalt, only slightly different values of the friction coefficient were obtained at the end of polishing.
Images of the surface were taken with an optical microscope for a better understanding of the phenomena occurring on the specimen surface. The results may provide valuable information when selecting aggregates for asphalt mixtures at the design stage and for the maintenance of existing road pavements.
Large-amplitude late-time radio variability in GRB 151027B
NASA Astrophysics Data System (ADS)
Greiner, J.; Bolmer, J.; Wieringa, M.; van der Horst, A. J.; Petry, D.; Schulze, S.; Knust, F.; de Bruyn, G.; Krühler, T.; Wiseman, P.; Klose, S.; Delvaux, C.; Graham, J. F.; Kann, D. A.; Moin, A.; Nicuesa-Guelbenzu, A.; Schady, P.; Schmidl, S.; Schweyer, T.; Tanga, M.; Tingay, S.; van Eerten, H.; Varela, K.
2018-06-01
Context. Deriving physical parameters from gamma-ray burst (GRB) afterglow observations remains a challenge, even 20 years after the discovery of afterglows. The main reason for the lack of progress is that the peak of the synchrotron emission is in the sub-mm range, thus requiring radio observations in conjunction with X-ray/optical/near-infrared data in order to measure the corresponding spectral slopes and consequently remove the ambiguity with respect to slow vs. fast cooling and the ordering of the characteristic frequencies. Aims: We have embarked on a multifrequency, multi-epoch observing campaign to obtain sufficient data for a given GRB that allows us to test the simplest version of the fireball afterglow model. Methods: We observed GRB 151027B, the 1000th Swift-detected GRB, with GROND in the optical-near-IR, ALMA in the sub-millimeter, ATCA in the radio band; we combined this with public Swift/XRT X-ray data. Results: While some observations at crucial times only return upper limits or surprising features, the fireball model is narrowly constrained by our data set, and allows us to draw a consistent picture with a fully determined parameter set. Surprisingly, we find rapid, large-amplitude flux density variations in the radio band which are extreme not only for GRBs, but generally for any radio source. We interpret them as scintillation effects, though their extreme nature requires the scattering screen to be at a much smaller distance than usually assumed, multiple screens, or a combination of the two. Conclusions: The data are consistent with the simplest fireball scenario for a blast wave moving into a constant-density medium, and slow-cooling electrons. All fireball parameters are constrained at or better than a factor of 2, except for the density and the fraction of the energy in the magnetic field which has a factor of 10 uncertainty in both directions. 
This paper makes use of the following data: ATCA: Proposal C2955 (PI: Greiner), ALMA: ADS/JAO.ALMA#2015.1.01558.T (PI: Schulze).
Health behaviour change interventions for couples: A systematic review.
Arden-Close, Emily; McGrath, Nuala
2017-05-01
Partners are a significant influence on individuals' health, and concordance in health behaviours increases over time in couples. Several theories suggest that couple-focused interventions for health behaviour change may therefore be more effective than individual interventions. A systematic review of health behaviour change interventions for couples was conducted. Systematic search methods identified randomized controlled trials (RCTs) and non-randomized interventions of health behaviour change for couples with at least one member at risk of a chronic physical illness, published from 1990 to 2014. We identified 14 studies, targeting the following health behaviours: cancer prevention (6), obesity (1), diet (2), smoking in pregnancy (2), physical activity (1) and multiple health behaviours (2). In four out of seven trials, couple-focused interventions were more effective than usual care. Of four RCTs comparing a couple-focused intervention to an individual intervention, two found that the couple-focused intervention was more effective. The studies were heterogeneous and included participants at risk of a variety of illnesses. In many cases the intervention was compared to usual care for an individual or an individual-focused intervention, which meant the impact of the couple-based content could not be isolated. Three-arm studies could determine whether any added benefits of couple-focused interventions are due to adding the partner or to the specific content of couple-focused interventions. Statement of contribution What is already known on this subject? Health behaviours and health behaviour change are more often concordant across couples than between individuals in the general population. Couple-focused interventions for chronic conditions are more effective than individual interventions or usual care (Martire, Schulz, Helgeson, Small, & Saghafi, ). What does this study add? Identified studies targeted a variety of health behaviours, with few studies in any one area.
Further assessment of the effectiveness of couple-focused versus individual interventions for those at risk is needed. Three-arm study designs are needed to determine benefits of targeting couples versus couple-focused intervention content. © 2017 The Authors. British Journal of Health Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schadewaldt, N; Schulz, H; Helle, M
2014-06-01
Purpose: To analyze the effect of computing radiation dose on automatically generated MR-based simulated CT images compared to true patient CTs. Methods: Six prostate cancer patients received a regular planning CT for RT planning as well as a conventional 3D fast-field dual-echo scan on a Philips 3.0T Achieva, adding approximately 2 min of scan time to the clinical protocol. Simulated CTs (simCT) were synthesized by assigning known average CT values to the tissue classes air, water, fat, cortical and cancellous bone. For this, Dixon reconstruction of the nearly out-of-phase (echo 1) and in-phase images (echo 2) allowed for water and fat classification. Model-based bone segmentation was performed on a combination of the Dixon images. A subsequent automatic threshold divides bone into cortical and cancellous bone. For validation, the simCT was registered to the true CT and clinical treatment plans were re-computed on the simCT in Pinnacle³. To differentiate effects related to the 5 tissue classes from changes in the patient anatomy not compensated by rigid registration, we also calculated the dose on a stratified CT, where HU values are sorted into the same 5 tissue classes as the simCT. Results: Dose and volume parameters on the PTV and risk organs as used for the clinical approval were compared. All deviations are below 1.1%, except the anal sphincter mean dose, which is at most 2.2% but well below the clinical acceptance threshold. Average deviations are below 0.4% for the PTV and risk organs and 1.3% for the anal sphincter. The deviations of the stratified CT are in the same range as for the simCT. All plans would have passed clinical acceptance thresholds on the simulated CT images. Conclusion: This study demonstrated the clinical usability of MR-based dose calculation with the presented Dixon acquisition and subsequent fully automatic image processing. N. Schadewaldt, H. Schulz, M. Helle and S. Renisch are employed by Philips Technologie Innovative Technologies, a subsidiary of Royal Philips NV.
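The simCT synthesis step described above amounts to a lookup from tissue class to a representative CT number, plus a percent-deviation comparison of dose parameters. A minimal sketch; the HU values below are generic textbook averages, not the values actually used in the study, and the function names are ours:

```python
# Hypothetical class-average CT numbers (HU) for the five tissue classes.
# The abstract does not list the exact values used; these are common
# textbook averages, for illustration only.
MEAN_HU = {
    "air": -1000,
    "fat": -100,
    "water": 0,
    "cancellous_bone": 300,
    "cortical_bone": 1200,
}

def simulate_ct(tissue_labels):
    """Replace each voxel's tissue label with its class-average HU."""
    return [MEAN_HU[label] for label in tissue_labels]

def percent_deviation(d_sim, d_true):
    """Dose/volume parameter deviation, in percent, as reported in the abstract."""
    return 100.0 * (d_sim - d_true) / d_true

voxels = ["air", "fat", "water", "cortical_bone"]
print(simulate_ct(voxels))            # [-1000, -100, 0, 1200]
print(percent_deviation(70.7, 70.0))  # ~1.0 % deviation, within the reported range
```

The stratified CT mentioned in the abstract applies the same five-class lookup, but to HU values measured on the true CT rather than to MR-derived labels.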
Konnai, Satoru; Nishikado, Hideto; Yamada, Shinji; Imamura, Saiki; Ito, Takuya; Onuma, Misao; Murata, Shiro; Ohashi, Kazuhiko
2011-02-01
Lipocalins are known for their several biological activities in blood-sucking arthropods. Recently, the identification and characterization of lipocalins from Ixodes ricinus (LIRs) have been reported, and the functions of these lipocalins are well documented. In this study, we have characterized four Ixodes persulcatus lipocalins that were discovered while analyzing an I. persulcatus tick salivary gland EST library. We show that the four I. persulcatus lipocalins, hereafter named LIPERs (lipocalins from I. persulcatus), are 28.8-94.4% identical to LIRs from I. ricinus. Reverse transcriptase-PCR analysis revealed that the lipocalin genes were expressed specifically in the salivary glands throughout the life cycle stages of the ticks and were up-regulated by blood feeding. The specific expression was also confirmed by Western blotting analysis. Furthermore, to investigate whether native lipocalins are secreted into the host during tick feeding, the reactivity of anti-serum raised against saliva of adult ticks to recombinant lipocalins was tested by Western blotting. The lipocalins are potentially secreted into the host during tick feeding, as revealed by specific reactivity of recombinant lipocalins with mouse antibodies to I. persulcatus tick saliva. Preliminary vaccination of mice with recombinant lipocalins showed that the period to reach engorgement was significantly delayed and the engorgement weight was significantly reduced as compared to the control. Further elucidation of the biological functions of LIPERs is required to fully understand the pathways involved in the modulation of host immune responses. Copyright © 2010 Elsevier Inc. All rights reserved.
Electron Radiation Belts of the Solar System
NASA Astrophysics Data System (ADS)
Mauk, Barry; Fox, Nicola
To address the question of what factors dictate similarities and differences between radiation belts, we present comparisons between the electron radiation belt spectra of all five strongly magnetized planets within the solar system: Earth, Jupiter, Saturn, Uranus, and Neptune. We choose the highest-intensity observed electron spectrum within each system (highest specifically near 1 MeV) and compare them against expectations based on the so-called Kennel-Petschek limit (KP; Kennel and Petschek, 1966) for each system. For evaluating the KP limit, we begin with the new relativistically correct formulation of Summers et al. (2009) but then add several refinements of our own. Specifically, we: 1) utilize a much more flexible analytic spectral shape that allows us to accurately fit observed radiation belt spectra; 2) adopt the point of view that the anisotropy parameter is not a free parameter but must take on a minimal value, as originally proposed by Kennel and Petschek (1966); and 3) examine the differential characteristics of the KP limit along the lines of what Schulz and Davidson (1988) performed for the non-relativistic formulation. We find that three factors limit the highest electron radiation belt intensities within solar system planetary magnetospheres: a) whistler mode interactions that limit spectral intensities to a differential Kennel-Petschek limit (3 planets); b) the absence of robust acceleration processes associated with injection dynamics (1 planet); and c) material interactions between the radiation particles and clouds of gas and dust (1 planet).
Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D; Senn, Pascal
2013-01-01
To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair-Schulz-Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. A higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users if visual cues are additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Webcams have the potential to improve telecommunication of hearing-impaired individuals.
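The thresholds reported above can be collected into a simple configuration check; the function name and structure are ours, for illustration only:

```python
def supports_speech_reading(fps, width, height, delay_ms):
    """Check a video-call configuration against the thresholds the study
    associated with increased speech perception scores: frame rate > 7 fps,
    resolution above 640 x 480 px, and picture/sound delay < 100 ms."""
    high_fps = fps > 7
    high_res = width * height > 640 * 480  # strictly above 640x480
    short_delay = delay_ms < 100
    return high_fps and high_res and short_delay

print(supports_speech_reading(30, 1280, 720, 50))   # True
print(supports_speech_reading(5, 320, 240, 200))    # False
```

Note the predicate treats the three factors as independent cut-offs, matching how the abstract reports them; the study itself found speaker identity to be a stronger influence than any camera property.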
Liquid Chromatography Applied to Space System
NASA Astrophysics Data System (ADS)
Poinot, Pauline; Chazalnoel, Pascale; Geffroy, Claude; Sternberg, Robert; Carbonnier, Benjamin
Searching for signs of past or present life in our Solar System is a real challenge that stirs up the curiosity of scientists. Until now, in situ instrumentation was designed to detect and determine concentrations of a wide number of organic biomarkers. The relevant method which was and still is employed in missions dedicated to the quest of life (from Viking to ExoMars) corresponds to the pyrolysis-GC-MS. Along the missions, this approach has been significantly improved in terms of extraction efficiency and detection with the use of chemical derivative agents (e.g. MTBSTFA, DMF-DMA, TMAH…), and in terms of analysis sensitivity and resolution with the development of in situ high-resolution mass spectrometer (e.g. TOF-MS). Thanks to such an approach, organic compounds such as amino acids, sugars, tholins or polycyclic aromatic hydrocarbons (PAHs) were expected to be found. However, while there’s a consensus that the GC-MS of Viking, Huygens, MSL and MOMA space missions worked the way they had been designed to, pyrolysis is much more in debate (Glavin et al. 2001; Navarro-González et al. 2006). Indeed, (1) it is thought to remove low levels of organics, (2) water and CO2 could interfere with the detection of likely organic pyrolysis products, and (3) only low to mid-molecular weight organic molecules can be detected by this technique. As a result, researchers are now focusing on other in situ techniques which are no longer based on the volatility of the organic matter, but on the liquid phase extraction and analysis. In this line, micro-fluidic systems involving sandwich and/or competitive immunoassays (e.g. LMC, SOLID; Parro et al. 2005; Sims et al. 2012), micro-chip capillary electrophoreses (e.g. MOA; Bada et al. 2008), or nanopore-based analysis (e.g. BOLD; Schulze-Makuch et al. 2012) have been conceived for in situ analysis. Thanks to such approaches, molecular biological polymers (polysaccharides, polypeptides, polynucleotides, phospholipids, glycolipids, etc.) 
which are good examples for one of the two intrinsic features of life (i.e. complexity) would then be searched for. Although these methods are very promising as they have already demonstrated real benefits in terms of sensitivity towards specific compounds of middle/high molecular weight, they cannot be used to detect in one pot a wide range of biopolymer targets with very diverse nature, such as peptides or oligonucleotides. In this context, it would be interesting to develop a “micro-lab” equipped with a miniaturized HPLC-MS as the ones currently developed in the field of biological and medicinal sciences. The objective is to demonstrate unequivocally the presence or absence in space of a wide range of biopolymers thanks to a “one step one pot” instrumentation. We propose to demonstrate the feasibility and the validity of such a concept. For that, we optimize the chromatographic conditions and the mass spectrometer parameters to detect in the range of ppb, proteins and polypeptides biomarkers, while taking into account the space constraints. On a UPLC-HRMS (Q-Exactive and Qq-TOF), different stationary phases (laboratory-made or commercially available), different eluents, gradient flows, temperatures, pressures, and the use of a pre-concentration stage are tested. Dual detection (MS and diode array) is also considered. First experiments have highlighted the ability of such a technique to find ultra-traces level of organic matters under definite space constraints (elution flow, solvents, temperature...). This work is funded by the French Space Agency (CNES) References Glavin DP, Schubert Ml, Botta O, Kminek G, Bada JL (2001) Detecting pyrolysis products from bacteria on Mars. Earth Planet Sc Lett 185:1-2. 
doi:10.1016/S0012-821X(00)00370-8. Navarro-González R, Navarro KF, de la Rosa J, Iñiguez E, Molina P, Mira LD (2006) The limitations on organic detection in Mars-like soils by thermal volatilization-gas chromatography-MS and their implications for the Viking results. Proc Natl Acad Sci U.S.A 103:89-94. doi:10.1073/pnas.0604210103. Bada JL, Ehrenfreund P, Grunthaner F et al (2008) Urey: Mars Organic and Oxidant Detector. Space Sci Rev 135:269-279. doi:10.1007/s11214-007-9213-3. Schulze-Makuch D, Head JN, Houtkooper JM et al (2012) The Biological Oxidant and Life Detection (BOLD) mission: A proposal for a mission to Mars. Planet Space Sci 67:57-69. doi:10.1016/j.pss.2012.03.008. Parro V, Rodríguez-Manfredi JA, Briones C et al (2005) Instrument development to search for biomarkers on Mars: Terrestrial acidophile, iron-powered chemolithoautotrophic communities as model systems. Planet Space Sci 53:729-737. doi:10.1016/j.pss.2005.02.003. Sims MR, Cullen DC, Rix CS et al (2012) Development status of the life marker chip instrument for ExoMars. Planet Space Sci 72:129-137. doi:10.1016/j.pss.2012.04.007
[Exoskeleton anomalies among taiga tick males from populations of the Asiatic part of Russia].
Nikitin, A Ya; Morozov, I M
2017-01-01
The taiga tick (Ixodes persulcatus Schulze, 1930) is the main and most epidemiologically dangerous vector of tick-borne encephalitis virus (TBEV) and Borrelia in most parts of Russia's territory (Alekseev et al., 2008). The purpose of this article is to describe the incidence rate of I. persulcatus males with exoskeleton anomalies in populations of the Asiatic part of Russia. A total of 2630 taiga tick males were morphologically analyzed. They were collected in the Far Eastern, Siberian and Ural Federal Districts (respectively, FEFD, SFD, UFD) in 15 geographically remote locations. It is shown that all populations contain adult ticks with an impaired exoskeleton, among which two types dominate: twin dents at the back of the conscutum (P11), and an uneven surface of the conscutum, a "shagreen skin" (P9). The frequency of abnormalities in males from the areas with temperate monsoon and temperate continental climate (FEFD) was markedly lower (6.5 ± 1.05%) than in individuals from the territories of the SFD (29.7 ± 1.03%) and UFD (25.8 ± 3.93%) with continental and sharply continental climate. The FEFD territory is also characterized by a lower number of males having two simultaneous exoskeleton anomalies. Similar district-preconditioned differences in the frequency of recorded body distortions are also typical of females, with a higher percentage of deviant individuals in comparison with males. Thus, the identified polymorphism of the exoskeleton structure of the taiga tick may reflect the natural phenogeographical variability of this trait and might not be the result of human impact.
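Using the reported frequencies and standard errors, the FEFD vs. SFD difference can be checked with a normal-approximation z statistic; a minimal sketch (the choice of test is ours, the numbers are taken from the abstract):

```python
import math

# Anomaly frequencies (percent, +/- standard error) reported in the abstract.
fefd = (6.5, 1.05)   # Far Eastern Federal District
sfd = (29.7, 1.03)   # Siberian Federal District

def z_score(a, b):
    """Approximate z statistic for the difference of two proportions,
    combining the reported standard errors (normal approximation)."""
    (pa, sa), (pb, sb) = a, b
    return abs(pa - pb) / math.sqrt(sa ** 2 + sb ** 2)

z = z_score(fefd, sfd)
print(f"z = {z:.1f}")  # far above the 1.96 threshold for p < 0.05
```

The resulting z of roughly 15.8 is consistent with the abstract's claim that the FEFD frequency is markedly lower than in the Siberian and Ural districts.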
Andersen, Susan L
2016-11-01
Adolescence as highlighted in this special issue is a period of tremendous growth, synaptic exuberance, and plasticity, but also a period for the emergence of mental illness and addiction. This commentary aims to stimulate research on prevention science to reduce the impact of early life events that often manifest during adolescence. By promoting a better understanding of what creates a normal and abnormal trajectory, the reviews by van Duijvenvoorde et al., Kilford et al., Lichenstein et al., and Tottenham and Galvan in this special issue comprehensively describe how the adolescent brain develops under typical conditions and how this process can go awry in humans. Preclinical reviews also within this issue describe how adolescents have prolonged extinction periods to maximize learning about their environment (Baker et al.), whereas Schulz and Sisk focus on the importance of puberty and how it interacts with stress (Romeo). Caballero and Tseng then set the stage of describing the neural circuitry that is often central to these changes and psychopathology. Factors that affect the mis-wiring of the brain for illness, including prenatal exposure to anti-mitotic agents (Gomes et al.) and early life stress and inflammation (Schwarz and Brenhouse), are included as examples of how exposure to early adversity manifests. These reviews are synthesized and show how information from the maturational stages that precede or occur during adolescence is likely to hold the key towards optimizing development to produce an adolescent and adult that is resilient and well adapted to their environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
On adiabatic pair potentials of highly charged colloid particles
NASA Astrophysics Data System (ADS)
Sogami, Ikuo S.
2018-03-01
Generalizing the Debye-Hückel formalism, we develop a new mean field theory for adiabatic pair potentials of highly charged particles in colloid dispersions. The unoccupied volume and the osmotic pressure are the key concepts for describing the chemical and thermodynamical equilibrium of the gas of small ions in the region outside all of the colloid particles. To define the proper thermodynamic quantities, we postulate an ensemble average with respect to the particle configurations in the integrals for their densities, which consist of the electric potential satisfying a set of equations derived by linearizing the Poisson-Boltzmann equation. With the Fourier integral representation of the electric potential, we first calculate the internal electric energy of the system, from which the Helmholtz free energy is obtained through the Legendre transformation. Then, the Gibbs free energy is calculated in both ways: by the Legendre transformation with respect to the unoccupied volume and by the summation of chemical potentials. The thermodynamic functions provide three types of pair potentials, all of which are inversely proportional to the fraction of the unoccupied volume. In the limit where the fraction factor reduces to unity, the Helmholtz pair potential turns exactly into the well-known Derjaguin-Landau-Verwey-Overbeek repulsive potential. The Gibbs pair potential, possessing a medium-range strong repulsive part and a long-range weak attractive tail, can explain the Schulze-Hardy rule for coagulation in combination with the van der Waals-London potential and describes a rich variety of phase transition phenomena observed in dilute dispersions of highly charged particles.
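For reference, the Derjaguin-Landau-Verwey-Overbeek repulsion that the Helmholtz pair potential reduces to has a standard screened-Coulomb form; the expression below is the textbook version in Gaussian units (not a formula reproduced from this paper), for two spheres of radius a carrying charge Ze in a medium of permittivity ε:

```latex
% DLVO repulsive pair potential (textbook form, Gaussian units),
% with Debye screening parameter kappa set by the small-ion densities n_i
% and valences z_i.
U_{\mathrm{DLVO}}(r)
  = \frac{Z^{2}e^{2}}{\varepsilon}
    \left( \frac{e^{\kappa a}}{1 + \kappa a} \right)^{2}
    \frac{e^{-\kappa r}}{r},
\qquad
\kappa^{2} = \frac{4\pi e^{2}}{\varepsilon k_{\mathrm{B}} T} \sum_{i} n_{i} z_{i}^{2}
```

The strong κ-dependence on counterion valence in this screening parameter is also what underlies the Schulze-Hardy rule mentioned in the abstract.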
Effect of gender on the hearing performance of adult cochlear implant patients.
Lenarz, Minoo; Sönmez, Hasibe; Joseph, Gert; Büchner, Andreas; Lenarz, Thomas
2012-05-01
To evaluate the role of gender on the hearing performance of postlingually deafened adult patients with cochlear implants. Individual retrospective cohort study. There were 638 postlingually deafened adults (280 men and 358 women) selected for a retrospective evaluation of their hearing performance with cochlear implants. Both genders underwent the same surgical and rehabilitative procedures and benefited from the latest technological advances available. There was no significant difference in the age, duration of deafness, and preoperative hearing performance between the genders. The test battery was composed of the Freiburger Monosyllabic Test, Speech Tracking, and the Hochmair-Schulz-Moser (HSM) sentence test in quiet and in 10-dB noise. The results of 5 years of follow-up are presented here. Genders showed a similar performance in Freiburger Monosyllabic Test and Speech Tracking Test. However, in the HSM test in noise, men performed slightly better than women in all of the follow-up sessions, which was statistically significant at 2 and 4 years after implantation. Although normal-hearing women use more predictive cognitive strategies in speech comprehension and are supposed to have a more efficient declarative memory system, this may not necessarily lead to a better adaptation to the altered auditory information delivered by a cochlear implant. Our study showed that in more complex listening situations such as speech tests in noise, men tend to perform slightly better than women. Gender may have an influence on the hearing performance of postlingually deafened adults with cochlear implants. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
NASA Astrophysics Data System (ADS)
Seidl, Roman; Barthel, Roland
2016-04-01
Interdisciplinary scientific and societal knowledge plays an increasingly important role in global change research. Also, in the field of water resources, interdisciplinarity as well as cooperation with stakeholders from outside academia have been recognized as important. In this contribution, we revisit an integrated regional modelling system (DANUBIA), which was developed by an interdisciplinary team of researchers and relied on stakeholder participation in the framework of the GLOWA-Danube project from 2001 to 2011 (Mauser and Prasch 2016). As the model was developed before the current increase in literature on participatory modelling and interdisciplinarity, we ask how a socio-hydrology approach would have helped and in what way it would have made the work different. The present contribution firstly presents the interdisciplinary concept of DANUBIA, mainly with a focus on the integration of human behaviour in a spatially explicit, process-based numerical modelling system (Barthel et al. 2008; Barthel et al. 2005). Secondly, we compare the approaches to interdisciplinarity in GLOWA-Danube with concepts and ideas presented by socio-hydrology. Thirdly, we frame DANUBIA and a review of key literature on socio-hydrology in the context of a survey among hydrologists (N = 184). This discussion is used to highlight gaps and opportunities of the socio-hydrology approach. We show that the interdisciplinary aspect of the project and the participatory process of stakeholder integration in DANUBIA were not entirely successful. However, important insights were gained and important lessons were learnt. Against the background of these experiences, we feel that in its current state socio-hydrology is still lacking a plan for knowledge integration.
Moreover, we consider it necessary that socio-hydrology takes into account the lessons learnt from these earlier examples of knowledge integration (see also Hamilton et al. 2015; Jakeman and Letcher 2003). Our contribution attempts to close a gap between previous concepts for integrating socio-economic aspects into hydrology (typically inspired by Integrated Water Resources Management) and the new socio-hydrology approach. We suppose that socio-hydrology could benefit from widening its scope and considering previous research at the boundaries between hydrology and the social sciences. At the same time, concepts developed prior to socio-hydrology were seldom entirely successful. It might be beneficial to review these approaches, developed earlier or in parallel, from the perspective of socio-hydrology. References: Barthel, R., S. Janisch, N. Schwarz, A. Trifkovic, D. Nickel, C. Schulz, and W. Mauser. 2008. An integrated modelling framework for simulating regional-scale actor responses to global change in the water domain. Environmental Modelling & Software, 23: 1095-1121. Barthel, R., D. Nickel, A. Meleg, A. Trifkovic, and J. Braun. 2005. Linking the physical and the socio-economic compartments of an integrated water and land use management model on a river basin scale using an object-oriented water supply model. Physics and Chemistry of the Earth, 30: 389-397. doi: 10.1016/j.pce.2005.06.006. Hamilton, S. H., S. ElSawah, J. H. A. Guillaume, A. J. Jakeman, and S. A. Pierce. 2015. Integrated assessment and modelling: Overview and synthesis of salient dimensions. Environmental Modelling and Software, 64: 215-229. doi: 10.1016/j.envsoft.2014.12.005. Jakeman, A. J., and R. A. Letcher. 2003. Integrated assessment and modelling: features, principles and examples for catchment management. Environmental Modelling & Software, 18: 491-501. doi: 10.1016/S1364-8152(03)00024-0. Mauser, W., and M. Prasch. 2016. Regional Assessment of Global Change Impacts - The Project GLOWA-Danube: Springer International Publishing.
Chemical Heterogeneity and Mineralogy of Halley's Dust
NASA Astrophysics Data System (ADS)
Schulze, H.; Kissel, J.
1992-07-01
It is commonly assumed that comets are pristine bodies which still contain relatively unaltered material from the beginning of our solar system. Therefore, in March 1986 the chemical composition of Halley's dust particles was investigated by time-of-flight mass spectrometers on board the Vega 1 & 2 and Giotto spacecraft, using the high relative velocity of 70-80 km/s between spacecraft and Halley for the generation of ions by dust impact ionization (see e.g. Kissel, 1986; Jessberger et al., 1988). This paper investigates the overall chemical variation among the dust particles, with special emphasis on rock-forming elements, to derive a mineralogical model of the dust and to place constraints on the evolution of cometary and preplanetary matter. The interpretation is based on 123 selected spectra obtained by the mass spectrometer PUMA 1 on Vega 1. Selection criteria, interpretation of raw data and examined instrumental effects are described in more detail elsewhere (Schulze and Kissel, 1992). The bulk composition of Halley's dust is characterized for the rock-forming elements by cosmic abundances within the experimental uncertainty of a factor of two (see also Jessberger et al., 1988). A small systematic deviation of the abundances can be used for a revision of the ion yields. The volatile elements carbon and nitrogen, however, are significantly enriched relative to CI chondrites. A histogram of the Mg/(Mg+Fe) ratios shows typical peaks at about 0 and 1, which indicate separated phases for Mg and Fe and an anhydrous nature of the dust (e.g. Brownlee et al., 1987; Bradley, 1988). However, a broad peak also occurs at 0.5. Mg-rich spectra are characterized by an excellent Mg-Si correlation with a narrow range of Mg/Si ratios at about 1. Oxygen is also correlated with Mg and Si. Fe-rich spectra partly show a good Fe-S correlation. However, several spectra are rich only in Fe or S. A cluster analysis of the spectra regarding Na, Mg, Al, Si, S, Ca, and Fe revealed seven groups.
These groups partly correspond to classifications of interplanetary dust particles (Brownlee et al., 1982). Half of the spectra have chondritic abundances within the experimental uncertainty. About 25% are dominated by Mg and Si, indicating a significant portion of Fe-poor Mg silicates in the dust. Nearly 7% of the spectra are typically enriched in Fe and S due to pure Fe sulfide grains, which seem to be partly enriched in Ni. Rarely, particles extremely rich in iron occur. Many silicatic spectra show a sulfur excess of unknown origin. Interpreting this heterogeneity in terms of mineralogy indicates that about half of Halley's dust grains are almost monomineralic and composed of Mg-rich silicates (enstatite and/or forsterite), Fe sulfides and Fe metal. Hydrated silicates and magnetite seem to play only a small role. The prevalence of minerals which were formed at rather high temperatures according to the condensation sequence (above ~600 K) is evidence that equilibration to Fe-rich and hydrated silicates by diffusion reactions at lower temperatures is a process too slow to affect these dust particles in their formation environment (Fegley and Prinn, 1988), and that these particles were not intensively altered at low temperatures in the comet. References: Bradley J.P. (1988) Geochim. Cosmochim. Acta 52, 889-900. Brownlee D.E., Olszewski E., and Wheelock M.M. (1982) Lunar Planet. Sci. XIII, 71-72. Brownlee D.E., Wheelock M.M., Temple S., Bradley J.P., and Kissel J. (1987) Lunar Planet. Sci. XVIII, 134-135. Fegley B. and Prinn G. (1989) The formation and evolution of planetary systems (eds. H.A. Weaver and L. Danly), pp. 171-211. Cambridge. Jessberger E.K., Christoforidis A., and Kissel J. (1988) Nature 332, 691-695. Kissel J. (1986) Europ. Space Agency Spec. Publ. 1077, 67-83. Schulze H. and Kissel J. (1992) Earth Planet. Sci. Lett., submitted. Kissel J. and Krueger F.R. (1987) Appl. Phys. A42, 69-85.
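The Mg/(Mg+Fe) classification described above can be sketched as a simple binning of atomic abundances into the three histogram peaks (near 0, near 0.5, near 1); the example abundances and the tolerance are invented for illustration:

```python
def mg_number(mg, fe):
    """Mg/(Mg+Fe) atomic ratio of a single spectrum."""
    return mg / (mg + fe)

def classify(ratio, tol=0.15):
    """Crude phase assignment following the three histogram peaks
    described in the text (values near 0, near 0.5, near 1)."""
    if ratio < tol:
        return "Fe-rich (sulfide/metal)"
    if ratio > 1 - tol:
        return "Mg-rich silicate"
    return "intermediate"

# Invented example abundances (arbitrary units), for illustration only.
spectra = [(10.0, 0.5), (0.2, 8.0), (5.0, 5.0)]
for mg, fe in spectra:
    r = mg_number(mg, fe)
    print(f"Mg# = {r:.2f} -> {classify(r)}")
```

Separated peaks near 0 and 1 under such a binning are what the abstract takes as evidence for distinct Mg and Fe phases rather than equilibrated Fe-bearing silicates.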
Effect of hot working on the damping capacity and mechanical properties of AZ31 magnesium alloy
NASA Astrophysics Data System (ADS)
Lee, K.; Kang, C.; Kim, K.
2015-04-01
Magnesium alloys have received much attention for their light weight and other excellent properties, such as low density, high specific strength, and good castability, for use in several industrial and commercial applications. However, both magnesium and its alloys show limited room-temperature formability owing to the limited number of slip systems associated with their hexagonal close-packed crystal structure. It is well known that crystallographic texture plays an important role in both the plastic deformation and the macroscopic anisotropy of magnesium alloys. Many authors have concentrated on improving the room-temperature formability of Mg alloys. However, despite these many excellent properties, various properties of magnesium alloys have not yet been clarified sufficiently. Mg alloys are known to have a good damping capacity compared to other metals and their alloys. The damping properties of metals are also generally recognized to depend on microstructural factors such as grain size and texture. However, there are very few studies on the relationship between the damping capacity and texture of magnesium alloys. Therefore, in this study, specimens of the AZ31 magnesium alloy were processed by hot working, and their texture and damping properties were investigated. A 60 mm × 60 mm × 40 mm rectangular plate was cut out by machining an ingot of AZ31 magnesium alloy (Mg-3Al-1Zn in mass%), and rolling was carried out at 673 K to a rolling reduction of 30%. Heat treatment was then carried out at temperatures in the range of 573-723 K for durations in the range of 30-180 min. The samples were quenched in oil immediately after heat treatment to prevent any change in the microstructure. Texture was evaluated on the compression planes by the Schulz reflection method using nickel-filtered Cu Kα radiation. Electron backscatter diffraction measurements were conducted to observe the spatial distribution of various orientations.
Specimens for damping capacity measurements were machined from the rolled specimen, to have a length of 120 mm, width of 20 mm, and thickness of 1 mm. The damping capacity was measured with a flexural internal friction measurement machine at room temperature. It was found that the damping capacity increases with both increasing heat-treatment temperature and time, due to grain growth and the increased pole densities of textures.
Decontamination of radionuclides using γ-Fe2O3 as a nanosorbent
NASA Astrophysics Data System (ADS)
Bagla, Hemlata; Thakur, Jyotsna
2017-04-01
The release of radioactive waste into the environment and the disposal of conditioned waste are major environmental concerns which demand improvements in remediation processes [1]. With the advancement of nanotechnology, novel and simple nanoparticles have proved very efficient in radioactive waste treatment processes worldwide [2]. They make excellent nanosorbents owing to their very high surface area and other size-dependent properties [3]. In the present study, nanocrystalline γ-Fe2O3 was synthesized by the gel-combustion method. The gel-combustion method [4, 5] is a particularly facile route for the synthesis of nanocrystalline oxides. A fuel-deficient composition of ferric nitrate (oxidant) and malonyl dihydrazide (fuel) was mixed well in de-ionized water and heated at 300 °C. Smouldering combustion took place, resulting in the formation of γ-Fe2O3, which was further calcined at 500 °C to remove undesirable impurities. The prepared powder was further characterized by various techniques, including X-ray diffraction, transmission electron microscopy, the BET technique and zeta-potential measurements. The crystallite size of γ-Fe2O3 was found to be 11 nm. TEM images showed that the grain size obtained was in agreement with the XRD report. Sorption studies were carried out by the batch equilibration method using a tracer technique at room temperature and atmospheric pressure. A known amount of sorbent (γ-Fe2O3) was mixed with 10 mL of solution containing the radiotracer and a 1 mg/mL solution of carrier. Various parameters, such as contact time, pH, amount of sorbent, concentration, temperature, and agitation speed, were optimized; the sorption capacity was determined and an interference study was also conducted. The activity was measured using a single-channel NaI(Tl) well-type gamma-ray spectrometer.
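The bookkeeping behind such batch-equilibration studies is standard radiochemistry: percentage sorption follows from the initial and equilibrium activities, and the distribution coefficient Kd normalizes the removed fraction by solution volume and sorbent mass. A minimal sketch with illustrative count rates (the numbers below are invented, not data from this study):

```python
def percent_sorption(a0: float, ae: float) -> float:
    """Percentage of activity removed from solution: (A0 - Ae) / A0 * 100."""
    return (a0 - ae) / a0 * 100.0

def distribution_coefficient(a0: float, ae: float,
                             volume_ml: float, mass_g: float) -> float:
    """Kd = (A0 - Ae) / Ae * V / m, in mL/g."""
    return (a0 - ae) / ae * volume_ml / mass_g

# Illustrative gamma count rates (counts per minute), not data from the study:
a0, ae = 12000.0, 600.0   # activity before and after equilibration
print(round(percent_sorption(a0, ae), 1))                    # 95.0
print(round(distribution_coefficient(a0, ae, 10.0, 0.05)))   # 3800
```

With gamma counting, A0 and Ae are simply the count rates of the solution before and after equilibration with the sorbent.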
γ-Fe2O3 was found to be an efficient and cost-effective sorbent for the decontamination of heavy radionuclides such as Cs-137, Sr-90, Cd-115m, Cr-51, Hg-203, etc. from low-level waste and water effluent. References: 1. Hamasaki T., Nakamichi N., Teruya K., Shirahata S., Removal Efficiency of Radioactive Cesium and Iodine Ions by a Flow-Type Apparatus Designed for Electrochemically Reduced Water Production, PLoS One. 2014; 9(7): e102218. 2. Gehrke I., Geiser A., Somborn-Schulz A., Innovations in nanotechnology for water treatment, Nanotechnol Sci Appl. 2015; 8: 1-17. 3. Lujanienė G., Šemčuk S., Kulakauskaitė I., Mažeika K., Valiulis D., Juškėnas R., Tautkus S., Sorption of radionuclides and metals to graphene oxide and magnetic graphene oxide, Journal of Radioanalytical and Nuclear Chemistry, 2016; 307(3): 2267-2275. 4. Patil K.C., Hegde M.S., Rattan T., Aruna S.T., Chemistry of Nanocrystalline Oxide Materials: Combustion Synthesis, Properties and Applications, World Scientific Publ., 2008. 5. Thakur, J., Dutta, D. P., Bagla, H. and Tyagi, A. K., Effect of Host Structure and Concentration on the Luminescence of Eu3+ and Tb3+ in Borate Phosphors, J. Am. Ceram. Soc., 2012, 95: 696-704.
Possibilities for the detection of hydrogen peroxide-water-based life on Mars by the Phoenix Lander
NASA Astrophysics Data System (ADS)
Houtkooper, Joop M.; Schulze-Makuch, Dirk
2009-04-01
The Phoenix Lander landed on Mars on 25 May 2008. It has instruments on board to explore the geology and climate of subpolar Mars and to determine whether life ever arose on Mars. Although the Phoenix mission is not a life detection mission per se, it will look for the presence of organic compounds and other evidence to support or discredit the notion of past or present life. The possibility of extant life on Mars has been raised by a reinterpretation of the Viking biology experiments [Houtkooper, J. M., Schulze-Makuch, D., 2007. A possible biogenic origin for hydrogen peroxide on Mars: the Viking results reinterpreted. International Journal of Astrobiology 6, 147-152]. The results of these experiments are in accordance with life based on a mixture of water and hydrogen peroxide instead of water. The near-surface conditions on Mars would give an evolutionary advantage to organisms employing a mixture of H2O2 and H2O in their intracellular fluid: the mixture has a low freezing point, is hygroscopic and provides a source of oxygen. The H2O2-H2O hypothesis also explains the Viking results in a logically consistent way. With regard to its compatibility with cellular contents, H2O2 is used for a variety of purposes in terran biochemistry. The ability of the anticipated organisms to withstand low temperatures and the relatively high water vapor content of the atmosphere in the Martian arctic means that Phoenix will land in an area not inimical to H2O2-H2O-based life. Phoenix has a suite of instruments which may be able to detect the signatures of such putative organisms.
Cosmogenic 180W variations in meteorites and re-assessment of a possible 184Os-180W decay system
NASA Astrophysics Data System (ADS)
Cook, David L.; Kruijer, Thomas S.; Leya, Ingo; Kleine, Thorsten
2014-09-01
We measured tungsten (W) isotopes in 23 iron meteorites and the metal phase of the CB chondrite Gujba in order to ascertain if there is evidence for a large-scale nucleosynthetic heterogeneity in the p-process isotope 180W in the solar nebula as recently suggested by Schulz et al. (2013). We observed large excesses in 180W (up to ≈ 6 ε) in some irons. However, significant within-group variations in magmatic IIAB and IVB irons are not consistent with a nucleosynthetic origin, and the collateral effects on 180W from an s-deficit in IVB irons cannot explain the total variation. We present a new model for the combined effects of spallation and neutron capture reactions on 180W in iron meteorites and show that at least some of the observed within-group variability is explained by cosmic ray effects. Neutron capture causes burnout of 180W, whereas spallation reactions lead to positive shifts in 180W. These effects depend on the target composition and cosmic-ray exposure duration; spallation effects increase with Re/W and Os/W ratios in the target and with exposure age. The correlation of 180W/184W with Os/W ratios in iron meteorites results in part from spallogenic production of 180W rather than from 184Os decay, contrary to a recent study by Peters et al. (2014). Residual ε180W excesses after correction for an s-deficit and for cosmic ray effects may be due to ingrowth of 180W from 184Os decay, but the magnitude of this ingrowth is at least a factor of ≈2 smaller than previously suggested. These much smaller effects strongly limit the applicability of the putative 184Os-180W system to investigate geological problems.
Beerens, Koen; Soetaert, Wim; Desmet, Tom
2013-09-01
UDP-hexose 4-epimerases are important enzymes that play key roles in various biological pathways, including lipopolysaccharide biosynthesis, galactose metabolism through the Leloir pathway, and biofilm formation. Unfortunately, the determinants of their substrate specificity are not yet fully understood. They can be classified into three groups, with groups 1 and 3 preferring non-acetylated and acetylated UDP-hexoses, respectively, whereas members of group 2 are equally active on both types of substrates. In this study, the UDP-Glc(NAc) 4-epimerase from Marinithermus hydrothermalis (mGalE) was functionally expressed in Escherichia coli and thoroughly characterized. The enzyme was found to be thermostable, displaying its highest activity at 70 °C and having a half-life of 23 min at 60 °C. Activity could be detected on both acetylated and non-acetylated UDP-hexoses, meaning that this epimerase belongs to group 2. This observation correlates well with the identity of the so-called "gatekeeper" residue (Ser279), which has previously been suggested to influence substrate specificity (Schulz et al., J Biol Chem 279:32796-32803, 2004). Furthermore, substituting this serine with a tyrosine brings about a significant preference for non-acetylated sugars, thereby demonstrating that a single residue can determine substrate specificity among group 1 and group 2 epimerases. In addition, two consecutive glycine residues (Gly118 and Gly119) were identified as a unique feature of GalE enzymes from Thermus species, and their importance for activity as well as affinity was confirmed by mutagenesis. Finally, homology modeling and mutational analysis have revealed that the enzyme's catalytic triad contains a threonine residue (Thr117) instead of the usual serine.
Morozov, I M; Alekseev, A N; Dubinina, E V; Nikitin, A Ya; Melnikova, O V; Andaev, E I
2015-01-01
The paper presents the results of 10-year (2005-2014) observations of an Ixodes persulcatus Schulze population. The purpose of this investigation was to trace long-term changes in the structure of the taiga tick population from the proportion of specimens with external skeletal anomalies, and to assess the relationship between the pattern of imago phenotypic variation and the percentage of virus-carrying specimens. The external skeletal structure was recorded for a total of 1123 females gathered from plants with a flag in an area at km 43 of the Baikal Road connecting Irkutsk and the settlement of Listvyanka (Irkutsk Region). The proportion of specimens with anomalies averaged 37.8 +/- 1.88%. Four to seven different anomalies were recorded annually. There was a preponderance of scutum impairment (an average of 17.0 +/- 3.08% of all females) consisting of a conglomerate of prominences and indentations along the entire clypeus surface, denoted P9. The proportion of ticks with two anomalies (average monthly registration rate, 2.5 +/- 0.66%) shows high-frequency three-year oscillations, whereas specimens with the P9 anomaly do not show such clear cycling. The percentage of virus-containing taiga ticks was determined individually by estimating the level of tick-borne encephalitis virus antigen with an enzyme immunoassay. A total of 4022 ticks were examined. The male and female data were pooled. There was a positive correlation between the change in the proportion of females with the P9 anomaly and the infection rate of ticks in the examined population (Spearman's correlation coefficient, 0.88; P < 0.01). This supports the earlier observation of the greater epidemiological significance of taiga tick imagoes with external skeletal anomalies, particularly strongly marked ones.
Zeh, R; Baumann, U
2015-08-01
Cochlear implants (CI) have proven to be a highly effective treatment for severe hearing loss or deafness. Inpatient rehabilitation therapy is frequently discussed as a means to increase the speech perception abilities achieved with a CI. However, thus far no quantitative evaluation of the effect of these therapies exists. A retrospective analysis of audiometric data obtained from 1355 CI users compared standardized and qualitative speech intelligibility tests conducted at two time points (admission to and discharge from inpatient hearing therapy; duration 3-5 weeks). The test battery comprised examination of vowel/consonant identification, the Freiburg numbers and monosyllabic word tests (65 and 80 dB sound pressure level, SPL, free-field sound level), the Hochmair-Schulz-Moser (HSM) sentence test in quiet and in noise (65 dB SPL speech level; 15 dB signal-to-noise ratio, SNR), and a speech tracking test with and without lip-reading. An average increase of 20 percentage points was scored at discharge compared to the admission tests. Patients of all ages and durations of deafness demonstrated the same amount of benefit from the rehabilitation treatment. After completion of inpatient rehabilitation treatment, patients with a short duration of CI experience (below 4 months) achieved test scores comparable to those of experienced long-term users. The demonstrated benefit of the treatment was independent of age and of duration of deafness or CI experience. The rehabilitative training program significantly improved hearing abilities and speech perception in CI users, thus promoting their professional and social inclusion. The present results support the efficacy of inpatient rehabilitation for CI recipients. Integration of this or similar therapeutic concepts into the German catalog of follow-up treatment measures appears justified.
Kurt, Simone; Sausbier, Matthias; Rüttiger, Lukas; Brandt, Niels; Moeller, Christoph K.; Kindler, Jennifer; Sausbier, Ulrike; Zimmermann, Ulrike; van Straaten, Harald; Neuhuber, Winfried; Engel, Jutta; Knipper, Marlies; Ruth, Peter; Schulze, Holger
2012-01-01
Large conductance, voltage- and Ca2+-activated K+ (BK) channels in inner hair cells (IHCs) of the cochlea are essential for hearing. However, germline deletion of BKα, the pore-forming subunit KCNMA1 of the BK channel, surprisingly did not affect hearing thresholds in the first postnatal weeks, even though altered IHC membrane time constants, decreased IHC receptor potential alternating current/direct current ratio, and impaired spike timing of auditory fibers were reported in these mice. To investigate the role of IHC BK channels for central auditory processing, we generated a conditional mouse model with hair cell-specific deletion of BKα from postnatal day 10 onward. This had an unexpected effect on temporal coding in the central auditory system: neuronal single and multiunit responses in the inferior colliculus showed higher excitability and greater precision of temporal coding that may be linked to the improved discrimination of temporally modulated sounds observed in behavioral training. The higher precision of temporal coding, however, was restricted to slower modulations of sound and reduced stimulus-driven activity. This suggests a diminished dynamic range of stimulus coding that is expected to impair signal detection in noise. Thus, BK channels in IHCs are crucial for central coding of the temporal fine structure of sound and for detection of signals in a noisy environment.—Kurt, S., Sausbier, M., Rüttiger, L., Brandt, N., Moeller, C. K., Kindler, J., Sausbier, U., Zimmermann, U., van Straaten, H., Neuhuber, W., Engel, J., Knipper, M., Ruth, P., Schulze, H. Critical role for cochlear hair cell BK channels for coding the temporal structure and dynamic range of auditory information for central auditory processing. PMID:22691916
Cobalt carbide nanoprisms for direct production of lower olefins from syngas
NASA Astrophysics Data System (ADS)
Zhong, Liangshu; Yu, Fei; An, Yunlei; Zhao, Yonghui; Sun, Yuhan; Li, Zhengjia; Lin, Tiejun; Lin, Yanjun; Qi, Xingzhen; Dai, Yuanyuan; Gu, Lin; Hu, Jinsong; Jin, Shifeng; Shen, Qun; Wang, Hui
2016-10-01
Lower olefins—generally referring to ethylene, propylene and butylene—are basic carbon-based building blocks that are widely used in the chemical industry, and are traditionally produced through thermal or catalytic cracking of a range of hydrocarbon feedstocks, such as naphtha, gas oil, condensates and light alkanes. With the rapid depletion of the limited petroleum reserves that serve as the source of these hydrocarbons, there is an urgent need for processes that can produce lower olefins from alternative feedstocks. The ‘Fischer-Tropsch to olefins’ (FTO) process has long offered a way of producing lower olefins directly from syngas—a mixture of hydrogen and carbon monoxide that is readily derived from coal, biomass and natural gas. But the hydrocarbons obtained with the FTO process typically follow the so-called Anderson-Schulz-Flory distribution, which is characterized by a maximum C2-C4 hydrocarbon fraction of about 56.7 per cent and an undesired methane fraction of about 29.2 per cent (refs 1, 10, 11, 12). Here we show that, under mild reaction conditions, cobalt carbide quadrangular nanoprisms catalyse the FTO conversion of syngas with high selectivity for the production of lower olefins (constituting around 60.8 per cent of the carbon products), while generating little methane (about 5.0 per cent), with the ratio of desired unsaturated hydrocarbons to less valuable saturated hydrocarbons amongst the C2-C4 products being as high as 30. Detailed catalyst characterization during the initial reaction stage and theoretical calculations indicate that preferentially exposed {101} and {020} facets play a pivotal role during syngas conversion, in that they favour olefin production and inhibit methane formation, and thereby render cobalt carbide nanoprisms a promising new catalyst system for directly converting syngas into lower olefins.
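The Anderson-Schulz-Flory statistics quoted above follow from a single chain-growth probability α: the mass fraction of the Cn product is Wn = n(1-α)²α^(n-1). A short script (a sketch for this abstract, not code from the paper) reproduces the ~56.7 per cent ceiling on C2-C4 selectivity and the ~29.2 per cent methane fraction near the optimal α:

```python
def asf_mass_fraction(n: int, alpha: float) -> float:
    """ASF mass fraction of the C_n hydrocarbon: W_n = n (1-a)^2 a^(n-1)."""
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

def c2_c4_fraction(alpha: float) -> float:
    """Combined C2-C4 (lower olefin range) mass fraction."""
    return sum(asf_mass_fraction(n, alpha) for n in (2, 3, 4))

# Scan the chain-growth probability to find the best possible C2-C4 selectivity.
alphas = [i / 1000.0 for i in range(1, 1000)]
best_alpha = max(alphas, key=c2_c4_fraction)
print(round(c2_c4_fraction(best_alpha), 3))   # 0.567 -> the 56.7% ceiling
print(round(asf_mass_fraction(1, 0.46), 3))   # 0.292 -> methane at alpha = 0.46
```

The reported selectivities (around 60.8 per cent lower olefins with only about 5.0 per cent methane) exceed what any single α permits, which is why the cobalt carbide nanoprisms are described as departing from the ASF distribution.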
Job Crafting: Older Workers' Mechanism for Maintaining Person-Job Fit.
Wong, Carol M; Tetrick, Lois E
2017-01-01
Aging at work is a dynamic process. As individuals age, their motives, abilities and values change as suggested by life-span development theories (Lang and Carstensen, 2002; Kanfer and Ackerman, 2004). Their growth and extrinsic motives weaken while intrinsic motives increase (Kooij et al., 2011), which may result in workers investing their resources in different areas accordingly. However, there is significant individual variability in aging trajectories (Hedge et al., 2006). In addition, the changing nature of work, the evolving job demands, as well as the available opportunities at work may no longer be suitable for older workers, increasing the likelihood of person-job misfit. The potential misfit may, in turn, impact how older workers perceive themselves on the job, which leads to conflicting work identities. With the traditional job redesign approach being a top-down process, it is often difficult for organizations to take individual needs and skills into consideration and tailor jobs for every employee (Berg et al., 2010). Therefore, job crafting, being an individualized process initiated by employees themselves, can be a particularly valuable mechanism for older workers to realign and enhance their demands-abilities and needs-supplies fit. Through job crafting, employees can exert personal agency and make changes to the task, social and cognitive aspects of their jobs with the goal of improving their work experience (Wrzesniewski and Dutton, 2001). Building on the Life Span Theory of Control (Heckhausen and Schulz, 1995), we posit that job crafting, particularly cognitive crafting, will be of increasing value as employees age. Through reframing how they think of their job and choosing to emphasize job features that are personally meaningful, older workers can optimize their resources to proactively redesign their jobs and maintain congruent, positive work identities.
Tan, Liqiang; Tan, Xiaoli; Mei, Huiyang; Ai, Yuejie; Sun, Lu; Zhao, Guixia; Hayat, Tasawar; Alsaedi, Ahmed; Chen, Changlun; Wang, Xiangke
2018-05-01
The coagulation behaviors of humic acid (HA) with Cs+ (10-500 mM), Sr2+ (0.8-10.0 mM) and Eu3+ (0.01-1.0 mM) at different pH values (2.8, 7.1 and 10.0) were investigated through a dynamic light scattering (DLS) technique combined with spectroscopic analysis and molecular dynamics (MD) simulations. The coagulation rate and the average hydrodynamic diameter (
TableViewer for Herschel Data Processing
NASA Astrophysics Data System (ADS)
Zhang, L.; Schulz, B.
2006-07-01
The TableViewer utility is a GUI tool written in Java to support interactive data processing and analysis for the Herschel Space Observatory (Pilbratt et al. 2001). The idea was inherited from a prototype written in IDL (Schulz et al. 2005). It allows users to graphically view and analyze tabular data organized in columns with equal numbers of rows. It can be run either as a standalone application, where data access is restricted to FITS (FITS 1999) files only, or from the Quick Look Analysis (QLA) or Interactive Analysis (IA) command line, from where objects are also accessible. The graphic display is very versatile, allowing plots in either linear or log scales. Zooming, panning, and changing data columns are performed rapidly using a group of navigation buttons. Selecting and de-selecting fields of data points controls the input to simple analysis tasks such as building a statistics table or generating power spectra. The binary data stored in a TableDataset, a Product, or in FITS files can also be displayed as tabular data, where values in individual cells can be modified. TableViewer provides several processing utilities which, besides calculating statistics for all or selected channels and computing power spectra, allow users to convert/repair datasets by changing the unit name of data columns and by modifying data values in columns with a simple calculator tool. Interactively selected data can be separated out, and modified data sets can be saved to FITS files. The tool will be very helpful, especially in the early phases of Herschel data analysis, when quick access to the contents of data products is important. TableDataset and Product are Java classes defined in herschel.ia.dataset.
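The statistics and power-spectrum utilities operate on a single table column (channel). The following is not TableViewer's Java implementation but a minimal sketch of the kind of one-sided periodogram such a tool computes for a selected channel:

```python
import math

def power_spectrum(samples):
    """Naive one-sided periodogram: |X_k|^2 / N for k = 0 .. N//2."""
    n = len(samples)
    spec = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(-2 * math.pi * k * t / n) for t, x in enumerate(samples))
        im = sum(x * math.sin(-2 * math.pi * k * t / n) for t, x in enumerate(samples))
        spec.append((re * re + im * im) / n)
    return spec

# A pure tone at frequency bin 4 should put essentially all power in bin 4.
n = 64
tone = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
spec = power_spectrum(tone)
print(spec.index(max(spec)))  # 4
```

A detector channel with a periodic disturbance shows up as a sharp peak at the corresponding frequency bin, which is exactly the quick-look use case described.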
Quality of life and social support in patients with multiple sclerosis.
Rosiak, Katarzyna; Zagożdżon, Paweł
2017-10-29
Quality of life and need for social support in persons diagnosed with multiple sclerosis (MS) are to a large extent determined by the degree of their disability. The aim of the study was to analyze the association between specific forms of MS, subjectively perceived quality of life, and social support. The study included subjects with an established diagnosis of MS, treated at rehabilitation centers, hospitals and in a home setting, as well as members of patient organizations. After being informed about the objectives of the study, the type of included tasks and the way to complete them, each participant was handed a set of questionnaires: the Berlin Social Support Scales (Łuszczyńska, Kowalska, Schwarzer, Schulz), the Quality of Life Questionnaire (WHOQOL-BREF), as well as a survey developed specifically for the purposes of this project. The results were subjected to statistical analysis with the STATA 12 package. The study included a total of 110 persons (67 women and 43 men). Quality of life overall, as well as in the physical, psychological, social relationships and environmental health domains, turned out to be particularly important in patients with primary-progressive MS. Irrespective of MS type, overall social support did not play a significant role in univariate analysis. However, subgroup analysis according to sex demonstrated that men with MS received social support four times less often than women. Quality of life in individuals with primary-progressive MS is significantly lower than in patients presenting with other types of this disease. Men with MS are more likely to present with worse scores for overall social support. They are less likely both to acknowledge the need for support and to realize the availability of the support they actually need.
Natural antibody responses to the capsid protein in sera of Dengue infected patients from Sri Lanka.
Nadugala, Mahesha N; Jeewandara, Chandima; Malavige, Gathsaurie N; Premaratne, Prasad H; Goonasekara, Charitha L
2017-01-01
This study aims to characterize the antigenicity of the capsid (C) protein and the human antibody responses to the C protein of the four dengue virus (DENV) serotypes. Parker hydrophilicity prediction, Emini surface accessibility prediction and Karplus & Schulz flexibility prediction were used to bioinformatically characterize antigenicity. The human antibody response to the C protein was assessed by ELISA using immune sera and an array of overlapping DENV2 C peptides. DENV2 C protein peptides P1 (located at amino acids 2-18 of the C protein), P11 (79-95 aa) and P12 (86-101 aa) were recognized by most individuals exposed to infections with only one of the 4 DENV serotypes, as well as by people exposed to infections with two serotypes. These conserved peptide epitopes are located in the amino-terminal (1-40 aa) and carboxy-terminal (70-100 aa) regions of the C protein, which were predicted to be antigenic using different bioinformatic tools. DENV2 C peptide P6 (39-56 aa) was recognized by all individuals exposed to DENV2 infections, some individuals exposed to DENV4 infections, and none of the individuals exposed to DENV1 or DENV3 infections. Thus, unlike C peptides P1, P11 and P12, which contain epitopes recognized by DENV serotype cross-reactive antibodies, DENV2 peptide P6 contains an epitope that is preferentially recognized by antibodies in people exposed to this serotype compared to other serotypes. We discuss our results in the context of the known structure of the C protein and recent work on the human B-cell response to DENV infection.
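Predictors like Parker hydrophilicity work by sliding a fixed-width window along the sequence and averaging a per-residue propensity scale; peaks mark candidate antigenic regions. The sketch below uses an invented scale and an invented sequence (neither the published Parker values nor the DENV C sequence is reproduced here):

```python
# Hypothetical per-residue hydrophilicity values -- NOT the published Parker scale:
SCALE = {"A": 0.1, "R": 3.0, "N": 2.0, "D": 3.0, "C": -1.0, "E": 3.0,
         "Q": 2.0, "G": 0.0, "H": 1.0, "I": -2.0, "L": -2.0, "K": 3.0,
         "M": -1.5, "F": -2.5, "P": 0.5, "S": 1.0, "T": 0.5, "W": -3.0,
         "Y": -2.0, "V": -1.5}

def window_scores(seq: str, width: int = 7) -> list:
    """Average scale value over each sliding window along the sequence."""
    return [sum(SCALE[aa] for aa in seq[i:i + width]) / width
            for i in range(len(seq) - width + 1)]

# Invented test sequence: a charged, hydrophilic stretch flanked by hydrophobic runs.
seq = "IILLWFMV" + "KKDDEERR" + "ILVWFMLI"
scores = window_scores(seq)
print(scores.index(max(scores)))  # 8 -> first window fully inside the hydrophilic stretch
```

The real predictors differ in scale values and window handling, but all three methods cited above reduce to this windowed-propensity scheme.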
Influence of a Vented Mouthguard on Physiological Responses in Handball.
Schulze, Antina; Laessing, Johannes; Kwast, Stefan; Busse, Martin
2018-05-23
Schulze, A, Laessing, J, Kwast, S, and Busse, M. Influence of a vented mouthguard on physiological responses in handball. J Strength Cond Res XX(X): 000-000, 2018. Mouthguards (MGs) improve sports safety. However, airway obstruction and a resulting decrease in performance are theoretical disadvantages of their use. The study aim was to assess possible limitations of a "vented" MG on aerobic performance in handball. The physiological effects were investigated in 14 male professional players in a newly developed handball-specific course. The measured values were oxygen uptake, ventilation, heart rate, and lactate. Similar oxygen uptake (V̇O2) values were observed with and without MG use (51.9 ± 6.4 vs. 52.1 ± 10.9 mL·min⁻¹·kg⁻¹). During maximum load, ventilation was markedly lower with the vented MG (153.1 ± 25 vs. 166.3 ± 20.8 L·min⁻¹). The end-expiratory concentrations of O2 (17.2 ± 0.5% vs. 17.6 ± 0.8%) and CO2 (4.0 ± 0.5% vs. 3.7 ± 0.6%) were significantly lower and higher, respectively, when using the MG. The inspiration and expiration times with and without the MG were 0.6 ± 0.1 seconds vs. 0.6 ± 0.1 seconds and 0.7 ± 0.2 seconds vs. 0.6 ± 0.2 seconds (all not significant), respectively, indicating that there was no relevant airflow restriction. The maximum load was not significantly affected by the MG. The lower ventilation at given V̇O2 values associated with MG use may be an effect of improved biomechanics and a lower respiratory drive from the peripheral musculature.
Assessing Planetary Habitability: Don't Forget Exotic Life!
NASA Astrophysics Data System (ADS)
Schulze-Makuch, Dirk
2012-05-01
With the confirmed detection of more than 700 exoplanets, the temptation looms large to constrain the search for extraterrestrial life to Earth-type planets that have a similar distance to their star and a similar radius, mass and density. Yet a look even within our Solar System points to a variety of localities outside the so-called Habitable Zone (HZ) to which life could have adapted. Examples include the hydrocarbon lakes on Titan, the subsurface ocean environment of Europa, the near-surface environment of Mars, and the lower atmosphere of Venus. Recent Earth-analog work and extremophile investigations support this notion, such as the discovery of a large microbial community in a liquid asphalt lake in Trinidad (as an analog to Titan) or the discovery of a cryptoendolithic habitat in the Antarctic desert, which exists inside rocks, such as beneath sandstone surfaces and dolerite clasts, and supports a variety of eukaryotic algae, fungi, and cyanobacteria (as an analog to Mars). We developed a Planetary Habitability Index (PHI; Schulze-Makuch et al., 2011) to prioritize exoplanets not based on their similarity to Earth, but on whether the extraterrestrial environment could, in principle, be a suitable habitat for life. The index includes parameters that are considered essential for life, such as the presence of a solid substrate, an atmosphere, energy sources, polymeric chemistry, and liquids on the planetary surface. However, the index does not require that this liquid be water or that the energy source be light (though the presence of light is a definite advantage). Applying the PHI to our Solar System, Earth comes in first, with Titan second and Mars third.
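The abstract lists the PHI's ingredient categories but not its mathematical form; in Schulze-Makuch et al. (2011) the index is constructed as a geometric mean of normalized category scores. A minimal sketch under that assumption, with invented scores:

```python
def phi(substrate: float, energy: float, chemistry: float, liquid: float) -> float:
    """Planetary Habitability Index sketch: geometric mean of four category
    scores (substrate, energy, chemistry, liquid), each normalized to 0..1.
    The scores passed in below are invented, not values from the paper."""
    return (substrate * energy * chemistry * liquid) ** 0.25

print(phi(1.0, 1.0, 1.0, 1.0))   # 1.0 -> ideal case
print(phi(0.8, 0.9, 0.6, 0.0))   # 0.0 -> no liquid medium at all
```

The geometric mean encodes the "essential" character of each category: a zero in any single one drives the whole index to zero, whereas an arithmetic mean would merely lower it.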
NASA Astrophysics Data System (ADS)
Gatti, E.; Saidin, M.; Gibbard, P.; Oppenheimer, C.
2011-12-01
The Younger Toba Tuff (YTT) eruption, approximately 73 ka ago, is the largest known for the Quaternary, and its climatic, environmental and human consequences are keenly debated (Oppenheimer, 2011). While the distribution (Rose and Chesner, 1987; Rose and Chesner, 1990; Chesner et al., 1991; Schulz et al., 2002; Von Rad et al., 2002), geochemical properties (Shane et al., 1995; Westgate et al., 1998) and volcanic significance (Rampino and Self, 1982; Rampino and Self, 1993; Rampino and Ambrose, 2000; Oppenheimer, 2002; Mason et al., 2004) of the YTT have been widely studied, little attention has been given to the significance of the distal volcanic ash deposits within their receiving basin context. Although several studies exist on the impact of pyroclastic flows on proximal rivers and lakes (Collins and Dunne, 1986; Thompson et al., 1986; Hayes et al., 2002; Németh and Cronin, 2007), only a few address the dynamics of preservation of super-distal fine ash deposits in rivers (partly owing to the lack of direct data on super-eruptions). It has also been demonstrated that models of the styles and timing of distal volcaniclastic re-sedimentation are more complicated than those developed for proximal settings of stratovolcanoes (Kataoka et al., 2009). We present an analysis of the taphonomy (understood as accumulation and preservation) of distal volcanic ash in fluvial and lacustrine contexts in newly discovered Youngest Toba Tuff sites in the Lenggong valley, western Peninsular Malaysia. The paper aims to characterise the nature of distal tephras in fluvial environments through a stratigraphic distinction between primary and secondary ash, characterisation of the pre-ash-fall receiving environment in terms of fluvial dynamics and landscape morphology, and assessment of the time of recovery.
Wiedemann, Ute; Burgmair, Wolfgang; Weber, Matthias M
2007-01-01
Between 1927 and 1944 the psychiatrist Adele Juda (1888-1949) studied the biographies of more than 600 German-speaking "geniuses" and their families from the period between 1648 and 1920. The concept of this so-called "Höchstbegabtenstudie" (study of highly gifted persons) had been developed by the psychiatrist, human geneticist and racial hygienist Ernst Rüdin, director of the Deutsche Forschungsanstalt für Psychiatrie in Munich from 1917 to 1945. Juda's study aimed at a re-examination of the "Genie-Irrsinns-Hypothese" (genius-madness theory), much discussed in medicine and anthropology since Cesare Lombroso, as it was hardly consistent with some of Rüdin's racial-hygienic concepts. While trying to make the selection of probands as objective as possible and to overcome the hitherto common, purely casuistic approach, Juda's study also gave cause for criticism, for example as to the subjectivity of psychopathological assessment or the political and ideological conditions under which data were gathered. Nevertheless the "Höchstbegabtenstudie" has to be seen as the most extensive as well as the last scientific piece of research on the "Genialenproblem" conducted in the 20th century, and the material gathered is an important cultural-historical source independent of its originally intended use. As one of its most important results, Juda was able to demonstrate a significant relation between mental illness and giftedness. For various reasons this result was not published until after Juda's death, by Bruno Schulz, one of her former colleagues at the genealogical-demographic department, in 1953 and 1955. Not least because they came from Rüdin's former institute, these publications attracted little notice.
Mantokoudis, Georgios; Koller, Roger; Guignard, Jérémie; Caversaccio, Marco; Kompis, Martin; Senn, Pascal
2017-04-24
Telecommunication is limited or even impossible for more than one-third of all cochlear implant (CI) users. We therefore sought to study the impact of voice quality on speech perception with voice over Internet protocol (VoIP) under real and adverse network conditions. Telephone speech perception was assessed in 19 CI users (15-69 years, average 42 years), using the German HSM (Hochmair-Schulz-Moser) sentence test, comparing Skype and conventional telephone (public switched telephone network, PSTN) transmission using a personal computer (PC) and a digital enhanced cordless telecommunications (DECT) telephone dual device. Five different Internet transmission quality modes and four accessories (PC speakers, headphones, 3.5 mm jack audio cable, and induction loop) were compared. As a secondary outcome, the subjectively perceived voice quality was assessed using the mean opinion score (MOS). Telephone speech perception was significantly better (median 91.6%, P<.001) with Skype than with PSTN (median 42.5%) under optimal conditions. Skype calls under adverse network conditions (data packet loss > 15%) were not superior to conventional telephony. In addition, there were no significant differences between the tested accessories (P>.05) using a PC. Coupling a Skype DECT phone device with an audio cable to the CI, however, resulted in higher speech perception (median 65%) and subjective MOS scores (3.2) than using PSTN (median 7.5%, P<.001). Skype calls significantly improve speech perception for CI users compared with conventional telephony under real network conditions. Listening accessories do not further improve the listening experience. Current Skype DECT telephone devices do not fully offer technical advantages in voice quality. ©Georgios Mantokoudis, Roger Koller, Jérémie Guignard, Marco Caversaccio, Martin Kompis, Pascal Senn. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 24.04.2017.
Formation of Low Symmetry Ordered Phases in Block Polymer Melts
NASA Astrophysics Data System (ADS)
Bates, Frank
Until recently the phase behavior of asymmetric AB diblock copolymers in the melt state was universally accepted as a solved problem: spherical domains packed on a body-centered cubic (BCC) lattice. Recent experiments with low molecular weight diblocks have upended this picture, beginning with the discovery of the Frank-Kasper sigma phase in poly(isoprene)-b-poly(lactide) (PI-PLA), followed more recently by the identification of a dodecagonal quasicrystal phase (DDQC) as a metastable state that evolves from the supercooled disordered liquid. Self-consistent mean-field theory shows that introducing conformational asymmetry (bA > bB, where b is the statistical segment length) opens a window in the phase portrait at fA << 1/2 that supports the formation of various low symmetry ordered phases. However, contrary to the widely accepted mean-field picture, the disordered state near the order-disorder transition (ODT) is highly structured, and rapid cooling of this micellar fluid several tens of degrees below the ODT temperature arrests macromolecular chain exchange, transitioning the material from an ergodic to a non-ergodic state. We have explored the evolution of order following such temperature quenches and during subsequent reheating using synchrotron small-angle X-ray scattering (SAXS), revealing surprising analogies with the behavior of metal alloys. This presentation will associate the formation of ordered low symmetry phases with the concept of sphericity, the tendency of the self-assembled nanoparticles to be spherical, in competition with the constraints imposed by periodic and aperiodic packing without voids and subject to the condition of incompressibility. Supported by NSF-DMR-1104368. This work was conducted in collaboration with Kyungtae Kim, Morgan Schulze, Akash Arora, Ronald Lewis, Timothy Gillard, Sangwoo Lee, Kevin Dorfman and Marc Hillmyer.
Role of length polydispersity in the phase behavior of freely rotating hard-rectangle fluids
NASA Astrophysics Data System (ADS)
Díaz-De Armas, Ariel; Martínez-Ratón, Yuri
2017-05-01
We use the density-functional formalism, in particular scaled-particle theory, applied to a length-polydisperse hard-rectangle fluid to study its phase behavior as a function of the mean particle aspect ratio κ0 and polydispersity Δ0. The numerical solutions of the coexistence equations are calculated by transforming the original problem, with infinite degrees of freedom, into a finite set of equations for the amplitudes of the Fourier expansion of the moments of the density profiles. We divide the study into two parts. The first is devoted to the calculation of the phase diagrams in the packing fraction η0-κ0 plane for fixed Δ0, selecting parent distribution functions with exponential (the Schulz distribution) or Gaussian decays. In the second part we study the phase behavior in the η0-Δ0 plane for fixed κ0 as Δ0 is changed. We characterize in detail the orientational ordering of particles and the fractionation of different species between the coexisting phases. We also study the character (second- vs. first-order) of the isotropic-nematic phase transition as a function of polydispersity. We particularly focus on the stability of the tetratic phase as a function of κ0 and Δ0. The isotropic-nematic transition becomes strongly first order when polydispersity is increased: the coexistence gap widens and the location of the tricritical point moves to higher values of κ0, while the tetratic phase is slightly destabilized with respect to the nematic one. The results obtained here can be tested in experiments on shaken monolayers of granular rods.
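As an illustration of the parent distribution with exponential decay mentioned above, the following sketch evaluates a standard Schulz (Schulz-Zimm) length distribution. The functional form and the relation Δ0 = 1/√(ν+1) between polydispersity and the shape parameter ν are assumptions based on the common definition of that distribution, not expressions taken from this paper.

```python
import numpy as np
from math import gamma

def schulz_pdf(l, l0, nu):
    """Schulz (Schulz-Zimm) length distribution with mean l0 and
    shape parameter nu; polydispersity Delta = 1/sqrt(nu + 1)."""
    a = nu + 1.0
    return (a / l0) ** a * l ** nu * np.exp(-a * l / l0) / gamma(a)

# Check the mean and polydispersity numerically for l0 = 1, Delta0 = 0.3
delta0 = 0.3
nu = 1.0 / delta0 ** 2 - 1.0          # shape parameter for that Delta0
l = np.linspace(1e-6, 10.0, 200000)
dl = l[1] - l[0]
p = schulz_pdf(l, 1.0, nu)
mean = np.sum(l * p) * dl             # should recover l0 = 1
var = np.sum((l - mean) ** 2 * p) * dl
```

The recovered standard deviation divided by the mean reproduces the chosen Δ0, which is how a single shape parameter controls the polydispersity of the parent distribution.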
NASA Astrophysics Data System (ADS)
Yebra, Marta; van Dijk, Albert
2015-04-01
Water use efficiency (WUE, the amount of gross (GPP) or net CO2 uptake per unit transpiration or evapotranspiration) is key in all areas of plant production and forest management applications. Mutually consistent estimates of GPP and transpiration are therefore needed to analyse WUE without introducing artefacts that might arise from combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at the ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al., 1995) or inferred from ecosystem-level measurements of gas exchange (Baldocchi, 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method for estimating Gc from MODIS reflectance observations (Yebra et al., 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R2 = 0.82, RMSE = 29.8 W m^-2). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al., in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and a radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata.
The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically active radiation (PAR), calculated as the product of the fraction of absorbed PAR (fPAR) and the PAR flux. The proposed algorithm performs well when evaluated against flux tower GPP (R2 = 0.79, RMSE = 1.93 µmol m^-2 s^-1). Here we use the GPP and T estimates previously derived at the same 16 Fluxnet sites to analyse WUE. Satellite-derived WUE explained variation in (long-term average) WUE among plant functional types, but evergreen needleleaf forests had higher WUE than predicted. The benefit of our approach is that it uses mutually consistent estimates of GPP and T to derive canopy-level WUE without any land cover classification artefacts. References: Baldocchi, D. (2008). Turner Review No. 15: 'Breathing' of the terrestrial biosphere: lessons learned from a global network of carbon dioxide flux measurement systems. Australian Journal of Botany, 56, 1-26. Kelliher, F.M., Leuning, R., Raupach, M.R., & Schulze, E.D. (1995). Maximum conductances for evaporation from global vegetation types. Agricultural and Forest Meteorology, 73, 1-16. Yebra, M., Van Dijk, A., Leuning, R., Huete, A., & Guerschman, J.P. (2013). Evaluation of optical remote sensing to estimate actual evapotranspiration and canopy conductance. Remote Sensing of Environment, 129, 250-261.
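The coupling of canopy conductance to transpiration through the Penman-Monteith combination equation, as used in the abstract above, can be sketched as follows. All numerical inputs are illustrative round values, not parameters from Yebra et al. (2013).

```python
def penman_monteith(delta, gamma, a_energy, rho, cp, vpd, ga, gc):
    """Penman-Monteith latent heat flux (W m-2).

    delta    : slope of the saturation vapour pressure curve (kPa K-1)
    gamma    : psychrometric constant (kPa K-1)
    a_energy : available energy (W m-2)
    rho, cp  : air density (kg m-3) and specific heat of air (J kg-1 K-1)
    vpd      : vapour pressure deficit (kPa)
    ga, gc   : aerodynamic and canopy conductance (m s-1)
    """
    num = delta * a_energy + rho * cp * vpd * ga
    den = delta + gamma * (1.0 + ga / gc)
    return num / den

# Illustrative mid-day conditions (assumed values)
lam_e = penman_monteith(delta=0.145, gamma=0.066, a_energy=400.0,
                        rho=1.2, cp=1010.0, vpd=1.0, ga=0.05, gc=0.01)
```

With these inputs the latent heat flux comes out at roughly 220 W m^-2, a plausible mid-day value; halving gc noticeably reduces the flux, which is the sensitivity the satellite-derived Gc exploits.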
Rothenfluh, Fabia; Schulz, Peter J
2017-05-01
Physician rating websites (PRWs) offer health care consumers the opportunity to evaluate their doctor anonymously. However, physicians' professional training and experience create a vast knowledge gap in medical matters between physicians and patients. This raises ethical concerns about the relevance and significance of health care consumers' evaluation of physicians' performance. To identify the aspects physician rating websites should offer for evaluation, this study investigated the aspects of physicians and their practice relevant for identifying a good doctor, and whether health care consumers are capable of evaluating these aspects. In a first step, a Delphi study with physicians from 4 specializations was conducted, testing various indicators to identify a good physician. These indicators were theoretically derived from Donabedian, who classifies quality in health care into pillars of structure, process, and outcome. In a second step, a cross-sectional survey with health care consumers in Switzerland (N=211) was launched based on the indicators developed in the Delphi study. Participants were asked to rate the importance of these indicators to identify a good physician and whether they would feel capable to evaluate those aspects after the first visit to a physician. All indicators were ordered into a 4×4 grid based on evaluation and importance, as judged by the physicians and health care consumers. Agreement between the physicians and health care consumers was calculated applying Holsti's method. In the majority of aspects, physicians and health care consumers agreed on what facets of care were important and not important to identify a good physician and whether patients were able to evaluate them, yielding a level of agreement of 74.3%. The two parties agreed that the infrastructure, staff, organization, and interpersonal skills are both important for a good physician and can be evaluated by health care consumers. 
Technical skills of a doctor and outcomes of care were also judged to be very important, but both parties agreed that these would not be evaluable by health care consumers. Health care consumers in Switzerland show a high appraisal of the importance of physician-approved criteria for assessing health care performance and a moderate self-perception of how capable they are of assessing the quality and performance of a physician. This study supports the view that health care consumers differentiate between aspects they perceive they would be able to evaluate after a visit to a physician (such as attributes of structure and the interpersonal skills of a doctor) and others that lie beyond their ability to judge accurately (such as the technical skills of a physician and outcomes of care). ©Fabia Rothenfluh, Peter J Schulz. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.05.2017.
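Holsti's method, used above to quantify physician-consumer agreement, reduces to CR = 2M / (N1 + N2), where M is the number of coding decisions on which the two sides agree. A minimal sketch, with grid-cell labels invented purely for illustration:

```python
def holsti_agreement(coder1, coder2):
    """Holsti's coefficient of reliability: CR = 2M / (N1 + N2),
    where M counts the decisions on which the two coders agree."""
    m = sum(1 for a, b in zip(coder1, coder2) if a == b)
    return 2 * m / (len(coder1) + len(coder2))

# Hypothetical importance/evaluability assignments for four indicators
physicians = ["important/evaluable", "important/not", "not/evaluable", "not/not"]
consumers  = ["important/evaluable", "important/not", "important/evaluable", "not/not"]
print(holsti_agreement(physicians, consumers))  # → 0.75
```

With equally long coding lists this is simply the fraction of matching decisions, which is how a figure such as the 74.3% reported above is obtained.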
Effects of eHealth Literacy on General Practitioner Consultations: A Mediation Analysis.
Schulz, Peter Johannes; Fitzpatrick, Mary Anne; Hess, Alexandra; Sudbury-Riley, Lynn; Hartung, Uwe
2017-05-16
Most, though not all, evidence points in the direction that individuals with a higher level of health literacy utilize the health care system less frequently than individuals with lower levels of health literacy. The underlying reasons for this effect are largely unclear, though people's ability to seek health information independently, at a time of wide availability of such information on the Internet, has been cited in this context. We propose and test two potential mediators of the negative effect of eHealth literacy on health care utilization: (1) health information seeking and (2) gain in empowerment through information seeking. Data were collected in New Zealand, the United Kingdom, and the United States using a Web-based survey administered by a company specializing in online panels. Combined, the three samples comprised 996 baby boomers born between 1946 and 1965 who had used the Internet to search for and share health information in the previous 6 months. Measured variables include eHealth literacy, Internet health information seeking, the self-perceived gain in empowerment from that information, and the number of consultations with one's general practitioner (GP). Path analysis was employed for data analysis. We found a bundle of indirect effect paths showing a positive relationship between health literacy and health care utilization: via health information seeking (Path 1), via gain in empowerment (Path 2), and via both (Path 3). With the emergence of these indirect effects, the direct effect of health literacy on health care utilization disappeared. The indirect paths from health literacy via information seeking and empowerment to GP consultations can be interpreted as a dynamic process and an expression of the ability to find, process, and understand relevant information when that is necessary. ©Peter Johannes Schulz, Mary Anne Fitzpatrick, Alexandra Hess, Lynn Sudbury-Riley, Uwe Hartung.
Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.05.2017.
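The product-of-coefficients logic behind such a path analysis (indirect effect = a × b, with the direct effect estimated while controlling for the mediator) can be sketched on simulated data. The variable names, effect sizes, and seed below are invented for illustration, not estimates from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 996  # sample size matching the pooled survey

# Simulated standardized variables: eHealth literacy X,
# information seeking M (mediator), GP consultations Y
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # true path a = 0.5
y = 0.4 * m + 0.0 * x + rng.normal(size=n)  # true path b = 0.4, direct c' = 0

def coefs(predictors, response):
    """OLS coefficients (with intercept) via least squares."""
    X = np.column_stack([np.ones_like(response)] + predictors)
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    return beta[1:]

a = coefs([x], m)[0]            # X -> M
b = coefs([x, m], y)[1]         # M -> Y, controlling for X
direct = coefs([x, m], y)[0]    # residual direct effect c'
indirect = a * b                # mediated (indirect) effect
```

When the true direct effect is zero, the estimated `direct` shrinks toward zero once the mediator enters the model, mirroring the abstract's finding that the direct effect "disappeared" alongside the indirect paths.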
NASA Astrophysics Data System (ADS)
Khan, A.
2016-12-01
Pitch Lake is located on the southwest peninsula of the island near La Brea in Trinidad and Tobago, covering an area of approximately 46 hectares. Discovered in 1595, it is the largest of three natural asphalt lakes that exist on Earth. Pitch Lake is a large, oval-shaped reservoir composed dominantly of hydrocarbon compounds, but it also includes minor amounts of clay and muddy water. It is a natural liquid asphalt desert, nourished by a form of petroleum consisting mostly of asphaltenes from the surrounding oil-rich region. The hydrocarbons mix with mud and gases under high pressure during upward seepage, and the lighter portion evaporates or is volatilized, producing a high-viscosity liquid asphalt residue. The residue on and near the surface is a hydrocarbon matrix that poses extremely challenging environmental conditions to microorganisms, characterized by a low average water activity in the range of 0.49 to 0.75, recalcitrant carbon substrates, and toxic chemical compounds. Nevertheless, an active microbial community of archaea and bacteria, many of them novel strains, was found to inhabit the liquid hydrocarbon matrix of Pitch Lake. Geochemical analyses of minerals by our team revealed sulfates, sulfides, silicates, and metals normally associated with deep-water hydrothermal vents, leading to a new hypothetical model for the origins of Pitch Lake and its importance to atmospheric and earth sciences. Pitch Lake is likely the terrestrial equivalent of an offshore submarine asphalt volcano, just as the La Brea Tar Pits are in some ways an on-land version of the asphalt volcanoes discovered offshore of Santa Barbara by Valentine et al. in 2010. Asphalt volcanism possibly also creates the habitat for chemosynthetic life that is widespread in this lake, as reported by Schulze-Makuch et al. in 2011 and Meckenstock et al. in 2014.
Nyman, Samuel R; Szymczynska, Paulina
2016-03-01
Dementia is being increasingly recognised as a major public health issue for our ageing populations. A critical aspect of supporting people with dementia is facilitating their participation in meaningful activities. However, research to date has not drawn on theories of ageing from developmental psychology that would help undergird the importance of such meaningful activity. For the first time, we connect existing activity provision for people with dementia with developmental psychology theories of ageing. We reviewed the literature in two stages: first, we narratively searched the literature to demonstrate the relevance of psychological theories of ageing for provision of meaningful activities for people with dementia, and in particular focused on stage-based theories of adult development (Carl Jung and Erik Erikson), gerotranscendence (Tornstam), selective optimisation with compensation (Baltes and Baltes), and optimisation in primary and secondary control (Heckhausen and Schulz). Second, we systematically searched PubMed and PsycINFO for studies with people with dementia that made use of the aforementioned theories. The narrative review highlights that activity provision for people with dementia goes beyond mere pleasure to meeting fundamental psychological needs. More specifically, that life review therapy and life story work address the need for life review; spiritual/religious activities address the need for death preparation; intergenerational activities address the need for intergenerational relationships; re-acquaintance with previously conducted leisure activities addresses the need for a sense of control and to achieve life goals; and pursuit of new leisure activities addresses the need to be creative. The systematic searches identified two studies that demonstrated the utility of applying Erikson's theory of psychosocial development to dementia care. 
We argue for the importance of activity provision for people with dementia to help promote wellbeing among an increasing proportion of older people. © Royal Society for Public Health 2016.
Radiative Forcing of the Direct Aerosol Effect from AeroCom Phase II Simulations
NASA Technical Reports Server (NTRS)
Myhre, G.; Samset, B. H.; Schulz, M.; Balkanski, Y.; Bauer, S.; Berntsen, T. K.; Bian, H.; Bellouin, N.; Chin, M.; Diehl, T.;
2013-01-01
We report on the AeroCom Phase II direct aerosol effect (DAE) experiment, in which 16 detailed global aerosol models were used to simulate the changes in the aerosol distribution over the industrial era. All 16 models estimated the radiative forcing (RF) of the anthropogenic DAE, taking into account anthropogenic sulphate, black carbon (BC) and organic aerosols (OA) from fossil fuel, biofuel, and biomass burning emissions. In addition, several models simulated the DAE of anthropogenic nitrate and anthropogenically influenced secondary organic aerosols (SOA). The model-simulated all-sky RF of the DAE from total anthropogenic aerosols ranges from -0.58 to -0.02 W m^-2, with a mean of -0.27 W m^-2 for the 16 models. Several models did not include nitrate or SOA, and accounting for these missing components slightly strengthens the mean. Modifying the model estimates for missing aerosol components and for the time period 1750 to 2010 results in a mean RF for the DAE of -0.35 W m^-2. Compared to AeroCom Phase I (Schulz et al., 2006) we find very similar spreads in both total DAE and aerosol component RF. However, the RF of the total DAE is more strongly negative, and the RF from BC from fossil fuel and biofuel emissions is more strongly positive, in the present study than in the previous AeroCom study. We find a tendency for models having a strong (positive) BC RF to also have a strong (negative) sulphate or OA RF. This relationship leads to smaller uncertainty in the total RF of the DAE compared to the RF of the sum of the individual aerosol components. The spread in results for the individual aerosol components is substantial, and can be divided into diversities in burden, mass extinction coefficient (MEC), and normalized RF with respect to AOD. We find that these three factors give similar contributions to the spread in results.
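The three-factor attribution named at the end of the abstract follows from the identity RF = burden × MEC × NRF, since AOD = burden × MEC and NRF is the forcing per unit AOD. The numbers below are invented round values for illustration, not AeroCom results.

```python
# Hypothetical values for one aerosol component (not AeroCom output):
burden = 3.0e-6   # column burden, kg m-2 (i.e. 3 mg m-2)
mec = 8.0         # mass extinction coefficient, m2 g-1
nrf = -25.0       # normalized RF per unit AOD, W m-2

aod = burden * 1.0e3 * mec   # dimensionless optical depth (burden in g m-2)
rf = aod * nrf               # component direct RF, W m-2
print(round(aod, 3), round(rf, 2))  # → 0.024 -0.6
```

Because the three factors multiply, a fractional spread in any one of them maps directly onto the same fractional spread in the component RF, which is why the diversities in burden, MEC, and NRF can be compared on an equal footing.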
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eshuis, J.J.W.; Tan, Y.Y.; Meetsma, A.
1992-01-01
In N,N-dimethylaniline the ionic complexes [Cp*₂MMe(THT)]⁺[BPh₄]⁻ (M = Zr, Hf) oligomerize propene to low molecular weight oligomers. At room temperature for M = Zr a rather broad molecular weight distribution is obtained (C₆ to C₂₄), whereas for M = Hf only one dimer (4-methyl-1-pentene) and one trimer (4,6-dimethyl-1-heptene) are formed. With an increase in temperature the product composition shifts to lower molecular weights, but the specific formation of head-to-tail oligomers is retained. The oligomers are formed by β-Me transfer from the growing oligopropene alkyl chain to the metal center. The molecular weight distributions of the oligomers produced at temperatures between 5 and 45 °C are satisfactorily described by Flory-Schulz theory. This allows the calculation of ratios of the rate coefficients for propagation (kp) and termination (kt). Inactivation of the catalysts is caused by two different mechanisms. At room temperature, allylic C-H activation of monomer and isobutene (formed by a minor β-H transfer termination) gives inactive (meth)allyl compounds, [Cp*₂M(η³-C₃H₅)]⁺ and [Cp*₂M(η³-C₄H₇)]⁺ (M = Zr, Hf). At elevated temperatures (>45 °C) catalytically inactive zwitterionic complexes Cp*₂M⁺-m-C₆H₄-BPh₃⁻ (M = Zr, Hf) are formed through aromatic C-H activation. Reactivation of the inactive (meth)allyl complexes can be achieved by addition of hydrogen to the oligomerization mixtures. 38 refs., 4 figs., 7 tabs.
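The extraction of kp/kt from Flory-Schulz statistics can be illustrated as follows: for a most-probable distribution the n-mer mole fraction obeys x_n ∝ p^(n-1), where p = kp/(kp + kt) is the propagation probability under the simplest scheme in which each termination event ends one chain, so ln x_n is linear in n. The input mole fractions below are synthetic, not the paper's data.

```python
import numpy as np

# Hypothetical oligomer window, n = 2..8 propene units (roughly C6..C24)
n = np.arange(2, 9)
p_true = 0.6                       # assumed propagation probability
x = (1 - p_true) * p_true ** (n - 1)
x /= x.sum()                       # renormalize over the observed window

# ln x_n is linear in n with slope ln p (Flory-Schulz statistics);
# renormalization only shifts the intercept, not the slope.
slope, intercept = np.polyfit(n, np.log(x), 1)
p = np.exp(slope)
kp_over_kt = p / (1 - p)           # ratio of propagation to termination rates
```

For p = 0.6 the fitted ratio is kp/kt = 1.5; in practice the same linear fit applied to measured oligomer mole fractions at each temperature yields the temperature dependence of the ratio.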
Warner, David F.; Brown, Tyson H.
2011-01-01
A number of studies have demonstrated wide disparities in health among racial/ethnic groups and by gender, yet few have examined how race/ethnicity and gender intersect or combine to affect the health of older adults. The tendency of prior research to treat race/ethnicity and gender separately has potentially obscured important differences in how health is produced and maintained, undermining efforts to eliminate health disparities. The current study extends previous research by taking an intersectionality approach (Mullings & Schulz, 2006), grounded in life course theory, conceptualizing and modeling trajectories of functional limitations as dynamic life course processes that are jointly and simultaneously defined by race/ethnicity and gender. Data from the nationally representative 1994–2006 US Health and Retirement Study and growth curve models are utilized to examine racial/ethnic/gender differences in intra-individual change in functional limitations among White, Black and Mexican American Men and Women, and the extent to which differences in life course capital account for group disparities in initial health status and rates of change with age. Results support an intersectionality approach, with all demographic groups exhibiting worse functional limitation trajectories than White Men. Whereas White Men had the lowest disability levels at baseline, White Women and racial/ethnic minority Men had intermediate disability levels and Black and Hispanic Women had the highest disability levels. These health disparities remained stable with age—except among Black Women who experience a trajectory of accelerated disablement. Dissimilar early life social origins, adult socioeconomic status, marital status, and health behaviors explain the racial/ethnic disparities in functional limitations among Men but only partially explain the disparities among Women. 
Net of controls for life course capital, Women of all racial/ethnic groups have higher levels of functional limitations relative to White Men and Men of the same race/ethnicity. Findings highlight the utility of an intersectionality approach to understanding health disparities. PMID:21470737
Host Model Uncertainty in Aerosol Radiative Forcing Estimates - The AeroCom Prescribed Experiment
NASA Astrophysics Data System (ADS)
Stier, P.; Kinne, S.; Bellouin, N.; Myhre, G.; Takemura, T.; Yu, H.; Randles, C.; Chung, C. E.
2012-04-01
Anthropogenic and natural aerosol radiative effects are recognized to affect global and regional climate. However, even for the case of identical aerosol emissions, the simulated direct aerosol radiative forcings show significant diversity among the AeroCom models (Schulz et al., 2006). Our analysis of aerosol absorption in the AeroCom models indicates a larger diversity in the translation from given aerosol radiative properties (absorption optical depth) to actual atmospheric absorption than in the translation of a given atmospheric burden of black carbon to those radiative properties. The large diversity is caused by differences in the simulated cloud fields, radiative transfer, the relative vertical distribution of aerosols and clouds, and the effective surface albedo. This indicates that differences in host model (the GCM or CTM hosting the aerosol module) parameterizations contribute significantly to the simulated diversity of aerosol radiative forcing. The magnitude of these host model effects in global aerosol model and satellite-retrieved aerosol radiative forcing estimates cannot be estimated from the diagnostics of the "standard" AeroCom forcing experiments. To quantify the contribution of differences in the host models to the simulated aerosol radiative forcing and absorption, we conduct the AeroCom Prescribed experiment, a simple aerosol model and satellite retrieval intercomparison with prescribed, highly idealised aerosol fields. Quality checks, such as diagnostic output of the 3D aerosol fields as implemented in each model, ensure the comparability of the aerosol implementation in the participating models. The simulated forcing variability among the models and retrievals is a direct measure of the contribution of host model assumptions to the uncertainty in the assessment of aerosol radiative effects.
We will present the results from the AeroCom Prescribed experiment, with a focus on the attribution of the simulated variability to parametric and structural model uncertainties. This work will help to prioritise areas for future model improvements and ultimately lead to uncertainty reduction.
Li, Shuocong; Liu, Hong; Gao, Rui; Abdurahman, Abliz; Dai, Juan; Zeng, Feng
2018-06-01
Microplastics are emerging contaminants of concern in aquatic environments. The aggregation behavior of microplastics, which governs their fate and ecological risks in aquatic environments, is in need of evaluation. In this study, the aggregation behavior of polystyrene microspheres (micro-PS) in aquatic environments was systematically investigated over a range of monovalent and divalent electrolytes, with and without natural organic matter (i.e., Suwannee River humic acid (HA)), at pH 6.0. The zeta potentials and hydrodynamic diameters of micro-PS were measured, and the subsequent aggregation kinetics and attachment efficiencies (α) were calculated. The aggregation kinetics of micro-PS exhibited reaction- and diffusion-limited regimes in the presence of monovalent or divalent electrolytes with distinct critical coagulation concentration (CCC) values, following the Derjaguin-Landau-Verwey-Overbeek (DLVO) theory. The CCC values of micro-PS were 14.9, 13.7, 14.8, 2.95, and 3.20 mM for NaCl, NaNO3, KNO3, CaCl2, and BaCl2, respectively. As expected, the divalent electrolytes (i.e., CaCl2 and BaCl2) had a stronger influence on the aggregation behavior of micro-PS than the monovalent electrolytes (i.e., NaCl, NaNO3, and KNO3). HA enhanced micro-PS stability and shifted the CCC values to higher electrolyte concentrations for all types of electrolytes. The CCC values of micro-PS were lower than the CCC values reported for carbonaceous nanoparticles. The CCC[Ca2+]/CCC[Na+] ratios in the absence and presence of HA at pH 6.0 were proportional to z^-2.34 and z^-2.30, respectively. These ratios are in accordance with the theoretical Schulze-Hardy rule, which holds that the CCC is proportional to z^-6 to z^-2. These results indicate the stability of micro-PS in the natural aquatic environment and the possibility of significant aqueous transport of micro-PS. Copyright © 2018 Elsevier Ltd. All rights reserved.
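The reported valence dependence can be checked directly from the CCC values above: fitting CCC ∝ z^-x to a monovalent/divalent pair gives x = -ln(CCC_di/CCC_mono)/ln 2. A minimal sketch of that arithmetic:

```python
import math

# Critical coagulation concentrations (mM) reported in the abstract above.
ccc = {"NaCl": 14.9, "NaNO3": 13.7, "KNO3": 14.8, "CaCl2": 2.95, "BaCl2": 3.20}

def schulze_hardy_exponent(ccc_mono, ccc_di):
    """Fit CCC ∝ z^-x from a monovalent (z=1) / divalent (z=2) pair:
    ccc_di / ccc_mono = 2^-x  =>  x = -ln(ratio) / ln 2."""
    return -math.log(ccc_di / ccc_mono) / math.log(2)

x = schulze_hardy_exponent(ccc["NaCl"], ccc["CaCl2"])
print(f"fitted exponent: z^-{x:.2f}")  # ≈ 2.34, matching the reported value
```

The result falls between the z^-2 (low surface potential) and z^-6 (high surface potential) limits of the Schulze-Hardy rule, consistent with the abstract's interpretation.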
NASA Astrophysics Data System (ADS)
Babuska, Vladislav; Plomerova, Jaroslava; Vecsey, Ludek; Munzarova, Helena
2016-04-01
Subduction and orogenesis require a strong mantle layer (Burov, Tectonophys. 2010) and our findings confirm the leading role of the mantle lithosphere. We have examined seismic anisotropy of Archean, Proterozoic and Phanerozoic provinces of Europe by means of shear-wave splitting and P-wave travel-time deviations of teleseismic waves observed at dense arrays of seismic stations (e.g., Vecsey et al., Tectonophys. 2007). Lateral variations of seismic-velocity anisotropy delimit domains of the mantle lithosphere, each of them having its own consistent fabric. The domains, modeled in 3D by olivine aggregates with dipping lineation a, or foliation (a,c), represent microplates or their fragments that preserved their pre-assembly fossil fabrics. Evaluating seismic anisotropy in 3D, as well as mapping boundaries of the domains helps to decipher processes of the lithosphere formation. Systematically dipping mantle fabrics and other seismological findings seem to support a model of continental lithosphere built from systems of paleosubductions of plates of ancient oceanic lithosphere (Babuska and Plomerova, AGU Geoph. Monograph 1989), or from stacking of the plates (Helmstaedt and Schulze, Geol. Soc. Spec. Publ. 1989). Seismic anisotropy in the oceanic mantle lithosphere, explained mainly by the olivine A- or D-type fabric (Karato et al., Annu. Rev. Earth Planet. Sci. 2008), was discovered a half century ago (Hess, Nature 1964). Field observations and laboratory experiments indicate the oceanic olivine fabric might be preserved in the subducting lithosphere to a depth of at least 200-300 km. We thus interpret the dipping anisotropic fabrics in domains of the European mantle lithosphere as systems of "frozen" paleosubductions (Babuska and Plomerova, PEPI 2006) and the lithosphere base as a boundary between the fossil anisotropy in the lithospheric mantle and an underlying seismic anisotropy related to present-day flow in the asthenosphere (Plomerova and Babuska, Lithos 2010).
Performance Analysis of and Tool Support for Transactional Memory on BG/Q
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schindewolf, M
2011-12-08
Martin Schindewolf worked during his internship at the Lawrence Livermore National Laboratory (LLNL) under the guidance of Martin Schulz in the Computer Science Group of the Center for Applied Scientific Computing. We studied the performance of the transactional memory (TM) subsystem of BG/Q and researched the possibilities for tool support for TM. To study the performance, we ran CLOMP-TM, a benchmark designed to quantify the overhead of OpenMP and compare different synchronization primitives. To advance CLOMP-TM, we added Message Passing Interface (MPI) routines for a hybrid parallelization. This makes it possible to run multiple MPI tasks, each running OpenMP, on one node. With these enhancements, a beneficial MPI task to OpenMP thread ratio is determined. Further, the synchronization primitives are ranked as a function of the application characteristics. To demonstrate the usefulness of these results, we investigated a real Monte Carlo simulation called the Monte Carlo Benchmark (MCB). Applying the lessons learned yields the best task to thread ratio. Further, we were able to tune the synchronization by transactifying the MCB. We also developed tools that capture the performance of the TM run time system and present it to the application's developer. The performance of the TM run time system relies on the built-in statistics. These tools use the Blue Gene Performance Monitoring (BGPM) interface to correlate the statistics from the TM run time system with performance counter values. This combination provides detailed insight into the run time behavior of the application and makes it possible to track down the cause of degraded performance. In addition, one tool has been implemented that separates the performance counters into three categories: Successful Speculation, Unsuccessful Speculation, and No Speculation. All of the tools are crafted around IBM's xlc compiler for C and C++ and have been run and tested on a Q32 early access system.
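Determining a beneficial MPI task to OpenMP thread ratio amounts to scanning the factorizations of a node's hardware-thread budget (on BG/Q, 16 application cores with 4-way SMT give 64 hardware threads per node). A generic sketch of that scan, not the CLOMP-TM harness itself:

```python
def hybrid_configs(hw_threads=64):
    """Enumerate (MPI tasks, OpenMP threads per task) pairs that
    exactly fill one node's hardware-thread budget."""
    return [(tasks, hw_threads // tasks)
            for tasks in range(1, hw_threads + 1)
            if hw_threads % tasks == 0]

# Candidate launch configurations for one BG/Q node (64 hardware threads):
for tasks, threads in hybrid_configs():
    print(f"{tasks:2d} MPI tasks x {threads:2d} OpenMP threads")
```

In practice each candidate would be benchmarked (e.g., one CLOMP-TM run per configuration) and the ratio with the best throughput selected.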
NASA Astrophysics Data System (ADS)
Dethlefsen, Frank; Peter, Anita; Hornbruch, Götz; Lamert, Hendrik; Garbe-Schönberg, Dieter; Beyer, Matthias; Dietrich, Peter; Dahmke, Andreas
2014-05-01
The accidental release of CO2 into potable aquifers, for instance as a consequence of leakage from a CO2 storage site, can endanger drinking water resources through the induced geochemical processes. A 10-day CO2 injection experiment into a shallow aquifer was carried out in Wittstock (Northeast Germany) in order to investigate the geochemical impact of a CO2 influx into such an aquifer and to test different monitoring methods. Information regarding the site investigation, the injection procedure, the monitoring setup, and first geochemical monitoring results is given in [1]. Apart from the use of the test results to evaluate monitoring approaches [2], further findings are presented on the evaluation of the geophysical monitoring [3] and the monitoring of stable carbon isotopes [4]. This part of the study focuses on the hydrogeochemical alteration of groundwater due to the CO2 injection test. As a consequence of the CO2 injection, major cations were released, i.e. their concentrations increased, whereas major anion concentrations - apart from bicarbonate - decreased, probably due to increased anion sorption capacity at variably charged exchange sites of minerals. Trace element concentrations also increased significantly, and their relative increase was far larger than that of the major cations. Furthermore, the geochemical reactions show significant spatial heterogeneity: some elements such as Cr, Cu, and Pb either increased in concentration or remained at stable concentrations with increasing TIC at different wells. Statistical analyses of regression coefficients confirm the different spatial reaction patterns at the different wells. Concentration time series at single wells give evidence that the trace element release is pH dependent: trace elements such as Zn, Ni, and Co are released at pH of around 6.2-6.6, whereas other trace elements like As, Cd, and Cu are released at pH of 5.6-6.4.
[1] Peter, A., et al., Investigation of the geochemical impact of CO2 on shallow groundwater: design and implementation of a CO2 injection test in Northeast Germany. Environmental Earth Sciences, 2012. 67(2): p. 335-349. [2] Dethlefsen, F., et al., Monitoring approaches for detecting and evaluating CO2 and formation water leakages into near-surface aquifers. Energy Procedia, 2013. 37: p. 4886-4893. [3] Lamert, H., et al., Feasibility of geoelectrical monitoring and multiphase modeling for process understanding of gaseous CO2 injection into a shallow aquifer. Environmental Earth Sciences, 2012. 67(2): p. 447-462. [4] Schulz, A., et al., Monitoring of a simulated CO2 leakage in a shallow aquifer using stable carbon isotopes. Environmental Science & Technology, 2012. 46(20): p. 11243-11250.
NASA Astrophysics Data System (ADS)
Schulze, D. J.; Page, Z.; Harte, B.; Valley, J.; Channer, D.; Jaques, L.
2006-12-01
Using ion microprobes and secondary-ion mass spectrometry (SIMS) we have analyzed the carbon and oxygen isotopic composition of eclogite-suite diamonds and their coesite inclusions, respectively, from three suites of diamonds of Proterozoic age. Extremely high (for the mantle) oxygen isotope values (δ18O of +10.2 to +16.9 per mil VSMOW) are preserved in coesites included in eclogitic diamonds from Guaniamo, Venezuela (Schulze et al., Nature, 2003), providing compelling evidence for an origin of their eclogite hosts by subduction of seawater-altered ocean floor basalts. In situ SIMS analyses of their host diamonds yield carbon isotope values (δ13C) of -12 to -18 per mil PDB. SIMS analyses of coesite inclusions from Argyle, Australia diamonds previously analyzed by combustion methods for δ13C composition (Jaques et al., Proc. 4th Kimb. Conf., 1989) also yield anomalously high δ18O values (+6.8 to +16.0 per mil VSMOW) that correlate with the anomalously low carbon isotope values (-10.3 to -14.1 per mil PDB). One coesite-bearing diamond from Orapa, Botswana analyzed in situ by SIMS has a δ18O value of the coesite of +8.5 per mil VSMOW and a δ13C value of the adjacent diamond host of -9.0 per mil PDB. A second Orapa stone has a SIMS carbon isotope compositional range of δ13C = -14 to -16 per mil PDB, but the coesite is too small for ion probe analysis. At each of these localities, carbon isotope values of coesite-bearing diamonds that are lower than typical of mantle carbon are correlated with oxygen isotope compositions of included coesites that are substantially above the common mantle oxygen isotope range. Such results are not in accord with diamond genesis models involving formation of eclogitic diamonds from igneous melts undergoing fractionation in the mantle or by crystallization from primordial inhomogeneities in Earth's mantle.
By analogy with the oxygen isotope compositions of altered ocean floor basalts and Alpine (subduction zone) eclogites they are, however, consistent with a subduction origin for these eclogite assemblages from altered ocean floor basaltic protoliths, and thus the simplest explanation for the source of the low carbon isotope values of these diamonds is formation from biogenic carbon accumulated on or near the ocean floor and subducted to the depths of eclogite and diamond stability with the altered basalts. Significantly these results, which were not predicted from studies of diamond-bearing eclogites, apply to the mantle beneath three different continental crustal blocks of both Proterozoic (Guaniamo and Argyle) and Archean/Proterozoic (Orapa) age.
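The per mil values quoted in this record use the standard delta notation, δ = (R_sample/R_standard - 1) × 1000, with R the heavy/light isotope ratio referenced to VSMOW (oxygen) or PDB (carbon). A minimal sketch of that convention (the VSMOW 18O/16O reference ratio is the accepted literature value, not taken from this abstract):

```python
R_VSMOW = 2005.20e-6   # accepted 18O/16O ratio of the VSMOW standard

def delta_per_mil(r_sample, r_standard):
    """Standard isotope delta notation: per mil deviation from a reference ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# e.g. a coesite inclusion at delta18O = +16.9 per mil corresponds to the raw ratio:
r_sample = R_VSMOW * (1 + 16.9 / 1000)
print(f"delta18O = {delta_per_mil(r_sample, R_VSMOW):+.1f} per mil VSMOW")
```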
Tanta, Ivan; Lesinger, Gordana
2015-12-01
Politicians and their public relations advisors depend on the mass communication media to transmit messages daily and to communicate effectively. The development of the mass media, from traditional to new, has changed the working conditions of both professions, where one inevitably affects the other. Consequently, the way information is formatted in the news has changed, along with the way political developments are monitored and the public is informed about political activities. A major role in this process, over and above the political actors themselves, is played by public relations advisers, who choose the moments and events to publicise (PR-ization). With the increasing influence of public relations on media reports, politics also changes the picture of the media and affects media coverage: the manner in which the media report, which topics are discussed, and the tone the given information will have. We live in a world characterized by mediatization (Mazzoleni and Schulz, 1999) of politics and of society as a whole, because politics and public relations necessarily need the media to communicate with their audiences. In this regard, we can speak of the PR-ization of the media: public relations practitioners influence attitudes through the careful design of messages and events, and the three professions are so intertwined that one without the others makes little sense. This paper focuses on the influence of the media on politics and on the influence of the public relations profession on media content. It is prompted by the attention drawn by the daily public appearances of Prime Minister Zoran Milanović; as Lalić notes, few politics-related phenomena have over the past twenty years attracted so many reviews by experts and scholars as the Prime Minister's rhetoric.
This particular form of political communication is reviewed in this paper. Through interviews and content analysis of key moments and statements from the media, we shall try to determine how Zoran Milanović's communication has changed with the new public relations advisor, and whether that change has affected public attitudes toward Milanović's communication as seen through media-mediated reality.
Tonn, Peter; Reuter, Silja Christin; Kuchler, Isabelle; Reinke, Britta; Hinkelmann, Lena; Stöckigt, Saskia; Siemoneit, Hanna; Schulze, Nina
2017-10-03
In the field of psychiatry and psychotherapy, there is now a growing number of Web-based interventions, mobile phone apps, and treatments delivered remotely via screens worldwide. Many of these interventions have been shown to be effective in studies but still find little use in everyday therapeutic work. It is important, however, that attitudes and expectations toward this form of treatment be examined, because these factors have an important effect on the efficacy of the treatment. To measure the general attitude of users and prescribers toward telemedicine, which may include, for instance, Web-based interventions or interventions through mobile phone apps, only a small number of extensive instruments exist. Results of studies based on small groups of patients have been published as well, but there is no useful short screening tool to give an insight into the general population's attitude. We have developed a screening instrument that examines such attitudes through a few graded questions. This study aimed to explore the Attitude toward Telemedicine in Psychiatry and Psychotherapy (ATiPP) questionnaire and to evaluate the results for the general population and some subgroups. In a three-step process, the questionnaire, which is available in three versions (laypeople, physicians, and psychologists), was developed. Afterwards, it was evaluated by four groups: population-representative laypeople, outpatients in different faculties, physicians, and psychotherapists. The results were evaluated from a total of 1554 questionnaires. The sample population included 1000 laypeople, 455 outpatients, 62 physicians, and 37 psychotherapists. The reliability of all three versions of the questionnaire appeared good, as indicated by the Cronbach alpha values of .849 (the laypeople group), .80 (the outpatients' group), .827 (the physicians' group), and .855 (the psychotherapists' group).
The ATiPP was found to be useful and reliable for measuring the attitudes toward the Web-based interventions in psychiatry and psychotherapy and should be used in different studies in this field in the future to evaluate and reflect the attitude of the participants. ©Peter Tonn, Silja Christin Reuter, Isabelle Kuchler, Britta Reinke, Lena Hinkelmann, Saskia Stöckigt, Hanna Siemoneit, Nina Schulze. Originally published in JMIR Mental Health (http://mental.jmir.org), 03.10.2017.
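The reliability figures quoted above are Cronbach's alpha, α = k/(k-1) × (1 - Σ s²_item / s²_total) for k items. A minimal sketch of that computation on made-up item scores (the data below are invented for illustration; only the formula is standard):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists,
    one inner list per questionnaire item, aligned by respondent."""
    k = len(items)
    n = len(items[0])
    def var(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# hypothetical 3-item questionnaire answered by 5 respondents
scores = [[3, 4, 4, 2, 5],
          [3, 5, 4, 2, 4],
          [2, 4, 5, 1, 5]]
print(f"alpha = {cronbach_alpha(scores):.3f}")  # high, since the items co-vary
```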
Radiative forcing of the direct aerosol effect from AeroCom Phase II simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myhre, G.; Samset, B. H.; Schulz, M.
2013-01-01
We report on the AeroCom Phase II direct aerosol effect (DAE) experiment where 16 detailed global aerosol models have been used to simulate the changes in the aerosol distribution over the industrial era. All 16 models have estimated the radiative forcing (RF) of the anthropogenic DAE, and have taken into account anthropogenic sulphate, black carbon (BC) and organic aerosols (OA) from fossil fuel, biofuel, and biomass burning emissions. In addition several models have simulated the DAE of anthropogenic nitrate and anthropogenically influenced secondary organic aerosols (SOA). The model simulated all-sky RF of the DAE from total anthropogenic aerosols has a range from -0.58 to -0.02 W m-2, with a mean of -0.27 W m-2 for the 16 models. Several models did not include nitrate or SOA, and modifying the estimate by accounting for this with information from the other AeroCom models reduces the range and slightly strengthens the mean. Modifying the model estimates for missing aerosol components and for the time period 1750 to 2010 results in a mean RF for the DAE of -0.35 W m-2. Compared to AeroCom Phase I (Schulz et al., 2006) we find very similar spreads in both total DAE and aerosol component RF. However, the RF of the total DAE is more strongly negative, and the RF from BC from fossil fuel and biofuel emissions is more strongly positive, in the present study than in the previous AeroCom study. We find a tendency for models having a strong (positive) BC RF to also have a strong (negative) sulphate or OA RF. This relationship leads to smaller uncertainty in the total RF of the DAE compared to the RF of the sum of the individual aerosol components. The spread in results for the individual aerosol components is substantial, and can be divided into diversities in burden, mass extinction coefficient (MEC), and normalized RF with respect to AOD. We find that these three factors give similar contributions to the spread in results.
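The three-factor decomposition in the closing sentence can be written as RF ≈ burden × MEC × NRF, where burden × MEC gives the aerosol optical depth (AOD) and NRF is the forcing per unit AOD. A minimal sketch of this bookkeeping (all numbers below are invented for illustration, not AeroCom results):

```python
def direct_rf(burden_mg_m2, mec_m2_g, nrf_w_m2_per_aod):
    """Decompose direct aerosol RF as burden x MEC x normalized RF."""
    aod = burden_mg_m2 * 1e-3 * mec_m2_g   # burden (g m-2) x MEC (m2 g-1) -> AOD
    return aod * nrf_w_m2_per_aod          # AOD x (W m-2 per unit AOD) -> W m-2

# invented BC-like illustration: 0.2 mg m-2 burden, 7.5 m2 g-1 MEC, +150 W m-2 / AOD
print(f"RF = {direct_rf(0.2, 7.5, 150.0):+.3f} W m-2")
```

Because the three factors multiply, comparable fractional diversity in each contributes similarly to the overall spread in RF, as the abstract notes.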
Bakhvalova, Valentina N; Chicherina, Galina S; Potapova, Olga F; Panov, Victor V; Glupov, Victor V; Potapov, Mikhail A; Seligman, Stephen J; Morozova, Olga V
2016-08-01
The persistence of tick-borne encephalitis virus (TBEV) in nature is maintained by numerous species of reservoir hosts, multiple transmissions between vertebrates and invertebrates, and the virus's adaptation to its hosts. Our aim was to compare TBEV isolates from ticks and small wild mammals to estimate their roles in the circulation of the viral subtypes. TBEV isolates from two species of ixodid ticks, four species of rodents, and one species of shrew in the Novosibirsk region, South-Western Siberia, Russia, were analyzed using bioassay, hemagglutination, hemagglutination inhibition, and neutralization tests, ELISA, reverse transcription with real-time PCR, and phylogenetic analysis. TBEV RNA and/or protein E were found in 70.9% ± 3.0% of mammals and in 3.8% ± 0.4% of ticks. The TBEV infection rate, main subtypes, and neurovirulence were similar between the ixodid tick species. However, the proportions of virus pathogenic for laboratory mice and of the Far-Eastern (FE) subtype, as well as the viral loads of the Siberian and European subtypes, were higher in Ixodes pavlovskyi Pomerantsev, 1946 than in Ixodes persulcatus (P. Schulze, 1930). Percentages of infected Myodes rutilus, Sicista betulina, and Sorex araneus exceeded those of Apodemus agrarius and Myodes rufocanus. Larvae and nymphs of ticks were found mainly on rodents, especially on Myodes rufocanus and S. betulina. The proportion of TBEV mixed infections with different subtypes in the infected ticks (55.9% ± 6.5%) was higher than in small mammals (36.1% ± 4.0%) (p < 0.01). Molecular typing revealed mono- or mixed infection with the three main subtypes of TBEV in ticks and small mammals. The Siberian subtype was more common in ixodid ticks, and the FE subtype was more common in small mammals (p < 0.001). TBEV isolates of the European subtype were rare.
TBEV infection among different species of small mammals did not correlate with their infestation rate with ticks in the Novosibirsk region, Russia.
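If the quoted ± figures are read as standard errors of the proportions (SE = sqrt(p(1-p)/n); an assumption, since the abstract does not define them), the underlying sample sizes and the significance of the 55.9% vs. 36.1% mixed-infection comparison can be checked back-of-envelope:

```python
import math

def n_from_se(p, se):
    """Invert SE = sqrt(p(1-p)/n) to recover an approximate sample size."""
    return p * (1 - p) / se ** 2

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

n_ticks = n_from_se(0.559, 0.065)      # inferred number of infected ticks
n_mamm = n_from_se(0.361, 0.040)       # inferred number of infected mammals
z = two_prop_z(0.559, n_ticks, 0.361, n_mamm)
print(f"n≈{n_ticks:.0f} ticks vs n≈{n_mamm:.0f} mammals, z = {z:.2f}")
```

The resulting z just exceeds the two-sided 1% critical value of 2.576, consistent with the reported p < 0.01.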
NASA Astrophysics Data System (ADS)
Levasseur-Regourd, Anny-Chantal; Brouet, Yann; Hadamcik, Edith; Heggy, Essam; Hines, Dean; Lasue, Jérémie; Renard, Jean-Baptiste
2015-08-01
Polarimetric astronomical observations of dust clouds and regolithic surfaces require laboratory simulations on samples to provide information on properties (size distribution, porosity, refractive index) of the scattering media. Similarly, in-situ radar investigations in the solar system require laboratory studies on samples to infer physical properties (e.g. porosity, ice/dust ratio) of sub-surfaces and interiors. Recent developments are illustrated with present studies related to the Rosetta mission, which began its rendezvous with comet 67P/Churyumov-Gerasimenko (C-G) and landed the Philae module on its nucleus in 2014. We will summarize laboratory simulations with the PROGRA2 suite of instruments, which study (in the visible to near-IR domain) the polarimetric properties of dust samples in microgravity conditions or on surfaces [1], with emphasis on the interpretation of polarimetric observations of C-G during its previous perihelion passages from Earth observatories, and currently from HST [2,3]. The presence of large dust particles in the pre-perihelion coma previously inferred from remote observations agrees with Rosetta ground truth [4]. We will also present measurements of the permittivity (in the millimeter to meter domain) of various dust samples, with emphasis on porous samples [5,6]. Results provide constraints on the properties of the subsurface and interior of C-G, as explored by MIRO on Rosetta and CONSERT on Philae. Such studies are relevant for the interpretation of polarimetric observations of other dust clouds (e.g. debris disks, the interplanetary dust cloud, clouds in planetary atmospheres) and surfaces (e.g. planets, moons), as well as for other radar characterization studies (e.g. Mars, moons, asteroids).
[1] Levasseur-Regourd et al., in Polarization of Stars and Planetary Systems, Cambridge UP, in press, 2015. [2] Hadamcik et al., A&A 517, 2010. [3] Hines and Levasseur-Regourd, PSS, submitted, 2015. [4] Schulz et al., Nature 518, 2015. [5] Heggy et al., Icarus 221, 2012. [6] Brouet et al., A&A, submitted, 2015.
Planetary Nomenclature: An Overview and Update for 2017
NASA Astrophysics Data System (ADS)
Gaither, Tenielle; Hayward, Rose; IAU Working GroupPlanetary System Nomenclature
2017-10-01
The task of naming planetary surface features, rings, and natural satellites is managed by the International Astronomical Union's (IAU) Working Group for Planetary System Nomenclature (WGPSN). There are currently 15,361 IAU-approved surface feature names on 41 planetary bodies, including moons and asteroids. The members of the WGPSN and its task groups have worked since the early 1970s to provide a clear, unambiguous system of planetary nomenclature that represents cultures and countries from all regions of Earth. WGPSN members include Rita Schulz (Chair) and 9 other members representing countries around the globe. The participation of knowledgeable scientists and experts in this process is vital to the success of the IAU WGPSN. Planetary nomenclature is a tool used to uniquely identify features on the surfaces of planets or satellites so they can be located, described, and discussed in publications, including peer-reviewed journals, maps, and conference presentations. Approved names are listed in the Transactions of the IAU and on the Gazetteer of Planetary Nomenclature website. Any names currently in use that are not listed in the Gazetteer are not official. Planetary names must adhere to the rules and conventions established by the IAU WGPSN (see http://planetarynames.wr.usgs.gov/Page/Rules for the complete list). The Gazetteer includes an online Name Request Form (http://planetarynames.wr.usgs.gov/FeatureNameRequest) that can be used by members of the professional science community. Name requests are first reviewed by one of six task groups (Mercury, Venus, Moon, Mars, Outer Solar System, and Small Bodies). After a task group has reviewed a proposal, it is submitted to the WGPSN. Allow four to six weeks for the review and approval process. Upon WGPSN approval, names are considered formally approved, and it is then appropriate to use them in publications. Approved names are immediately entered into the database and shown on the website.
Questions about the nomenclature database and the naming process can be sent to Rosalyn Hayward, USGS Astrogeology Science Center, 2255 N. Gemini Dr., Flagstaff, AZ 86001, or by email to rhayward@usgs.gov.
Radiative forcing of the direct aerosol effect from AeroCom Phase II simulations
NASA Astrophysics Data System (ADS)
Myhre, G.; Samset, B. H.; Schulz, M.; Balkanski, Y.; Bauer, S.; Berntsen, T. K.; Bian, H.; Bellouin, N.; Chin, M.; Diehl, T.; Easter, R. C.; Feichter, J.; Ghan, S. J.; Hauglustaine, D.; Iversen, T.; Kinne, S.; Kirkevåg, A.; Lamarque, J.-F.; Lin, G.; Liu, X.; Lund, M. T.; Luo, G.; Ma, X.; van Noije, T.; Penner, J. E.; Rasch, P. J.; Ruiz, A.; Seland, Ø.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, K.; Wang, P.; Wang, Z.; Xu, L.; Yu, H.; Yu, F.; Yoon, J.-H.; Zhang, K.; Zhang, H.; Zhou, C.
2013-02-01
We report on the AeroCom Phase II direct aerosol effect (DAE) experiment where 16 detailed global aerosol models have been used to simulate the changes in the aerosol distribution over the industrial era. All 16 models have estimated the radiative forcing (RF) of the anthropogenic DAE, and have taken into account anthropogenic sulphate, black carbon (BC) and organic aerosols (OA) from fossil fuel, biofuel, and biomass burning emissions. In addition several models have simulated the DAE of anthropogenic nitrate and anthropogenically influenced secondary organic aerosols (SOA). The model simulated all-sky RF of the DAE from total anthropogenic aerosols has a range from -0.58 to -0.02 W m-2, with a mean of -0.27 W m-2 for the 16 models. Several models did not include nitrate or SOA, and modifying the estimate by accounting for this with information from the other AeroCom models reduces the range and slightly strengthens the mean. Modifying the model estimates for missing aerosol components and for the time period 1750 to 2010 results in a mean RF for the DAE of -0.35 W m-2. Compared to AeroCom Phase I (Schulz et al., 2006) we find very similar spreads in both total DAE and aerosol component RF. However, the RF of the total DAE is more strongly negative, and the RF from BC from fossil fuel and biofuel emissions is more strongly positive, in the present study than in the previous AeroCom study. We find a tendency for models having a strong (positive) BC RF to also have a strong (negative) sulphate or OA RF. This relationship leads to smaller uncertainty in the total RF of the DAE compared to the RF of the sum of the individual aerosol components. The spread in results for the individual aerosol components is substantial, and can be divided into diversities in burden, mass extinction coefficient (MEC), and normalized RF with respect to AOD. We find that these three factors give similar contributions to the spread in results.
Radiative forcing of the direct aerosol effect from AeroCom Phase II simulations
NASA Astrophysics Data System (ADS)
Myhre, G.; Samset, B. H.; Schulz, M.; Balkanski, Y.; Bauer, S.; Berntsen, T. K.; Bian, H.; Bellouin, N.; Chin, M.; Diehl, T.; Easter, R. C.; Feichter, J.; Ghan, S. J.; Hauglustaine, D.; Iversen, T.; Kinne, S.; Kirkevåg, A.; Lamarque, J.-F.; Lin, G.; Liu, X.; Luo, G.; Ma, X.; Penner, J. E.; Rasch, P. J.; Seland, Ø.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, K.; Wang, Z.; Xu, L.; Yu, H.; Yu, F.; Yoon, J.-H.; Zhang, K.; Zhang, H.; Zhou, C.
2012-08-01
We report on the AeroCom Phase II direct aerosol effect (DAE) experiment where 15 detailed global aerosol models have been used to simulate the changes in the aerosol distribution over the industrial era. All 15 models have estimated the radiative forcing (RF) of the anthropogenic DAE, and have taken into account anthropogenic sulphate, black carbon (BC) and organic aerosols (OA) from fossil fuel, biofuel, and biomass burning emissions. In addition several models have simulated the DAE of anthropogenic nitrate and anthropogenically influenced secondary organic aerosols (SOA). The model simulated all-sky RF of the DAE from total anthropogenic aerosols has a range from -0.58 to -0.02 W m-2, with a mean of -0.30 W m-2 for the 15 models. Several models did not include nitrate or SOA, and modifying the estimate by accounting for this with information from the other AeroCom models reduces the range and slightly strengthens the mean. Modifying the model estimates for missing aerosol components and for the time period 1750 to 2010 results in a mean RF for the DAE of -0.39 W m-2. Compared to AeroCom Phase I (Schulz et al., 2006) we find very similar spreads in both total DAE and aerosol component RF. However, the RF of the total DAE is more strongly negative, and the RF from BC from fossil fuel and biofuel emissions is more strongly positive, in the present study than in the previous AeroCom study. We find a tendency for models having a strong (positive) BC RF to also have a strong (negative) sulphate or OA RF. This relationship leads to smaller uncertainty in the total RF of the DAE compared to the RF of the sum of the individual aerosol components. The spread in results for the individual aerosol components is substantial, and can be divided into diversities in burden, mass extinction coefficient (MEC), and normalized RF with respect to AOD. We find that these three factors give similar contributions to the spread in results.
The Impact of Discovering Life beyond Earth
NASA Astrophysics Data System (ADS)
Dick, Steven J.
2016-01-01
Introduction: astrobiology and society Steven J. Dick; Part I. Motivations and Approaches. How Do We Frame the Problems of Discovery and Impact?: Introduction; 1. Current approaches to finding life beyond earth, and what happens if we do Seth Shostak; 2. The philosophy of astrobiology: the Copernican and Darwinian presuppositions Iris Fry; 3. History, discovery, analogy: three approaches to the impact of discovering life beyond earth Steven J. Dick; 4. Silent impact: why the discovery of extraterrestrial life should be silent Clément Vidal; Part II. Transcending Anthropocentrism. How Do We Move beyond our Own Preconceptions of Life, Intelligence and Culture?: Introduction; 5. The landscape of life Dirk Schulze-Makuch; 6. The landscape of intelligence Lori Marino; 7. Universal biology: assessing universality from a single example Carlos Mariscal; 8. Equating culture, civilization, and moral development in imagining extraterrestrial intelligence: anthropocentric assumptions? John Traphagan; 9. Communicating with the other: infinity, geometry, and universal math and science Douglas Vakoch; Part III. Philosophical, Theological, and Moral Impact. How Do We Comprehend the Cultural Challenges Raised by Discovery?: Introduction; 10. Life, intelligence and the pursuit of value in cosmic evolution Mark Lupisella; 11. 'Klaatu barada nikto' - or, do they really think like us? Michael Ruse; 12. Alien minds Susan Schneider; 13. The moral subject of astrobiology: guideposts for exploring our ethical and political responsibilities towards extraterrestrial life Elspeth Wilson and Carol Cleland; 14. Astrobiology and theology Robin Lovin; 15. Would you baptize an extraterrestrial? Guy Consolmagno, SJ; Part IV. Practical Considerations: How Should Society Prepare for Discovery - and Non-Discovery?: Introduction; 16. Is there anything new about astrobiology and society? Jane Maienschein; 17. 
Evaluating preparedness for the discovery of extraterrestrial life: considering potential risks, impacts and plans Margaret Race; 18. Searching for extraterrestrial intelligence: preparing for an expected paradigm break Michael A. G. Michaud; 19. SETI in non-western perspective John Traphagan and Julian W. Traphagan; 20. The allure of alien life: public and media framings of extraterrestrial life Linda Billings; 21. Internalizing null extraterrestrial 'signals': an astrobiological app for a technological society Eric Chaisson; Index.
Rothenfluh, Fabia; Schulz, Peter J
2018-06-14
Websites on which users can rate their physician are becoming increasingly popular, but little is known about their quality, information content, and the tools they offer users to assess physicians. This study assesses these aspects of physician-rating websites in German- and English-speaking countries. The objective of this study was to collect information on websites with a physician rating or review tool in 12 countries in terms of metadata, website quality (transparency, privacy and freedom of speech of physicians and patients, check mechanisms for appropriateness and accuracy of reviews, and ease of page navigation), professional information about the physician, rating scales and tools, as well as traffic rank. A systematic Web search based on a set of predefined keywords was conducted on Google, Bing, and Yahoo in August 2016. A final sample of 143 physician-rating websites was analyzed and coded for metadata, quality, information content, and physician-rating tools. The majority of websites were registered in the United States (40/143) or Germany (25/143). The vast majority were commercially owned (120/143, 83.9%), and 69.9% (100/143) displayed some form of physician advertisement. Overall, both information content (mean 9.95/25) and quality (mean 18.67/47) scores were low. Websites registered in the United Kingdom obtained the highest quality scores (mean 26.50/47), followed by Australian websites (mean 21.50/47). In terms of rating tools, physician-rating websites most frequently asked users to score overall performance, punctuality, or wait time in practice. This study shows that physician-rating websites should improve and communicate their quality standards, especially in terms of physician and user protection and transparency. In addition, given that quality standards on physician-rating websites are low overall, the development of transparent guidelines is required. 
Furthermore, attention should be paid to the financial goals that the majority of physician-rating websites, especially the ones that are commercially owned, pursue. ©Fabia Rothenfluh, Peter J Schulz. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 14.06.2018.
The kingdom Protista and its 45 phyla.
Corliss, J O
1984-01-01
Because most recent treatments of the protists ('lower' eukaryotes comprising the kingdom PROTISTA Haeckel, 1866) have been preoccupied with either a 'phylogenetic-tree' approach or a discussion of the impact of possible endosymbiotic origins of major intracellular organelles, the overall systematics of the group, from taxonomic and nomenclatural points of view, has been almost totally neglected. As a result, confusion over contained phyla, their places in a classification scheme, and even their names (and authorships) is growing; the situation could become chaotic. The principal objective of the present paper is to recognize the taxonomic interrelationships among all protist groups; and it includes the specific proposal that some 45 phyla, defined and characterized, be assigned to 18 supraphyletic assemblages within the kingdom PROTISTA (itself redefined and contrasted with the other eukaryotic kingdoms recognized here: ANIMALIA, PLANTAE and FUNGI). Vernacular terms are employed for identification of the 18 assemblages, but defensible formal names are proposed at the level of phylum. None is presented as new: authorship-and-date credits are given to preceding workers on the taxonomy of the many groups involved. By presenting taxonomic characterizations as well as relevant nomenclatural data for each taxon described, a comprehensive scheme of overall higher-level classification within the kingdom emerges that may be considered to serve as a solid base or 'taking-off point' for future discussions. The 18 supraphyletic groups and their phyla (in parentheses and including authorships and dates of their formal names) are as follows: I. The rhizopods (phyla Karyoblastea Margulis, 1974; Amoebozoa Lühe, 1913; Acrasia Van Tieghem, 1880; Eumycetozoa Zopf, 1885; Plasmodiophorea Zopf, 1885; Granuloreticulosa De Saedeleer, 1934; incertae sedis Xenophyophora Schulze, 1904). II. The mastigomycetes (Hypochytridiomycota Sparrow, 1959; Oomycota Winter, 1897; incert. sed. 
Chytridiomycota Sparrow, 1959). III. The chlorobionts (Chlorophyta Pascher, 1914; Prasinophyta Christensen, 1962; Conjugatophyta Engler, 1892; Charophyta Rabenhorst, 1863; incert. sed. Glaucophyta Bohlin, 1901). IV. The euglenozoa (Euglenophyta Pascher, 1931; Kinetoplastidea Honigberg, 1963; incert. sed. Pseudociliata Corliss & Lipscomb, 1982). V. The rhodophytes (Rhodophyta Rabenhorst, 1863). VI. The cryptomonads (Cryptophyta Pascher, 1914). VII. The choanoflagellates (Choanoflagellata Kent, 1880).(ABSTRACT TRUNCATED AT 400 WORDS)
NASA Astrophysics Data System (ADS)
Targowski, Wojciech; Czyż, Piotr
2017-10-01
The article presents the process of shaping place identity through the example of an investment important for the Pomerania region: the European Solidarity Centre. The idea of the Solidarity social movement is strongly associated with the formation of Poland's post-socialist national identity as well as the local identity of Pomerania, where the movement originated. The European Solidarity Centre is intended to be one of the essential elements shaping the identity of Gdańsk's space. The article attempts to analyse how the presence of the building gradually affects the formation of place identity in a new urban space. Analysis of this realization allows, on the one hand, verification of the design assumptions made by the authors and, on the other, provides an opportunity to search for a better description of the still vague notion of local identity. This concept, though intuitively familiar to everyone, still seems to elude the conceptual apparatus of architectural theory. The intention of this article is to explore the notion of identity based on observations of a newly realized, significant cultural space. The analysis approaches the concept of identity from two perspectives. The first draws on Christian Norberg-Schulz's concept of identity: local identity is seen as a unique set of characteristics of a space, so that the concept of place identity becomes a correlate of the concept of personal identity. In this analysis, methods for describing personal identity are transferred to the identity of the place. In the second approach, the identity of a place is understood as a way of being in space unique to that place: a way of spending time and of developing site-specific urban rituals. This concept of identity draws on Kim Dovey's concept of place. The two approaches complement each other, but they emphasize different qualities. The now-traditional concept of Genius Loci sees architecture as a structural system of meanings. 
Meaningful elements are seen here primarily from an aesthetic perspective, as something we can see. In this perspective, the concept of place identity appears as a static formation, which corresponds well to the design determinants of historical spaces associated with the concept of cultural heritage. In the authors' opinion, the identity of a place is also built on the interactions that occur between users in a space; the space in this approach becomes a catalyst for social contact. What matters for the user is the formation of identity through customs, rituals and urban traditions that create new networks of social connections. This concept of place recognizes the dynamic nature of space identity as a changeable formation that is continuously co-created. Such a recognition can give a better understanding of identity under specific design conditions, such as the gradual formation of new urban spaces, because it places emphasis on the processual nature of space identity, as in the case discussed in the article.
EDITORIAL: Tropical deforestation and greenhouse gas emissions
NASA Astrophysics Data System (ADS)
Gibbs, Holly K.; Herold, Martin
2007-10-01
Carbon emissions from tropical deforestation have long been recognized as a key component of the global carbon budget, and more recently of our global climate system. Tropical forest clearing accounts for roughly 20% of anthropogenic carbon emissions and destroys globally significant carbon sinks (IPCC 2007). Global climate policy initiatives are now being proposed to address these emissions and to more actively include developing countries in greenhouse gas mitigation (e.g. Santilli et al 2005, Gullison et al 2007). In 2005, at the Conference of the Parties (COP) in Montreal, the United Nations Framework Convention on Climate Change (UNFCCC) launched a new initiative to assess the scientific and technical methods and issues for developing policy approaches and incentives to reduce emissions from deforestation and degradation (REDD) in developing countries (Gullison et al 2007). Over the last two years the methods and tools needed to estimate reductions in greenhouse gas emissions from deforestation have quickly evolved, as the scientific community responded to the UNFCCC policy needs. This focus issue highlights those advancements, covering some of the most important technical issues for measuring and monitoring emissions from deforestation and forest degradation and emphasizing immediately available methods and data, as well as future challenges. Elements for effective long-term implementation of a REDD mechanism related to both environmental and political concerns are discussed in Mollicone et al. Herold and Johns synthesize viewpoints of national parties to the UNFCCC on REDD and expand upon key issues for linking policy requirements and forest monitoring capabilities. In response to these expressed policy needs, they discuss a remote-sensing-based observation framework to start REDD implementation activities and build historical deforestation databases on the national level. 
Achard et al offer an assessment of remote sensing measurements across the world's tropical forests that can provide key consistency and prioritization for national-level efforts. Gibbs et al calculate a range of national-level forest carbon stock estimates that can be used immediately, and also review ground-based and remote sensing approaches to estimate national-level tropical carbon stocks with increased accuracy. These papers help illustrate that methodologies and tools are indeed available to estimate emissions from deforestation. Clearly, important technical challenges remain (e.g. quantifying degradation, assessing uncertainty, verification procedures, capacity building, and Landsat data continuity) but we now have a sufficient technical base to support REDD early actions and readiness mechanisms for building national monitoring systems. Thus, we enter the COP 13 in Bali, Indonesia with great hope for a more inclusive climate policy encompassing all countries and emissions sources from both land-use and energy sectors. Our understanding of tropical deforestation and carbon emissions is improving, and with it come opportunities to conserve tropical forests and the host of ecosystem services they provide while also increasing revenue streams in developing countries through economic incentives to avoid deforestation and degradation. References Gullison R E et al 2007 Tropical forests and climate policy Science 316 985-6 Intergovernmental Panel on Climate Change (IPCC) 2007 Climate Change 2007: The Physical Science Basis: Summary for Policymakers http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf Santilli M et al 2005 Tropical deforestation and the Kyoto Protocol: an editorial essay Clim. Change 71 267-76 Focus on Tropical Deforestation and Greenhouse Gas Emissions Contents The articles below represent the first accepted contributions and further additions will appear in the near future. 
Pan-tropical monitoring of deforestation F Achard, R DeFries, H Eva, M Hansen, P Mayaux and H-J Stibig Monitoring and estimating tropical forest carbon stocks: making REDD a reality Holly K Gibbs, Sandra Brown, John O Niles and Jonathan A Foley Elements for the expected mechanisms on 'reduced emissions from deforestation and degradation, REDD' under UNFCCC D Mollicone, A Freibauer, E D Schulze, S Braatz, G Grassi and S Federici
Transfusion thresholds and other strategies for guiding allogeneic red blood cell transfusion.
Hill, S R; Carless, P A; Henry, D A; Carson, J L; Hebert, P C; McClelland, D B; Henderson, K M
2002-01-01
Most clinical practice guidelines recommend restrictive red cell transfusion practices with the goal of minimising exposure to allogeneic blood (from an unrelated donor). The purpose of this review is to compare clinical outcomes in patients randomised to restrictive versus liberal transfusion thresholds (triggers). To examine the evidence on the effect of transfusion thresholds, on the use of allogeneic and/or autologous blood, and the evidence for any effect on clinical outcomes. Trials were identified by: computer searches of OVID Medline (1966 to December 2000), Current Contents (1993 to Week 48 2000), and the Cochrane Controlled Trials Register (2000 Issue 4). References in identified trials and review articles were checked and authors contacted to identify any additional studies. Controlled trials in which patients were randomised to an intervention group or to a control group. Trials were included where the intervention groups were assigned on the basis of a clear transfusion "trigger", described as a haemoglobin (Hb) or haematocrit (Hct) level below which a RBC transfusion was to be administered. Trial quality was assessed using criteria proposed by Schulz et al. (1995). Relative risks of requiring allogeneic blood transfusion, transfused blood volumes and other clinical outcomes were pooled across trials using a random effects model. Ten trials were identified that reported outcomes for a total of 1780 patients. Restrictive transfusion strategies reduced the risk of receiving a red blood cell (RBC) transfusion by a relative 42% (RR=0.58: 95%CI=0.47,0.71). This equates to an average absolute risk reduction (ARR) of 40% (95%CI=24% to 56%). The volume of RBCs transfused was reduced on average by 0.93 units (95%CI=0.36,1.5 units). However, heterogeneity between these trials was statistically significant (p<0.00001) for these outcomes. Mortality, rates of cardiac events, morbidity, and length of hospital stay were unaffected. 
Trials were of poor methodological quality. The limited published evidence supports the use of restrictive transfusion triggers in patients who are free of serious cardiac disease. However, most of the data on clinical outcomes were generated by a single trial. The effects of conservative transfusion triggers on functional status, morbidity and mortality, particularly in patients with cardiac disease, need to be tested in further large clinical trials. In countries with inadequate screening of donor blood the data may constitute a stronger basis for avoiding transfusion with allogeneic red cells.
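The review pools log relative risks across trials with a random-effects model. A minimal sketch of that pooling, using the common DerSimonian-Laird estimator of between-trial variance, is shown below; the three trials' log(RR) values and variances are invented for illustration and are not data from this review:

```python
import math

def pool_random_effects(log_rr, var):
    """DerSimonian-Laird random-effects pooling of per-trial log relative risks.

    log_rr -- list of log(RR) estimates, one per trial
    var    -- list of variances of those log(RR) estimates
    Returns the pooled RR, its 95% CI, and the between-trial variance tau^2."""
    w = [1.0 / v for v in var]                      # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sw
    # Cochran's Q heterogeneity statistic and the DL estimate of tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)
    # random-effects weights add tau^2 to each trial's variance
    w_re = [1.0 / (v + tau2) for v in var]
    pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci, tau2

# Three hypothetical trials with heterogeneous effects
rr, ci, tau2 = pool_random_effects([-0.9, -0.2, -0.6], [0.02, 0.02, 0.02])
print(f"pooled RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), tau2 = {tau2:.3f}")
```

When the trials disagree more than their within-trial variances allow (as the significant heterogeneity reported above suggests), tau^2 becomes positive and widens the pooled confidence interval relative to a fixed-effect analysis.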
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landa, Romina A.; Soledad Antonel, Paula; Ruiz, Mariano M.
2013-12-07
Nickel (Ni) based nanoparticles and nanochains were incorporated as fillers in polydimethylsiloxane (PDMS) elastomers and then these mixtures were thermally cured in the presence of a uniform magnetic field. In this way, macroscopically structured-anisotropic PDMS-Ni based magnetorheological composites were obtained with the formation of pseudo-chain-like structures (referred to as needles) oriented in the direction of the applied magnetic field when curing. Nanoparticles were synthesized at room temperature, under air ambient atmosphere (open air, atmospheric pressure) and then calcined at 400 °C (in air atmosphere also). The size distribution was obtained by fitting Small Angle X-ray Scattering (SAXS) experiments with a polydisperse hard spheres model and a Schulz-Zimm distribution, obtaining a size distribution centered at (10.0 ± 0.6) nm with polydispersity given by σ = (8.0 ± 0.2) nm. The SAXS, X-ray powder diffraction, and Transmission Electron Microscope (TEM) experiments are consistent with single-crystal nanoparticles of spherical shape (average particle diameter obtained by TEM: (12 ± 1) nm). Nickel-based nanochains (average diameter: 360 nm; average length: 3 μm, obtained by Scanning Electron Microscopy; aspect ratio = length/diameter ∼ 10) were obtained at 85 °C and ambient atmosphere (open air, atmospheric pressure). The magnetic properties of Ni-based nanoparticles and nanochains at room temperature are compared and discussed in terms of surface and size effects. Both Ni-based nanoparticles and nanochains were used as fillers for obtaining the PDMS structured magnetorheological composites, observing the presence of oriented needles. Magnetization curves, ferromagnetic resonance (FMR) spectra, and strain-stress curves of low filler loading composites (2% w/w of fillers) were determined as functions of the relative orientation with respect to the needles. 
The results indicate that even at low loadings it is possible to obtain magnetorheological composites with anisotropic properties, with larger anisotropy when using nanochains. For instance, the magnetic remanence, the FMR field, and the elastic response to compression are higher when measured parallel to the needles (about 30% with nanochains as fillers). Analogously, the elastic response is also anisotropic, with larger anisotropy when using nanochains as fillers. Therefore, all experiments performed confirm the high potential of nickel nanochains to induce anisotropic effects in magnetorheological materials.
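The Schulz-Zimm distribution used in the SAXS fit above is a gamma distribution whose width is fixed by the polydispersity: σ²/r̄² = 1/(z + 1). A minimal sketch using the reported mean (10.0 nm) and σ (8.0 nm), with a numerical check of the moments (this is not the authors' fitting code, and the hard-sphere structure factor of the full fit is not reproduced):

```python
import math

def schulz_zimm_pdf(r, r_mean, sigma):
    """Schulz-Zimm size distribution: a gamma distribution with shape z + 1
    and mean r_mean, where the width follows from sigma^2 / r_mean^2 = 1 / (z + 1)."""
    a = (r_mean / sigma) ** 2          # a = z + 1
    return (a / r_mean) ** a * r ** (a - 1.0) * math.exp(-a * r / r_mean) / math.gamma(a)

# Parameters reported for the Ni nanoparticles: mean 10.0 nm, sigma 8.0 nm
r_mean, sigma = 10.0, 8.0
# midpoint-rule check of normalisation and mean on a 0-200 nm grid
dr = 0.01
rs = [dr * (i + 0.5) for i in range(20000)]
norm = sum(schulz_zimm_pdf(r, r_mean, sigma) for r in rs) * dr
mean = sum(r * schulz_zimm_pdf(r, r_mean, sigma) for r in rs) * dr
print(f"norm = {norm:.3f}, mean = {mean:.2f} nm")
```

With r̄ = 10.0 nm and σ = 8.0 nm the shape parameter is z + 1 = (10/8)² ≈ 1.56, i.e. a very broad, strongly right-skewed distribution, consistent with the large polydispersity quoted above.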
Sudbury-Riley, Lynn; FitzPatrick, Mary; Schulz, Peter J
2017-02-27
The eHealth Literacy Scale (eHEALS) is one of only a few available measurement scales to assess eHealth literacy. Perhaps due to the relative paucity of such measures and the rising importance of eHealth literacy, the eHEALS is increasingly a choice for inclusion in a range of studies across different groups, cultures, and nations. However, despite its growing popularity, questions have been raised over its theoretical foundations, and the factorial validity and multigroup measurement properties of the scale are yet to be investigated fully. The objective of our study was to examine the factorial validity and measurement invariance of the eHEALS among baby boomers (born between 1946 and 1964) in the United States, United Kingdom, and New Zealand who had used the Internet to search for health information in the last 6 months. Online questionnaires collected data from a random sample of baby boomers from the 3 countries of interest. The theoretical underpinning to eHEALS comprises social cognitive theory and self-efficacy theory. Close scrutiny of eHEALS with analysis of these theories suggests a 3-factor structure to be worth investigating, which has never before been explored. Structural equation modeling tested a 3-factor structure based on the theoretical underpinning to eHEALS and investigated multinational measurement invariance of the eHEALS. We collected responses (N=996) to the questionnaires using random samples from the 3 countries. Results suggest that the eHEALS comprises a 3-factor structure with a measurement model that falls within all relevant fit indices (root mean square error of approximation, RMSEA=.041, comparative fit index, CFI=.986). Additionally, the scale demonstrates metric invariance (RMSEA=.040, CFI=.984, ΔCFI=.002) and even scalar invariance (RMSEA=.042, CFI=.978, ΔCFI=.008). 
To our knowledge, this is the first study to demonstrate multigroup factorial equivalence of the eHEALS, and did so based on data from 3 diverse nations and random samples drawn from an increasingly important cohort. The results give increased confidence to researchers using the scale in a range of eHealth assessment applications from primary care to health promotions. ©Lynn Sudbury-Riley, Mary FitzPatrick, Peter J Schulz. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 27.02.2017.
NASA Astrophysics Data System (ADS)
Gaudio, S. J.; Lesher, C. E.
2012-12-01
We estimate the glass transition temperature, Tg, for vitreous/amorphous albite between 0 and 7.7 GPa by tracking the progress of densification following high-temperature annealing experiments with run durations equal to 5τ (when τ=100 s). Tg decreases by 54 K/GPa up to 2.6 GPa, and thereafter shows a weak negative pressure dependence. This behavior mimics the negative pressure dependence of viscosity of albite liquid shown by [1]; however, we do not find a change in the sign of ∂Tg/∂P at least up to 7.7 GPa as reported in some isothermal ∂η/∂P and ∂DO/∂P data sets. Our high-field (21.8 T) 27Al MAS NMR measurements of recovered glasses rapidly quenched from super-Tg conditions show trace amounts of highly coordinated Al at 2.6 GPa and only ~17% by 5.5 GPa. This suggests that the decrease in Tg (and viscosity at low temperature) results dominantly from topological rearrangement of the supercooled melt structure and not changes to Al or Si coordination number and connectivity of the network. In fact, at Tg from 0 to 8 GPa, the XNBO, or network connectivity, is unchanged [2] and at 7.7 GPa, we find the proportion of highly coordinated Al is still ~35%. Convergence in the timescales of relaxation at Tg(P) and the onset of Na mobility to 6 GPa documented by high-pressure electrical conductivity measurements [3] implies that the fragility of albite melt increases with pressure up to ~4-5 GPa, without changing the effective polymerization of the melt. In contrast, fragility appears to decrease with pressure in partially depolymerized silicate melts. Such differences in fragility can be used for extrapolation of activation-energy-based models for viscous flow to high pressure. [1] Kushiro, 1978, EPSL, 41; Brearley et al., 1986, GCA, 50; Brearley and Montana, 1989, GCA, 53; Poe et al., 1997, Science, 276; Suzuki et al., 2002, Phys. Chem. Miner., 29; Funakoshi et al., 2002, J. Phys.: Condens. Matter., 14; Behrens and Schulze, 2003, Am. Min., 88. [2] Lee et al. 
2004, GCA, 68; [3] Bagdassarov et al., 2004, Phys. Chem. Glasses, 45.
Temporal Evolution of the Morphological Tail Structures of Comet P/Halley 1910 II
NASA Astrophysics Data System (ADS)
Izaguirre, L. S.; Voelzke, M. R.
2004-08-01
Eight hundred and eighty six images from September 1909 to May 1911 are analysed for the purpose of identifying, measuring and correlating the morphological structures along the plasma tail of P/Halley. These images are from the Atlas of Comet Halley 1910 II (Donn et al., 1986). A systematic visual analysis revealed 304 wavy structures (Yi et al., 1998) along the main tail and 164 along the secondary tails, 41 solitary waves (solitons) (Roberts, 1985), 13 Swan-like tails (Jockers, 1985), 26 disconnection events (DEs) (Voelzke, 2002a), 166 knots (Voelzke et al., 1997) and six shells (Schulz and Schlosser, 1989). While the wavy structures denote undulations or a train of waves, the solitons refer to the formations usually denominated kinks (Tomita et al., 1987). In general, it is possible to associate the occurrence of a DE and/or a Swan-Tail with the occurrence of a knot, but the last one may occur independently. It is also possible to say that the solitons occur in association with the wavy structures, but the reverse is not true. The 26 DEs documented in 26 different images allowed the derivation of two onsets of DEs, i.e., the time when the comet supposedly crossed a frontier between magnetic sectors of the solar wind (Brandt and Snow, 2000). Both onsets of DEs were determined after the perihelion passage with an average of the corrected velocities Vc equal to (57 ± 15) km s-1. The mean value of the corrected wavelength lc measured in 70 different wavy structures is equal to (1.7 ± 0.1) x 10^6 km and the mean amplitude A of the wave (measured in the same 70 wavy structures cited above) is equal to (1.4 ± 0.1) x 10^5 km. The mean value of the corrected cometocentric phase velocity Vpc measured in 20 different wavy structures is equal to (168 ± 28) km s-1. The average value of the corrected velocities Vkc of the knots measured in 36 different images is equal to (128 ± 12) km s-1. There is a tendency for A and lc to increase with increasing cometocentric distance. 
The preliminary results of this work agree with the earlier research from Voelzke and Matsuura (1998), which analysed comet P/Halley's tail structures in its last apparition in 1986.
NASA Astrophysics Data System (ADS)
Guterch, A.; Grad, M.; Keller, G. R.
2005-12-01
Beginning in 1997, Central Europe between the Baltic and Adriatic Seas has been covered by an unprecedented network of seismic refraction experiments: POLONAISE'97, CELEBRATION 2000, ALP 2002, and SUDETES 2003. These experiments have only been possible due to a massive international consortium consisting of more than 30 institutions from 16 countries in Europe and North America. The majority of recording instruments were provided by the IRIS/PASSCAL Instrument Center and the University of Texas at El Paso (USA), and several other countries also provided instrumentation. The total length of seismic profiles in all experiments is about 20,000 km. The main results of these experiments are: 1) delineation of the deep structure of the southwestern margin of the East European Craton (southern Baltica) and its relationship to younger terranes; 2) delineation of the major terranes and crustal blocks in the Trans European Suture Zone; 3) determination of the structural framework of the Pannonian basin; 4) elucidation of the deep structure and evolution of the Western Carpathian Mountains and Eastern Alps; 5) determination of the structural relationships between the structural elements of the Bohemian massif and adjacent features; 6) construction of 3-D models of the lithospheric structure; and 7) evaluation and development of geodynamic models for the tectonic evolution of the region. Experiment Working Groups Members: K. Aric, M. Behm, E. Brueckl, W. Chwatal, H. Grassl, S. Hock, V. Hoeck, F. Kohlbeck, E.-M. Rumpfhuber, Ch. Schmid, R. Schmoller, C. Tomek, Ch. Ullrich, F. Weber (Austria), A.A. Belinsky (Belarus), I. Asudeh, R. Clowes, Z. Hajnal (Canada), F. Sumanova (Croatia), M. Broz, P. Hrubcova, M. Korn, O. Karousova, J. Malek, A. Spicak (Czech Republic), S.L. Jensen, P. Joergensen, H. Thybo (Denmark), K. Komminaho, U. Luosto, T. Tiira, J. Yliniemi (Finland), F. Bleibinhaus, R. Brinkmann, B. Forkmann, H. Gebrande, H. Geissler, A. Hemmann, G. Jentzsch, D. Kracke, A. Schulze, K. Schuster (Germany), T. Bodoky, T. 
Fancik, E. Hegedas, K. Posgay, E. Takacs (Hungary), J. Jacyna, L. Korabliova, G. Motuza, V. Nasedkin (Lithuania), W. Czuba, E. Gaczynski, M. Grad, A. Guterch, T. Janik, M. Majdanski, M. Malinowski, P. Sroda, M. Wilde-Piorko, (Poland), S.L. Kostiuchenko, A.F. Morozov (Russia), J. Vozar (Slovakia), A. Gosar (Slovenia), O. Selvi (Turkey), S. Acevedo, M. Averill, M. Fort, R. Greschke, S.Harder, G. Kaip, G.R. Keller, K.C. Miller, C.M. Snelson (USA)
Compartmentalisation Strategies for Hydrocarbon-based Biota on Titan
NASA Astrophysics Data System (ADS)
Norman, L.; Fortes, A. D.; Skipper, N.; Crawford, I.
2013-05-01
The goal of our study is to determine the nature of compartmentalisation strategies for any organisms inhabiting the hydrocarbon lakes of Titan (the largest moon of Saturn). Since the arrival of huge amounts of data from the Cassini-Huygens mission to the Saturnian system, astrobiologists have speculated that exotic biota might currently inhabit this environment. The biota have been theorized to consume acetylene and hydrogen whilst excreting methane (1,2), leading to an anomalous hydrogen depletion near the surface, and there has been evidence to suggest this depletion exists (3). Nevertheless, many questions remain concerning the possible physiological traits of biota in these environments, including whether cell-like structures can form in low-temperature, low-molecular-weight hydrocarbons. The backbone of terrestrial cell membranes is the vesicular structure composed primarily of a phospholipid bilayer with the hydrophilic head groups arranged around the periphery; such structures are thought to be akin to the first protocells that terrestrial life utilised (4). It may be possible that reverse vesicles, composed of a bilayer with the hydrophilic head groups arranged internally and a nonpolar core, are ideal model cell membranes for hydrocarbon-based organisms inhabiting Titan's hydrocarbon lakes (5). A variety of surfactants have been used to create reverse vesicles in nonpolar liquids to date, including non-ionic ethers (7) and esters (6, 8), catanionic surfactant mixtures (9), zwitterionic gemini surfactants (10), coblock polymer surfactants (11), and zwitterionic phospholipid surfactants (12). In order to discover whether certain phospholipids can exhibit vesicular behaviour within hydrocarbon liquids, and to analyse their structure, we have carried out experimental studies using environmental conditions that are increasingly comparable to those found on the surface of Titan. 
Experimental methods that have been used to determine the presence of vesicles include the use of microscopy, the presence of the Tyndall scattering effect, transmission electron microscopy (TEM), dynamic light scattering (DLS), small-angle neutron scattering (SANS) and small-angle x-ray scattering (SAXS). These studies are currently being analysed; however, some results have indicated the presence of reverse vesicles in certain systems. Compounds that are shown to form reverse vesicles in conditions comparable to those of Titan's lakes could serve as potential 'biomarkers' to be searched for in future missions to Titan. References [1] Schulze-Makuch D et al. Orig Life Evol Biosph 36, 324 (2006). [2] McKay C P et al. Icarus 178, 274 (2005). [3] Strobel D F. Icarus 208, 878 (2010). [4] Fiordemondo D et al. Chem. Bio. Chem. 8, 1965 (2007). [5] Norman L H et al. A&G 52, 39 (2011). [6] Mollee H et al. J Pharm Sci 89, 930 (2000). [7] Kunieda H et al. Langmuir 15, 3118 (1999). [8] Shrestha L K et al. Langmuir 22, 1449 (2006). [9] Li H G et al. Chem. Lett 36, 702 (2007). [10] Peresypkin A et al. Mendeleev Commun. 17, 82 (2007). [11] Rangelov S et al. J. Phys. Chem B 108, 7542 (2004). [12] Tung S H et al. J. Am. Chem. 130, 8813 (2008).
In situ soil moisture and matrix potential - what do we measure?
NASA Astrophysics Data System (ADS)
Jackisch, Conrad; Durner, Wolfgang
2017-04-01
Soil moisture and matric potential are often regarded as state variables that are simple to monitor at the Darcy-scale. At the same time, unproven beliefs about the capabilities and reliability of specific sensing methods or sensor systems persist. A consortium of ten institutions conducted a comparison study of currently available sensors for soil moisture and matric potential at a specially homogenised field site with sandy loam soil, which was kept free of vegetation. In total, 57 probes of 15 different systems measuring soil moisture, and 50 probes of 14 different systems measuring matric potential, were installed on a 0.5 m grid to monitor the moisture state at 0.2 m depth. The results give rise to a series of substantial questions about the state of the art in hydrological monitoring, the heterogeneity problem and the meaning of soil water retention at the field scale: A) For soil moisture, most sensors recorded highly plausible data. However, they do not agree in absolute values and reaction timing. For matric potential, only tensiometers were able to capture the quick reactions during rainfall events. All indirect sensors reacted comparably slowly and thus introduced a bias with respect to the sensing of soil water state under highly dynamic conditions. B) Under natural field conditions, a better homogeneity than in our setup can hardly be realised. While the homogeneity assumption held for the first weeks, it collapsed after a heavy storm event. The event exceeded the infiltration capacity and initiated the generation of redistribution networks at the surface, which altered the local surface properties on a very small scale. If this is the reality at a 40 m2 plot, what representativeness do single-point observations have in referencing the state of whole basins? C) A comparison of in situ and lab-measured retention curves reveals systematic differences. 
Given that parameterising soil water retention from lab measurements is general practice in almost any hydrological model, this raises serious concerns about deriving field parameters from lab measurements. We will present some insights from the comparison study and highlight the conceptual concerns arising from it. Through this we hope to stimulate a discussion towards a more critical revision of measurement assumptions and towards the development of alternative techniques to monitor subsurface states. The sensor comparison study consortium is a cooperation of Wolfgang Durner2, Ines Andrä2, Kai Germer2, Katrin Schulz2, Marcus Schiedung2, Jaqueline Haller-Jans2, Jonas Schneider2, Julia Jaquemotte2, Philipp Helmer2, Leander Lotz2, Thomas Graeff3, Andreas Bauer3, Irene Hahn3, Conrad Jackisch1, Martin Sanda4, Monika Kumpan5, Johann Dorner5, Gerrit de Rooij6, Stephan Wessel-Bothe7, Lorenz Kottmann8, and Siegfried Schittenhelm8. The great support by the team and the Thünen Institute Braunschweig is gratefully acknowledged. 1 Karlsruhe Institute of Technology, 2 Technical University of Braunschweig, 3 University of Potsdam, 4 Technical University of Prague, 5 Federal Department for Water Management Petzenkirchen, 6 Helmholtz Centre for Environmental Research Halle, 7 ecoTech GmbH Bonn, 8 Julius Kühn Institute Braunschweig
Gomez Quiñonez, Stefanie; Walthouwer, Michel Jean Louis; Schulz, Daniela Nadine; de Vries, Hein
2016-11-09
Until a few years ago, Web-based computer-tailored interventions were almost exclusively delivered via computer (eHealth). However, nowadays, interventions delivered via mobile phones (mHealth) are an interesting alternative for health promotion, as they may more easily reach people 24/7. The first aim of this study was to compare the efficacy of an mHealth and an eHealth version of a Web-based computer-tailored physical activity intervention with a control group. The second aim was to assess potential differences in use and appreciation between the 2 versions. We collected data among 373 Dutch adults at 5 points in time (baseline, after 1 week, after 2 weeks, after 3 weeks, and after 6 months). We recruited participants from a Dutch online research panel and randomly assigned them to 1 of 3 conditions: eHealth (n=138), mHealth (n=108), or control condition (n=127). All participants were asked to complete questionnaires at the 5 points in time. Participants in the eHealth and mHealth group received fully automated tailored feedback messages about their current level of physical activity. Furthermore, they received personal feedback aimed at increasing their amount of physical activity when needed. We used analysis of variance and linear regression analyses to examine differences between the 2 study groups and the control group with regard to efficacy, use, and appreciation. Participants receiving feedback messages (eHealth and mHealth together) were significantly more physically active after 6 months than participants in the control group (B=8.48, df=2, P=.03, Cohen d=0.27). We found a small effect size favoring the eHealth condition over the control group (B=6.13, df=2, P=.09, Cohen d=0.21). The eHealth condition had lower dropout rates (117/138, 84.8%) than the mHealth condition (81/108, 75.0%) and the control group (91/127, 71.7%). 
Furthermore, in terms of usability and appreciation, the eHealth condition outperformed the mHealth condition with regard to participants receiving (t(182)=3.07, P=.002) and reading the feedback messages (t(181)=2.34, P=.02), as well as the clarity of the messages (t(181)=1.99, P=.049). We tested 2 Web-based computer-tailored physical activity intervention versions (mHealth and eHealth) against a control condition with regard to efficacy, use, usability, and appreciation. The overall effect was mainly caused by the more effective eHealth intervention. The mHealth app was rated inferior to the eHealth version with regard to usability and appreciation. More research is needed to assess how both methods can complement each other. Netherlands Trial Register: NTR4503; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=4503 (Archived by WebCite at http://www.webcitation.org/6lEi1x40s). ©Stefanie Gomez Quiñonez, Michel Jean Louis Walthouwer, Daniela Nadine Schulz, Hein de Vries. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 09.11.2016.
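For readers who want to check effect sizes of the kind reported above, Cohen's d is the standardized mean difference with a pooled standard deviation. A minimal sketch (the group summaries in the example are invented for illustration and are not taken from the trial):

```python
from math import sqrt

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Invented group summaries (weekly activity scores), for illustration only:
d = cohens_d(mean1=12.0, mean2=9.5, sd1=9.0, sd2=9.5, n1=120, n2=115)
print(round(d, 2))  # a "small" effect by Cohen's convention (0.2 <= d < 0.5)
```

By Cohen's rule of thumb, d around 0.2 is small and d around 0.5 is medium, which is how the reported d=0.21-0.27 values are usually characterised.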
In silico FRET from simulated dye dynamics
NASA Astrophysics Data System (ADS)
Hoefling, Martin; Grubmüller, Helmut
2013-03-01
Single molecule fluorescence resonance energy transfer (smFRET) experiments probe molecular distances on the nanometer scale. In such experiments, distances are recorded from FRET transfer efficiencies via the Förster formula, E = 1/(1 + (R/R0)^6). The energy transfer however also depends on the mutual orientation of the two dyes used as distance reporters. Since this information is typically inaccessible in FRET experiments, one has to rely on approximations, which reduce the accuracy of these distance measurements. A common approximation is an isotropic and uncorrelated dye orientation distribution. To assess the impact of such approximations, we present the algorithms and implementation of a computational toolkit for the simulation of smFRET on the basis of molecular dynamics (MD) trajectory ensembles. In this study, the dye orientation dynamics, which are used to determine dynamic FRET efficiencies, are extracted from MD simulations. In a subsequent step, photons and bursts are generated using a Monte Carlo algorithm. The application of the developed toolkit to a poly-proline system demonstrated good agreement between smFRET simulations and experimental results and thereby confirmed our computational method. Furthermore, it enabled the identification of the structural basis of the measured heterogeneity. The presented computational toolkit is written in Python, available as open source, applicable to arbitrary systems and can easily be extended and adapted to further problems. Catalogue identifier: AENV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv3, the bundled SIMD-friendly Mersenne twister implementation [1] is provided under the SFMT-License. No. of lines in distributed program, including test data, etc.: 317880 No. 
of bytes in distributed program, including test data, etc.: 54774217 Distribution format: tar.gz Programming language: Python, Cython, C (ANSI C99). Computer: Any (see memory requirements). Operating system: Any OS with CPython distribution (e.g. Linux, MacOSX, Windows). Has the code been vectorised or parallelized?: Yes, in Ref. [2], 4 CPU cores were used. RAM: About 700MB per process for the simulation setup in Ref. [2]. Classification: 16.1, 16.7, 23. External routines: Calculation of Rκ2-trajectories from GROMACS [3] MD trajectories requires the GromPy Python module described in Ref. [4] or a GROMACS 4.6 installation. The md2fret program uses a standard Python interpreter (CPython) v2.6+ and < v3.0 as well as the NumPy module. The analysis examples require the Matplotlib Python module. Nature of problem: Simulation and interpretation of single molecule FRET experiments. Solution method: Combination of force-field based molecular dynamics (MD) simulating the dye dynamics and Monte Carlo sampling to obtain photon statistics of FRET kinetics. Additional comments: !!!!! The distribution file for this program is over 50 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: A single run in Ref. [2] takes about 10 min on a Quad Core Intel Xeon CPU W3520 2.67GHz with 6GB physical RAM References: [1] M. Saito, M. Matsumoto, SIMD-oriented fast Mersenne twister: a 128-bit pseudorandom number generator, in: A. Keller, S. Heinrich, H. Niederreiter (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2006, Springer; Berlin, Heidelberg, 2008, pp. 607-622. [2] M. Hoefling, N. Lima, D. Hänni, B. Schuler, C. A. M. Seidel, H. Grubmüller, Structural heterogeneity and quantitative FRET efficiency distributions of polyprolines through a hybrid atomistic simulation and Monte Carlo approach, PLoS ONE 6 (5) (2011) e19791. [3] D. V. D. Spoel, E. Lindahl, B. 
Hess, G. Groenhof, A. E. Mark, H. J. C. Berendsen, GROMACS: fast, flexible, and free, J Comput Chem 26 (16) (2005) 1701-1718. [4] R. Pool, A. Feenstra, M. Hoefling, R. Schulz, J. C. Smith, J. Heringa, Enabling grand-canonical Monte Carlo: Extending the flexibility of GROMACS through the GromPy Python interface module, Journal of Computational Chemistry 33 (12) (2012) 1207-1214.
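The Förster relation that underlies the efficiency-to-distance conversion described above is straightforward to evaluate and invert. A minimal sketch (function names are ours, and the Förster radius used below is an arbitrary illustrative value in nm, not one from the paper):

```python
def fret_efficiency(R, R0):
    """Förster formula: E = 1 / (1 + (R/R0)**6)."""
    return 1.0 / (1.0 + (R / R0) ** 6)

def distance_from_efficiency(E, R0):
    """Invert the Förster formula to recover the dye-dye distance."""
    return R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)

R0 = 5.4  # illustrative Förster radius in nm
print(fret_efficiency(R0, R0))  # -> 0.5, by construction at R = R0
```

The sixth-power dependence is why FRET is most sensitive to distance changes near R0 and why orientation-induced changes in the effective R0 (the κ² issue discussed in the abstract) propagate directly into distance errors.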
Carbon transfer from plant roots to soil - NanoSIMS analyses of undisturbed rhizosphere samples
NASA Astrophysics Data System (ADS)
Vidal, Alix; Hirte, Juliane; Bender, S. Franz; Mayer, Jochen; Gattinger, Andreas; Mueller, Carsten W.
2017-04-01
Soils are composed of a wide diversity of organic and mineral compounds, interacting to form complex mosaics of microenvironments. Roots and microorganisms are both key sources of organic carbon (OC). The volume of soil around living roots, i.e. the rhizosphere, is a privileged area for soil microbial activity and diversity. The microscopic observation of embedded soil sections has been applied since the 1950s and has enabled observation of the rhizosphere at the smallest scale of organism interaction, i.e. at the level of root cells and bacteria (Alexander and Jackson, 1954). However, the observation of microorganisms in their intact environment, especially in soil, remains challenging. Existing microscopic images do not provide clear evidence of the chemical composition of compounds observed in the rhizosphere. Nano-scale secondary ion mass spectrometry (NanoSIMS) is a high spatial resolution method providing elemental and isotopic maps of organic and mineral materials. This technique has been increasingly used in soil science during the last decade (Herrmann et al., 2007; Vogel et al., 2014) and more specifically for undisturbed soil sample observations (Vidal et al., 2016). In the present study, NanoSIMS was used to illustrate the biological, physical and chemical processes occurring in the rhizosphere at the microscale. To meet this objective, undisturbed rhizosphere samples were collected from a field experiment in Switzerland where wheat plants were pulse-labelled with 99% 13C-CO2 at weekly intervals throughout the growing season and sampled at flowering. Samples were embedded, sectioned, polished and analyzed with NanoSIMS, obtaining secondary ion images of 12C, 13C, 12C14N, 16O, 31P16O2, and 32S. The δ13C maps were derived from the 12C and 13C images. 13C-labelled root cells were clearly distinguishable in the images and showed highly variable δ13C values. Labelled spots (< 1 µm), identified as bacteria, were located around the root cells. 
These microorganisms were intimately associated with soil particles, forming microaggregates tightly bound to root cells. Finally, some images revealed the presence of larger labelled spots (> 4 µm) potentially assignable to arbuscular fungal hyphae. These results illustrate the transfer of carbon from the root tissues towards the microbial communities and its direct fate as organo-mineral associated OC on mineral soil particles. Alexander, F., Jackson, R., 1954. Examination of soil micro-organisms in their natural environment. Nature. 174, 750-751. Herrmann, A.M., Ritz, K., Nunan, N., Clode, P.L., Pett-Ridge, J., Kilburn, M.R., Murphy, D.V., O'Donnell, A.G., Stockdale, E.A., 2007. Nano-scale secondary ion mass spectrometry — A new analytical tool in biogeochemistry and soil ecology: A review article. Soil Biology and Biochemistry. 39, 1835-1850. Vidal, A., Remusat, L., Watteau, F., Derenne, S., Quenea K., 2016. Incorporation of 13C labelled shoot residues in Lumbricus terrestris casts: A combination of Transmission Electron Microscopy and Nanoscale Secondary Ion Mass Spectrometry. Soil Biology and Biochemistry. 93, 8-16. Vogel, C., Mueller, C.W., Höschen, C., Buegger, F., Heister, K., Schulz, S., Schloter, M., Kögel-Knabner, I., 2014. Submicron structures provide preferential spots for carbon and nitrogen sequestration in soils. Nature Communications 5.
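δ13C maps of the kind described here are computed pixel-wise (or per region of interest) from the 12C and 13C ion counts. A minimal sketch, assuming background-corrected counts and a commonly quoted VPDB reference ratio (the exact reference value and instrument corrections, e.g. for quasi-simultaneous arrival, are lab-specific):

```python
R_VPDB = 0.011237  # commonly quoted 13C/12C reference ratio (VPDB); verify against your lab's value

def delta13C(counts_13C, counts_12C):
    """Per-mil delta-13C from background-corrected ion counts of one pixel or ROI."""
    r = counts_13C / counts_12C
    return (r / R_VPDB - 1.0) * 1000.0

# Natural-abundance material gives values near 0 permil; 13C-labelled
# root cells or bacteria show strongly positive values.
print(round(delta13C(2 * R_VPDB, 1.0)))  # doubling the isotope ratio gives +1000 permil
```

The same arithmetic applies whether the ratio is measured as 13C/12C (this abstract) or as 13C14N/12C14N (the companion earthworm-cast study), since the reference ratio cancels the matrix term.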
Physically-based failure analysis of shallow layered soil deposits over large areas
NASA Astrophysics Data System (ADS)
Cuomo, Sabatino; Castorino, Giuseppe Claudio; Iervolino, Aniello
2014-05-01
In the last decades, the analysis of slope stability conditions over large areas has become popular among scientists and practitioners (Cascini et al., 2011; Cuomo and Della Sala, 2013). This is due to the availability of new computational tools (Baum et al., 2002; Godt et al., 2008; Baum and Godt, 2012; Salciarini et al., 2012) - implemented in GIS (Geographic Information System) platforms - which allow taking into account the major hydraulic and mechanical issues related to slope failure, even for unsaturated soils, as well as the spatial variability of both topography and soil properties. However, the effectiveness of the above methods (Sorbino et al., 2010) is still controversial for landslide forecasting, depending especially on the accuracy of the DTM (Digital Terrain Model) and on the chance that distinct triggering mechanisms may occur over large areas. Among the major uncertainties, the layering of soil deposits is of primary importance due to soil layer conductivity contrasts and differences in shear strength. This work deals with the hazard analysis of shallow landslides over large areas, considering two distinct schematizations of soil stratigraphy, i.e. homogeneous or layered. To this purpose, the physically-based model TRIGRS (Baum et al., 2002) is first used, then extended to the case of layered deposits: specifically, a unique set of hydraulic properties is assumed while distinct soil unit weights and shear strengths are considered for each soil layer. Both models are applied to a significant study area of Southern Italy, about 4 km2 in extent, where shallow deposits of air-fall volcanic (pyroclastic) soils have been affected by several landslides, causing casualties, damage and economic losses. The achieved results highlight that the soil volume globally mobilized over the study area depends strongly on the local stratigraphy of the shallow deposits. 
This relates to the depth of the critical slip surface, which rarely corresponds to the bedrock contact where cohesionless coarse materials lie on deeper soil layers with small effective cohesion. It is also shown that, due to a more realistic assessment of soil stratigraphy, the success of the model may increase when performing a back-analysis of a recent real event. References Baum, R. L., W. Z. Savage, and J. W. Godt (2002), TRIGRS-A Fortran program for transient rainfall infiltration and grid-based regional slope-stability analysis. U.S. Geological Survey, Open-file report 02-424, 35 p. Baum, R.L., Godt, J.W. (2012) Assessment of shallow landslide potential using 1-D and 3-D slope stability analysis. Landslides and Engineered Slopes: Protecting Society through Improved Understanding - Eberhardt et al. (eds), 2012 Taylor & Francis Group, London, ISBN 978-0-415-62123-6, 1667-1672. Cascini L., Cuomo S., Della Sala M. (2011). Spatial and temporal occurrence of rainfall-induced shallow landslides of flow type: A case of Sarno-Quindici, Italy. Geomorphology, 126(1-2), 148-158. Cuomo S., Della Sala M. (2013). Spatially distributed analysis of shallow landslides and soil erosion induced by rainfall. (submitted to Natural Hazards). Godt, J.W., Baum, R.L., Savage, W.Z., Salciarini, D., Schulz, W.H., Harp, E.L. (2008). Transient deterministic shallow landslide modeling: requirements for susceptibility and hazard assessments in a GIS framework. Engineering Geology 102, 214-226. Salciarini, D., Tamagnini, C., Conversini, P., Rapinesi, S. (2012). Spatially distributed rainfall thresholds for the initiation of shallow landslides. Natural Hazards 61, 229-245. Sorbino G., Sica C., Cascini L. (2010). Susceptibility analysis of shallow landslides source areas using physically based models. Natural Hazards, 53(2), 313-332.
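TRIGRS-type models evaluate an infinite-slope factor of safety with a transient pressure-head term at each grid cell and candidate depth. A minimal sketch of that expression (symbols follow the usual infinite-slope form; parameter values are illustrative defaults, and the layered-deposit extension discussed above is only noted, not implemented):

```python
from math import tan, sin, cos, radians

def factor_of_safety(c, phi_deg, slope_deg, z, psi, gamma_s=19.0, gamma_w=9.81):
    """Infinite-slope factor of safety,
    FS = tan(phi)/tan(beta) + [c - psi*gamma_w*tan(phi)] / (gamma_s*z*sin(beta)*cos(beta)).

    c: effective cohesion (kPa); phi_deg: effective friction angle (deg);
    slope_deg: slope angle (deg); z: depth of candidate slip surface (m);
    psi: pressure head at depth z (m, negative when unsaturated);
    gamma_s, gamma_w: soil and water unit weights (kN/m3).
    """
    phi, beta = radians(phi_deg), radians(slope_deg)
    return tan(phi) / tan(beta) + (c - psi * gamma_w * tan(phi)) / (
        gamma_s * z * sin(beta) * cos(beta))

# Wetting during infiltration (psi rising toward and above zero) lowers FS toward failure:
print(factor_of_safety(5.0, 35.0, 40.0, 2.0, psi=-0.5) >
      factor_of_safety(5.0, 35.0, 40.0, 2.0, psi=0.5))  # -> True
```

For a layered deposit, as in the abstract, the same expression would be evaluated at each depth with the unit weight and strength of the layer containing the candidate slip surface, and the minimum FS over depth taken as the cell's stability index.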
NASA Astrophysics Data System (ADS)
Vidal, Alix; Remusat, Laurent; Watteau, Françoise; Derenne, Sylvie; Quenea, Katell
2016-04-01
Earthworms play a central role in litter decomposition, soil structuration and carbon cycling. They ingest both organic and mineral compounds, which are mixed, complexed with mucus and dejected in the form of casts at the soil surface and along burrows. Bulk isotopic or biochemical techniques have often been used to study the incorporation of litter in soil and casts, but they cannot resolve the complex interactions between soil, plants and microorganisms at the microscale. However, the heterogeneous distribution of organic carbon in soil structures induces contrasting areas of microbial activity. Nano-scale secondary ion mass spectrometry (NanoSIMS), a high spatial resolution method providing elemental and isotopic maps of organic and mineral materials, has recently been applied in soil science (Herrmann et al., 2007; Vogel et al., 2014). The combination of NanoSIMS and Transmission Electron Microscopy (TEM) has proven its potential to investigate the incorporation of labelled residues in earthworm casts (Vidal et al., 2016). In line with this work, we studied the spatial and temporal distribution of plant residues in soil aggregates and earthworm surface casts. This study aimed to (1) identify the decomposition states of labelled plant residues incorporated at different time steps in casts and soil, (2) identify the microorganisms involved in this decomposition, and (3) relate the decomposition states of the organic matter to their 13C signature. A one-year mesocosm experiment was set up to follow the incorporation of 13C-labelled ryegrass (Lolium multiflorum) litter in a soil in the presence of anecic earthworms (Lumbricus terrestris). Soil and surface cast samples were collected after 8 and 54 weeks, embedded in epoxy resin and cut into ultra-thin sections. Soil was fractionated, and all samples were analyzed with TEM and NanoSIMS, obtaining secondary ion images of 12C, 16O, 12C14N, 13C14N and 28Si. 
The δ13C maps were obtained using the 13C14N-/12C14N- ratio. We identified various states of decomposition within the same sample, associated with a high heterogeneity of δ13C values of plant residues. We also recognized various labelled microorganisms, mainly bacteria and fungi, underlining their participation in residue decomposition. δ13C values were higher in casts than in soil aggregates and decreased between 8 and 54 weeks for both sample types. Herrmann, A.M., Ritz, K., Nunan, N., Clode, P.L., Pett-Ridge, J., Kilburn, M.R., Murphy, D.V., O'Donnell, A.G., Stockdale, E.A., 2007. Nano-scale secondary ion mass spectrometry - A new analytical tool in biogeochemistry and soil ecology: A review article. Soil Biology and Biochemistry. 39, 1835-1850. Vidal, A., Remusat, L., Watteau, F., Derenne, S., Quenea K., 2016. Incorporation of 13C labelled shoot residues in Lumbricus terrestris casts: A combination of Transmission Electron Microscopy and Nanoscale Secondary Ion Mass Spectrometry. Soil Biology and Biochemistry. 93, 8-16. Vogel, C., Mueller, C.W., Höschen, C., Buegger, F., Heister, K., Schulz, S., Schloter, M., Kögel-Knabner, I., 2014. Submicron structures provide preferential spots for carbon and nitrogen sequestration in soils. Nature Communications 5.
Evidence for a strong sulfur-aromatic interaction derived from crystallographic data.
Zauhar, R J; Colbert, C L; Morgan, R S; Welsh, W J
2000-03-01
We have uncovered new evidence for a significant interaction between divalent sulfur atoms and aromatic rings. Our study involves a statistical analysis of interatomic distances and other geometric descriptors derived from entries in the Cambridge Crystallographic Database (F. H. Allen and O. Kennard, Chem. Design Auto. News, 1993, Vol. 8, pp. 1 and 31-37). A set of descriptors was defined sufficient in number and type so as to elucidate completely the preferred geometry of interaction between six-membered aromatic carbon rings and divalent sulfurs for all crystal structures of nonmetal-bearing organic compounds present in the database. In order to test statistical significance, analogous probability distributions for the interaction of the moiety X-CH(2)-X with aromatic rings were computed, and taken a priori to correspond to the null hypothesis of no significant interaction. Tests of significance were carried out pairwise between probability distributions of sulfur-aromatic interaction descriptors and their CH(2)-aromatic analogues using the Smirnov-Kolmogorov nonparametric test (W. W. Daniel, Applied Nonparametric Statistics, Houghton-Mifflin: Boston, New York, 1978, pp. 276-286), and in all cases significance at the 99% confidence level or better was observed. Local maxima of the probability distributions were used to define a preferred geometry of interaction between the divalent sulfur moiety and the aromatic ring. Molecular mechanics studies were performed in an effort to better understand the physical basis of the interaction. This study confirms observations based on statistics of interaction of amino acids in protein crystal structures (R. S. Morgan, C. E. Tatsch, R. H. Gushard, J. M. McAdon, and P. K. Warme, International Journal of Peptide Protein Research, 1978, Vol. 11, pp. 209-217; R. S. Morgan and J. M. McAdon, International Journal of Peptide Protein Research, 1980, Vol. 15, pp. 177-180; K. S. C. Reid, P. F. Lindley, and J. M. 
Thornton, FEBS Letters, 1985, Vol. 190, pp. 209-213), as well as studies involving molecular mechanics (G. Nemethy and H. A. Scheraga, Biochemistry and Biophysics Research Communications, 1981, Vol. 98, pp. 482-487) and quantum chemical calculations (B. V. Cheney, M. W. Schulz, and J. Cheney, Biochimica Biophysica Acta, 1989, Vol. 996, pp.116-124; J. Pranata, Bioorganic Chemistry, 1997, Vol. 25, pp. 213-219)-all of which point to the possible importance of the sulfur-aromatic interaction. However, the preferred geometry of the interaction, as determined from our analysis of the small-molecule crystal data, differs significantly from that found by other approaches. Copyright 2000 John Wiley & Sons, Inc.
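The Smirnov-Kolmogorov (better known today as Kolmogorov-Smirnov) two-sample statistic used in the significance tests above is simply the maximum vertical distance between the two empirical cumulative distribution functions. A minimal pure-Python sketch (p-values and critical values are omitted; library routines such as SciPy's also return the p-value):

```python
def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical CDFs of the two samples."""
    n1, n2 = len(sample1), len(sample2)
    s1, s2 = sorted(sample1), sorted(sample2)
    d = 0.0
    for x in sorted(set(s1) | set(s2)):  # check every jump point of either ECDF
        cdf1 = sum(1 for v in s1 if v <= x) / n1
        cdf2 = sum(1 for v in s2 if v <= x) / n2
        d = max(d, abs(cdf1 - cdf2))
    return d

print(ks_statistic([0.1, 0.2, 0.3], [1.1, 1.2, 1.3]))  # -> 1.0 (fully separated samples)
```

In the study's setting, sample1 would be a geometric descriptor measured for sulfur-aromatic contacts and sample2 the same descriptor for the CH(2)-aromatic null-hypothesis contacts.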
Tick-borne encephalitis virus in arthropod vectors in the Far East of Russia.
Pukhovskaya, Natalia M; Morozova, Olga V; Vysochina, Nelya P; Belozerova, Nadejda B; Bakhmetyeva, Svetlana V; Zdanovskaya, Nina I; Seligman, Stephen J; Ivanov, Leonid I
2018-05-01
Isolates of tick-borne encephalitis virus (TBEV) from arthropod vectors (ticks and mosquitoes) in the Amur, the Jewish Autonomous and the Sakhalin regions as well as on the Khabarovsk territory of the Far East of Russia were studied. Different proportions of four main tick species of the family Ixodidae: Ixodes persulcatus P. Schulze, 1930; Haemaphysalis concinna Koch, 1844; Haemaphysalis japonica douglasi Nuttall et Warburton, 1915 and Dermacentor silvarum Olenev, 1932 were found in forests and near settlements. RT-PCR of TBEV RNA in adult ticks collected from vegetation in 1999-2014 revealed average infection rates of 7.9 ± 0.7% in I. persulcatus, of 5.6 ± 1.0% in H. concinna, of 2.0 ± 2.0% in H. japonica, and of 1.3 ± 1.3% in D. silvarum. Viral loads varied in a range from 10^2 to 10^9 TBEV genome-equivalents per tick, with the maximal values in I. persulcatus and H. japonica. Molecular typing using reverse transcription with subsequent real-time PCR with subtype-specific fluorescent probes demonstrated that the Far Eastern (FE) subtype of TBEV predominated both in mono-infections and in mixed infection with the Siberian (Sib) subtype in I. persulcatus pools. TBEV strains of the FE subtype were isolated from I. persulcatus, H. concinna and from a pool of Aedes vexans mosquitoes. Ten TBEV strains isolated from I. persulcatus from the Khabarovsk territory and the Jewish Autonomous region between 1985 and 2013 cluster with the TBEV vaccine strain Sofjin of the FE subtype isolated from human brain in 1937. A TBEV strain from H. concinna collected in the Amur region (GenBank accession number KF880803) is similar to the vaccine strain 205 isolated in 1973 from I. persulcatus collected in the Jewish Autonomous region. The TBEV strain Lazo MP36 of the FE subtype isolated from a pool of A. vexans in the Khabarovsk territory in 2014 (KT001073) differs from strains isolated from 1) I. persulcatus (including the vaccine strain 205) and H. 
concinna; 2) mosquitoes [strain Malishevo (KJ744034) isolated in 1978 from Aedes vexans nipponii in the Khabarovsk territory]; and 3) human brain (including the vaccine strain Sofjin). Accordingly, in the far eastern natural foci, TBEV of the prevailing FE subtype has remained stable since 1937. Both Russian vaccines against TBE based on the FE strains (Sofjin and 205) are similar to the new viral isolates and might protect against infection. Copyright © 2018 Elsevier GmbH. All rights reserved.
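The infection rates above are reported as a percentage with an uncertainty. Assuming these are binomial standard errors of a proportion (an assumption on our part; the abstract does not state the error model or give raw tick counts), they can be reproduced as:

```python
from math import sqrt

def prevalence_with_se(positive, total):
    """Point prevalence and its binomial standard error, both in percent."""
    p = positive / total
    se = sqrt(p * (1.0 - p) / total)
    return 100.0 * p, 100.0 * se

# Invented counts for illustration (the abstract does not report raw counts):
rate, se = prevalence_with_se(79, 1000)
print(f"{rate:.1f} +/- {se:.1f}%")
```

Note that for rare findings (e.g. 1 positive pool), the standard error is of the same order as the estimate itself, which matches rates quoted as 1.3 ± 1.3% and 2.0 ± 2.0%.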
NASA Astrophysics Data System (ADS)
Sobolev, N. V.
2010-12-01
Coesite, a high-pressure polymorph of silica, was first discovered as part of a coesite-eclogite assemblage (coesite, garnet, omphacite) in equilibrium with diamond as a diamond inclusion (DI) in Siberian diamond placers (Sobolev et al., 1976, Dokl. Akad. Nauk SSSR, 230: 1442). In recent years, coesite has become a key mineral coexisting with diamond both in kimberlite (DIs) and in UHP metamorphic rocks of the Kokchetav massif, Kazakhstan (diamondiferous gneisses and calcsilicate rocks). In the UHPM rocks of the Kokchetav massif, coesite was first detected as inclusions in zircon associated with diamonds (Sobolev et al., 1991, Dokl. Akad. Nauk SSSR, 321: 184), as a result of the initial studies that had identified diamonds as inclusions in garnets and zircons (Sobolev, Shatsky, 1990, Nature, 343: 742). Garnet and omphacitic clinopyroxene are the principal primary minerals associated with coesite and diamond in UHP mantle and crustal rocks. Their compositions plot distinctly within the eclogitic compositional field and substantiate the presence of coesite as DIs in eclogitic (E-type) diamonds, as well as sometimes in xenoliths of diamondiferous eclogites (Shatsky et al., 2008, Lithos, 105:289). One of the most significant features of these eclogitic minerals in both UHPM and kimberlitic mantle occurrences is the K2O content of the clinopyroxenes, reaching 1.6 wt.%, with Na2O and MnO in Ca-Mg-Fe garnets reaching 0.3 and 6.0 wt.%, respectively. Stable isotope data for C in diamonds and O in garnet, pyroxene and coesite have established a very wide range for these isotopes, most typical of crustal conditions - i.e., atypical of mantle values. This is clearly shown for coesite DIs (Schulze et al., 2003, Nature, 428:68), garnets from diamondiferous eclogite xenoliths from Siberian kimberlites (Spetsius et al., 2008, Eur. J. 
Min., 20:375), garnets and clinopyroxenes from UHP calcsilicate diamondiferous rocks of the Kokchetav massif (Sobolev et al., in press, Contr. Min. Petr.). This extensive range in δ13C (PDB) for coesite-bearing diamonds, from -28 to +1.5 ‰, along with common crustal δ18O (SMOW) values from the principal rock-forming minerals (garnet and clinopyroxene) and the accessory mineral (coesite), is typical for diamondiferous mantle eclogites, crustal UHPM rocks, and DIs. The petrogenetic evidence from all these rocks and minerals is indicative of major subduction of crustal protoliths (Ringwood, 1972, EPSL, 14:233), including the recycling of crustal carbon into diamonds in mantle eclogites, first speculated on by V.S. Sobolev and N.V. Sobolev (1980, Dokl. Akad. Nauk SSSR, 249: 1217).
EDITORIAL: Cluster issue on microplasmas
NASA Astrophysics Data System (ADS)
Chao, Chih C.; Liao, Jiunn-Der; Chang, Juu-En
2008-10-01
Ever since the first Workshop on Microplasmas, held in Japan in 2003, plasma scientists and engineers worldwide have been meeting approximately every 18 months to exchange and discuss the results of scientific research and technical applications of this unique type of plasma. Microplasmas are generally described as stable plasmas confined to spatial dimensions below about 1 mm that can be operated at pressures up to and exceeding atmospheric pressure. By their nature, this presents a wide range of opportunities and many advantages in practical applications, just a few examples being low energy consumption, small size, flexibility of use and ease of assembly into a user-friendly package. Nevertheless, there still remain several unanswered basic science questions and a largely untapped potential for environmental, biomedical and industrial applications. The fourth International Workshop on Microplasmas, held during 28-31 October 2007 in Tainan, Taiwan, continued the trend of previous Workshops with an orientation towards industrial and environmental applications. Many high-quality papers on microplasmas and microdischarges were presented and selected full papers were submitted to Journal of Physics D: Applied Physics for assessment by the editors and reviewers in accordance with the usual standards of quality and novelty. This Cluster Issue contains twelve accepted papers, covering four categories: fundamentals and basics, and environmental, biomedical and industrial applications. Fundamentals and basics includes coverage of the physics and microstructure of electrode discharge (Yu A Lebedev et al), the characteristics of low current discharge (Z Lj Petrović et al), plasma ignition (R Gesche et al), novel optical diagnostics (Schulz-von der Gathen et al), plasma generation and micronozzle flow (T Takahashi et al) and the relation between RF-power and atomic oxygen density distribution (N Knake et al). 
Environmental applications are represented by vapour-phase discharges in liquid capillaries (P Bruggeman et al) and biomedical applications by antibacterial treatment (K D Weltmann et al). Industrial applications include on-chip microplasma reactors (A Agiral et al), miniaturized atmospheric pressure plasma jets (J Schäfer et al and A V Pipa et al) and microplasma stamps (N Lucas et al). All of these represent important findings and advances in microplasma research and applications. We would like to thank the Publisher of the journal, Sarah Quin, and the editorial staff for their support and management of the publication. It is sincerely hoped that the contents of this Cluster Issue will promote understanding of microplasmas and microdischarges, and inspire further research towards industrial applications.
Astrobiology and other Mars science: how can humans help (and from where)?
NASA Astrophysics Data System (ADS)
Rummel, John; Conley, Catharine
2016-07-01
There are many advocates for the human exploration of Mars who wax poetic when discussing how good it is going to be, but there are only a few who may be willing to write requirements for how much direct human surface exploration on Mars needs to be possible before attempting it is worth the investment, or to compare modes of human exploration to see which one is most cost-efficient for the initial human missions to Mars (assuming that humans working in near-Mars space is a goal in and of itself). For example, the recent MEPAG Scientific Objectives for the Human Exploration of Mars Science Analysis Group (MEPAG HSO-SAG) [1] stated that "A defensible evaluation of surface science operations options and candidate scenarios cannot be done at this time - we recommend deferring this to a future team." Alternatively [e.g., 2], there are considerations of the science that can be done from the martian moon Phobos that do not require surface operations on Mars at all, except by robots controlled through low-latency telepresence. The promise of how to deliver better Mars science for the money (and risk) will be discussed in this paper, and some estimates made on how often a human has to step outside on Mars (and step back in) to accomplish more science than a telepresent rover. We will also look at what the estimates of contamination from on-site human explorers can mean to the search for possible indigenous life on Mars. Some [3] say that Mars is already "contaminated" by Earth organisms brought to Mars from Earth through impact-generated bolide exchanges, but (as noted in [4]) that statement suggests that they do not really hold a solid concept of what contamination is, and what it may mean to both our understanding of the pre-human past on Mars, as well as to the preservation of Mars resources for future human inhabitants. Refs. 1. 
Beaty et al., Candidate scientific objectives for the human exploration of Mars, and implications for the identification of Martian Exploration Zones.
Pore-fluid chemistry along the main axis of an active lobe at the Congo deep-sea fan
NASA Astrophysics Data System (ADS)
Croguennec, C.; Ruffine, L.; Guyader, V.; Le Bruchec, J.; Ruesch, B.; Caprais, J.; Cathalot, C.; de Prunelé, A.; Germain, Y.; Bollinger, C.; Dennielou, B.; Olu, K.; Rabouille, C.
2013-12-01
The distal lobes of the Congo deep-sea fan constitute a unique in situ laboratory to study early diagenesis of marine sediments. They are located at a water depth of about 5000 m and result from the deposition of sediment transported by turbidity currents along the channel-levee systems and submarine canyon connected to the Congo River. Thus, a huge amount of organic matter, transported from the river to the lobes, undergoes decomposition processes involving the different oxidants present within the sedimentary column. This drastically changes the chemistry of the pore fluids, allowing a succession of biogeochemical processes to occur. The present study is part of an ongoing project which aims at better understanding the role and fate of organic matter transported to the lobe systems, as well as its implication in the distribution of the living communities encountered there. Pore fluids were sampled from 8 Calypso cores in order to determine the concentrations of dissolved elements. Five sites have been investigated: four of them are located along the main axis of a currently active lobe, the last one on a lobe disconnected from the channels. Analyses of methane, major (Cl, SO4, Mg, Ca, K, Na) and minor (Sr, Ba, B, Li, Mn) elements have been carried out along with total alkalinity determination. The resulting profiles show a highly heterogeneous pore-fluid chemistry. Sulphate concentration near the seawater/sediment interface varies from 3 to 29 mM, indicating intense sulphate reduction. Surprisingly, the lowest values are found at the site which is disconnected from the active lobe. The manganese cycle is well defined in all cores. The core recovered at the most distal lobe exhibits very peculiar pore-fluid profiles which are likely related to a geological event, most likely sediment slide and remobilization. References: Babonneau, N., Savoye, B., Cremer, M. & Klein, B., 2002. 
Morphology and architecture of the present canyon and channel system of the Zaire deep-sea fan, Mar. Pet. Geol., 19, 445-467. Savoye, B., Babonneau, N., Dennielou, B. & Bez, M., 2009. Geological overview of the Angola-Congo margin, the Congo deep-sea fan and its submarine valleys, Deep-Sea Res. Part II-Top. Stud. Oceanogr., 56, 2169-2182. Vangriesheim, A., Khripounoff, A. & Crassous, P., 2009. Turbidity events observed in situ along the Congo submarine channel, Deep-Sea Res. Part II-Top. Stud. Oceanogr., 56, 2208-2222. Zabel, M. & Schulz, H.D., 2001. Importance of submarine landslides for non-steady state conditions in pore water systems - lower Zaire (Congo) deep-sea fan, Mar. Geol., 176, 87-99.
NASA Astrophysics Data System (ADS)
Ludwig, Wolfgang; Eggl, Siegfried; Neubauer, David; Leitner, Johannes; Firneis, Maria; Hitzenberger, Regina
2014-05-01
Recent fields of interest in exoplanetary research include studies of potentially habitable planets orbiting stars outside of our Solar System. Habitable Zones (HZs) are currently defined by calculating the inner and outer limits of the mean distance between exoplanets and their central stars, based on the effective solar fluxes that allow liquid water to be maintained on the planet's surface. Kasting et al. (1993), Selsis et al. (2007), and recently Kopparapu et al. (2013) provided stellar flux limits for such scenarios. We compute effective solar fluxes for Earth-like planets using Earth-like and other atmospheric scenarios, including atmospheres with high-level and low-level clouds. Furthermore, we provide habitability limits for solvents other than water, i.e. limits for the so-called Life Supporting Zone (LSZ), introduced by Leitner et al. (2010). The LSZ encompasses many habitable zones, each based on a different liquid solvent. Solvents like ammonia and sulfuric acid have been identified, for instance by Leitner et al. (2012), as possibly life supporting. Assuming planets on circular orbits, the extent of the individual HZ is then calculated via the following equation: d(i,o) = [(L/Lsun) / S(i,o)]^0.5 au, where L is the star's luminosity, and d(i,o) and S(i,o) are the distance to the central star and the effective insolation at the inner and outer edges of the HZ, respectively. After generating S(i,o) values for a selection of solvents, we provide the means to determine LSZ boundaries for main sequence stars. Effective flux calculations are done using a one-dimensional radiative-convective model (Neubauer et al. 2011) based on a modified version of the open-source radiative transfer software Streamer (Key and Schweiger, 1998). Modifications include convective adjustments, additional absorbing gases and the use of an offline cloud model, which allows us to study the influence of clouds on effective stellar fluxes. 
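The boundary relation quoted above can be applied directly. A minimal sketch follows; the effective-flux limits used are illustrative assumptions of the kind tabulated by Kopparapu et al. (2013), not values computed in this work:

```python
import math

# d(i,o) = [(L/Lsun) / S(i,o)]^0.5 au: the orbital distance at which a
# planet receives S(i,o) times the flux Earth receives from the Sun.
def hz_distance_au(luminosity_ratio, s_eff):
    """Distance (au) where the stellar flux equals s_eff solar constants."""
    return math.sqrt(luminosity_ratio / s_eff)

# Illustrative effective-flux limits for a water HZ around a Sun-like star
# (assumed values; solvent-specific S(i,o) limits would replace these
# when mapping the Life Supporting Zone).
s_inner, s_outer = 1.05, 0.34
d_inner = hz_distance_au(1.0, s_inner)   # roughly 0.98 au
d_outer = hz_distance_au(1.0, s_outer)   # roughly 1.7 au
```

For another star, only the luminosity ratio changes: a star with L = 0.5 Lsun pulls both edges inward by a factor of sqrt(0.5).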
Kasting, J.F., Whitmire, D.P., & Reynolds, R.T. 1993, Icar, 101, 108 Key JR, Schweiger AJ (1998) Geosci 24:443-451. Kopparapu, R.J., et al. 2013 ApJ 765, 131 Leitner, J. J., Schwarz, R., Firneis, M. G., Hitzenberger, R., and Neubauer, D., Astrobiology Science Conference 2010, 26-29 April 2010, League City, USA, 2010 Leitner, J.J., Schulze-Makuch, D., Firneis, M.G., Hitzenberger, R., Neubauer, D., 2012 Paleontology Journal 46 (9), 1091 Neubauer, D., Vrtala, A., Leitner, J.J., Firneis, M.G., Hitzenberger, R., 2011 Origins of Life and Evolution of Biospheres, 41, 545-552 Selsis, F., Kasting, J.F., Levrard, B., et al. 2007b, A&A, 476, 137
NASA Astrophysics Data System (ADS)
Landi, Tony C.; Bonafe, Giovanni; Stortini, Michele; Minguzzi, Enrico; Cristofanelli, Paolo; Marinoni, Angela; Giulianelli, Lara; Sandrini, Silvia; Gilardoni, Stefania; Rinaldi, Matteo; Ricciardelli, Isabella
2014-05-01
Within the EU project PEGASOS, one of three field campaigns took place in the Po Valley during the summer of 2012. Photochemistry, particle formation, and particle properties related to the diurnal evolution of the PBL were investigated through both in-situ and airborne measurements on board a Zeppelin NT airship. In addition, 3-D air quality modeling systems were implemented over the Po Valley for the summer of 2012 to better characterize the atmospheric conditions in terms of meteorological parameters and chemical composition. In this work, we present a comparison of atmospheric composition simulations carried out by the modeling system NINFA/AODEM with measurements performed during the PEGASOS field campaign for the period 13 June - 12 July 2012. NINFA (Stortini et al., 2007) is based on the chemical transport model CHIMERE (Bessagnet et al., 2008), driven by COSMO-I7, the Italian meteorological limited-area model (Steppeler et al., 2003). Boundary conditions are provided by Prev'air data (www.prevair.org), and emission inputs are based on regional, national and European inventories. In addition, a post-processing tool for the calculation of aerosol optical properties, called AODEM (Landi, 2013), was implemented. Predictions of aerosol optical depth and aerosol extinction coefficient were thus also compared with vertically resolved observations. For this experiment, NINFA/AODEM has also been evaluated using measurements of size-segregated aerosol samples, particle number concentrations and aerosol optical properties collected on an hourly basis at three sampling sites representative of urban background (Bologna), rural background (San Pietro Capofiume) and a remote high-altitude station (Monte Cimone, 2165 m a.s.l.). In addition, we focused on new particle formation events and long-range transport from Northern Africa observed during the field campaign. 
References Bessagnet, Bertrand, Laurent Menut, Gabriele Curci, Alma Hodzic, Bruno Guillaume, Catherine Liousse, Sophie Moukhtar, Betty Pun, Christian Seigneur, and Michaël Schulz (2008). "Regional modeling of carbonaceous aerosols over europe-focus on secondary organic aerosols." Journal of Atmospheric Chemistry 61, no. 3 : 175-202. Landi Tony Christian (2013). AODEM: A post-processing tool for aerosol optical properties calculation in the Chemical Transport Models. Book published by LAP - Lambert Academic Publishing ISBN: 978-3-659-31802-3. Steppeler, J., G. Doms, U. Schättler, H. W. Bitzer, A. Gassmann, U. Damrath, and G. Gregoric (2003). "Meso-gamma scale forecasts using the nonhydrostatic model LM." Meteorology and Atmospheric Physics 82, no. 1-4 : 75-96. M. Stortini, M. Deserti, G. Bonafè, and E. Minguzzi. Long-term simulation and validation of ozone and aerosol in the Po Valley. In C.Borrego and E.Renner, editors, Developments in Environmental Sciences, volume 6, pages 768-770. Elsevier, 2007.
BIOMEX on EXPOSE-R2: First results on the preservation of Raman biosignatures after space exposure
NASA Astrophysics Data System (ADS)
Baqué, Mickael; Böttger, Ute; Leya, Thomas; de Vera, Jean-Pierre Paul
2017-04-01
After a 15-month exposure on board the EXPOSE-R2 space platform, mounted on the outside of the International Space Station, four astrobiology experiments successfully came back to Earth in March and June 2016. Among them, the BIOMEX (BIOlogy and Mars EXperiment) experiment aims at investigating the endurance of extremophiles and the stability of biomolecules under space and Mars-like conditions in the presence of Martian mineral analogues (de Vera et al., 2012). The preservation and evolution of Raman biosignatures under such conditions is of particular interest for guiding future search-for-life missions to Mars (and other planetary objects) carrying Raman spectrometers (such as the Raman Laser Spectrometer instrument on board the future ExoMars rover). The photoprotective carotenoid pigments (present in photosynthetic organisms such as plants, algae and cyanobacteria, as well as in some bacteria and archaea) have been classified as high-priority targets for biomolecule detection on Mars and were therefore used as biosignature models, owing to their stability and easy identification by Raman spectroscopy (Böttger et al., 2012). We report here the first results from the analysis of two carotenoid-containing organisms: the cyanobacterium Nostoc sp. (strain CCCryo 231-06; = UTEX EE21 and CCMEE 391) isolated from Antarctica and the green alga cf. Sphaerocystis sp. (strain CCCryo 101-99) isolated from Spitsbergen. Desiccated cells of these organisms were exposed to space and simulated Mars-like conditions in the presence of two Martian mineral analogues (phyllosilicatic and sulfatic Mars regolith simulants) and a Lunar regolith analogue, and were analyzed with a 532 nm Raman microscope at 1 mW laser power. Carotenoids in both organisms were surprisingly still detectable at relatively high levels after 15 months in Low Earth Orbit under UV radiation, cosmic rays, vacuum (or a Mars-like atmosphere) and temperature stresses, regardless of the mineral matrix used. 
Further analyses will help us correlate these results with the survival potential, cellular damage and stability of the different extremophiles tested in the BIOMEX experiment. Böttger, U., de Vera, J.-P., Fritz, J., Weber, I., Hübers, H.-W., and Schulze-Makuch, D. (2012). Optimizing the detection of carotene in cyanobacteria in a martian regolith analogue with a Raman spectrometer for the ExoMars mission. Planetary and Space Science 60, 356-362. de Vera, J.-P., Boettger, U., Noetzel, R. de la T., Sánchez, F.J., Grunow, D., Schmitz, N., Lange, C., Hübers, H.-W., Billi, D., Baqué, M., et al. (2012). Supporting Mars exploration: BIOMEX in Low Earth Orbit and further astrobiological studies on the Moon using Raman and PanCam technology. Planetary and Space Science 74, 103-110.
NASA Astrophysics Data System (ADS)
Allu C, Narayana; Pawan K, Gautam; Shraddha, Band; Madhusudan G, Yadava; Rengaswamy, Ramesh; Shen, Chuan-Chou
2016-04-01
High-resolution δ18O and δ13C data from absolutely dated stalagmites have been useful for reconstructing Asian monsoon variability (e.g., Yadava et al., 2004; Laskar et al., 2013; Allu et al., 2014; Lone et al., 2014; Sinha et al., 2015). However, many studies lack high spatial and temporal resolution, leaving significant gaps which need to be filled for a vivid understanding of monsoon variability. We report here the first high-resolution stalagmite δ18O record of the last deglacial obtained from the Kailash cave, located in the core monsoon region. The stalagmite was 480 mm long, with an average diameter of 120 mm. The sample was cut for continuous micro-milling at 400 μm intervals along the growth axis (using a New Wave Research micro-mill-101288) for the analyses of stable oxygen and carbon isotopes on a Delta V Plus IRMS at the Physical Research Laboratory, Ahmedabad. The sample section shows very fine, straight and clear laminations from the top down to 310 mm, below which the laminae are thicker. U-Th dates obtained on a Thermo Fisher NEPTUNE multi-collector inductively coupled plasma mass spectrometer (MC-ICP-MS) at the High-Precision Mass Spectrometry and Environment Change Laboratory (HISPEC), National Taiwan University, Taiwan (Shen et al., 2012) show that the record spans ~2,400 years, from ~14.6 ka to ~12.2 ka. A linear age-depth model constructed from these dates yields temporal resolutions varying from ~6 months to ~8 years. Hendy tests on 8 distinct layers show poor correlation between δ18O and δ13C, suggesting isotopic equilibrium conditions at the time of crystallization. The δ18O and δ13C results appear cyclic in nature, varying from +0.37‰ to -6.07‰ and from -1.59‰ to -10.59‰, respectively. Enriched δ18O in the top portion reflects a weak monsoon during the onset of the Younger Dryas. 
Later, the δ18O signals corresponding to the Bølling-Allerød interstadial also appear cyclic in nature. We performed time-series analyses on the δ18O record to investigate its periodicities and to understand the influence of solar and non-solar frequencies during the last deglacial. REDFIT (Schulz & Mudelsee, 2002) with Monte Carlo simulation was used to estimate the red-noise background. Spectral analysis of the δ18O time series shows its statistically most significant periodicity (>95% confidence) centered at 592 years. Other significant periodicities are found at 42, 37, 19, 18, 16, and 14.5 years.
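The REDFIT procedure cited above tests spectral peaks against an AR(1) red-noise null model via Monte Carlo simulation. A minimal sketch of that idea on a synthetic, evenly spaced series (REDFIT itself works directly on unevenly spaced records; the cycle length, AR(1) coefficient and noise level below are assumptions for the demo, not values from this record):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic proxy series: one embedded cycle plus AR(1) red noise.
dt = 2.0                       # sampling interval, years (assumed)
n = 512
t = np.arange(n) * dt
period = 64.0                  # embedded periodicity, years (assumed)
signal = np.sin(2.0 * np.pi * t / period)

def ar1(n, rho, rng):
    """AR(1) red-noise series, the null model used by REDFIT."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.standard_normal()
    return x

series = signal + 0.5 * ar1(n, 0.7, rng)

def spectrum(x):
    """Raw periodogram (zero frequency dropped)."""
    x = x - x.mean()
    return np.abs(np.fft.rfft(x))[1:] ** 2

freqs = np.fft.rfftfreq(n, d=dt)[1:]
obs = spectrum(series)

# Monte Carlo: the 95th percentile of many AR(1) surrogate spectra gives
# the frequency-dependent red-noise confidence threshold.
sims = np.array([spectrum(0.5 * ar1(n, 0.7, rng)) for _ in range(200)])
thresh95 = np.percentile(sims, 95, axis=0)

significant_freqs = freqs[obs > thresh95]
peak_period = 1.0 / freqs[np.argmax(obs)]   # recovers the 64-year cycle
```

Peaks exceeding the surrogate percentile at their frequency are the analogue of the >95% periodicities reported above.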
Preface: phys. stat. sol. (a) 203/12
NASA Astrophysics Data System (ADS)
Jackman, Richard B.; Nesládek, Milo; Haenen, Ken
2006-09-01
The 30 papers gathered in this issue of physica status solidi (a) give a thorough overview of the different topics that were presented during the 11th edition of the International Workshop on Surface and Bulk Defects in CVD Diamond Films (SBDD), which took place from 22 to 24 February 2006 at the Hasselt University in Diepenbeek-Hasselt, Belgium. Since its start more than 10 years ago, the SBDD Workshop has grown into a well-established, yearly early-bird meeting place addressing the new emerging science related to progress in the CVD diamond field. The 10 invited lectures, 29 contributed oral presentations and 26 posters were presented in several sessions during an intense two-and-a-half-day meeting. The number of participants reached 115 this year, coming from sixteen countries: Austria, Belgium, Czech Republic, France, Germany, Israel, Italy, Japan, Mexico, Poland, Russia, Singapore, Slovak Republic, Sweden, UK, and USA. The mixture of young and established scientists, including a great proportion of students, made this meeting a hot spot of lively discussions on a wide range of scientific subjects, not only during the meeting itself but also on several occasions throughout the many social events offered by the hospitality of the city of Hasselt. It goes without saying that the workshop would not have been possible without the support of many people and institutions. For financial aid we are especially indebted to the Scientific Research Community Surface Modification of Materials of the F.W.O.-Vlaanderen (Belgium), whose incessant support plays an important role in keeping this meeting going. We also thank the Hasselt University for offering the lecture hall and infrastructure facilities, and Seki Technotron Corp. for sponsoring the poster reception and their presence with a table-top exhibit. 
Finally, we highly appreciate the active involvement of the editorial staff of physica status solidi in this conference and would like to thank most notably Stefan Hildebrandt, Ron Schulz-Rheinländer, Christoph Lellig, and Julia Hübner for their excellent and patient work, bringing the number of successfully published SBDD proceedings in pss (a) up to 8 already! To finish, we would like to invite you all to the 12th edition of the SBDD series, newly renamed the Hasselt Diamond Workshop, to be held at its established location of Diepenbeek-Hasselt. We look forward to meeting you again at SBDD XII in 2007: Hasselt Diamond Workshop - SBDD XII
Measuring and modeling of a three-dimensional tracer transport in a planted soil column
NASA Astrophysics Data System (ADS)
Schroeder, N.; Javaux, M.; Haber-Pohlmeier, S.; Pohlmeier, A. J.; Huber, K.; Vereecken, H.; Vanderborght, J.
2013-12-01
Water flow from soil to root is driven by plant transpiration and is an important component of the hydrological cycle. The model R-SWMS combines three-dimensional (3D) water flow and solute transport in soil with a detailed three-dimensional description of root structure [1,2]. This model calculates root water and solute uptake and flow within the roots, which enables explicit studies of the distribution of water and solutes around the roots as well as of local processes at the root-soil interface. In this study, we compared measured data from a tracer experiment using Magnetic Resonance Imaging (MRI) with simulations in order to assess the distribution and magnitude of the water uptake of a young lupine plant. An aqueous solution of a gadolinium complex (Gd-DTPA2-) was chosen as a tracer, as it behaves conservatively and is ideally suited for MRI. Water flow in the soil towards the roots can thus be visualized by following the change in tracer concentration over time. The data were obtained by MRI, providing high-resolution 3D images of the tracer distribution and root architecture using a spin-echo pulse sequence that is strongly T1-weighted to be tracer sensitive [3], and T2-weighted for root imaging [4]. This experimental setup was simulated using the 3D high-resolution numerical model R-SWMS. The comparison between MRI data and the simulations showed strong effects of root architecture parameters on solute spreading. The results of our study demonstrate the strength of combining non-invasive measurements with 3D modeling of solute and water flow in soil-root systems, from which plant hydraulic parameters such as axial and radial root conductivities can be derived; however, current limitations were found with respect to the MRI measurements and the process description. [1] Javaux, M., T. Schröder, J. Vanderborght, and H. 
Vereecken (2008), Use of a Three-Dimensional Detailed Modeling Approach for Predicting Root Water Uptake, Vadose Zone Journal, 7(3), 1079-1088. [2] Schröder, N., M. Javaux, J. Vanderborght, B. Steffen, and H. Vereecken (2012), Effect of Root Water and Solute Uptake on Apparent Soil Dispersivity: A Simulation Study, Vadose Zone Journal, 11(3). [3] Haber-Pohlmeier, S., Bechtold, M., Stapf, S., and Pohlmeier, A. (2010), Water Flow Monitored by Tracer Transport in Natural Porous Media Using Magnetic Resonance Imaging, Vadose Zone Journal, 9, 835-845. [4] Stingaciu, L. R., Schulz, H., Pohlmeier, A., Behnke, S., Zilken, H., Vereecken, H., and Javaux, M. (2013), In Situ Root System Architecture Extraction from Magnetic Resonance Imaging for Application to Water Uptake Modeling, Vadose Zone Journal.
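The per-segment radial uptake that coupled soil-root models of this kind resolve follows a Doussan-type relation: flow into a root segment is proportional to the radial conductivity, the segment surface area, and the soil-xylem water potential difference. A minimal sketch (all parameter values are assumptions for illustration, not values from this study):

```python
# Doussan-type radial uptake into a single root segment.
def radial_uptake(kr, surface, psi_soil, psi_xylem):
    """Radial flow into one root segment (cm^3/day)."""
    return kr * surface * (psi_soil - psi_xylem)

kr = 1e-4   # radial conductivity, 1/day (assumed)
segments = [
    # (surface area cm^2, soil potential cm, xylem potential cm)
    (0.5, -300.0, -5000.0),    # wet soil: largest uptake
    (0.5, -800.0, -5000.0),
    (0.5, -4000.0, -5000.0),   # dry soil: much smaller uptake
]
flows = [radial_uptake(kr, s, ps, px) for s, ps, px in segments]
total_uptake = sum(flows)      # balances transpiration in steady state
```

In a full model the xylem potentials are themselves unknowns, solved from the axial flow network; the sketch only shows why uptake shifts toward wetter soil regions.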
NASA Astrophysics Data System (ADS)
Turse, Carol; Khan, A.; Leitner, J. J.; Firneis, M. G.; Schulze-Makuch, D.
2012-05-01
We performed Miller-Urey type experiments to investigate the synthesis of amino acids under conditions that have likely occurred on Saturn's moon Titan and are also relevant to Jupiter's moon Europa. We conducted the first set of experiments under early-Earth conditions, similar to the original Miller-Urey experiments (Miller, 1953). In brief, a 250 mL round-bottom flask was filled with approximately 200 mL of filtered sterile water and the apparatus was placed under vacuum for 10 minutes to purge the water of gases. The system was then flushed with hydrogen gas and placed under vacuum three times. Gases were then added in the following order: hydrogen to 0.1 bar, methane to 0.45 bar and ammonia to 0.45 bar (1 bar total). The water was then brought to a boil and a spark was applied using a Tesla coil at up to a maximum of 50,000 volts. The apparatus was run for approximately 5-7 days. Between runs the apparatus was cleaned using a hot 10% sodium hydroxide solution, followed by a dilute sulfuric acid wash and four rinses with Millipore water. In the second set of experiments we simulated conditions that could have existed on an early, warm Titan or after an asteroid strike on Titan (Schulze-Makuch and Grinspoon, 2005), particularly if the strike occurred in the subpolar areas that exhibit vast ethane-methane lakes. An asteroid or comet of sufficient size would also puncture the icy crust and access a vast reservoir of the subsurface liquid ammonia-water mixture. Thompson and Sagan (1992) showed that a liquid water-ammonia body could exist for millions of years on Titan after an asteroid impact. We therefore modified the experimental conditions as described above and report on the results. 
(1) Assuming a moderate impact in the subpolar areas of Titan, we used Titan's current 1.5 bar atmosphere, but increased the partial pressure of methane to 1 bar (with 0.1 bar ammonia, assuming a minor amount of ammonia-water ice is evaporated during the impact). (2) Assuming a major impact that would puncture the icy crust and evaporate a significant portion of ammonia, we increased the ammonia partial pressure to 0.5 bar (keeping methane constant at 1 bar) and used a 30% ammonia-water mixture as the liquid reservoir in the experiment. Titan's atmosphere also contains various higher organic trace constituents, commonly referred to as tholins, which include ethylene, ethane, acetylene, hydrogen cyanide and various aromatic compounds. A selection of these compounds was added in trace amounts to the experimental runs.
Fault analysis as part of urban geothermal exploration in the German Molasse Basin around Munich
NASA Astrophysics Data System (ADS)
Ziesch, Jennifer; Tanner, David C.; Hanstein, Sabine; Buness, Hermann; Krawczyk, Charlotte M.; Thomas, Rüdiger
2017-04-01
Faults play an essential role in geothermal exploration. The prediction of potential fluid pathways beneath urban Munich has begun with the interpretation of a 3-D seismic survey (170 km²) that was acquired during the winter of 2015/2016 in Munich (Germany), within the Bavarian Molasse Basin. As part of the research project GeoParaMoL*, we focus on structural interpretation and retro-deformation analysis to detect sub-seismic structures within the reservoir and overburden. We explore the hydrothermal Malm carbonate reservoir (at a depth of 3 km) as a source of deep geothermal energy, together with its overburden of Tertiary Molasse sediments. The stratigraphic horizons Top Aquitanian, Top Chattian, Top Bausteinschichten, Top Lithothamnium limestone (Top Eocene), and Top and Base Malm (Upper Jurassic), together with a detailed interpretation of the faults in the study area, are used to construct a 3-D geological model. The study area is characterised by synthetic normal faults that strike parallel to the Alpine front. Most major faults were active from the Upper Jurassic to the Miocene. The Munich Fault, which belongs to the Markt-Schwabener Lineament, has a maximum vertical offset of 350 metres in its central part and, contrary to previous interpretations based on 2-D seismic data, dies out in the eastern part of the area. The south-eastern part of the study area is dominated by a very complex fault system. Three faults that were previously detected in a smaller 3-D seismic survey at Unterhaching, to the south of the study area, with strike directions of 25°, 45° and 70° (Lüschen et al. 2014), were followed into the new 3-D seismic interpretation. Particularly noticeable are relay ramps and horst/graben structures. The fault with a strike of 25° ends in three large sinkholes with a maximum vertical offset of 60 metres. We interpret this structure as a horsetail splay at the fault tip, which caused a large amount of sub-seismic deformation. 
Consequently, this area could be characterised by increased fluid flow. This detailed understanding of the structural development and regional tectonics of the study area will guide the subsequent determination of potential fluid pathways in the new 3-D subsurface model of urban Munich. This project is funded by the Federal Ministry for Economic Affairs and Energy (BMWi). Lüschen, E., Wolfgramm, M., Fritzer, T., Dussel, M., Thomas, R. & Schulz, R. (2014): 3D seismic survey explores geothermal targets for reservoir characterization at Unterhaching, Munich, Germany, Geothermics, 50, 167-179. * https://www.liag-hannover.de/en/fsp/ge/geoparamol.html
NASA Astrophysics Data System (ADS)
de Luna, Elena; Guzmán, Gema; Gómez, José A.
2014-05-01
The optimization of water use in a semi-arid climate relies on the optimal use of rainwater through management practices that prevent and/or control runoff. This is a key point for increasing the economic and environmental sustainability of agriculture, since it minimizes the diffuse pollution associated with runoff and with sediment and chemical transport. One strategy is the establishment of vegetative filter strips that prevent pesticides (Stehle et al. 2011), herbicides (Vianello et al. 2005), fertilizers (Withers et al. 2009) and runoff sediment (Campo-Bescós et al. 2013) from entering streams or surface water reservoirs. To evaluate the short-term risks associated with the use of herbicides, a trial was designed in two olive groves located in Benacazón (Sevilla) and Cabra (Córdoba), both with an average slope of 11%. Two management systems were evaluated: bare soil, and bare soil with vegetative filter strips. Pre-emergence herbicides were applied and analysed by GC-MS at the beginning of the trial and after each rainfall event, both in soil and in sediment. Runoff and soil losses were measured as well. The results of this study show that soil management practices such as vegetative filter strips reduce soil losses and runoff. This translates into improved soil quality and reduced water pollution from herbicide use. This information will improve the understanding of insufficiently known aspects and will help build the knowledge needed for a better implementation of sustainable management practices at farm scale and over longer time scales. References: Campo-Bescós, M. A., Muñoz-Carpena, R., & Kiker, G. (2013) Influencia del suelo en la eficiencia de la implantación de filtros verdes en un distrito de riego por superficie en medio árido. En Estudios de la Zona no Saturada del Suelo, Vol. XI: 183-187. 
Stehle, S., Elsaesser, D., Gregoire, C., Imfeld, G., Niehaus, E., Passeport, E., Payraudeau, S., Schäfer, R., Tournebize, J., & Schulz, R. (2011). Pesticide risk mitigation by vegetated treatment systems: a meta-analysis. Journal of Environmental Quality, 40: 1068-1080. Vianello, M., Vischetti, C., Scarponi, L., & Zanin, G. (2005). Herbicide losses in runoff events from a field with a low slope: role of a vegetative filter strip. Chemosphere, 61: 717-725. Withers, P. J. A., Hartikainen, H., Barberis, E., Flynn, N. J., & Warren, G. P. (2009). The effect of soil phosphorus on particulate phosphorus in land runoff. European Journal of Soil Science 60: 994-1004.
Internal Structure of Taiwan Chelungpu Fault Zone Gouges
NASA Astrophysics Data System (ADS)
Song, Y.; Song, S.; Tang, M.; Chen, F.; Chen, Y.
2005-12-01
Gouge formation is found in brittle faults at all scales (1). This fine-grained gouge is thought to control earthquake instability, and investigating gouge textures and compositions is therefore very important for understanding the earthquake process. Employing transmission electron microscopy (TEM) and a new transmission X-ray microscope (TXM), we study the internal structure of fault zone gouges from the cores of the Taiwan Chelungpu-fault Drilling Project (TCDP), which drilled through the fault zone of the 1999 Chi-Chi earthquake. The X-ray microscope is installed at beamline BL01B of the Taiwan Light Source, National Synchrotron Radiation Research Center (NSRRC). It provides 2D imaging and 3D tomography at energies of 8-11 keV with a spatial resolution of 25-60 nm, and is equipped with Zernike phase contrast for imaging light materials. In this work, we present measurements of gouge texture, particle size distribution and the 3D structure of the ultracataclasite in fault gouges within 12 cm around 1111.29 m depth. Changes in these characteristics across the transition from the fault core to the damage zone are related to comminution and the fracture energy of earthquake faulting. The TXM data show that the particle sizes of the ultracataclasite are between 150 nm and 900 nm in diameter. We will continue analyzing the particle size distribution, porosity and 3D structure of the fault zone gouges across the transition from the fault core to the damage zone in order to constrain the comminution and fracture surface energy of earthquake faulting (2-5). The results bear on the nucleation, growth, transition, structure and permeability of fault zones (6-8). Furthermore, it may be possible to infer the mechanism of faulting, the physical and chemical properties of the fault, and the nucleation of the earthquake. References 1) B. Wilson, T. Dewers, Z. Reches and J. Brune, Nature, 434 (2005) 749. 2) S. E. Schulz and J. P. 
Evans, Tectonophysics 295 (1998) 223. 3) A. M. Boullier, K. Fujimoto, T. Ohtani, G. Roman-Ross, E. Lewin, H. Ito, P. Pezard and B. Ildefonse, Tectonophysics 378 (2004) 165. 4) Z. K. Shipton and P. A. Cowie, J. Structural Geology 25 (2003) 333. 5) J. S. Chester, F. M. Chester and A. K. Kronenberg, Nature 437 (2005) 133. 6) A. Billi, F. Salvini and F. Storti, J. Structural Geology 25 (2003) 1779. 7) J. S. Caine, J. P. Evans and C. B. Forster, Geology 24 (11) (1996) 1025. 8) N. Nakamura, T. Hirose and G. J. Borradaile, Earth and Planetary Science Letters 201 (2002) 13.
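Gouge particle-size distributions of the kind measured above are commonly summarized by a fractal dimension D from the cumulative distribution N(>d) ∝ d^-D, which connects grain-size statistics to comminution energy. A hedged sketch on a synthetic sample spanning the reported 150-900 nm range (the value of D and the sample itself are assumptions for the demo, not TCDP data):

```python
import numpy as np

rng = np.random.default_rng(1)
d_min, d_max, D = 150.0, 900.0, 2.6   # nm; D is an assumed fractal dimension

# Draw diameters from a truncated power law with N(>d) ~ d^-D
# (inverse-CDF sampling).
u = rng.random(5000)
diam = (d_min**-D + u * (d_max**-D - d_min**-D)) ** (-1.0 / D)

# Estimate D from the slope of log N(>d) versus log d.
d_sorted = np.sort(diam)
n_greater = np.arange(len(d_sorted), 0, -1)   # N(>=d) for each sorted d
# Restrict the fit to smaller sizes, where upper-truncation bias is weak.
mask = d_sorted < 400.0
slope, _ = np.polyfit(np.log(d_sorted[mask]), np.log(n_greater[mask]), 1)
D_est = -slope   # close to the assumed D (biased slightly high)
```

Applied to measured TXM diameters instead of the synthetic sample, the same fit would give the D-value used to compare fault core and damage zone.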
Progress in Global Multicompartmental Modelling of DDT
NASA Astrophysics Data System (ADS)
Stemmler, I.; Lammel, G.
2009-04-01
Dichlorodiphenyltrichloroethane, DDT, and its major metabolite dichlorodiphenyldichloroethylene, DDE, are long-lived (persistent) in the environment and have been circulating since the 1950s. They accumulate along food chains, cause detrimental effects in marine and terrestrial wildlife, and pose a hazard to human health. DDT was widely used as an insecticide in the past and is still in use in a number of tropical countries to combat vector-borne diseases like malaria and typhus. It is a multicompartmental substance with only a small mass fraction residing in air. A global multicompartment chemistry transport model (MPI-MCTM; Semeena et al., 2006) is used to study the environmental distribution and fate of DDT. For the first time, a horizontally and vertically resolved global model was used to perform a long-term simulation of DDT and DDE. The model is based on general circulation models for the ocean (MPIOM; Marsland et al., 2003) and atmosphere (ECHAM5). In addition, an oceanic biogeochemistry model (HAMOCC5.1; Maier-Reimer et al., 2005) and a microphysical aerosol model (HAM; Stier et al., 2005) are included. Multicompartmental substances cycle through the atmosphere (3 phases), ocean (3 phases), topsoil (3 phases), and vegetation surfaces. The model was run for 40 years, forced with historical agricultural application data for 1950-1990. The model results show that global environmental contamination started to decrease in air, soil and vegetation after applications peaked in 1965-70. In some regions, however, the DDT burden had not yet reached a maximum in 1990 and was still accumulating at the end of the simulation. Modelled DDT and DDE concentrations in the atmosphere, ocean and soil are evaluated by comparison with observational data. The evaluation indicates that the degradation of DDE in air was underestimated. 
For DDT as well, the discrepancies between model results and observations are related to uncertainties in input parameters. Furthermore, better resolution of some processes could improve model performance. References: Marsland S.J., Haak H., Jungclaus J.H., Latif M., Röske F. (2003): The Max-Planck-Institute global ocean/sea ice model with orthogonal curvilinear coordinates. Ocean Modelling 5, 91-127. Maier-Reimer E., Kriest I., Segschneider J., Wetzel P. (2005): The HAMburg Ocean Carbon Cycle Model HAMOCC 5.1 - Technical Description Release 1.1. Reports on Earth System Science 14. Stier P., Feichter J., Kinne S., Kloster S., Vignati E., Wilson J., Ganzeveld L., Tegen I., Werner M., Balkanski Y., Schulz M., Boucher O., Minikin A., Petzold A. (2005): The aerosol-climate model ECHAM5-HAM. Atmos. Chem. Phys. 5, 1125-1156. Semeena V.S., Feichter J., Lammel G. (2006): Impact of the regional climate and substance properties on the fate and atmospheric long-range transport of persistent organic pollutants - examples of DDT and γ-HCH. Atmos. Chem. Phys. 6, 1231-1248.
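The multicompartmental cycling described above can be caricatured with a two-box mass balance (air and surface ocean). All rate constants, the box structure, and the emission history below are illustrative assumptions for the sketch, not MPI-MCTM values:

```python
# Illustrative two-box sketch (air <-> surface ocean) of multicompartmental
# cycling.  All rate constants and the emission history are assumptions for
# demonstration only, not values from the MPI-MCTM.
K_DEG_AIR = 0.05     # first-order degradation in air [1/day]
K_DEG_OCEAN = 0.001  # first-order degradation in ocean [1/day]
K_DEPOSIT = 0.02     # net air -> ocean deposition [1/day]
K_VOLAT = 0.002      # ocean -> air re-volatilisation [1/day]

def step(m_air, m_ocean, emission, dt=1.0):
    """Advance both box masses by one explicit Euler step of dt days."""
    dm_air = (emission - (K_DEG_AIR + K_DEPOSIT) * m_air
              + K_VOLAT * m_ocean) * dt
    dm_ocean = (K_DEPOSIT * m_air - (K_DEG_OCEAN + K_VOLAT) * m_ocean) * dt
    return m_air + dm_air, m_ocean + dm_ocean

m_air = m_ocean = 0.0
peak_ocean = 0.0
for day in range(40 * 365):
    emission = 1.0 if day < 20 * 365 else 0.0  # 20 years of use, then a ban
    m_air, m_ocean = step(m_air, m_ocean, emission)
    peak_ocean = max(peak_ocean, m_ocean)
# After the ban the air burden collapses quickly, while the ocean box
# declines slowly and keeps re-supplying the atmosphere by volatilisation.
```

Even this toy version reproduces the qualitative behaviour reported above: the atmospheric burden collapses soon after emissions stop, while the ocean retains mass much longer and keeps feeding the atmosphere through re-volatilisation.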
NASA Astrophysics Data System (ADS)
Huber, Katrin; Koebernick, Nicolai; Kerkhofs, Elien; Vanderborght, Jan; Javaux, Mathieu; Vetterlein, Doris; Vereecken, Harry
2014-05-01
A faba bean was grown in a column filled with a sandy soil, which was initially close to saturation and then subjected to a single drying cycle of 30 days. The column was divided into four hydraulically separated compartments using horizontal paraffin layers. Paraffin is impermeable to water but penetrable by roots; thus, by growing deeper, the roots can reach compartments that still contain water. The root architecture was measured every second day by X-ray CT. Transpiration rate, soil matric potential at four different depths, and leaf area were measured continuously during the experiment. To investigate the influence of the partitioning of available soil water in the soil column on water uptake, we used R-SWMS, a fully coupled root and soil water model [1]. We compared scenarios with and without the split layers and investigated the influence on root xylem pressure. The detailed three-dimensional root architecture was obtained by reconstructing binarized root images manually with a virtual reality system located at the Juelich Supercomputing Centre [2]. To verify the properties of the root system, we compared total root lengths, root length density distributions and root surface with estimates derived from Minkowski functionals [3]. In a next step, knowing the change of root architecture in time, we could allocate an age to each root segment and use this information to define age-dependent root hydraulic properties, which are required to simulate water uptake for the growing root system. The scenario with the split layers showed locally much lower pressures than the scenario without splits. Redistribution of water within the unrestricted soil column led to a more uniform distribution of water uptake and lowered the water stress in the plant. However, comparison of simulated pressure heads with tensiometer measurements suggested that the paraffin layers did not perfectly hydraulically isolate the different soil layers.
We could show compensatory water uptake by the roots in the lower, wetter compartments. By comparing transpiration rates of experiments with and without additional paraffin layers, we were able to quantify how plant growth is restricted by the available soil water. [1] Javaux, M., T. Schröder, J. Vanderborght, and H. Vereecken (2008): Use of a Three-Dimensional Detailed Modeling Approach for Predicting Root Water Uptake. Vadose Zone Journal, 7(3), 1079-1088. [2] Stingaciu, L., H. Schulz, A. Pohlmeier, S. Behnke, H. Zilken, M. Javaux, H. Vereecken (2013): In Situ Root System Architecture Extraction from Magnetic Resonance Imaging for Water Uptake Modeling. Vadose Zone Journal, 12(1). [3] Koebernick, N., U. Weller, K. Huber, S. Schlüter, H.-J. Vogel, R. Jahn, H. Vereecken, D. Vetterlein: In situ visualisation and quantification of root-system architecture and growth with X-ray CT. Manuscript submitted for publication.
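The compensatory uptake found in the experiment can be sketched with the pressure-driven radial flow term used in R-SWMS-type models, q = Kr (h_soil - h_xylem). The conductances and potentials below are illustrative assumptions, not values measured in this study:

```python
# Minimal sketch of pressure-driven root water uptake across hydraulically
# separated compartments, in the spirit of the R-SWMS approach cited above.
# Conductances and potentials are illustrative assumptions, not measurements.

def segment_uptake(h_soil, h_xylem, radial_conductance):
    """Radial flow into one root segment [cm3/d]; positive means uptake."""
    return radial_conductance * (h_soil - h_xylem)

# Four compartments after a drying cycle: upper layers dry, lower layers wet.
h_soil = [-8000.0, -3000.0, -800.0, -300.0]  # matric potential per layer [cm]
kr = [0.02, 0.02, 0.02, 0.02]                # radial conductance [cm2/d]
h_xylem = -9000.0                            # xylem potential at the collar [cm]

uptake = [segment_uptake(hs, h_xylem, k) for hs, k in zip(h_soil, kr)]
total = sum(uptake)
# The wetter, deeper compartments dominate total uptake (compensation).
```

With a uniform root distribution, the uptake pattern follows the soil water potential profile directly, which is the compensation effect described above.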
NASA Astrophysics Data System (ADS)
Babuska, V.; Plomerova, J.; Karato, S. I.
2012-04-01
Although many studies indicate that subduction-related accretion, subduction-driven magmatism and tectonic stacking are major crustal-growth mechanisms, how the mantle lithosphere forms remains enigmatic. Cook (AGU Geod. Series 1986) published a model of continental 'shingling' based on seismic reflection data indicating dipping structures in the deep crust of accreted terranes. Helmstaedt and Gurney (J. Geoch. Explor. 1995) and Hart et al. (Geology 1997) suggested that the Archean continental lithosphere consists of alternating layers of basalt and peridotite derived from subducted and obducted Archean oceanic lithosphere. Peridotite xenoliths from the Mojavian mantle lithosphere (Luffi et al., JGR 2009), as well as xenoliths of eclogites underlying the Sierra Nevada batholith in California (Horodynskij et al., EPSL 2007), are representative of oceanic slab fragments successively attached to the continent. Recent seismological findings also seem to support a model of continental lithosphere built from systems of paleosubductions of plates of ancient oceanic lithosphere (Babuska and Plomerova, AGU Geoph. Monograph 1989), or by stacking of the plates (Helmstaedt and Schulze, Geol. Soc. Aust. Spec. Publ. 1989). Seismic anisotropy in the oceanic mantle lithosphere, explained mainly by the olivine A- (or D-) type fabric (Karato et al., Annu. Rev. Earth Planet. Sci. 2008), was discovered almost half a century ago (Hess, Nature 1964). Though it is difficult to determine seismic anisotropy within an actively subducting slab (e.g., Healy et al., EPSL 2009; Eberhart-Phillips and Reyners, JGR 2009), field observations and laboratory experiments indicate that the oceanic olivine fabric might be preserved there to a depth of at least 200-300 km.
Dipping anisotropic fabrics in domains of the European mantle lithosphere were interpreted as systems of 'frozen' paleosubductions (Babuska and Plomerova, PEPI 2006), and the lithosphere base as a boundary between a fossil anisotropy in the lithospheric mantle and an underlying seismic anisotropy related to present-day flow in the asthenosphere (Plomerova and Babuska, Lithos 2010). Deep dipping reflectors in the Slave Craton were modelled as tops of a fossil oceanic lithosphere (Bostock, Lithos 1999). Using S-wave receiver functions, Miller and Eaton (GRL 2010) also interpreted mid-lithosphere discontinuities beneath British Columbia as remnant oceanic slabs. Strong radial anisotropy from global surface-wave data (Babuska et al., PAGEOPH 1998; Khan et al., JGR 2011), as well as differences between body-wave tomography images from SH and SV waves (Eken et al., Tectonophys. 2010), both showing strong anisotropy only down to ~200 km, are in agreement with the models of inclined olivine fabrics found in Phanerozoic and Precambrian mantle lithosphere (Plomerova et al., Solid Earth 2011). Models of assemblages of microplates with their own inclined fossil fabrics do not support lithosphere growth by simple cooling processes, which would result in horizontal fabrics. The models with dipping fabrics also contribute to mapping the boundaries of the individual blocks building the continental lithosphere.
NASA Astrophysics Data System (ADS)
Kienzle, Stefan
2015-04-01
Precipitation is the central driving force of most hydrological processes and is also the most variable element of the hydrological cycle. As the precipitation-to-runoff ratio is non-linear, errors in precipitation estimates are amplified in streamflow simulations. Therefore, an accurate estimate of areal precipitation is essential for watershed models and relevant impact studies. A procedure is presented for estimating the spatial distribution of daily precipitation and temperature across the Rocky Mountains within the framework of the ACRU agro-hydrological modelling system. ACRU (Schulze, 1995) is a physical-conceptual, semi-distributed hydrological modelling system designed to be responsive to changes in land use and climate. The model has been updated to include specific high-mountain and cold-climate routines and is applied to simulate impacts of land cover and climate change on the hydrological behaviour of numerous Rocky Mountain watersheds in Alberta, Canada. Both air temperature and precipitation time series need to be downscaled to hydrological response units (HRUs), the spatial modelling units of the model. The estimation of accurate daily air temperatures is critical for the separation of rain and snow. The precipitation estimation procedure integrates a spatially distributed daily precipitation database for the period 1950 to 2010 at a scale of 10 by 10 km with a 1971-2000 climate normal database available at 2 by 2 km (PRISM). The resulting daily precipitation time series are further downscaled to the spatial resolution of the hydrological response units, defined by 100 m elevation bands, land cover, and solar radiation, which have an average size of about 15 km2. As snow measurements are known to have a potential under-catch of up to 40%, snowfall may need to be further adjusted upward using a procedure by Richter (1995).
Finally, precipitation input to HRUs with slopes steeper than 10% needs to be further corrected, because the true sloped surface is larger than the planimetric area derived from a GIS. Omitting this correction would result in incorrect calculations of interception volumes, soil moisture storages, groundwater recharge rates, actual evapotranspiration volumes, and runoff coefficients. Daily minimum and maximum air temperatures are estimated for each HRU by downscaling the 10 km time series to the HRUs by (a) applying monthly mean lapse rates, estimated either from surrounding climate stations or from the PRISM climate normal dataset in combination with a digital elevation model, (b) adjusting further for the aspect of the HRU based on monthly mean incoming solar radiation, and (c) adjusting for canopy cover using monthly mean leaf area indices. Precipitation estimates can be verified using independent snow water equivalent measurements derived from snow pillow or snow course observations, while temperature estimates are verified against independent temperature measurements from climate stations or fire observation towers.
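Two of the steps above, lapse-rate downscaling of air temperature to HRU elevation and correcting planimetric areas for slope, reduce to short formulas. The lapse rate, elevations, and slope in this sketch are illustrative assumptions, not values from the ACRU configuration:

```python
import math

# Sketch of two downscaling steps described above.  The lapse rate and the
# geometry below are illustrative assumptions, not values from the ACRU setup.

def downscale_temperature(t_grid, elev_grid, elev_hru, lapse_rate):
    """Shift a gridded temperature [deg C] to the HRU elevation [m] using a
    monthly mean lapse rate [deg C per 100 m]."""
    return t_grid + lapse_rate * (elev_hru - elev_grid) / 100.0

def true_surface_area(planimetric_area, slope_percent):
    """Correct a GIS-derived planimetric area for slope: the true sloped
    surface is larger by a factor 1/cos(slope angle)."""
    angle = math.atan(slope_percent / 100.0)
    return planimetric_area / math.cos(angle)

# A hypothetical HRU 500 m above the grid cell, on a 30% slope:
t_hru = downscale_temperature(t_grid=5.0, elev_grid=1200.0,
                              elev_hru=1700.0, lapse_rate=-0.65)
area = true_surface_area(planimetric_area=15.0, slope_percent=30.0)  # km2
```

For a 30% slope the correction enlarges the area by roughly 4%, which is consistent with restricting the correction to HRUs steeper than 10%, where the effect becomes non-negligible.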
2017-09-01
'Anti-inflammatory properties of tianeptine on lipopolysaccharide-induced changes in microglial cells involve toll-like receptor-related pathways' by Slusarczyk, J., Trojan, E., Glombik, K., Piotrowska, A., Budziszewska, B., Kubera, M., Popiolek-Barczyk, K., Lason, W., Mika, J. and Basta-Kaim, A. The above article from the Journal of Neurochemistry, published on 14 February 2016 on Wiley Online Library ( www.onlinelibrary.com) and in Volume 136, pp. 958-970, is being retracted by agreement between the corresponding author Agnieszka Basta-Kaim, the Journal's Editor-in-Chief Jörg Schulz, and John Wiley & Sons Ltd. The Editorial Office was alerted by a science journalist that the same Western Blot lane had been used to represent two different proteins: the Western Blot signal of iNOS in Fig. 4a appeared to be identical to the Western Blot signal of phospho-JNK in Fig. 6b. The corresponding author stated that "on the final step of figure 6 preparation the first author made, by mistake, an incorrect attachment of representative p-JNK blots." A corrected Fig. 6b is enclosed below. The second concern reaching the Editorial Office was that the same Western Blot signal appeared to have been used to represent two different experimental conditions: the iNOS control signal (-/- LPS/TIA, Fig. 4a) appears as a horizontal and vertical mirror image of the last signal in this line (+/10 LPS/TIA, Fig. 4a). The raw membrane which was used to produce Fig. 4a is enclosed on the next page and highlights the steps that were undertaken during figure preparation. Although the initial concern was not proven, concerns remained about how an inadvertent flipping of the first Western blot lane could have happened. A corrected Fig.
4a prepared by the corresponding author from the raw image of iNOS western blot depicted above, without flipped first lane, is presented below: Although the corresponding author provided a large amount of evidence to explain disparities in the presentation of Western Blot images, due to the number of inconsistencies that were revealed during review of the provided evidence and the inability to confirm the nature of the steps that led to them, it was felt that the above mentioned Western Blot images presented in this publication were not reliable, even if the conclusions may still be valid. The first author would like to apologize to the readers, reviewers and editors of Journal of Neurochemistry for the errors. Reference Slusarczyk J., Trojan E., Glombik K., Piotrowska A., Budziszewska B., Kubera M., Popiolek-Barczyk K., Lason W., Mika J. and Basta-Kaim A. (2016) Anti-inflammatory properties of tianeptine on lipopolysaccharide-induced changes in microglial cells involve toll-like receptor-related pathways. J. Neurochem. 136, 958-970. https://doi.org/10.1111/jnc.13452. © 2017 International Society for Neurochemistry.
A New Generation of Large Seismic Refraction Experiments in Central Europe (1997-2003)
NASA Astrophysics Data System (ADS)
Guterch, A.; Grad, M.; Spicak, A.; Brueckl, E.; Hegedus, E.; Keller, G. R.; Thybo, H.
2003-12-01
Beginning in 1997, Central Europe has been covered by an unprecedented network of seismic refraction experiments. These experiments (POLONAISE'97, CELEBRATION 2000, ALP 2002, SUDETES 2003) have only been possible due to a massive international cooperative effort. The total length of all profiles is about 19,000 km, and over 300 explosive sources were employed. The result is a network of seismic refraction profiles that extends along the Trans-European Suture Zone (TESZ) region of Poland and across the Bohemian massif and Pannonian basin, through the Carpathians and Alps to the Adriatic Sea and the Dinarides. As reflected in structures within these areas, Central Europe has experienced a complex tectonic history that includes the Caledonian, Variscan, and Alpine orogenies. The TESZ is a broad zone of deformation that extends across Europe from the British Isles to the Black Sea region and formed as Europe was assembled from a complex collage of terranes during the late Palaeozoic. For example, the Bohemian massif, mostly located in the Czech Republic, is a large, complex terrane whose origin can be traced to northern Gondwana (Africa). These terranes were accreted along the margin of Baltica that was formed during the break-up of Rodinia. The tectonic evolution of this region shares many attributes with the Appalachian/Ouachita orogen and is certainly of global importance to studies in terrane tectonics and continental evolution. In southern Poland, several structural blocks are located adjacent to Baltica and were probably transported laterally along it, similar to the Cenozoic movement of terranes along the western margin of North America. The younger Carpathian arc and Pannonian back-arc basin were also targeted by these experiments. The thickness of the crust in the area of investigation varies from 22-25 km in the Pannonian basin to about 55 km in the Trans-European Suture Zone in SE Poland.
Together, these experiments are providing an unprecedented 3-D image of the evolution and assembly of a continent. Experiment Working Group Members: K. Aric, S. Azevedo, I. Asudeh, M. Behm, A.A. Belinsky, T. Bodoky, R. Brinkmann, M. Broz, E. Brueckl, W. Chwatal, R. Clowes, W. Czuba, T. Fancsik, B. Forkmann, M. Fort, E. Gaczynski, H. Gebrande, H. Geissler, A. Gosar, M. Grad, H. Grassi, R. Greschke, A. Guterch, Z. Hajnal, S. Harder, E. Hegedus, A. Hemmann, S. Hock, V. Hoeck, P. Hrubcova, T. Janik, G. Jentzsch, P. Joergensen, G. Kaip, G.R. Keller, F. Kohlbeck, K. Komminaho, M. Korn, O. Korousova, S.L. Kostiuchenko, D. Kracke, C.-E. Lund, U. Luosto, M. Majdazski, M. Malinowski, K.C. Miller, A.F. Morozov, G. Motuza, V. Nasedkin, E.-M. Rumpfhuber, Ch. Schmid, A. Schulze, K. Schuster, O. Selvi, C. Snelson, A. Spicak, P. Sroda, F. Sumanovac, E. Tacasc, H. Thybo, T. Tiira, C. Tomek, J. Vozar, F. Weber, M. Wilde-Pierko, J. Yliniemi, A. Zelazniewicz
NASA Astrophysics Data System (ADS)
Golla, B.; Bach, M.; Krumpe, J.
2009-04-01
1. Introduction Small streams differ greatly from the standardised water body used in the context of aquatic risk assessment for the regulation of plant protection products in Germany. The standard water body is static, with a depth of 0.3 m and a width of 1.0 m. No dilution or water replacement takes place. Spray drift is always assumed to occur in the direction of the water body, and there is no variability in the drift deposition rate (90th-percentile spray drift deposition values [2]). There is no spray drift filtering by vegetation, and the application takes place directly adjacent to the water body. In order to establish a more realistic risk assessment procedure, the Federal Office for Consumer Protection and Food Safety (BVL) and the Federal Environment Agency (UBA) agreed to replace deterministic assumptions with data distributions and spatially explicit data and to introduce probabilistic methods [3, 4, 5]. To consider the spatial and temporal variability in the exposure situations of small streams, the hydraulic and morphological characteristics of catchments need to be described, as well as the spatial distribution of fields treated with pesticides. As small streams are the dominant type of water body in most German orchard regions, we use the Lake Constance growing region as a pilot region. 2. Materials and methods During field surveys we derived basic morphological parameters for small streams in the Lake Constance region. The mean water width/depth ratio is 13, with a mean depth of 0.12 m. The average residence time is 5.6 s/m (n=87) [1]. Orchards are mostly located in the upper parts of the catchments. Based on an authoritative dataset on the rivers and streams of Germany (ATKIS DLM25), we constructed a directed network topology for the Lake Constance region. The gradient of the riverbed is calculated for river stretches of > 500 m length. The network for the pilot region consists of 2000 km of rivers and streams, of which 500 km of stream length is located within a distance of 150 m from orchards.
Within this distance, spray drift exposure with adverse effects is theoretically possible [6]. The network is segmented into approximately 80,000 segments of 25 m length; one segment is the basic element of the exposure assessment. Based on the Manning-Strickler formula and empirically determined relations, two equations were developed to express the width and depth of the streams and the flow velocity [7]. Using Java programming and spatial network analysis within an Oracle 10g/Spatial DBMS, we developed a tool to simulate concentration over time for every single 25 m segment of the stream network. The analysis considers the spatially explicit upstream exposure situations due to the locations of orchards and recovery areas in the catchments. The application that takes place on a specific orchard is simulated either according to realistic application patterns or under the simplistic assumption that all orchards are sprayed on the same day. 3. Results The results of the analysis are distributions of time-averaged concentrations (mPEC) for all single stream segments of the stream network. The averaging time window can be defined flexibly between 1 h (mPEC1h) and 24 h (mPEC24h). Spatial network analysis based on georeferenced hydraulic and morphological parameters proved to be a suitable approach for analysing the exposure situation of streams under more realistic assumptions. The time-varying concentration of single stream segments can be analysed over a vegetation period or a single day. Stream segments which exceed a trigger concentration, or segments with a specific pulse concentration pattern in given time windows, can be identified and addressed by, e.g., implementing additional drift mitigation measures. References [1] Golla, B., J. Krumpe, J. Strassemeyer, and V. Gutsche (2008): Refined exposure assessment of small streams in German orchard regions. Part 1: Results of a hydromorphological survey. Journal für Kulturpflanzen (submitted).
[2] Rautmann, D., Streloke, M., and Winkler, R. (1999): New basic drift values in the authorization procedure for plant protection products, pp. 133-141. In: Workshop on risk management and risk mitigation measures in the context of authorization of plant protection products. [3] Klein, A. W., Dechet, F., and Streloke, M. (2003): Probabilistic Assessment Method for Risk Analysis in the Framework of Plant Protection Product Authorisation. Industrieverband Agrar (IVA, 2006), Frankfurt/Main. [4] Schulz, R., Stehle, S., Elsaesser, F., Matezki, S., Müller, A., Neumann, M., Ohliger, R., Wogram, J., Zenker, K. (2008): Geodata-based Probabilistic Risk Assessment and Management of Pesticides in Germany, a Conceptual Framework. IEAM_2008-032R. [5] Kubiak, R., Hommen, Bach, M., Classen, G., Fent, H.-G. Frede, A. Gergs, B. Golla, M. Klein, J. Krumpe, S. Matetzki, A. Müller, M. Neumann, T. G. Preuss, H. T. Ratte, M. Roß-Nickoll, S. Reichenberger, C. Schäfers, T. Strauss, A. Toschki, M. Trapp, J. Wogram (2009): A new GIS-based approach for the assessment and management of environmental risks of plant protection products. SETAC Europe, Göteborg. [6] Enzian, S., Golla, B. (2006): A method for the identification and classification of "save distance" cropland to the potential drift exposure of pesticides towards surface waters. UBA-Texte. [7] Bach, M., Träbing, K. and Frede, H.-G. (2004): Morphological characteristics of small rivers in the context of probabilistic exposure assessment. Nachrichtenblatt des Deutschen Pflanzenschutzdienstes 56.
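As a rough sketch of the hydraulics referenced in [7], the Manning-Strickler formula gives the mean flow velocity as v = k_st R^(2/3) S^(1/2), with hydraulic radius R. The roughness coefficient and channel geometry below are illustrative assumptions, not surveyed values from the Lake Constance region:

```python
# Sketch of the Manning-Strickler relation used to derive flow velocity for a
# stream segment.  The roughness coefficient and channel geometry are
# illustrative assumptions, not values from the survey described above.

def manning_strickler_velocity(k_st, width, depth, gradient):
    """Mean flow velocity v = k_st * R^(2/3) * S^(1/2) [m/s] for a
    rectangular channel, with hydraulic radius R = w*d / (w + 2d)."""
    hydraulic_radius = (width * depth) / (width + 2.0 * depth)
    return k_st * hydraulic_radius ** (2.0 / 3.0) * gradient ** 0.5

# A small orchard-region stream with roughly the 13:1 width/depth ratio and
# 0.12 m mean depth reported above, on a gentle (assumed) riverbed gradient.
v = manning_strickler_velocity(k_st=25.0, width=1.56, depth=0.12,
                               gradient=0.005)
residence_time_per_m = 1.0 / v  # [s/m], same unit as the surveyed 5.6 s/m
```

Such a relation, inverted against surveyed widths, depths and gradients, is what allows a concentration time series to be routed through each 25 m segment of the network.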
NASA Astrophysics Data System (ADS)
Chen, Yiying; Ryder, James; Naudts, Kim; McGrath, Matthew J.; Otto, Juliane; Bastrikov, Vladislav; Valade, Aude; Launiainen, Samuli; Ogée, Jérôme; Elbers, Jan A.; Foken, Thomas; Tiedemann, Frank; Heinesch, Bernard; Black, Andrew; Haverd, Vanessa; Loustau, Denis; Ottlé, Catherine; Peylin, Philippe; Polcher, Jan; Luyssaert, Sebastiaan
2015-04-01
Canopy structure is one of the most important vegetation characteristics for land-atmosphere interactions, as it determines the energy and scalar exchanges between the land surface and the overlying air mass. In this study we evaluated the performance of a newly developed multi-layer energy budget (Ryder et al., 2014) in a land surface model, ORCHIDEE-CAN (Naudts et al., 2014), which simulates canopy structure and can be coupled to an atmospheric model using an implicit procedure. Furthermore, a vertically discretised drag parametrization scheme was incorporated into the model in order to obtain a better description of the sub-canopy wind profile. Site-level datasets, including top-of-the-canopy and sub-canopy observations made available from eight flux observation sites, were collected in order to conduct this evaluation. The collected observation sites span climate zones from temperate to boreal, and the vegetation types include deciduous, evergreen broad-leaved and evergreen needle-leaved forest with maximum LAI ranging from 2.1 to 7.0. First, we used long-term top-of-the-canopy measurements to analyse the performance of the current one-layer energy budget in ORCHIDEE-CAN. Three major processes were identified for improvement through the implementation of a multi-layer energy budget: 1) the night-time radiation balance, 2) energy partitioning during winter and 3) prediction of the ground heat flux. Short-term sub-canopy observations were used to calibrate the parameters of the sub-canopy radiation, turbulence and resistance modules with an automatic tuning process following the maximum gradient of a user-defined objective function. The multi-layer model is able to capture the dynamics of sub-canopy turbulence, temperature and energy fluxes with an imposed LAI profile and a parameter set optimised by site-level calibration.
The simulation results show improvements in both the night-time energy balance and the energy partitioning during winter, and achieve a better Taylor skill score than the single-layer simulation. The importance of using the multi-layer energy budget in a land surface model for coupling to an atmospheric model will also be discussed in this presentation. References: Ryder, J., J. Polcher, P. Peylin, C. Ottlé, Y. Chen, E. van Gorsel, V. Haverd, M. J. McGrath, K. Naudts, J. Otto, A. Valade, and S. Luyssaert, 2014. "A multi-layer land surface energy budget model for implicit coupling with global atmospheric simulations", Geosci. Model Dev. Discuss. 7, 8649-8701. Naudts, K., J. Ryder, M. J. McGrath, J. Otto, Y. Chen, A. Valade, V. Bellasen, G. Berhongaray, G. Bönisch, M. Campioli, J. Ghattas, T. De Groote, V. Haverd, J. Kattge, N. MacBean, F. Maignan, P. Merilä, J. Penuelas, P. Peylin, B. Pinty, H. Pretzsch, E. D. Schulze, D. Solyga, N. Vuichard, Y. Yan, and S. Luyssaert, 2014. "A vertically discretised canopy description for ORCHIDEE (SVN r2290) and the modifications to the energy, water and carbon fluxes", Geosci. Model Dev. Discuss. 7, 8565-8647.
Maher, K.; Steefel, Carl; White, A.F.; Stonestrom, David A.
2009-01-01
In order to explore the reasons for the apparent discrepancy between laboratory and field weathering rates and to determine the extent to which weathering rates are controlled by the approach to thermodynamic equilibrium, secondary mineral precipitation, and flow rates, a multicomponent reactive transport model (CrunchFlow) was used to interpret soil profile development and mineral precipitation and dissolution rates at the 226 ka Marine Terrace Chronosequence near Santa Cruz, CA. Aqueous compositions, fluid chemistry, transport, and mineral abundances are well characterized [White A. F., Schulz M. S., Vivit D. V., Blum A., Stonestrom D. A. and Anderson S. P. (2008) Chemical weathering of a Marine Terrace Chronosequence, Santa Cruz, California. I: interpreting the long-term controls on chemical weathering based on spatial and temporal element and mineral distributions. Geochim. Cosmochim. Acta 72 (1), 36-68] and were used to constrain the reaction rates for the weathering and precipitating minerals in the reactive transport modeling. When primary mineral weathering rates are calculated with either of two experimentally determined rate constants and either the nonlinear, parallel rate law formulation of Hellmann and Tisserand [Hellmann R. and Tisserand D. (2006) Dissolution kinetics as a function of the Gibbs free energy of reaction: An experimental study based on albite feldspar. Geochim. Cosmochim. Acta 70 (2), 364-383] or the aluminum inhibition model proposed by Oelkers et al. [Oelkers E. H., Schott J. and Devidal J. L. (1994) The effect of aluminum, pH, and chemical affinity on the rates of aluminosilicate dissolution reactions. Geochim. Cosmochim. Acta 58 (9), 2011-2024], modeling results are consistent with field-scale observations when independently constrained clay precipitation rates are accounted for. Experimental and field rates, therefore, can be reconciled at the Santa Cruz site.
Additionally, observed maximum clay abundances in the argillic horizons occur at the depth and time where the reaction fronts of the primary minerals overlap. The modeling indicates that the argillic horizon at Santa Cruz can be explained almost entirely by weathering of primary minerals and in situ clay precipitation accompanied by undersaturation of kaolinite at the top of the profile. The rate constant for kaolinite precipitation was also determined based on model simulations of mineral abundances and dissolved Al, SiO2(aq) and pH in pore waters. Changes in the rate of kaolinite precipitation or the flow rate do not affect the gradient of the primary mineral weathering profiles, but instead control the rate of propagation of the primary mineral weathering fronts and thus total mass removed from the weathering profile. Our analysis suggests that secondary clay precipitation is as important as aqueous transport in governing the amount of dissolution that occurs within a profile because clay minerals exert a strong control over the reaction affinity of the dissolving primary minerals. The modeling also indicates that the weathering advance rate and the total mass of mineral dissolved is controlled by the thermodynamic saturation of the primary dissolving phases plagioclase and K-feldspar, as is evident from the difference in propagation rates of the reaction fronts for the two minerals despite their very similar kinetic rate laws. © 2009 Elsevier Ltd.
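For orientation, the simplest affinity-dependent rate law of transition-state-theory form, r = k (1 - exp(ΔG/RT)), already shows the near-equilibrium slowdown that the nonlinear Hellmann and Tisserand formulation refines. The rate constant below is an illustrative assumption, not a value fitted in this study:

```python
import math

# Sketch of an affinity (Gibbs free energy) dependent dissolution rate of the
# generic transition-state form r = k * (1 - exp(dG/RT)).  The rate constant
# is an illustrative assumption, not a fitted value from the study above.
R_GAS = 8.314  # gas constant [J/(mol K)]

def dissolution_rate(k, delta_g, temperature=298.15):
    """Rate [mol/m2/s]; delta_g < 0 J/mol for an undersaturated (dissolving)
    fluid.  Far from equilibrium the rate plateaus at k; at equilibrium
    (delta_g = 0) it vanishes."""
    return k * (1.0 - math.exp(delta_g / (R_GAS * temperature)))

k = 1e-12                                   # assumed rate constant
far = dissolution_rate(k, delta_g=-50e3)    # strongly undersaturated fluid
near = dissolution_rate(k, delta_g=-1e3)    # fluid close to equilibrium
```

The collapse of `near` relative to `far` is the essential mechanism by which clay precipitation, by holding pore waters closer to or further from equilibrium with the primary phases, controls field-scale dissolution rates.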
PREFACE: Vibrations at surfaces
NASA Astrophysics Data System (ADS)
Rahman, Talat S.
2011-12-01
This special issue is dedicated to the phenomenon of vibrations at surfaces—a topic that was indispensable a couple of decades ago, since it was one of the few phenomena capable of revealing the nature of binding at solid surfaces. For clean surfaces, the frequencies of modes with characteristic displacement patterns revealed how surface geometry, as well as the nature of binding between atoms in the surface layers, could be different from that in the bulk solid. Dispersion of the surface phonons provided further measures of interatomic interactions. For chemisorbed molecules on surfaces, frequencies and dispersion of the vibrational modes were also critical for determining adsorption sites. In other words, vibrations at surfaces served as a reliable means of extracting information about surface structure, chemisorption and overlayer formation. Experimental techniques, such as electron energy loss spectroscopy and helium-atom-surface scattering, coupled with infra-red spectroscopy, were continually refined and their resolutions enhanced to capture subtleties in the dynamics of atoms and molecules at surfaces. Theoretical methods, whether based on empirical and semi-empirical interatomic potentials or on ab initio electronic structure calculations, helped decipher experimental observations and provide deeper insights into the nature of the bond between atoms and molecules in regions of reduced symmetry, as encountered on solid surfaces. Vibrations at surfaces were thus an integral part of the set of phenomena that characterized surface science. Dedicated workshops and conferences were held to explore the variety of interesting and puzzling features revealed in experimental and theoretical investigations of surface vibrational modes and their dispersion. One such conference, Vibrations at Surfaces, first organized by Harald Ibach in Juelich in 1980, continues to this day.
The 13th International Conference on Vibrations at Surfaces was held at the University of Central Florida, Orlando, in March 2010. Several speakers at this meeting were invited to contribute to the special section in this issue. As is clear from the articles in this special section, the phenomenon of vibrations at surfaces continues to be a dynamic field of investigation. In fact, there is a resurgence of effort because the insights provided by surface dynamics are still fundamental to the development of an understanding of the microscopic factors that control surface structure formation, diffusion, reaction and structural stability. Examination of dynamics at surfaces thus complements and supplements the wealth of information that is obtained from real-space techniques such as scanning tunneling microscopy. Vibrational dynamics is, of course, not limited to surfaces. Surfaces are important since they provide immediate deviation from the bulk. They display how lack of symmetry can lead to new structures, new local atomic environments and new types of dynamical modes. Nanoparticles, large molecules and nanostructures of all types, in all kinds of local environments, provide further examples of regions of reduced symmetry and coordination, and hence display characteristic vibrational modes. Given the tremendous advance in the synthesis of a variety of nanostructures whose functionalization would pave the way for nanotechnology, there is even greater need to engage in experimental and theoretical techniques that help extract their vibrational dynamics. Such knowledge would enable a more complete understanding and characterization of these nanoscale systems than would otherwise be the case. The papers presented here provide excellent examples of the kind of information that is revealed by vibrations at surfaces. 
Vibrations at Surfaces: contents
Poisoning and non-poisoning oxygen on Cu(410) (L Vattuone, V Venugopal, T Kravchuk, M Smerieri, L Savio and M Rocca)
Modifying protein adsorption by layers of glutathione pre-adsorbed on Au(111) (Anne Vallée, Vincent Humblot, Christophe Méthivier, Paul Dumas and Claire-Marie Pradier)
Relating temperature dependence of atom scattering spectra to surface corrugation (W W Hayes and J R Manson)
Effects of the commensurability and disorder on friction for the system Xe/Cu (A Franchini, V Bortolani, G Santoro and K Xheka)
Switching ability of nitro-spiropyran on Au(111): electronic structure changes as a sensitive probe during a ring-opening reaction (Christopher Bronner, Gunnar Schulze, Katharina J Franke, José Ignacio Pascual and Petra Tegeder)
High-resolution phonon study of the Ag(100) surface (K L Kostov, S Polzin and W Widdra)
On the interpretation of IETS spectra of a small organic molecule (Karina Morgenstern)
Making direct use of canopy profiles in vegetation - atmosphere coupling
NASA Astrophysics Data System (ADS)
Ryder, James; Polcher, Jan; Peylin, Philippe; Ottlé, Catherine; Chen, Yiying; van Gorsel, Eva; Haverd, Vanessa; McGrath, Matthew; Naudts, Kim; Otto, Juliane; Valade, Aude; Luyssaert, Sebastiaan
2015-04-01
Most coupled land-surface regional models use the 'big-leaf' approach for simulating the sensible and latent heat fluxes of different vegetation types. However, there has been a progression in the types of questions being asked of these models, such as the consequences of land-use change or the behaviour of BVOCs and aerosols. In addition, recent years have seen growth in the availability of in-canopy datasets across a broader range of species, with which to calibrate these simulations. Hence, there is now an argument for transferring some of the techniques and processes previously used in local, site-based land surface models to the land surface components of models which operate on a regional or even global scale. We describe here the development and evaluation of a vertical canopy energy budget model (Ryder et al., 2014) that can be coupled to an atmospheric model such as LMDz. Significantly, the model preserves the implicit coupling of the land-surface to atmosphere interface, which means that run-time efficiencies are preserved. This is achieved by means of an interface based on the approach of Polcher et al. (1998) and Best et al. (2004), but newly developed for a canopy column. The model makes use of techniques from site-based models, such as the calculation of vertical turbulence statistics using a second-order closure model (Massman & Weil, 1999), and the distribution of long-wave and short-wave radiation over the profile, the latter using an innovative multilayer albedo scheme (McGrath et al., in prep.). Complete profiles of atmospheric temperature and specific humidity are now calculated, in order to simulate sensible and latent heat fluxes, as well as the leaf temperature at each level in the model. The model is shown to perform stably, and reproduces flux measurements well at an initial test site, both across a time period of several days and over the course of a year.
Further applications of the model might be to simulate mixed canopies, the light-stimulated emission of chemical species, or the ecological consequences of changes to the temperature profile as a result of changes to stand structure. References: Best, M. J., Beljaars, A. C. M., Polcher, J., & Viterbo, P. (2004). A proposed structure for coupling tiled surfaces with the planetary boundary layer. Journal of Hydrometeorology, 5, 1271-1278. Massman, W. J., & Weil, J. C. (1999). An analytical one-dimensional second-order closure model of turbulence statistics and the Lagrangian time scale within and above plant canopies of arbitrary structure. Boundary-Layer Meteorology, 91, 81-107. McGrath, M. J., Pinty, B., Ryder, J., Otto, J., & Luyssaert, S. (in prep.). A multilevel canopy radiative transfer scheme based on a domain-averaged structure factor. Polcher, J., McAvaney, B., Viterbo, P., Gaertner, M., Hahmann, A., Mahfouf, J.-F., Noilhan, J., Phillips, T. J., Pitman, A. J., Schlosser, C. A., Schulz, J.-P., Timbal, B., Verseghy, D. L., & Xue, Y. (1998). A proposal for a general interface between land surface schemes and general circulation models. Global and Planetary Change, 19, 261-276. Ryder, J., Polcher, J., Peylin, P., Ottlé, C., Chen, Y., van Gorsel, E., Haverd, V., McGrath, M. J., Naudts, K., Otto, J., Valade, A., & Luyssaert, S. (2014). A multi-layer land surface energy budget model for implicit coupling with global atmospheric simulations. Geosci. Model Dev. Discuss., 7, 8649-8701.
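To illustrate the per-layer flux idea behind such a multilayer energy budget, the sketch below sums layer-wise sensible heat fluxes computed from a simple aerodynamic-resistance formulation. This is a minimal sketch of the general technique, not the model described above; the function names, layer temperatures, and resistance values are invented for illustration.

```python
# Illustrative multilayer sensible-heat sketch (NOT the Ryder et al. model):
# per-layer flux H_i = rho * c_p * (T_leaf_i - T_air_i) / r_a_i, summed over
# the canopy column. All layer values below are hypothetical.
RHO_AIR = 1.2    # air density, kg m-3
CP_AIR = 1004.0  # specific heat of air at constant pressure, J kg-1 K-1

def layer_sensible_heat(t_leaf, t_air, r_aero):
    """Sensible heat flux (W m-2) from one canopy layer."""
    return RHO_AIR * CP_AIR * (t_leaf - t_air) / r_aero

def canopy_sensible_heat(t_leaf_profile, t_air_profile, r_aero_profile):
    """Total canopy flux as the sum over all layers."""
    return sum(layer_sensible_heat(tl, ta, ra)
               for tl, ta, ra in zip(t_leaf_profile, t_air_profile, r_aero_profile))

# Hypothetical 3-layer profiles: leaves slightly warmer than the adjacent air.
t_leaf = [288.5, 289.0, 290.0]  # K
t_air = [288.0, 288.4, 289.2]   # K
r_aero = [60.0, 50.0, 40.0]     # aerodynamic resistance per layer, s m-1

total = canopy_sensible_heat(t_leaf, t_air, r_aero)
```

In the full model the air temperatures themselves are prognostic and solved implicitly with the atmosphere; here they are simply prescribed to keep the sketch self-contained.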
NASA Astrophysics Data System (ADS)
Martin, Nicholas L. S.; deHarak, Bruno A.
2010-01-01
From 30 July to 1 August 2009, over a hundred scientists from 18 countries attended the International Symposium on (e,2e), Double Photoionization and Related Topics and the 15th International Symposium on Polarization and Correlation in Electronic and Atomic Collisions, which were held at the W T Young Library of the University of Kentucky, USA. Both conferences were satellite meetings of the XXVI International Conference on Photonic, Electronic and Atomic Collisions (ICPEAC) held in Kalamazoo, Michigan, USA, 21-28 July 2009. These symposia covered a broad range of experimental and theoretical topics involving excitation, ionization (single and multiple), and molecular fragmentation of a wide range of targets by photons and charged particles (polarized and unpolarized). Atomic targets ranged from hydrogen to the heavy elements and ions, while molecular targets ranged from H2 to large molecules of biological interest. On the experimental front, cold target recoil ion momentum spectroscopy (COLTRIMS), also known as the Reaction Microscope because of the complete information it gives about a wide variety of reactions, is becoming commonplace and has greatly expanded the ability of researchers to perform previously inaccessible coincidence experiments. Meanwhile, more conventional spectrometers are also advancing and have been used for increasingly sophisticated and exacting measurements. On the theoretical front, great progress has been made in the description of target states, and in the scattering calculations used to describe both simple and complex reactions. The international nature of collaborations between theorists and experimentalists is exemplified by, for example, the paper by Ren et al., which has a total of 13 authors: the experimental group of six is from Heidelberg, Germany, one theoretical group is from Australia, and the remaining theoreticians come from several different institutions in the United States.
A total of 52 invited talks and 44 submitted posters covered recent advances in these topics. These proceedings present papers on 35 of the invited talks. The Local Organizers gratefully acknowledge the financial support of the University of Kentucky College of Arts and Sciences, and the University of Kentucky Department of Physics and Astronomy. We also thank Carol Cotrill, Eva Ellis, Diane Yates, Sarah Crowe, and John Nichols, of the Department of Physics and Astronomy, University of Kentucky, for their invaluable assistance in the smooth running of the conferences; Oleksandr Korneta for taking the group photograph; and Emily Martin for helping accompanying persons.
Nicholas L S Martin, University of Kentucky
Bruno A deHarak, Illinois Wesleyan University
International Scientific Organizing Committee
Co-Chairs: Don Madison (USA), Klaus Bartschat (USA)
Members: Lorenzo Avaldi (Italy), Nils Andersen (Denmark), Jamal Berakdar (Germany), Uwe Becker (Germany), Michael Brunger (Australia), Igor Bray (Australia), Greg Childers (USA), Nikolay Cherepkov (Russia), JingKang Deng (China), Albert Crowe (UK), Alexander Dorn (Germany), Danielle Dowek (France), Jim Feagin (USA), Oscar Fojon (Argentina), Nikolay Kabachnik (Russia), Tim Gay (USA), Anatoli Kheifets (Australia), Alexei Grum-Grzhimailo (Russia), George King (UK), Friedrich Hanne (Germany), Tom Kirchner (Germany), Alan Huetz (France), Azzedine Lahmam-Bennani (France), Morty Khakoo (USA), Julian Lower (Australia), Birgit Lohmann (Australia), William McCurdy (USA), Bill McConkey (Canada), Andrew Murray (UK), Rajesh Srivastava (India), Bernard Piraux (Belgium), Al Stauffer (Canada), Tim Reddish (Canada), Jim Williams (Australia), Roberto Rivarola (Argentina), Akira Yagishita (Japan), Michael Schulz (USA), Peter Zetner (Canada), Anthony Starace (USA), Joachim Ullrich (Germany), Giovanni Stefani (Italy), Erich Weigold (Australia), Masahiko Takahashi (Japan)
Conference photograph
NASA Astrophysics Data System (ADS)
Raulin, F.; Coll, P.; Cabane, M.; Hebrard, E.; Israel, G.; Nguyen, M.-J.; Szopa, C.; Gpcos Team
The largest satellite of Saturn and the only satellite in the solar system having a dense atmosphere, Titan is one of the key planetary bodies for astrobiological studies, due to several aspects. First, its analogies with planet Earth, in spite of much lower temperatures: the Cassini-Huygens data have largely confirmed the many analogies between Titan and our own planet. Both have similar vertical temperature profiles (although much colder, of course, on Titan). Both have condensable and non-condensable greenhouse gases in their atmospheres. Both are geologically very active. Furthermore, the data also strongly suggest the presence of a methane cycle on Titan analogous to the water cycle on Earth. Second, the presence of an active organic chemistry, involving several of the key compounds of prebiotic chemistry. The recent data obtained from the Huygens instruments show that the organic matter in Titan's lower atmosphere (stratosphere and troposphere) is mainly concentrated in the aerosol particles. Because of the vertical temperature profile in this part of the atmosphere, most of the volatile organics are probably condensed on the aerosol particles. The nucleus of these particles seems to be made of complex macromolecular organic matter, well mimicked in the laboratory by "Titan's tholins". Laboratory tholins are known to release many organic compounds of biological interest, such as amino acids and purine and pyrimidine bases, when they are in contact with liquid water. Such hydrolysis may have occurred on the surface of Titan, in the bodies of liquid water which may episodically form on Titan's surface from meteoritic and cometary impacts. The formation of biologically interesting compounds may also occur in the deep water ocean, from the hydrolysis of complex organic material included in the chondritic matter accreted during the formation of Titan.
The possible emergence and persistence of life on Titan: all ingredients which seem necessary for life are present on Titan: • liquid water: permanently as a deep sub-surface ocean, and even episodically on the surface; • organic matter: in the internal structure, from chondritic materials, and in the atmosphere and on the surface, from the atmospheric organic chemistry; • and energy: in the atmosphere (solar UV photons, energetic electrons from Saturn's magnetosphere and cosmic rays) and, probably, in the environment of the sub-surface ocean (radioactive nuclei in the deep interior and tidal energy dissipation), as also supported by the likely presence of cryovolcanism on the surface. Thus, it cannot be excluded that life may have emerged on or in Titan. In spite of the extreme conditions in this environment, life may have been able to adapt and to persist. Many data are still expected from the Cassini-Huygens mission, and future astrobiological exploration missions of Titan are now under consideration. Nevertheless, Titan already looks like another world, with an active prebiotic-like chemistry, but in the absence of permanent liquid water on the surface: a natural laboratory for prebiotic-like chemistry. References: Fortes, A.D. (2000), 'Exobiological implications of a possible ammonia-water ocean inside Titan', Icarus 146, 444-452. Raulin, F. (2005), 'Exo-Astrobiological Aspects of Europa and Titan: From Observations to Speculations', Space Science Review 116 (1-2), 471-496. Nature (2005), 'The Huygens probe on Titan', News & Views, Articles and Letters 438, 756-802. Schulze-Makuch, D., and Grinspoon, D.H. (2005), 'Biologically enhanced energy and carbon cycling on Titan?', Astrobiology 5, 560-567.
Post-traumatic growth in stroke carers: a comparison of theories.
Hallam, William; Morris, Reg
2014-09-01
This study examined variables associated with post-traumatic growth (PTG) in stroke carers and compared predictions of two models of PTG within this population: the model of Schaefer and Moos was compared to that of Tedeschi and Calhoun (1992, Personal coping: Theory, research, and application. Westport, CT: Praeger, 149; 1998, Posttraumatic growth: Positive changes in the aftermath of crisis. Mahwah, NJ: Lawrence Erlbaum, 99; 2004, Psychol. Inq., 15, 1, respectively). A cross-sectional survey design was employed. Carers of stroke survivors (N = 71) completed questionnaires measuring PTG, coping style, social support, survivor functioning, age, and carer quality of life. Correlation, multiple regression, and mediation analyses were used to test hypotheses. All carers completing the PTG measure (N = 70) reported growth, but average scores differed from those of cancer carers (Chambers et al., 2012, Eur. J. Cancer Care, 21, 213; Thombre et al., 2010, J. Psychosocial Oncol., 28, 173). PTG was positively correlated with deliberate and intrusive rumination, avoidance coping, social support, and quality of life. Regression analysis showed that factors identified by Tedeschi and Calhoun (deliberate rumination, intrusive rumination, social support, acceptance coping, survivor functioning) accounted for 49% of variance in PTG, whereas those identified by Schaefer and Moos (active coping, avoidance coping, social support, survivor functioning, and age) accounted for only 21%. Rumination, especially deliberate rumination, explained most variance in PTG and mediated the effect of social support on PTG. The findings add to the limited body of evidence suggesting that stroke carers experience growth. Deliberate rumination and social support are important in explaining growth, and the findings support the model proposed by Tedeschi and Calhoun over that of Schaefer and Moos. What is already known on this subject?
Literature on caring for stroke survivors focuses on negative outcomes (Ilse, Feys, de Wit, Putman, & de Weerdt, 2008) to the exclusion of positive outcomes such as post-traumatic growth (PTG; Calhoun & Tedeschi, 1999). Studies of a variety of health conditions have demonstrated that PTG occurs in patients and carers after illness events and is associated with well-being (Gangstad, Norman, & Barton, 2006; Helgeson, Reynolds, & Tomich, 2006; Kim, Schulz, & Carver, 2007). Exploratory studies and studies of benefit finding have shown that PTG occurs in stroke carers (Bacon, Milne, Sheikh, & Freeston, 2009; Buschenfeld, Morris, & Lockwood, 2009; Haley et al., 2009; Thompson, 1991), but there are no studies using standard instruments to assess PTG in this population. Moreover, current theories posit different explanations for PTG (Schaefer & Moos, 1992, 1998; Tedeschi & Calhoun, 2004), and there is a need for empirical tests (Park, 2010). What does this study add? This study extends knowledge by measuring PTG with a standard instrument in a sample of UK stroke carers and investigating associated variables. The study also compared the predictive power of the models of PTG proposed by Tedeschi and Calhoun (2004) and Schaefer and Moos (1992, 1998). PTG was found in UK stroke carers, but levels differed from cancer carers in other countries. Factors associated with PTG were identified; Tedeschi and Calhoun's model best predicted PTG. Deliberate rumination had a direct effect on PTG and also mediated the effect of social support. Deliberate rumination is a possible target for therapeutic interventions to enhance PTG. © 2013 The British Psychological Society.
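The model comparison above rests on how much variance in PTG each set of predictors explains. For a single predictor, that share of variance equals the squared Pearson correlation. The sketch below illustrates this comparison on purely synthetic data; the variable names, effect sizes, and random seed are invented and are not the study's measurements.

```python
# Hedged sketch: compare variance in a synthetic "PTG" score explained by a
# predictor that actually drives it (rumination) versus an unrelated one (age).
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

random.seed(1)
n = 70  # sample size matching the number of completed PTG measures
rumination = [random.gauss(0, 1) for _ in range(n)]
age = [random.gauss(0, 1) for _ in range(n)]
# Synthetic growth scores driven by rumination and independent of age.
ptg = [1.5 * r + random.gauss(0, 1) for r in rumination]

var_explained_rumination = pearson_r(rumination, ptg) ** 2
var_explained_age = pearson_r(age, ptg) ** 2
```

With several predictors per model, the same logic generalizes to the multiple-regression R-squared values reported in the abstract (49% versus 21%).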
Disconnection Events in Comet P/Halley in the Light of the Magnetic Reconnection Model
NASA Astrophysics Data System (ADS)
Voelzke, M. R.; Matsuura, O. T.
1998-08-01
A total of 531 images contained in The International Halley Watch Atlas of Large-Scale Phenomena (Brandt et al., 1992), covering the period from September 1985 to July 1986, were analysed in order to identify, characterize the properties of, and correlate morphological structures in the plasma tail of comet P/Halley. The analysis revealed 47 disconnection events (DEs) (Niedner & Brandt, 1979; Jockers, 1985; Celnik et al., 1988; Delva et al., 1991). The complete analysis of all the images is published in Voelzke & Matsuura, 1998. The distribution of DEs with heliocentric distance shows a bimodal character, possibly associated with the spatial distribution of the magnetic sector boundaries of the interplanetary medium. The 47 DEs photographed in 47 distinct images allowed the determination of 19 DE origins, i.e., the instants at which the comet presumably crossed the boundary between magnetic sectors of the solar wind. These cometary data were compared with solar wind data from in situ measurements by the IMP-8, ICE and PVO spacecraft, which measured the variation of solar wind velocity, density and dynamic pressure during the analysed interval. The data from these spacecraft, together with those from the Vega 1 probe, were used to determine the times of the current sheet crossings. Based on the spacecraft data, the heliographic coordinates of the current sheet were calculated backwards onto the "source surface" of the synoptic maps of the magnetic field of Hoeksema, 1989. The backward calculation is done with a simple model of solar wind expansion at uniform velocity, taking into account the co-rotation of the magnetosphere with the Sun.
This work presents the results of this comparison and the kinematic analysis of the origin of the DEs, determined under the hypothesis that the disconnected plasma of a given DE recedes from the cometary nucleus at constant velocity (Voelzke & Matsuura, 1998), and compares this analysis with others that determine the disconnection time from a uniformly accelerated linear motion (Yi et al., 1994). The velocity varies greatly from one DE to another. - Brandt, J.C., Niedner, M.B.Jr. and Rahe, J., (1992) The International Halley Watch Atlas of Large-Scale Phenomena (printed by: Johnson Printing Co., Boulder, CO), University of Colorado-Boulder. - Celnik, W.E., Koczet, P., Schlosser, W., Schulz, R., Svejda, P. and Weissbauer, K., (1988) Astron. Astrophys. Suppl. Ser. 72, 89. - Delva, M., Schwingenschuh, K., Niedner, M.B.Jr. and Gringauz, K.I., (1991) Planet. Space Sci. 39, Number 5, 697. - Hoeksema, J.T., (1989) Adv. Space Res. 9, 141. - Jockers, K., (1985) Astron. Astrophys. Suppl. Ser. 62, 791. - Niedner, M.B.Jr. and Brandt, J.C., (1979) Astrophys. J. 234, 723. - Voelzke, M.R. and Matsuura, O.T., (1998) Planet. Space Sci. 46, 835. - Yi, Y., Caputo, M.F. and Brandt, J.C., (1994) Planet. Space Sci. 42, Number 9, 705.
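The constant-velocity hypothesis used to date a DE origin can be sketched numerically: if the disconnected plasma recedes from the nucleus at constant speed v, its nucleocentric distance is d(t) = v (t - t0), so a straight-line fit to (time, distance) observations gives the disconnection time t0 as the t-intercept. The observations below are invented for illustration, not Halley measurements.

```python
# Minimal sketch of back-extrapolating a disconnection time under the
# constant-velocity hypothesis. Least-squares fit of d = a + v*t, then
# t0 = -a / v is the inferred moment of disconnection at the nucleus.

def fit_disconnection_time(times, distances):
    """Fit d = a + v*t by least squares; return (v, t0) with t0 = -a/v."""
    n = len(times)
    mt = sum(times) / n
    md = sum(distances) / n
    sxy = sum((t - mt) * (d - md) for t, d in zip(times, distances))
    sxx = sum((t - mt) ** 2 for t in times)
    v = sxy / sxx
    a = md - v * mt
    return v, -a / v

# Hypothetical observations: hours since an arbitrary epoch vs distance in 10^6 km.
times = [10.0, 14.0, 20.0, 26.0]
dists = [1.0, 2.0, 3.5, 5.0]

v, t0 = fit_disconnection_time(times, dists)  # v = 0.25, t0 = 6.0 for these data
```

A uniformly accelerated model (as in Yi et al., 1994) would instead fit d(t) = ½ a (t - t0)², generally yielding an earlier inferred t0 than the constant-velocity fit.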
Enrollment trends in American soil science classes: 2004-2005 to 2013-2014 academic years
NASA Astrophysics Data System (ADS)
Brevik, Eric C.; Vaughan, Karen L.; Parikh, Sanjai J.; Dolliver, Holly; Lindbo, David; Steffan, Joshua J.; Weindorf, David; McDaniel, Paul; Mbila, Monday; Edinger-Marshall, Susan
2017-04-01
Studies indicate that soil science enrollment in the USA was on the decline in the 1990s and into the early 2000s (Baveye et al., 2006; Collins, 2008). A more recent study indicated that over the years 2007 through 2014 the number of soil science academic majors, at both the undergraduate and graduate levels, was on the increase (Brevik et al., 2014). However, the Brevik et al. (2014) study only looked at the number of soil science majors; it did not look at other important trends in soil science enrollment. Therefore, this study was developed to investigate enrollment numbers in individual soil science classes. To investigate this, we collected data from ten different American universities on the enrollment trends for seven different classes taught at the undergraduate level (introduction to soil science, soil fertility, soil management, pedology, soil biology/microbiology, soil chemistry, and soil physics) over a 10 year time period (2004-2005 to 2013-2014 academic years). Enrollment in each individual class was investigated over five-year (2009-2010 to 2013-2014) and 10-year (2004-2005 to 2013-2014) trends. All classes showed increasing enrollment over the five year study period except for soil physics, which experienced a modest decline in enrollment (-4.1% per year). The soil chemistry (23.2% per year) and soil management (10.1% per year) classes had the largest percentage gains in enrollment over the five year time period. All classes investigated experienced increased enrollment over the 10 year study period except soil biology/microbiology, which had an essentially stable enrollment (0.8% enrollment gain per year). Soil physics (28.9% per year) and soil chemistry (14.7% per year) had the largest percentage gains in enrollment over the 10 year time period.
It is worth noting that soil physics enrollments had a large increase from 2004-2005 through 2009-2010, then dropped to and stabilized at a level that was lower than the 2009-2010 high but much higher than enrollment levels through the first three years of the study. This explains soil physics being the only class to show an enrollment decline over the five year trend while showing the greatest percentage gain over the 10 year trend. Overall, the individual classes showed 12 examples of increasing enrollment, one example of stable enrollment, and one example of declining enrollment. These results were interpreted as indicating that enrollment in soil science classes at American universities was on the rise over the time period of the study. References Baveye, P., Jacobson, A.R., Allaire, S.E., Tandarich, J.P. and Bryant, R.B., 2006. Whither goes soil science in the United States and Canada? Soil Science 171, 501-518. Brevik, E.C., Abit, S., Brown, D., Dolliver, H., Hopkins, D., Lindbo, D., Manu, A., Mbila, M., Parikh, S.J., Schulze, D., Shaw, J., Weil, R., Weindorf, D., 2014. Soil science education in the United States: history and current enrollment trends. Journal of the Indian Society of Soil Science 62(4), 299-306. Collins, M.E., 2008. Where have all the soils students gone? Journal of Natural Resources and Life Sciences Education 37, 117-124.
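The per-year percentage figures above are averages of year-over-year changes. As a worked illustration of that arithmetic (with invented enrollment numbers, not the study's data), the average annual percentage change of an enrollment series can be computed as:

```python
# Hedged sketch of an average year-over-year percentage change, the kind of
# "% per year" trend statistic quoted above. Enrollment numbers are invented.

def mean_annual_pct_change(enrollments):
    """Average year-over-year percentage change of an enrollment series."""
    changes = [100.0 * (b - a) / a for a, b in zip(enrollments, enrollments[1:])]
    return sum(changes) / len(changes)

# Hypothetical class enrollments over five consecutive academic years.
soil_chem = [10, 12, 15, 18, 22]
trend = mean_annual_pct_change(soil_chem)  # about +21.8% per year
```

Note that an average of yearly percentage changes can be positive over a long window even when the series declines over a shorter sub-window, which is exactly the soil physics pattern described above.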
NASA Astrophysics Data System (ADS)
Rupf, Isabel
2013-04-01
To meet the EU's ambitious targets for carbon emission reduction, renewable energy production has to be strongly expanded and made more efficient, together with grid energy storage. Alpine Foreland Basins feature a unique geological inventory which can contribute substantially to tackling these challenges. They offer geothermal potential and storage capacity for compressed air, as well as space for underground storage of CO2. Exploiting these natural subsurface resources will compete strongly with existing oil and gas claims and with groundwater issues. The GeoMol project will provide consistent 3-dimensional subsurface information about the Alpine Foreland Basins based on a holistic and transnational approach. The core of the GeoMol project is a geological framework model for the entire Northern Molasse Basin, complemented by five detailed models in pilot areas, including one in the Po Basin, which are dedicated to specific questions of subsurface use. The models will consist of up to 13 litho-stratigraphic horizons ranging from the Cenozoic basin fill down to Mesozoic and late Paleozoic sedimentary rocks and the crystalline basement. More than 5000 wells and 28,000 km of seismic lines serve as input data sets for the geological subsurface model. The data have multiple sources and various acquisition dates, and their interpretations have gone through several paradigm changes. Therefore, it is necessary to standardize the data with regard to technical parameters and content prior to further analysis (cf. Capar et al. 2013, EGU2013-5349). Each partner will build its own geological subsurface model with different software solutions for seismic interpretation and 3d-modelling; 3d-modelling therefore follows different software- and partner-specific workflows. One of the main challenges of the project is to ensure a seamlessly fitting framework model, so it is necessary to define several milestones for cross-border checks during the whole modelling process.
Since the main input data set of the framework model consists of interpreted seismic lines, 3d-models can be generated either in the time or in the depth domain. Some partners will build their 3d-model in the time domain and convert it to depth after finishing; other participants will transform the seismic information first and model directly in the depth domain. To ensure comparability between the different parts, transnational velocity models for time-depth conversion are required at an early stage of the project. The exchange of model geometries, topology, and geo-scientific content will be achieved using an appropriate cyberinfrastructure called GST, which provides functionalities to ensure semantic and technical interoperability. Within the project GeoMol, a web server for the dissemination of 3d geological models will be implemented, including an administrative interface for role-based access, real-time transformation of country-specific coordinate systems, and web visualisation features. The project GeoMol is co-funded by the Alpine Space Program as part of the European Territorial Cooperation 2007-2013. The project integrates partners from Austria, France, Germany, Italy, Slovenia and Switzerland and runs from September 2012 to June 2015. Further information on www.geomol.eu. The GeoMol 3D-modelling team: Roland Baumberger (swisstopo), Magdalena Bottig (GBA), Alessandro Cagnoni (RLB), Laure Capar (BRGM), Renaud Couëffé (BRGM), Chiara D'Ambrogi (ISPRA), Chrystel Dezayes (BRGM), Gerold Diepolder (LfU BY), Charlotte Fehn (LGRB), Sunseare Gabalda (BRGM), Gregor Götzl (GBA), Andrej Lapanje (GeoZS), Fabio Carlo Molinari (RER-SGSS), Edgar Nitsch (LGRB), Robert Pamer (LfU BY), Sebastian Pfleiderer (GBA), Marco Pantaloni (ISPRA), Uta Schulz (LfU BY), Günter Sokol (LGRB), Gunther Wirsing (LGRB), Heiko Zumsprekel (LGRB)
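The time-depth conversion step mentioned above can be sketched for the simplest case of a layered velocity model: each interval contributes a thickness of v_interval times half its two-way travel time. This is a minimal sketch of the principle only; the interval times and velocities below are invented, not GeoMol values.

```python
# Hedged sketch of time-depth conversion with interval velocities:
# thickness of each interval = v_interval * (two-way time) / 2.

def twt_to_depth(interval_twt_s, interval_vel_ms):
    """Depth (m) to the base of the deepest interval, from two-way times (s)
    and interval velocities (m/s)."""
    return sum(v * t / 2.0 for t, v in zip(interval_twt_s, interval_vel_ms))

# Hypothetical column: three intervals with velocity increasing downwards.
twt = [0.4, 0.6, 0.5]           # two-way travel time per interval, s
vel = [2000.0, 3500.0, 4500.0]  # interval velocity per interval, m/s

depth_m = twt_to_depth(twt, vel)  # 400 + 1050 + 1125 = 2575 m
```

In practice the project's transnational velocity models vary laterally as well, so each (x, y) trace gets its own velocity column, but the per-trace arithmetic is the same.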
NASA Technical Reports Server (NTRS)
Wooden, Diane H.; Harker, David E.; Woodward, Charles E.
2006-01-01
When the Deep Impact mission hit Jupiter Family comet 9P/Tempel 1, an ejecta crater was formed and a pocket of volatile gases and ices from 10-30 m below the surface was exposed (A'Hearn et al. 2005). This resulted in a gas geyser that persisted for a few hours (Sugita et al. 2005). The gas geyser pushed dust grains into the coma (Sugita et al. 2005), as well as ice grains (Schulz et al. 2006). The smaller of the dust grains were submicron in radius (0.2-0.3 micron), and were primarily composed of highly refractory minerals, including amorphous (non-graphitic) carbon, and silicate minerals including amorphous (disordered) olivine (Fe,Mg)2SiO4 and pyroxene (Fe,Mg)SiO3 and crystalline Mg-rich olivine. The smaller grains moved faster, as expected from the size-dependent velocity law produced by gas drag on grains. The mineralogy evolved with time: progressively larger grains persisted in the near-nucleus region, having been imparted with slower velocities, and the mineralogies of these larger grains appeared simpler and without crystals. The smaller 0.2-0.3 micron grains reached the coma in about 1.5 hours (1 arcsec = 740 km), were more diverse in mineralogy than the larger grains and contained crystals, and appeared to travel through the coma together. No smaller grains appeared at larger coma distances later (with slower velocities), implying that if grain fragmentation occurred, it happened within the gas acceleration zone. These results of the high spatial resolution spectroscopy (GEMINI+Michelle: Harker et al. 2005, 2006; Subaru+COMICS: Sugita et al. 2005) revealed that the grains released from the interior differed from those of the nominally active areas of this comet in their: (a) crystalline content, (b) smaller size, (c) more diverse mineralogy.
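The size-dependent velocity law invoked above, by which smaller grains outrun larger ones, is often approximated for gas-drag acceleration by a terminal speed scaling as the inverse square root of grain radius. The sketch below illustrates that scaling only; the reference speed and radii are arbitrary values chosen for illustration, not Deep Impact measurements.

```python
import math

# Hedged illustration of a gas-drag size-velocity law, v ~ a^(-1/2):
# smaller grains are accelerated to higher terminal speeds.
# v_ref and a_ref set an arbitrary normalization (hypothetical values).

def grain_speed(radius_um, v_ref=0.4, a_ref=1.0):
    """Terminal speed (km/s) of a grain of given radius (microns),
    scaled from a reference grain of radius a_ref moving at v_ref."""
    return v_ref * math.sqrt(a_ref / radius_um)

v_small = grain_speed(0.25)  # a submicron grain
v_large = grain_speed(4.0)   # a larger grain: 4x slower than the 0.25 micron one
```

This inverse-square-root ordering is what lets larger, slower grains linger near the nucleus while the submicron population reaches the outer coma first.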
The temporal changes in the spectra, recorded by GEMINI+Michelle every 7 minutes, indicated that the dust mineralogy is inhomogeneous and, unexpectedly, that the portion of the size distribution dominated by smaller grains has a more diverse mineralogy. The lower spatial resolution, high sensitivity Spitzer IRS data reveal resonances of refractory minerals (those seen by GEMINI+Michelle plus ortho-pyroxene) as well as resonances that can be attributed to phyllosilicates (layer lattice silicates such as montmorillonite) (Lisse et al. 2006). Pre- and post-impact, micron to submicron grains were inferred to be present in the coma by modeling the high spatial resolution images to account for nucleus plus inner coma fluxes (Wooden et al. 2005, 2006; Harker et al. 2005, 2006a). Note also that crystalline silicates were released from the interior of 73P-B/SW-3 as it disintegrated (Harker et al. 2006b). From the Deep Impact results and the disintegration of 73P-B, we are led to ask the questions: Why is the mineralogy of the dust released from a volatile-rich pocket beneath the surface different from that of the dust released from the nominally active areas? Could the most volatile pockets be exhausted quickly? Why would crystalline silicates be associated with more volatile materials? Perhaps the structure of the comet is so inhomogeneous, e.g., the layered pile model of the nucleus (Belton et al. 2006), that a reservoir of crystalline silicate and submicron grains just happens not to be released by the nominally active areas of comet 9P? Perhaps comets lose matter through their mantles from below their surfaces, thus preserving ancient topographic structures and radiation-damaged silicates and carbon? We will discuss and ponder different scenarios, and will discuss future directions for coordinated observations of JF comets.
NASA Astrophysics Data System (ADS)
Arnaud, Fabien; Fanget, Bernard; Malet, Emmanuel; Poulenard, Jérôme; Støren, Eivind; Leloup, Anouk; Bakke, Jostein; Sabatier, Pierre
2016-04-01
Recent paleo-studies have revealed southern high-latitude climate evolution patterns that are crucial to understanding the global climate evolution (1,2). Among others, the strength and north-south shifts of the westerly winds appeared to be a key parameter (3). However, virtually no land is located south of the 45th parallel south between South Georgia (60°W) and New Zealand (170°E), precluding the establishment of paleoclimate records of past westerlies dynamics. Located around 50°S and 70°E, in the middle of the sub-Antarctic Indian Ocean, the Kerguelen archipelago is a major, geomorphologically complex land-mass that is covered by hundreds of lakes of various sizes. It hence offers a unique opportunity to reconstruct past climate and environment dynamics in a region where virtually nothing is known about them, except the remarkable recent reconstructions based on a Lateglacial peatbog sequence (4). During the 2014-2015 austral summer, a French-Norwegian team led the very first extensive lake sediment coring survey on the Kerguelen Archipelago under the umbrella of the PALAS program supported by the French Polar Institute (IPEV). Two main areas were investigated: i) the southwest of the mainland, the so-called Golfe du Morbihan, where glaciers are currently absent, and ii) the northernmost Kerguelen mainland peninsula, so-called Loranchet, where cirque glaciers are still present. This double-target strategy aims at reconstructing various independent indirect records of precipitation (glacier advance, flood dynamics) and wind speed (marine spray chemical species, wind-borne terrigenous input) to tackle the Holocene climate variability. Despite particularly harsh climate conditions and difficult logistics, we were able to core six lake sediment sites: five in Golfe du Morbihan and one in the Loranchet peninsula. Among them are two sequences taken in the 4 km-long Lake Armor using a UWITEC re-entry piston coring system at 20 and 100 m water depth (6 and 7 m long, respectively).
One sequence from the newly named Lake Tiercelin (2 m long) was recovered at 54 m water depth using UWITEC gravity coring equipment operated from a portable rubber boat. These three sequences cover the whole Holocene period. The 3 m-long sequence from Lake Guynemer, Loranchet peninsula, was taken at 50 m water depth using a homemade small platform and a Nesje piston corer, and covers the last 5 ka cal. BP. Two additional lakes in the vicinity of Lake Armor, Fougères and Poule, were cored for short sequences in order to study environmental changes since the arrival of humans in the 18th century and the subsequent introduction of exogenous plant and animal species. We present here preliminary results, including the dating of all sediment sequences as well as their chemical logging and sedimentological description. These already reveal recurrent Holocene volcanic eruptions as well as erosion patterns that are comparable among the different records. The recognition of tephra layers will further allow the synchronization of terrestrial records with one another and with marine records around the Kerguelen archipelago. Paleoclimate interpretation of the acquired data, as well as further measurements, is still ongoing. However, one may already argue that we have collected rare geological sequences of prime importance in the quest to understand the climate patterns affecting the southern high latitudes throughout the Holocene.
1. Lamy et al. 2015. In: Integrated Analysis of Interglacial Climate Dynamics, Schulz & Paul (eds), 75-81 (Springer)
2. Rebolledo et al. 2015. Quat. Res. 84, 21-36
3. Agosta et al. 2015. Clim. Res. 62, 219-240
4. Van der Putten et al. 2015. Quat. Sci. Rev. 122, 142-157
Quantum heat engine power can be increased by noise-induced coherence
Scully, Marlan O.; Chapin, Kimberly R.; Dorfman, Konstantin E.; Kim, Moochan Barnabas; Svidzinsky, Anatoly
2011-01-01
Laser and photocell quantum heat engines (QHEs) are powered by thermal light and governed by the laws of quantum thermodynamics. To appreciate the deep connection between quantum mechanics and thermodynamics we need only recall that in 1901 Planck introduced the quantum of action to calculate the entropy of thermal light, and in 1905 Einstein's studies of the entropy of thermal light led him to introduce the photon. Then in 1917, he discovered stimulated emission by using detailed balance arguments. Half a century later, Scovil and Schulz-DuBois applied detailed balance ideas to show that maser photons were produced with Carnot quantum efficiency (see Fig. 1A). Furthermore, Shockley and Queisser invoked detailed balance to obtain the efficiency of a photocell illuminated by "hot" thermal light (see Fig. 2A). To understand this detailed balance limit, we note that in the QHE, the incident light excites electrons, which can then deliver useful work to a load. However, the efficiency is limited by radiative recombination in which the excited electrons are returned to the ground state. But it has been proven that radiatively induced quantum coherence can break detailed balance and yield lasing without inversion. Here we show that noise-induced coherence enables us to break detailed balance and get more power out of a laser or photocell QHE. Surprisingly, this coherence can be induced by the same noisy (thermal) emission and absorption processes that drive the QHE (see Fig. 3A). Furthermore, this noise-induced coherence can be robust against environmental decoherence.
Fig. 1. (A) Schematic of a laser pumped by hot photons at temperature Th (energy source, blue) and by cold photons at temperature Tc (entropy sink, red). The laser emits photons (green) such that at threshold the laser photon energy and pump photon energy are related by Carnot efficiency (4). (B) Schematic of atoms inside the cavity. Lower level b is coupled to the excited states a and β. The laser power is governed by the average numbers of hot and cold thermal photons. (C) Same as B but the lower level b is replaced by two states b1 and b2, which can double the power when there is coherence between the levels.
Fig. 2. (A) Schematic of a photocell consisting of quantum dots sandwiched between p and n doped semiconductors. Open circuit voltage and solar photon energy ℏνh are related by the Carnot efficiency factor, where Tc is the ambient and Th is the solar temperature. (B) Schematic of a quantum dot solar cell in which state b is coupled to a via, e.g., solar radiation and coupled to the valence band reservoir state β via optical phonons. The electrons in conduction band reservoir state α pass to state β via an external circuit, which contains the load. (C) Same as B but the lower level b is replaced by two states b1 and b2, which when coherently prepared can double the output power.
Fig. 3. (A) Photocell current j = Γραα (laser photon flux Pl/ℏνl) (in arbitrary units) generated by the photovoltaic cell QHE (laser QHE) of Fig. 2C (Fig. 1C) as a function of the maximum work (in electron volts) done by an electron (laser photon), Eα - Eβ + kTc log(ραα/ρββ), with full (red line), partial (brown line), and no quantum interference (blue line). (B) Power of the photocell of Fig. 2C as a function of voltage for different decoherence rates, 100γ1c. Upper curve indicates power acquired from the sun. PMID:21876187
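The Scovil and Schulz-DuBois result quoted above says that at threshold the maser/laser photon energy equals the pump photon energy scaled by the Carnot factor 1 - Tc/Th. A minimal numerical sketch of that relation (illustrative only, not code from the paper; the pump energy and temperatures are made-up example values):

```python
# Illustrative sketch: Carnot quantum efficiency of a maser/laser QHE
# at threshold, per the Scovil & Schulz-DuBois relation quoted above.

def carnot_efficiency(t_cold, t_hot):
    """Carnot factor 1 - Tc/Th for reservoir temperatures in kelvin."""
    return 1.0 - t_cold / t_hot

def threshold_photon_energy(pump_energy_ev, t_cold, t_hot):
    """Laser photon energy (eV) at threshold for a given pump photon energy."""
    return pump_energy_ev * carnot_efficiency(t_cold, t_hot)

# Made-up example values: 2.0 eV pump photons, Tc = 300 K, Th = 6000 K
print(carnot_efficiency(300.0, 6000.0))            # 0.95
print(threshold_photon_energy(2.0, 300.0, 6000.0))  # 1.9
```

The abstract's point is that noise-induced coherence lets the engine extract more power than this detailed-balance baseline would suggest, not that the Carnot bound itself is violated.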
Chandra Adds to Story of the Way We Were
NASA Astrophysics Data System (ADS)
2003-05-01
Data from NASA's Chandra X-ray Observatory have enabled astronomers to use a new way to determine if a young star is surrounded by a planet-forming disk like our early Sun. These results suggest that disks around young stars can evolve rapidly to form planets, or they can be disrupted by close encounters with other stars. Chandra observed two young star systems, TW Hydrae and HD 98800, both of which are in the TW Hydrae Association, a loose cluster of 10-million-year-old stars. Observations at infrared and other wavelengths have shown that several stars in the TW Hydrae Association are surrounded by disks of dust and gas. At a distance of about 180 light years from Earth, these systems are among the nearest analogs to the early solar nebula from which Earth formed. "X-rays give us an excellent new way to probe the disks around stars," said Joel Kastner of the Rochester Institute of Technology in Rochester, NY, during a press conference today in Nashville, Tenn., at a meeting of the American Astronomical Society. "They can tell us whether a disk is very near to its parent star and dumping matter onto it, or whether such activity has ceased to be important. In the latter case, presumably the disk has been assimilated into larger bodies, perhaps planets, or disrupted." Kastner and his colleagues found examples of each type of behavior in their study. One star, TW Hydrae, namesake of the TW Hydrae Association, exhibited features in its X-ray spectrum that provide strong, new evidence that matter is accreting onto the star from a circumstellar disk. They concluded that matter is guided by the star's magnetic field onto one or more hot spots on the surface of the star. In contrast, Chandra observations of the young multiple star system HD 98800 revealed that its brightest star, HD 98800A, is producing X-rays much as the Sun does, from a hot upper atmosphere or corona.
HD 98800 is a complex multiple-star system consisting of two pairs of stars, called HD 98800A and HD 98800B. These pairs, each of which is about an Earth-Sun distance apart, orbit each other at about the same distance as Pluto orbits the Sun. "Our X-ray results are fully consistent with other observations that show that accretion of matter from a disk in HD 98800A has dropped to a low level," said Kastner. "So Chandra has thrown new weight behind the evidence that any disk in this system has been greatly diminished or destroyed in ten million years, perhaps by the ongoing formation of planets or by the companion stars." The new X-ray technique for studying disks around stars relies on the ability of Chandra's spectrometers to measure the energies of individual X-rays very precisely. By comparing the number of X-rays emitted by hot gas at specific energies from ions such as oxygen and neon, the temperature and density of particles can be determined. This new technique will help astronomers to distinguish between an accretion disk and a stellar corona as the origin of intense X-ray emission from a young star. Other members of the research team are David Huenemoerder, Norbert Schulz, and Claude Canizares from the Massachusetts Institute of Technology, and David Weintraub from Vanderbilt University. NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program, and TRW, Inc., Redondo Beach, Calif., is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, Mass., for the Office of Space Science at NASA Headquarters, Washington. The image and additional information are available at: http://chandra.harvard.edu and http://chandra.nasa.gov
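The density diagnostic described above can be illustrated with the standard He-like line-ratio technique: the forbidden-to-intercombination ratio R falls with electron density as R(n_e) = R0 / (1 + n_e/n_crit) (the Gabriel & Jordan form). The default R0 and n_crit below are placeholder values for illustration only, not numbers taken from this release:

```python
# Sketch of a He-like ion density diagnostic. R0 and n_crit are
# placeholder constants, not values from the Chandra analysis.

def electron_density(r_observed, r0=3.1, n_crit=6.0e11):
    """Invert R = R0 / (1 + n_e/n_crit) for n_e in cm^-3."""
    if not 0.0 < r_observed < r0:
        raise ValueError("R must lie between 0 and the low-density limit R0")
    return n_crit * (r0 / r_observed - 1.0)

# A ratio at half the low-density limit implies n_e equal to n_crit:
print(electron_density(1.55))  # 6e11 cm^-3 for the placeholder constants
```

A high inferred density points to accretion-fed emission (as argued for TW Hydrae), while a low one is consistent with an ordinary corona (as for HD 98800A).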
NASA Astrophysics Data System (ADS)
van Berk, Wolfgang; Schulz, Hans-Martin
2010-05-01
Crude oil quality in reservoirs can be modified by degradation processes at oil-water contacts (OWC). Mineral phase assemblages, the composition of coexisting pore water, and the type and amount of hydrocarbon degradation products (HDP) are controlling factors in complex hydrogeochemical processes in hydrocarbon-bearing siliciclastic reservoirs which have undergone different degrees of biodegradation. Moreover, the composition of the coexisting gas (particularly the CO2 partial pressure) results from different pathways of hydrogeochemical equilibration. In a first step we analysed recent and palaeo-OWCs in the Heidrun field. Anaerobic decomposition of oil components at the OWC resulted in the release of methane and carbon dioxide and the subsequent dissolution of feldspars (anorthite and adularia), leading to the formation of secondary kaolinite and carbonate phases. Less intensively degraded hydrocarbons co-occur with calcite, whereas strongly degraded hydrocarbons co-occur with a solid-solution carbonate phase (siderite, magnesite, calcite) enriched in δ13C. To test such processes quantitatively, in a second step CO2 equilibria and mass transfers induced by organic-inorganic interactions were hydrogeochemically modelled in different semi-generic scenarios with data from the Norwegian continental shelf (according to Smith & Ehrenberg 1989). The model is based on chemical thermodynamics and includes irreversible reactions representing hydrolytic disproportionation of hydrocarbons according to Seewald's (2003) overall reaction (1a), which is additionally applied in our modelling work in an extended form including acetic acid (1b):
(1a) R-CH2-CH2-CH3 + 4H2O -> R + 2CO2 + CH4 + 5H2,
(1b) R-CH2-CH2-CH3 + 4H2O -> R + 1.9CO2 + 0.1CH3COOH + 0.9CH4 + 5H2.
Equilibrating mineral assemblages (different feldspar types, quartz, kaolinite, calcite) are based on the observed primary reservoir composition at 72 °C.
Modelled equilibration and coupled mass transfer were triggered by the addition and reaction of different amounts of HDP. Modelled CO2 partial pressure values in a multicomponent gas phase equilibrated with K-feldspar, quartz, kaolinite, and calcite resemble measured data. Similar CO2 contents result from acetic acid addition (eq. 1b). Equilibration with albite or anorthite reduces the release of CO2 into the multicomponent gas phase dramatically, by 1 or 4 orders of magnitude compared with equilibration with K-feldspar (van Berk et al., 2009). In a third step, based on data by Ehrenberg & Jakobsen (2001), the effects of organic-inorganic interactions at OWCs in Brent Group reservoir sandstones from the Gullfaks Oilfield (offshore Norway) have been hydrogeochemically modelled. Observed local changes in mineral phase assemblage compositions (content of different feldspar types, kaolinite, carbonate) and CO2 partial pressures are attributed to varying degrees of oil biodegradation (up to more than 10 %; Horstad et al. 1992). Modelling results are congruent with observations and indicate that (i) intense dissolution of anorthite, (ii) less intense dissolution of albite, (iii) minor dissolution of K-feldspar, (iv) intense precipitation of kaolinite and quartz, (v) less intense precipitation of carbonate, and (vi) the build-up of CO2 partial pressures are driven by the release of HDP.
References
Ehrenberg SN & Jakobsen KG (2001) Plagioclase dissolution related to biodegradation of oil in Brent Group sandstones (Middle Jurassic) of Gullfaks Field, northern North Sea. Sedimentology, 48, 703-721.
Smith JT & Ehrenberg SN (1989) Correlation of carbon dioxide abundance with temperature in clastic hydrocarbon reservoirs: relationship to inorganic chemical equilibrium. Marine and Petroleum Geology, 6, 129-135.
Seewald JS (2003) Organic-inorganic interactions in petroleum-producing sedimentary basins. Nature, 426, 327-333.
van Berk, W, Schulz, H-M & Fu, Y (2009) Hydrogeochemical modelling of CO2 equilibria and mass transfer induced by organic-inorganic interactions in siliciclastic petroleum reservoirs. Geofluids, 9, 253-262.
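As a sanity check on the two overall disproportionation reactions quoted in the abstract, a short script can verify the atom balance. The element counts are entered by hand, and one interpretive assumption (not stated in the abstract) is made: the product-side residue "R" leaves as R-H, picking up one hydrogen from the propyl fragment C3H7.

```python
# Hand-entered atom-balance check of reactions (1a) and (1b) above.
# Assumption: product-side "R" is read as R-H (one extra hydrogen).

def balanced(lhs, rhs, tol=1e-9):
    """True if both {element: atom count} maps agree within tol."""
    return all(abs(lhs.get(e, 0.0) - rhs.get(e, 0.0)) < tol
               for e in set(lhs) | set(rhs))

# left side: propyl fragment C3H7 plus 4 H2O
lhs = {"C": 3.0, "H": 7.0 + 8.0, "O": 4.0}

# (1a): R-H + 2 CO2 + CH4 + 5 H2
rhs_1a = {"C": 2.0 + 1.0, "H": 1.0 + 4.0 + 10.0, "O": 4.0}

# (1b): R-H + 1.9 CO2 + 0.1 CH3COOH + 0.9 CH4 + 5 H2
rhs_1b = {"C": 1.9 + 0.2 + 0.9, "H": 1.0 + 0.4 + 3.6 + 10.0, "O": 3.8 + 0.2}

print(balanced(lhs, rhs_1a), balanced(lhs, rhs_1b))  # True True
```

Both stoichiometries close for C, H, and O under that reading, which is consistent with (1b) being a fractional recombination of (1a) with part of the CO2 and CH4 diverted into acetic acid.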
Chandra Observatory Uncovers Hot Stars In The Making
NASA Astrophysics Data System (ADS)
2000-11-01
Cambridge, Mass.--In resolving the hot core of one of the Earth's closest and most massive star-forming regions, the Chandra X-ray Observatory showed that almost all the young stars' temperatures are more extreme than expected. [Image] The Orion Trapezium as observed on October 31, 1999, UT 05:47:21. The colors represent energy, where blue and white indicate very high energies and therefore extreme temperatures. The size of an X-ray source in the image also reflects its brightness, i.e. brighter sources appear larger in size; this is an artifact caused by the limiting blur of the telescope optics. The projected diameter of the field of view is about 80 light days. Credit: NASA/MIT. [Image] The Orion Trapezium as observed on November 24, 1999, UT 05:37:54, shown in the same way. Credit: NASA/MIT. The Orion Trapezium Cluster, only a few hundred thousand years old, offers a prime view into a stellar nursery. Its X-ray sources detected by Chandra include several externally illuminated protoplanetary disks ("proplyds") and several very massive stars, which burn so fast that they will die before the low mass stars even fully mature. One of the major highlights of the Chandra observations is the identification of proplyds as X-ray point sources in the near vicinity of the most massive star in the Trapezium. Previous observations did not have the ability to separate the contributions of the different objects.
"We've seen high temperatures in stars before, but what clearly surprised us was that nearly all the stars we see appear at rather extreme temperatures in X-rays, independent of their type," said Norbert S. Schulz, MIT research scientist at the Chandra X-ray Center, who leads the Orion Project. "And by extreme, we mean temperatures which are in some cases well above 60 million degrees." The hottest massive stars previously known reached around 25 million degrees. The great Orion Nebula harbors the Orion Nebula Cluster (ONC), a loose association of around 2,000 mostly very young stars of a wide range of masses confined within a radius of less than 10 light years. The Orion Trapezium Cluster is a younger subgroup of stars at the core of the ONC, confined within a radius of about 1.5 light years. Its median age is around 300,000 years. The constant bright light of the Trapezium and its surrounding stars at the heart of the Orion nebula (M42) is visible to the naked eye on clear nights. In X-rays, these young stars are constantly active and changing in brightness, sometimes within half a day, sometimes over weeks. "Never before Chandra have we seen images of stellar activity with such brilliance," said Joel Kastner, professor at the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology. "Here the combination of very high angular resolution with the high quality spectra that Chandra offers clearly pays off." The observation was performed using the High Energy Transmission Grating Spectrometer (HETGS), and the X-ray spectra were recorded with the spectroscopic array of the Advanced CCD Imaging Spectrometer (ACIS). The ACIS detector is a sophisticated version of the CCD detectors commonly used in video cameras or digital cameras. The Orion stars are so bright in X-rays that they easily saturate the CCDs. Here the team used the gratings as a blocking filter.
[Image] X-ray contours of the Chandra observation overlaid onto the optical Hubble image (courtesy of J. Bally, CASA Colorado). The field of view is 30"x30". Besides the bright main Trapezium stars, which were found to be extremely hot massive stars, several externally illuminated objects are also X-ray emitters, some of them with temperatures up to 100 million degrees. The ones that do not show X-ray contours are probably too faint to be detected in these particular Chandra observations. Credit: J. Bally, CASA Colorado. It is generally assumed that young low-mass stars like our Sun are more than 1,000 times more X-ray luminous than the present-day Sun. The X-ray emission here is thought to arise from magnetic activity in connection with stellar rotation. Consequently, the highest temperatures would be observed only in very violent and giant flares, and temperatures as high as 60 million degrees have been observed in very few cases. The absence of many strong flares in the light curves, as well as temperatures in the Chandra ACIS spectra which exceed those in giant flares, could mean that these stars are either young protostars (i.e. stars in the making) or a special class of more evolved, hot young stars. Schulz concedes that although astronomers have gathered many clues in recent years about the X-ray behavior of very young stellar objects, "we are far from being able to uniquely classify evolutionary stages of their X-ray emission." The five main young and massive Trapezium stars are responsible for the illumination of the entire Orion Nebula. These stars are born with masses 15 to 30 times that of our Sun. X-rays in such stars are thought to be produced by shocks that occur when high velocity stellar winds ram into slower dense material. The Chandra spectra show a temperature component of about 5 million to 10 million degrees, which is consistent with this model.
However, four of these five stars also show additional components between 30 million and 60 million degrees. "The fact that some of these massive stars show such a hot component and some do not, and that a hot component seems to be more common than previously assumed, is an important new aspect in the spectral behavior of these stars," said David Huenemoerder, research physicist at the MIT Center for Space Research. Standard shock models cannot explain such high temperatures, which may instead be caused by magnetically confined plasmas of the kind generally attributed only to stars like the Sun. Such an effect would support the suspicion that some aspects of the X-ray emission of massive stars may not be different from our Sun, which also has a hot corona. More study is needed to confirm this conclusion. The latest in NASA's series of Great Observatories, Chandra is the "X-ray Hubble," launched in July 1999 into a deep-space orbit around the Earth. Chandra carries a large X-ray telescope to focus X-rays from objects in the sky. An X-ray telescope cannot work on the ground because X-rays are absorbed by the Earth's atmosphere. The HETGS was built by the Massachusetts Institute of Technology with Bruno Rossi Professor Claude Canizares as Principal Investigator. The ACIS X-ray camera was conceived and developed for NASA by Penn State and the Massachusetts Institute of Technology under the leadership of Gordon Garmire, Evan Pugh Professor of Astronomy and Astrophysics at Penn State. The Orion observation was part of Prof. Canizares' guaranteed observing time during the first round of Chandra observations. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program. TRW Inc., Redondo Beach, California, is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, Massachusetts.
To follow Chandra's progress, visit the Chandra site at: http://chandra.harvard.edu and http://chandra.nasa.gov Images for this release and a postscript preprint of the accepted science paper (The Astrophysical Journal) can be downloaded from http://space.mit.edu/~nss/orion/orion.html
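For readers converting between the "millions of degrees" quoted throughout this release and the keV energy scale X-ray astronomers usually work in, a minimal conversion sketch using only the standard Boltzmann-constant value (the 6x10^7 K example is the 60-million-degree figure from the text):

```python
# Unit conversion: plasma temperature in kelvin <-> characteristic
# photon energy kT in keV, with k_B ~ 8.617e-8 keV per kelvin.

K_B_KEV_PER_K = 8.617e-8  # Boltzmann constant in keV/K

def kelvin_to_kev(t_kelvin):
    return t_kelvin * K_B_KEV_PER_K

def kev_to_kelvin(kt_kev):
    return kt_kev / K_B_KEV_PER_K

# The 60-million-degree components discussed above:
print(round(kelvin_to_kev(6.0e7), 2))  # 5.17 (keV)
```

Equivalently, 1 keV corresponds to roughly 11.6 million kelvin, so the 25-million-degree stars known previously sit near kT of 2 keV.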
[Transfer of exotic ticks (Acari: Ixodida) on reptiles (Reptilia) imported to Poland].
2009-01-01
In the period 2003-2007, a total of 382 specimens of reptiles belonging to the following genera were investigated: Testudo, Iguana, Varanus, Gongylophis, Python, Spalerosophis, Psammophis. The material for the present study was a collection of reptiles owned by "Animals" Ltd of Świętochłowice (Upper Silesia, Poland), a company specialising in the import of exotic animals to Poland, as well as the reptile collections of private breeders. The reptiles that turned out to be the most heavily infested with ticks were the commonly bred terrarium reptiles Varanus exanthematicus and Python regius, which were imported to Poland from Ghana, Africa. Exotic reptiles are also imported from Southern Europe, Asia and Central America. The present study confirmed the transfer of exotic ticks to Poland on reptiles. A total of 2104 tick specimens, representing all stages of development (males, females, nymphs, larvae), were collected. They represented species of the genera Amblyomma and Hyalomma. The following species were found: Amblyomma exornatum Koch, 1844, Amblyomma flavomaculatum (Lucas, 1846), Amblyomma latum Koch, 1844, Amblyomma nuttalli Dönitz, 1909, Amblyomma quadricavum Schulze, 1941, Amblyomma transversale (Lucas, 1844), Amblyomma varanense (Supino, 1897), Amblyomma spp. Koch, 1844, Hyalomma aegyptium (Linnaeus, 1758). All the tick species of the genus Amblyomma found were recorded in Poland for the first time. The overall prevalence of infestation was 77.6%. The highest prevalence was observed on pythons (Python regius; 81.2%) and on monitor lizards (Varanus exanthematicus; 78.7%). The highest numbers of ticks were likewise collected from Python regius and Varanus exanthematicus. The mean infestation intensity for V. exanthematicus was 7.6 ticks per host, while for P. regius the intensity reached 4.7 ticks. The tick most abundantly transferred to Poland on a host was an African species, Amblyomma latum. Fifty-eight specimens of monitor lizards (V. salvator and V. exanthematicus) and 92 specimens of pythons (P. regius) were examined, with detailed descriptions of where the parasites were feeding on the body of the host. Among the 434 specimens of ticks collected from the monitor lizards, the majority were attached on the hosts' legs (40.5%), on the trunk (29.3%) and on the head (20.3%), with fewest on the tail (9.9%). Also, 430 specimens of ticks were collected from the bodies of pythons. They mostly parasitized along the whole length of the back (54.4%) and on the stomach side of the trunk (29.8%), less frequently in the area of the cloaca (5.6%), around the eyes (3.7%), in the nostril openings (0.9%) and on the remainder of the head (5.6%). On the hosts, ticks were found at different developmental stages, but adult stages dominated. The most frequent were males (999 specimens), then adult females (552 specimens), nymphs (508 specimens) and larvae (45 specimens). During the research, 13 cases of anomalies of morphological structure were confirmed for the ticks Amblyomma flavomaculatum, Amblyomma latum and Hyalomma aegyptium. Asymmetries and deformations of the general body shape were observed, as were anomalies concerning structures on the surface of the body and anomalies of the legs. For the first time in Poland, epidemiological tests were carried out to screen exotic ticks collected from reptiles for micro-organisms which pose a threat to the health of people and animals. For this purpose, molecular techniques (polymerase chain reaction (PCR) and DNA sequencing) were used. The isolates from 345 ticks were examined for the presence of DNA of Anaplasma phagocytophilum, which is the etiological agent of human granulocytic anaplasmosis, and of Rickettsia spp. from the spotted fever group, causing human rickettsiosis.
This study confirmed the presence of Anaplasma phagocytophilum in two ticks of Amblyomma flavomaculatum (constituting 0.6% of all the ticks investigated) feeding on Varanus exanthematicus. None of the tick specimens, however, contained Rickettsia spp. DNA. The expanding import of exotic reptiles into Poland and Central Europe is important for parasitological and epidemiological reasons and therefore requires monitoring and wide-ranging prophylactic activities to prevent the inflow of exotic parasites to Poland.
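The prevalence and mean-intensity figures reported above follow the standard parasitological definitions (infested hosts over hosts examined, and parasites collected over infested hosts). A minimal sketch of the arithmetic, with made-up counts rather than the study's raw data:

```python
# Standard parasitological summary statistics. The counts in the
# example call are illustrative placeholders, not the study's data.

def prevalence_percent(n_infested, n_examined):
    """Share of examined hosts carrying at least one tick, in percent."""
    return 100.0 * n_infested / n_examined

def mean_intensity(n_ticks, n_infested):
    """Average number of ticks per infested host."""
    return n_ticks / n_infested

# e.g. 50 ticks collected from 10 infested hosts out of 25 examined:
print(prevalence_percent(10, 25), mean_intensity(50, 10))  # 40.0 5.0
```

Note that mean intensity divides by infested hosts only; dividing by all examined hosts would instead give the mean abundance, a different statistic.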
PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010)
NASA Astrophysics Data System (ADS)
Lin, Simon C.; Shen, Stella; Neufeld, Niko; Gutsche, Oliver; Cattaneo, Marco; Fisk, Ian; Panzer-Steindel, Bernd; Di Meglio, Alberto; Lokajicek, Milos
2011-12-01
The International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held at Academia Sinica in Taipei from 18-22 October 2010. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing progress and needs for the community, and to review recent, ongoing and future activities. CHEP conferences are held at roughly 18-month intervals, alternating between Europe, Asia, America and other parts of the world. Recent CHEP conferences have been held in Prague, Czech Republic (2009); Victoria, Canada (2007); Mumbai, India (2006); Interlaken, Switzerland (2004); San Diego, California (2003); Beijing, China (2001); and Padova, Italy (2000). CHEP 2010 was organized by the Academia Sinica Grid Computing Centre. There was an International Advisory Committee (IAC) setting the overall themes of the conference, a Programme Committee (PC) responsible for the content, and a Conference Secretariat responsible for the conference infrastructure. There were over 500 attendees, with a program that included plenary sessions of invited speakers, a number of parallel sessions comprising around 260 oral and 200 poster presentations, and industrial exhibitions. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing; Event Processing; Software Engineering, Data Stores, and Databases; Distributed Processing and Analysis; Computing Fabrics and Networking Technologies; Grid and Cloud Middleware; and Collaborative Tools.
The conference included excursions to various attractions in Northern Taiwan, including Sanhsia Tsu Shih Temple, Yingko, Chiufen Village, the Northeast Coast National Scenic Area, Keelung, Yehliu Geopark, and Wulai Aboriginal Village, as well as two banquets held at the Grand Hotel and Grand Formosa Regent in Taipei. The next CHEP conference will be held in New York, United States, on 21-25 May 2012. We would like to thank the National Science Council of Taiwan, the EU ACEOLE project, the commercial sponsors, and the International Advisory Committee and Programme Committee members for all their support and help. Special thanks go to the Programme Committee members for their careful choice of conference contributions and their enormous effort in reviewing and editing about 340 post-conference proceedings papers.
Simon C Lin
CHEP 2010 Conference Chair and Proceedings Editor
Taipei, Taiwan, November 2011
Track Editors / Programme Committee
Chair: Simon C Lin, Academia Sinica, Taiwan
Online Computing Track: Y H Chang, National Central University, Taiwan; Harry Cheung, Fermilab, USA; Niko Neufeld, CERN, Switzerland
Event Processing Track: Fabio Cossutti, INFN Trieste, Italy; Oliver Gutsche, Fermilab, USA; Ryosuke Itoh, KEK, Japan
Software Engineering, Data Stores, and Databases Track: Marco Cattaneo, CERN, Switzerland; Gang Chen, Chinese Academy of Sciences, China; Stefan Roiser, CERN, Switzerland
Distributed Processing and Analysis Track: Kai-Feng Chen, National Taiwan University, Taiwan; Ulrik Egede, Imperial College London, UK; Ian Fisk, Fermilab, USA; Fons Rademakers, CERN, Switzerland; Torre Wenaus, BNL, USA
Computing Fabrics and Networking Technologies Track: Harvey Newman, Caltech, USA; Bernd Panzer-Steindel, CERN, Switzerland; Antonio Wong, BNL, USA; Ian Fisk, Fermilab, USA; Niko Neufeld, CERN, Switzerland
Grid and Cloud Middleware Track: Alberto Di Meglio, CERN, Switzerland; Markus Schulz, CERN, Switzerland
Collaborative Tools Track: Joao Correia Fernandes, CERN, Switzerland; Philippe Galvez, Caltech, USA; Milos Lokajicek, FZU Prague, Czech Republic
International Advisory Committee
Chair: Simon C Lin, Academia Sinica, Taiwan
Members: Mohammad Al-Turany, FAIR, Germany; Sunanda Banerjee, Fermilab, USA; Dario Barberis, CERN & Genoa University/INFN, Switzerland; Lothar Bauerdick, Fermilab, USA; Ian Bird, CERN, Switzerland; Amber Boehnlein, US Department of Energy, USA; Kors Bos, CERN, Switzerland; Federico Carminati, CERN, Switzerland; Philippe Charpentier, CERN, Switzerland; Gang Chen, Institute of High Energy Physics, China; Peter Clarke, University of Edinburgh, UK; Michael Ernst, Brookhaven National Laboratory, USA; David Foster, CERN, Switzerland; Gonzalo Merino, CIEMAT, Spain; John Gordon, STFC-RAL, UK; Volker Guelzow, Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany; John Harvey, CERN, Switzerland; Frederic Hemmer, CERN, Switzerland; Hafeez Hoorani, NCP, Pakistan; Viatcheslav Ilyin, Moscow State University, Russia; Matthias Kasemann, DESY, Germany; Nobuhiko Katayama, KEK, Japan; Milos Lokajicek, FZU Prague, Czech Republic; David Malon, ANL, USA; Pere Mato Vila, CERN, Switzerland; Mirco Mazzucato, INFN CNAF, Italy; Richard Mount, SLAC, USA; Harvey Newman, Caltech, USA; Mitsuaki Nozaki, KEK, Japan; Farid Ould-Saada, University of Oslo, Norway; Ruth Pordes, Fermilab, USA; Hiroshi Sakamoto, The University of Tokyo, Japan; Alberto Santoro, UERJ, Brazil; Jim Shank, Boston University, USA; Alan Silverman, CERN, Switzerland; Randy Sobie, University of Victoria, Canada; Dongchul Son, Kyungpook National University, South Korea; Reda Tafirout, TRIUMF, Canada; Victoria White, Fermilab, USA; Guy Wormser, LAL, France; Frank Wuerthwein, UCSD, USA; Charles Young, SLAC, USA
Li, Xi-Ying; van Achterberg, Cornelis; Tan, Ji-Cai
2013-01-01
The species of the subfamily Opiinae (Hymenoptera: Braconidae) from Hunan (Oriental China) are revised and illustrated. Thirty-six new species are described: Apodesmia bruniclypealis Li & van Achterberg, sp. n., Apodesmia melliclypealis Li & van Achterberg, sp. n., Areotetes albiferus Li & van Achterberg, sp. n., Areotetes carinuliferus Li & van Achterberg, sp. n., Areotetes striatiferus Li & van Achterberg, sp. n., Coleopioides diversinotum Li & van Achterberg, sp. n., Coleopioides postpectalis Li & van Achterberg, sp. n., Fopius dorsopiferus Li, van Achterberg & Tan, sp. n., Indiopius chenae Li & van Achterberg, sp. n., Opiognathus aulaciferus Li & van Achterberg, sp. n., Opiognathus brevibasalis Li & van Achterberg, sp. n., Opius crenuliferus Li & van Achterberg, sp. n., Opius malarator Li, van Achterberg & Tan, sp. n., Opius monilipalpis Li & van Achterberg, sp. n., Opius pachymerus Li & van Achterberg, sp. n., Opius songi Li & van Achterberg, sp. n., Opius youi Li & van Achterberg, sp. n., Opius zengi Li & van Achterberg, sp. n., Phaedrotoma acuticlypeata Li & van Achterberg, sp. n., Phaedrotoma angiclypeata Li & van Achterberg, sp. n., Phaedrotoma antenervalis Li & van Achterberg, sp. n., Phaedrotoma depressiclypealis Li & van Achterberg, sp. n., Phaedrotoma flavisoma Li & van Achterberg, sp. n., Phaedrotoma nigrisoma Li & van Achterberg, sp. n., Phaedrotoma protuberator Li & van Achterberg, sp. n., Phaedrotoma rugulifera Li & van Achterberg, sp. n., Phaedrotoma striatinota Li & van Achterberg, sp. n., Phaedrotoma vermiculifera Li & van Achterberg, sp. n., Rhogadopsis latipennis Li & van Achterberg, sp. n., Rhogadopsis longicaudifera Li & van Achterberg, sp. n., Rhogadopsis maculosa Li, van Achterberg & Tan, sp. n., Rhogadopsis obliqua Li & van Achterberg, sp. n., Rhogadopsis sculpturator Li & van Achterberg, sp. n., Utetes longicarinatus Li & van Achterberg, sp. n. and Xynobius notauliferus Li & van Achterberg, sp. n. 
Areotetes van Achterberg & Li, gen. n. (type species: Areotetes carinuliferus sp. n.) and Coleopioides van Achterberg & Li, gen. n. (type species: Coleopioides postpectalis sp. n.) are described. All species are illustrated and keyed. In total 30 species of Opiinae are sequenced and the cladograms are presented. Neopius Gahan, 1917, Opiognathus Fischer, 1972, Opiostomus Fischer, 1972, and Rhogadopsis Brèthes, 1913, are treated as valid genera based on molecular and morphological differences. Opius vittata Chen & Weng, 2005 (not Opius vittatus Ruschka, 1915), Opius ambiguus Weng & Chen, 2005 (not Wesmael, 1835) and Opius mitis Chen & Weng, 2005 (not Fischer, 1963) are primary homonyms and are renamed into Phaedrotoma depressa Li & van Achterberg, nom. n., Opius cheni Li & van Achterberg, nom. n. and Opius wengi Li & van Achterberg, nom. n., respectively. Phaedrotoma terga (Chen & Weng, 2005) comb. n., Diachasmimorpha longicaudata (Ashmead, 1905) and Biosteres pavitita Chen & Weng, 2005, are reported new for Hunan; Opiostomus aureliae (Fischer, 1957) comb. n. is new for China and Hunan; Xynobius maculipennis (Enderlein, 1912) comb. n. is new for Hunan and continental China; and Rhogadopsis longuria (Chen & Weng, 2005) comb. n. is new for Hunan. The following new combinations are given: Apodesmia puncta (Weng & Chen, 2005) comb. n., Apodesmia tracta (Weng & Chen, 2005) comb. n., Areotetes laevigatus (Weng & Chen, 2005) comb. n., Phaedrotoma dimidia (Chen & Weng, 2005) comb. n., Phaedrotoma improcera (Weng & Chen, 2005) comb. n., Phaedrotoma amputata (Weng & Chen, 2005) comb. n., Phaedrotoma larga (Weng & Chen, 2005) comb. n., Phaedrotoma osculas (Weng & Chen, 2005) comb. n., Phaedrotoma postuma (Chen & Weng, 2005) comb. n., Phaedrotoma rugulosa (Chen & Weng, 2005) comb. n., Phaedrotoma tabularis (Weng & Chen, 2005) comb. n., Rhogadopsis apii (Chen & Weng, 2005) comb. n., Rhogadopsis dimidia (Chen & Weng, 2005) comb. n., Rhogadopsis diutia (Chen & Weng, 2005) comb. n., Rhogadopsis longuria (Chen & Weng, 2005) comb. n., Rhogadopsis pratellae (Weng & Chen, 2005) comb. n., Rhogadopsis pratensis (Weng & Chen, 2005) comb. n., Rhogadopsis sculpta (Chen & Weng, 2005) comb. n., Rhogadopsis sulcifer (Fischer, 1975) comb. n., Rhogadopsis tabidula (Weng & Chen, 2005) comb. n., Xynobius complexus (Weng & Chen, 2005) comb. n., Xynobius indagatrix (Weng & Chen, 2005) comb. n., Xynobius multiarculatus (Chen & Weng, 2005) comb. n. The following (sub)genera are synonymised: Snoflakopius Fischer, 1972, Jucundopius Fischer, 1984, Opiotenes Fischer, 1998, and Oetztalotenes Fischer, 1998, with Opiostomus Fischer, 1971; Xynobiotenes Fischer, 1998, with Xynobius Foerster, 1862; Allotypus Foerster, 1862, Lemnaphilopius Fischer, 1972, Agnopius Fischer, 1982, and Cryptognathopius Fischer, 1984, with Apodesmia Foerster, 1862; Nosopoea Foerster, 1862, Tolbia Cameron, 1907, Brachycentrus Szépligeti, 1907, Baeocentrum Schulz, 1911, Hexaulax Cameron, 1910, Coeloreuteus Roman, 1910, Neodiospilus Szépligeti, 1911, Euopius Fischer, 1967, Gerius Fischer, 1972, Grimnirus Fischer, 1972, Hoenirus Fischer, 1972, Mimirus Fischer, 1972, Gastrosema Fischer, 1972, Merotrachys Fischer, 1972, Phlebosema Fischer, 1972, Neoephedrus Samanta, Tamili, Saha & Raychaudhuri, 1983, Adontopius Fischer, 1984, Kainopaeopius Fischer, 1986, Millenniopius Fischer, 1996, and Neotropopius Fischer, 1999, with Phaedrotoma Foerster, 1862. PMID:23653521
NASA Astrophysics Data System (ADS)
Tornquist, Mattias
The research presented in this thesis covers wave-particle interactions for relativistic (0.5-10 MeV) electrons in Earth's outer radiation belt (r = 3-7 RE, or L-shells L = 3-7) interacting with magnetospheric Pc-5 (ULF) waves. This dissertation focuses on idealized models of short- and long-term electron energy and radial-position scattering caused by interactions with ULF waves. We use test-particle simulations to investigate these wave-particle interactions with ideal wave and magnetic dipole fields. We demonstrate that the wave-particle phase can produce various patterns in phase-space trajectories, i.e. local acceleration, and that a global electron population, with all initial conditions accounted for, experiences negligible net energy scattering. Working in GSM polar coordinates, the relevant wave field components are EL, Ephi and Bz; we find that the maximum energy scattering is 3-10 times more effective for Ephi than for EL in a magnetic dipole field with a realistic dayside compression amplitude. We also evaluate electron interactions with two coexisting waves for a set of small frequency separations and phases, confirming that multi-resonant transport is possible for overlapping resonances in phase space when the Chirikov criterion is met (stochasticity parameter K >= 1). The electron energy scattering increases with decreasing frequency separation, i.e. increasing K, and also depends on the phases of the waves. The global acceleration is non-zero, can set in within about 1 hour and can last for more than 4 hours. The adiabatic wave-particle interaction discussed up to this point can be regarded as short-term scattering (tau ~ hours). When the physical problem extends to longer time scales (tau ~ days), the process ceases to be adiabatic, owing to the introduction of a stochastic element into the system, and becomes diffusive. 
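The phase dependence of the scattering can be illustrated with a toy ensemble (this is a sketch of the qualitative result, not the thesis's test-particle code): each electron's energy kick depends on its wave-particle phase, so individual particles scatter while the uniformly distributed population gains essentially nothing on average.

```python
import numpy as np

# Toy illustration (not the actual test-particle simulation): each electron
# receives an energy kick proportional to cos(phase); a uniform phase ensemble
# scatters individually but has ~zero net energy change.
rng = np.random.default_rng(42)
phases = rng.uniform(0.0, 2.0 * np.pi, 100_000)
dW = np.cos(phases)  # energy change per particle, arbitrary units

print(f"net  <dW> = {dW.mean():+.4f}")  # ~0: negligible net scattering
print(f"rms   dW  = {dW.std():.4f}")    # ~0.71: individuals still scatter
```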
We show that any mode in a broadband spectrum can contribute to the total diffusion rate for a particular drift frequency within the spectral band via dynamic phases. Each mode contributes maximally at a phase reset frequency fr = 2.63 fk, where fk is the mode frequency. We experiment with electron diffusion due to interaction with broadband wave spectra in MLT sectors and find the phase reset effect to be strongest when there is no azimuthal wave vector (msec = 0) within the sector. DLL rapidly converges to the value implied by the local PSD as the wave number increases; for example, at msec = 1.00 +/- 0.25 the effect of phase resets is only 10-30% as strong as for msec = 0. Since phase resets depend on particle drift frequencies when MLT sectors are involved, a consequence is that DLL must adjust as a function of L-shell as well. For example, with the local PSD as the sole contributor to diffusion, Schulz and Lanzerotti (1979) have shown that DLL ~ L^6, but we show that the function becomes DLL ~ L^5, with some variation due to fd and MLT sector width. The final part of this dissertation evaluates a pre-storm-commencement event on November 7, 2004, when Earth's magnetopause was struck by high-speed solar wind with a mostly northward interplanetary magnetic field. We obtained a global MHD field simulated by the OpenGGCM model for the interval 17:00-18:40 universal time from NASA's Community Coordinated Modeling Center. Global distribution plots of the electric and magnetic field PSD reveal strong ULF waves spanning the whole dayside sector. There are distinct electric field modes at approximately 0.9, 2.3 and 3.7-6.3 mHz within the dayside sector, which we then used in test-particle simulations and variance calculations in order to evaluate the diffusion coefficients. To ensure diffusion by sufficient stochasticity, we run the event by repeating the interval 10 times in series, for a total duration of 12 hours. 
For the wave electric fields, the predicted diffusion coefficient due to the local PSD matches the outcome of the simulated electron scattering at 0.9 and 2.3 mHz. The diffusion due to the wider frequency band at 3.7-6.3 mHz does not fit the PSD profile alone, and requires phase resets in non-resonant modes within the spectrum to bring the calculations into agreement with the simulations. Furthermore, only msec = 1 provides the correct solution. We have thus demonstrated the importance of including both the MLT sector width and the wave number as significant factors, apart from the local PSD, in determining the diffusion coefficient for a realistic wave field. (Abstract shortened by UMI.)
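The difference between the two radial-diffusion scalings quoted in the abstract can be made concrete with a short sketch (the normalization d0 and reference shell L0 below are hypothetical, chosen only for illustration):

```python
# Compare radial diffusion coefficient scalings DLL ~ L^6 (the local-PSD-only
# result attributed to Schulz and Lanzerotti) vs DLL ~ L^5 (this thesis),
# normalized at a hypothetical outer shell L0 = 7 with arbitrary amplitude d0.
def dll(L, power, d0=1.0, L0=7.0):
    return d0 * (L / L0) ** power

for L in (3.0, 5.0, 7.0):
    ratio = dll(L, 6) / dll(L, 5)  # reduces to L / L0
    print(f"L = {L}: DLL(L^6) / DLL(L^5) = {ratio:.3f}")
```

The ratio shrinks toward low L, i.e. the L^5 law predicts relatively faster diffusion deep inside the belt than the classical L^6 law.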
NASA Astrophysics Data System (ADS)
Mahieu, Emmanuel; O'Doherty, Simon; Reimann, Stefan; Vollmer, Martin; Bader, Whitney; Bovy, Benoît; Lejeune, Bernard; Demoulin, Philippe; Roland, Ginette; Servais, Christian; Zander, Rodolphe
2013-04-01
Hydrochlorofluorocarbons (HCFCs) are the first substitutes for the long-lived ozone-depleting halocarbons, in particular the chlorofluorocarbons (CFCs). Given the complete ban of the CFCs by the Montreal Protocol and its Amendments and Adjustments, HCFCs are on the rise, with current rates of increase substantially larger than at the beginning of the 21st century. HCFC-142b (CH3CClF2) is presently the second most abundant HCFC, after HCFC-22 (CHClF2). It is used in a wide range of applications, including as a foam blowing agent and in refrigeration and air-conditioning. Its concentration will soon reach 25 ppt in the northern hemisphere, with mixing ratios increasing at about 1.1 ppt/yr [Montzka et al., 2011]. The HCFC-142b lifetime is estimated at 18 years. With a global warming potential of 2310 on a 100-yr horizon, this species is also a potent greenhouse gas [Forster et al., 2007]. The first space-based retrievals of HCFC-142b were reported by Dufour et al. [2005]: 17 occultations recorded in 2004 by the Canadian ACE-FTS instrument (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer, onboard SCISAT-1) were analyzed, using two microwindows (1132.5-1135.5 and 1191.5-1195.5 cm-1). In 2009, Rinsland et al. determined the HCFC-142b trend near the tropopause from the analysis of ACE-FTS observations recorded over the 2004-2008 time period; the spectral region used in that study extended from 903 to 905.5 cm-1. In this contribution, we will present the first HCFC-142b measurements from ground-based high-resolution Fourier Transform Infrared (FTIR) solar spectra. We use observations recorded at the high-altitude station of the Jungfraujoch (46.5°N, 8°E, 3580 m asl), with a Bruker 120HR instrument, in the framework of the Network for the Detection of Atmospheric Composition Change (NDACC, visit http://www.ndacc.org). 
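As a quick aside on the quoted ~18-year lifetime: treating atmospheric removal as simple first-order decay (a standard textbook approximation, not a result of this study), the surviving fraction of an emitted pulse is exp(-t/τ):

```python
import math

# Surviving fraction of an HCFC-142b pulse, assuming first-order removal with
# the ~18-year lifetime quoted in the abstract (illustrative approximation only).
LIFETIME_YR = 18.0

def surviving_fraction(t_years: float) -> float:
    return math.exp(-t_years / LIFETIME_YR)

for t in (18, 50, 100):
    print(f"after {t:3d} yr: {surviving_fraction(t):.3f}")
```

After one lifetime about 37% of a pulse remains; after a century, well under 1%, which is why the 100-yr GWP horizon captures essentially the whole radiative effect.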
The retrieval of HCFC-142b is very challenging, with simulations indicating only weak absorptions, lower than 1% for low sun spectra and current concentrations. Among the four microwindows tested, the region extending from 900 to 906 cm-1 proved to be the most appropriate, with limited interferences, in particular from water vapor. A total column time series spanning the 2004-2012 time period will be presented, analyzed and critically discussed. After conversion of our total columns to concentrations, we will compare our results with in situ measurements performed in the northern hemisphere by the AGAGE network. Acknowledgments The University of Liège contribution to the present work has primarily been supported by the SSD and PRODEX programs (AGACC-II and A3C projects, respectively) funded by the Belgian Federal Science Policy Office (BELSPO), Brussels. E. Mahieu is Research Associate with the F.R.S. - FNRS. Laboratory developments and mission expenses at the Jungfraujoch station were funded by the F.R.S. - FNRS and the Fédération Wallonie-Bruxelles, respectively. We thank the International Foundation High Altitude Research Stations Jungfraujoch and Gornergrat (HFSJG, Bern) for supporting the facilities needed to perform the observations. We further acknowledge the vital contribution from all the Belgian colleagues in performing the Jungfraujoch observations used here. References Dufour, G., C.D. Boone, and P.F. Bernath, First measurements of CFC-113 and HCFC-142b from space using ACE-FTS infrared spectra, Geophys. Res. Lett., 32, L15S09, doi:10.1029/2005GL022422, 2005. Forster, P., V. Ramaswamy, P. Artaxo, T. Berntsen, R. Betts, D.W. Fahey, J. Haywood, J. Lean, D.C. Lowe, G. Myhre, J. Nganga, R. Prinn, G. Raga, M. Schulz and R. Van Dorland, 2007: Changes in Atmospheric Constituents and in Radiative Forcing. In: Climate Change 2007: The Physical Science Basis. 
Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. Montzka, S.A., S. Reimann, A. Engel, K. Krüger, S. O'Doherty, W.T. Sturges, D. Blake, M. Dirf, P. Fraser, L. Froidevaux, K. Jucks, K. Kreher, M.J. Kurylo, A. Mellouki, J. Miller, O.-J. Nielsen, V.L. Orkin, R.G. Prinn, R. Shew, M.L. Santee, A. Stohl, and D. Verdonik, Ozone-Depleting Substances (ODSs) and Related Chemicals, Chapter 1 in Scientific Assessment of Ozone Depletion: 2010, Global Ozone Research and Monitoring Project-Report No. 52, 516 pp., World Meteorological Organization, Geneva, Switzerland, 2011. Rinsland, C.P., L.S. Chiou, C.D. Boone, P.F. Bernath, and E. Mahieu, First Measurements of the HCFC-142b trend from Atmospheric Chemistry Experiment (ACE) Solar Occultation Spectra, J. Quant. Spectrosc. Radiat. Transfer, 110, 2127-2134, 2009.
Jets Spout Far Closer to Black Hole Than Thought, Scientists Say
NASA Astrophysics Data System (ADS)
2004-01-01
Scientists at the Massachusetts Institute of Technology, taking advantage of multiple unique views of black hole particle jets over the course of a year with NASA's Chandra X-ray Observatory, have assembled a "picture" of the region that has revealed several key discoveries. They have found that the jets may originate five times closer to the black hole than previously thought; they see in better detail how these jets change with time and distance from the black hole; and they could use this information as a new technique to measure black hole mass. Presented today at a press conference at the meeting of the American Astronomical Society in Atlanta, the observation will ultimately help solve the mystery of a great cosmic contradiction, in which black holes, notorious for pulling matter in, somehow manage also to shoot matter away in particle jets moving close to the speed of light. The observation is of a familiar source named SS 433 -- a binary star system within our Galaxy in the constellation Aquila, the Eagle, about 16,000 light years away. The black hole and its companion are about two-thirds closer to each other than the planet Mercury is to the Sun. The jets shoot off at 175 million miles per hour, 26 percent of light speed. "The high-speed jets in nearby SS 433 may be caused by the same mechanisms as the powerful outflows in the most distant and much more massive black holes, such as quasars," said Laura Lopez, an undergraduate student at MIT and lead author on a paper about the result. "SS 433 provides a nice local laboratory to study the formation of and conditions in relativistic jets." Dr. Herman Marshall, Ms. Lopez's research supervisor, led the investigation. Matter from the companion star pours into the black hole via a swirling accretion disk, much like water down a drain. Black hole particle jets are thought to be produced as some of the matter encounters strong magnetic fields close to the black hole. 
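The two speed figures quoted for the jets can be cross-checked in a few lines (unit constants only; nothing here comes from the Chandra analysis itself):

```python
# Cross-check: is "26 percent of light speed" consistent with the quoted
# "175 million miles per hour"?
C_KM_PER_S = 299_792.458   # speed of light
KM_PER_MILE = 1.609344

speed_km_s = 0.26 * C_KM_PER_S
speed_mph = speed_km_s * 3600.0 / KM_PER_MILE
# ~174 million mph, consistent with the press release's rounded 175 million.
print(f"{speed_mph / 1e6:.0f} million mph")
```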
SS 433 is angled in such a way that one jet is shooting away from us while the other is aimed slightly towards us. The black hole's companion star enters the picture here as it periodically eclipses parts of the jets. Scientists use the eclipse, called an occultation, as a tool to block one part of the jet so that they can study other parts more easily. Using the Chandra High Energy Transmission Grating Spectrometer, the MIT group measured many characteristics of the jets, forming the best view of a jet's structure ever obtained. No image was created, as in other Chandra observations. Rather, the scientists pieced together the scene through spectroscopy, the fingerprint of chemical elements that reveals temperature and velocity of matter in the jets. They determined the length of the X-ray-emitting portion of the jet (over one million miles, about five times the distance from the Earth to the Moon); the temperature range (dropping from about 100 million degrees Celsius to 10 million degrees farther out); the chemical abundances (iron, silicon, and more); and the jet opening angle. In a previous observation they measured the jet's density. With this information, the team could determine that the jet base was five times closer to the black hole than previously observed, with a base diameter of about 1,280 miles. Also, from a bit of geometry along with information on the size of the binary system from optical observations by a team led by Douglas Gies of Georgia State University, the MIT group determined that the size of the companion star that blocked the view of the receding jet is about nine times the size of the Sun. From that, they estimated that the black hole is 16 solar masses. (For many years scientists have speculated whether SS 433 contains a black hole or a neutron star. Today's announcement of a 16-solar-mass object confirms that it is indeed a black hole, too massive to be a neutron star.) "The uniqueness of SS 433 cannot be overstated," said Marshall. 
"SS 433 provides an excellent opportunity to study the origin, evolution, and long-term behavior of jets because the X rays come from a region very close to the black hole. Of the hundreds of jets observed in the radio and X-ray bands, this is the only one for which we have a solid statement that it contains atomic nuclei and for which we are sure of the internal temperature and density." Supermassive black holes with jets, such as quasars, might display similar behavior, but they are so massive and so distant that changes cannot be observed because time scales are too long. Thus SS 433 serves as a laboratory to study the jet phenomenon "close to home," Marshall said. As such, further Chandra observations are planned. Collaborators on the investigation were Claude Canizares, Julie Kane, and Norbert Schulz, all of MIT. For images, refer to http://space.mit.edu/~hermanm/ss433.html.
NASA Astrophysics Data System (ADS)
Capar, Laure
2013-04-01
Within the framework of the transnational project GeoMol, geophysical and geological information on the entire Molasse Basin and on the Po Basin is gathered to build consistent cross-border 3D geological models based on borehole evidence and seismic data. Benefiting from important progress in seismic processing, these new models will provide answers to various questions regarding the usage of subsurface resources, such as geothermal energy, CO2 and gas storage, and oil and gas production, and will support decision-making by national and local administrations as well as by industry. More than 28 000 km of 2D seismic lines are compiled, reprocessed and harmonized. This work faces various problems, such as the vertical drop of more than 700 metres between the west and the east of the Molasse Basin (and, to a lesser extent, in the Po Plain), the heterogeneities of the substratum, the large disparities in the periods and parameters of seismic acquisition, and, depending on availability, the use of two types of seismic data, raw and processed. The main challenge is to harmonize all lines to the same reference level, amplitude and stage of signal processing from France to Austria, spanning more than 1000 km, to avoid misfits at crossing points between seismic lines and artifacts at the country borders, facilitating the interpretation of the various geological layers in the Molasse Basin and Po Basin. A generalized stratigraphic column for the two basins is set up, representing all geological layers relevant to subsurface usage. This stratigraphy constitutes the harmonized framework for seismic reprocessing. In general, processed seismic data are available on paper at the stack stage, and the information required to take these seismic lines to the final stage of processing, the migration step, is the datum plane and the replacement velocity. However, several datum planes and replacement velocities were used in previous processing projects. 
Our processing sequence is to first digitize the data into SEG-Y format. The second step is to apply some post-stack processing to obtain good data quality before migration; the third step is the migration itself, using optimized migration velocities; and the fourth step is post-migration processing. For raw seismic data, the information required for processing, such as observer logs, coordinates and field seismic records, is made accessible, and the processing sequence used to obtain the final usable version of the seismic line is based on a pre-stack time migration. A complex processing sequence is applied. One main issue is dealing with the significant changes in topography along the seismic lines and with the uppermost twenty metres, the low-velocity zone (LVZ) or weathered zone, where lateral velocity variations occur and disturb the wave propagation and therefore the seismic signal. In seismic processing this is addressed with static corrections, which remove the effects of the lateral velocity variations and of the topography. Another key point is the accurate determination of root-mean-square velocities for migration, to improve the final result of the seismic processing. Within GeoMol, generalized 3D models of stacking velocities are calculated in order to perform a rapid time-depth conversion. In the end, all seismic lines of the GeoMol project will be at the same level of processing, the migration level. To tie all these lines, a single appropriate datum plane and replacement velocity for the entire Molasse Basin and the Po Plain, respectively, have to be carefully set up to avoid misties at crossing points. The reprocessing and use of these 28 000 km of seismic lines in the GeoMol project provide the pivotal database for building a 3D framework model for regional subsurface information on the Alpine foreland basins (cf. Rupf et al. 2013, EGU2013-8924). 
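The time-depth conversion step can be sketched for a hypothetical 1D layered velocity model (the interval velocities and thicknesses below are illustrative values, not GeoMol's actual 3D velocity model):

```python
# Convert a two-way travel time (TWT) to depth through a 1D stack of layers,
# each given as (interval velocity in m/s, two-way thickness in seconds).
# The velocity model below is purely illustrative.
def twt_to_depth(twt_s, layers):
    depth, remaining = 0.0, twt_s
    for v, dt in layers:
        step = min(remaining, dt)
        depth += v * step / 2.0   # halve two-way time for the one-way path
        remaining -= step
        if remaining <= 0.0:
            break
    return depth

model = [(1800.0, 0.5), (2500.0, 0.8), (4000.0, 1.0)]  # (v, two-way dt)
# 0.5 s spent in layer 1 plus 0.5 s in layer 2 -> 450 m + 625 m
print(twt_to_depth(1.0, model), "m")
```

In production work the same idea is applied with the project's 3D stacking-velocity volumes rather than a fixed 1D column.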
The project GeoMol is co-funded by the Alpine Space Programme as part of the European Territorial Cooperation 2007-2013. The project integrates partners from Austria, France, Germany, Italy, Slovenia and Switzerland and runs from September 2012 to June 2015. Further information on www.geomol.eu. The GeoMol seismic interpretation team: Roland Baumberger (swisstopo), Agnès Brenot (BRGM), Alessandro Cagnoni (RLB), Renaud Couëffe (BRGM), Gabriel Courrioux (BRGM), Chiara D'Ambrogi (ISPRA), Chrystel Dezayes (BRGM), Charlotte Fehn (LGRB), Sunseare Gabalda (BRGM), Gregor Götzl (GBA), Andrej Lapanje (GeoZS), Stéphane Marc (BRGM), Alberto Martini (RER-SGSS), Fabio Carlo Molinari (RER-SGSS), Edgar Nitsch (LGRB), Robert Pamer (LfU BY), Marco Pantaloni (ISPRA), Sebastian Pfleiderer (GBA), Andrea Piccin (RLB), Nils Oesterling (swisstopo), Isabel Rupf (LGRB), Uta Schulz (LfU BY), Yves Simeon (BRGM), Günter Sökol (LGRB), Heiko Zumsprekel (LGRB)
NASA Astrophysics Data System (ADS)
2010-11-01
An exoplanet orbiting a star that entered our Milky Way from another galaxy has been detected by a European team of astronomers using the MPG/ESO 2.2-metre telescope at ESO's La Silla Observatory in Chile. The Jupiter-like planet is particularly unusual, as it is orbiting a star nearing the end of its life and could be about to be engulfed by it, giving tantalising clues about the fate of our own planetary system in the distant future. Over the last 15 years, astronomers have detected nearly 500 planets orbiting stars in our cosmic neighbourhood, but none outside our Milky Way has been confirmed [1]. Now, however, a planet with a minimum mass 1.25 times that of Jupiter [2] has been discovered orbiting a star of extragalactic origin, even though the star now finds itself within our own galaxy. It is part of the so-called Helmi stream [3] - a group of stars that originally belonged to a dwarf galaxy that was devoured by our galaxy, the Milky Way, in an act of galactic cannibalism about six to nine billion years ago. The results are published today in Science Express. "This discovery is very exciting," says Rainer Klement of the Max-Planck-Institut für Astronomie (MPIA), who was responsible for the selection of the target stars for this study. "For the first time, astronomers have detected a planetary system in a stellar stream of extragalactic origin. Because of the great distances involved, there are no confirmed detections of planets in other galaxies. But this cosmic merger has brought an extragalactic planet within our reach." The star is known as HIP 13044, and it lies about 2000 light-years from Earth in the southern constellation of Fornax (the Furnace). The astronomers detected the planet, called HIP 13044 b, by looking for the tiny telltale wobbles of the star caused by the gravitational tug of an orbiting companion. 
For these precise observations, the team used the high-resolution spectrograph FEROS [4] attached to the 2.2-metre MPG/ESO telescope [5] at ESO's La Silla Observatory in Chile. Adding to its claim to fame, HIP 13044 b is also one of the few exoplanets known to have survived the period when its host star expanded massively after exhausting the hydrogen fuel supply in its core - the red giant phase of stellar evolution. The star has now contracted again and is burning helium in its core. Until now, these so-called horizontal branch stars have remained largely uncharted territory for planet-hunters. "This discovery is part of a study where we are systematically searching for exoplanets that orbit stars nearing the end of their lives," says Johny Setiawan, also from MPIA, who led the research. "This discovery is particularly intriguing when we consider the distant future of our own planetary system, as the Sun is also expected to become a red giant in about five billion years." HIP 13044 b is near to its host star. At the closest point in its elliptical orbit, it is less than one stellar diameter from the surface of the star (or 0.055 times the Sun-Earth distance). It completes an orbit in only 16.2 days. Setiawan and his colleagues hypothesise that the planet's orbit might initially have been much larger, but that it moved inwards during the red giant phase. Any closer-in planets may not have been so lucky. "The star is rotating relatively quickly for a horizontal branch star," says Setiawan. "One explanation is that HIP 13044 swallowed its inner planets during the red giant phase, which would make the star spin more quickly." Although HIP 13044 b has escaped the fate of these inner planets so far, the star will expand again in the next stage of its evolution. HIP 13044 b may therefore be about to be engulfed by the star, meaning that it is doomed after all. 
This could also foretell the demise of our outer planets - such as Jupiter - when the Sun approaches the end of its life. The star also poses interesting questions about how giant planets form, as it appears to contain very few elements heavier than hydrogen and helium - fewer than any other star known to host planets. "It is a puzzle for the widely accepted model of planet formation to explain how such a star, which contains hardly any heavy elements at all, could have formed a planet. Planets around stars like this must probably form in a different way," adds Setiawan. Notes [1] There have been tentative claims of the detection of extragalactic exoplanets through "gravitational microlensing" events, in which the planet passing in front of an even more distant star leads to a subtle, but detectable "flash". However, this method relies on a singular event - the chance alignment of a distant light source, planetary system and observers on Earth - and no such extragalactic planet detection has been confirmed. [2] Using the radial velocity method, astronomers can only estimate a minimum mass for a planet, as the mass estimate also depends on the tilt of the orbital plane relative to the line of sight, which is unknown. From a statistical point of view, this minimum mass is however often close to the real mass of the planet. [3] Astronomers can identify members of the Helmi stream as they have motions (velocity and orbits) that are rather different from the average Milky Way stars. [4] FEROS stands for Fibre-fed Extended Range Optical Spectrograph. [5] The 2.2-metre telescope has been in operation at La Silla since early 1984 and is on indefinite loan to ESO from the Max-Planck Society (Max Planck Gesellschaft or MPG in German). Telescope time is shared between MPG and ESO observing programmes, while the operation and maintenance of the telescope are ESO's responsibility. 
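Note [2] can be illustrated directly: radial velocities measure only m sin(i), so the true mass is the minimum mass divided by sin(i) of the (unknown) orbital inclination. The inclination values below are illustrative, not measurements of HIP 13044 b:

```python
import math

# True planetary mass implied by a radial-velocity minimum mass of
# 1.25 Jupiter masses (the value quoted in the text), for several
# assumed orbital inclinations i (i = 90 deg is edge-on).
M_MIN_MJUP = 1.25

def true_mass(inclination_deg: float) -> float:
    return M_MIN_MJUP / math.sin(math.radians(inclination_deg))

for i in (90, 60, 30, 10):
    print(f"i = {i:2d} deg: m = {true_mass(i):.2f} M_Jup")
```

Large corrections require nearly face-on orbits, which are statistically rare; this is why the minimum mass is often close to the true mass.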
More information This research was presented in a paper, "A Giant Planet Around a Metal-poor Star of Extragalactic Origin", by J. Setiawan et al., to appear in Science Express on 18 November 2010. The team is composed of J. Setiawan, R. J. Klement, T. Henning, H.-W. Rix, and B. Rochau (Max-Planck-Institut für Astronomie, Heidelberg, Germany), J. Rodmann (European Space Agency, Noordwijk, the Netherlands), and T. Schulze-Hartung (Max-Planck-Institut für Astronomie, Heidelberg, Germany). ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory and VISTA, the world's largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".
EDITORIAL: Reigniting innovation in the transistor
NASA Astrophysics Data System (ADS)
Demming, Anna
2012-09-01
Today the transistor is integral to the electronic circuitry that wires our lives. When Bardeen and Brattain first observed an amplified signal by connecting electrodes to a germanium crystal, they saw that their 'semiconductor triode' could prove a useful alternative to the more cumbersome vacuum tubes used at the time [1]. But it was perhaps William Shockley who recognized the extent of the transistor's potential. A basic transistor has three or more terminals, and current across one pair of terminals can switch or amplify current through another pair. Bardeen, Brattain and Shockley were jointly awarded a Nobel Prize in 1956 'for their researches on semiconductors and their discovery of the transistor effect' [2]. Since then many new forms of the transistor have been developed and understanding of the underlying properties is constantly advancing. In this issue Chen and Shih and colleagues at National Taiwan University and Drexel University report a pyroelectric transistor. They show how a novel optothermal gating mechanism can modulate the current, allowing a range of developments in nanoscale optoelectronics and wireless devices [3]. The explosion of interest in nanoscale devices in the 1990s inspired electronics researchers to look for new systems that can act as transistors, such as carbon nanotube [4] and silicon nanowire [5] transistors. Generally these transistors function by raising and lowering an energy barrier of the order of kBT, but researchers in the US and Canada have demonstrated that the quantum interference between two electronic pathways through aromatic molecules can also modulate the current flow [6]. The device has advantages for further miniaturization, where energy dissipation in conventional systems may eventually cause complications. Interest in transistor technology has also led to advances in fabrication techniques for achieving high production quantities, such as printing [7]. 
Researchers in Florida in the US demonstrated field effect transistor behaviour in devices fabricated from chemically reduced graphene oxide. The work provided an important step forward for graphene electronics, which has been hampered by difficulties in scaling up the mechanical exfoliation techniques required to produce the high-quality graphene often needed for functioning devices [8]. In Sweden, researchers have developed a transistor design that they fabricate using standard III-V parallel processing, which also has great promise for scaling up production. Their transistor is based on a vertical array of InAs nanowires, which provide high electron mobility and the possibility of high-speed and low-power operation [9]. Different fabrication techniques and design parameters can influence the properties of transistors. Researchers in Belgium used a new method based on high-vacuum scanning spreading resistance microscopy to study the effect of diameter on the carrier profile in nanowire transistors [10]. They then used experimental data and simulations to gain a better understanding of how this influenced the transistor performance. In Japan, Y Ohno and colleagues at Nagoya University have reported how atomic layer deposition of an insulating layer of HfO2 on carbon nanotube field effect transistors can change the carrier from p-type to n-type [11]. Carrier type switching—'ambipolar behaviour'—and hysteresis of carbon nanotube network transistors can make achieving reliable device performance challenging. However, studies have also suggested that the hysteretic properties may be exploited in non-volatile memory applications. A collaboration of researchers in Italy and the US demonstrated transistor and memory cell behaviour in a system based on a carbon nanotube network [12]. Their device had relatively fast programming, good endurance, and charge retention that was successfully enhanced by limiting exposure to air. 
Progress in understanding transistor behaviour has inspired other innovations in device applications. Nanowires are notoriously sensitive to gases such as CO, opening opportunities for applications in sensing using one-dimensional nanostructure transistors [13]. The pyroelectric transistor reported in this issue represents an intriguing development for device applications of this versatile and ubiquitous electronics component [3]. As the researchers point out, 'By combining the photocurrent feature and optothermal gating effect, the wide range of response to light covering ultraviolet and infrared radiation can lead to new nanoscale optoelectronic devices that are suitable for remote or wireless applications.' In nanotechnology research and development, often the race is on to achieve reliable device behaviour in the smallest possible systems. But sometimes it is the innovations in the approach used that revolutionize technology in industry. The pyroelectric transistor reported in this issue is a neat example of the ingenious innovations in this field of research. While in research the race is never really over, as this work demonstrates, the journey itself remains an inspiration. References [1] Bardeen J and Brattain W H 1948 The transistor, a semi-conductor triode Phys. Rev. 74 230-1 [2] Shockley W B, Bardeen J and Brattain W H 1956 The Nobel Prize in Physics www.nobelprize.org/nobel_prizes/physics/laureates/1956/# [3] Hsieh C-Y, Lu M-L, Chen J-Y, Chen Y-T, Chen Y-F, Shih W Y and Shih W-H 2012 Single ZnO nanowire-PZT optothermal field effect transistors Nanotechnology 23 355201 [4] Tans S J, Verschueren A R M and Dekker C 1998 Room-temperature transistor based on a single carbon nanotube Nature 393 49-52 [5] Cui Y, Zhong Z, Wang D, Wang W U and Lieber C M 2003 High performance silicon nanowire field effect transistors Nano Lett. 
3 149-52 [6] Stafford C A, Cardamone D M and Mazumdar S 2007 The quantum interference effect transistor Nanotechnology 18 424014 [7] Garnier F, Hajlaoui R, Yassar A and Srivastava P 1994 All-polymer field-effect transistor realized by printing techniques Science 265 1684-6 [8] Joung D, Chunder A, Zhai L and Khondaker S I 2010 High yield fabrication of chemically reduced graphene oxide field effect transistors by dielectrophoresis Nanotechnology 21 165202 [9] Bryllert T, Wernersson L-E, Löwgren T and Samuelson L 2006 Vertical wrap-gated nanowire transistors Nanotechnology 17 S227-30 [10] Schulze A et al 2011 Observation of diameter dependent carrier distribution in nanowire-based transistors Nanotechnology 22 185701 [11] Moriyama N, Ohno Y, Kitamura T, Kishimoto S and Mizutani T 2010 Change in carrier type in high-k gate carbon nanotube field-effect transistors by interface fixed charges Nanotechnology 21 165201 [12] Bartolomeo A D, Rinzan M, Boyd A K, Yang Y, Guadagno L, Giubileo F and Barbara P 2010 Electrical properties and memory effects of field-effect transistors from networks of single- and double-walled carbon nanotubes Nanotechnology 21 115204 [13] Liao L et al 2009 Multifunctional CuO nanowire devices: P-type field effect transistors and CO gas sensors Nanotechnology 20 085203
Astrobiology and Venus exploration
NASA Astrophysics Data System (ADS)
Grinspoon, David H.; Bullock, Mark A.
For hundreds of years prior to the space age, Venus was considered among the most likely homes for extraterrestrial life. Since planetary exploration began, Venus has not been considered a promising target for astrobiological exploration. However, Venus should be central to such an exploration program for several reasons. At present, Venus is the only other Earth-sized terrestrial planet that we know of, and certainly the only one we will have the opportunity to explore in the foreseeable future. Understanding the divergence of Earth and Venus is central to understanding the limits of habitability in the inner regions of habitable zones around solar-type stars. Thus Venus presents us with a unique opportunity for putting the bulk properties, evolution and ongoing geochemical processes of Earth in a wider context. Many geological and meteorological processes otherwise active only on Earth at present are currently active on Venus. Active volcanism most likely affects the climate and chemical equilibrium state of the atmosphere and surface, and maintains the global cloud cover. Further, if we think beyond the specifics of a particular chemical system required to build complexity and heredity, we can ask what general properties a planet must possess in order to be considered a possible candidate for life. The answers might include an atmosphere with signs of flagrant chemical disequilibrium and active, internally driven cycling of volatile elements between the surface, atmosphere and interior. At present, the two planets we know of which possess these characteristics are Earth and Venus. Venus almost surely once had warm, habitable oceans. The evaporation of these oceans, and subsequent escape of hydrogen, most likely resulted in an oxygenated atmosphere. The duration of this phase is poorly understood, but during this time the terrestrial planets were not isolated. 
Rather, due to frequent impact transport, they represented a continuous environment for early microbial life. Life, once established in the early oceans of Venus, may have migrated to the clouds which, on present day Venus, may represent a habitable niche. Though highly acidic, this aqueous environment enjoys moderate temperatures, surroundings far from chemical equilibrium, and potentially useful radiation fluxes. Observations of unusual chemistry in the clouds, and particle populations that are not well characterized, suggest that this environment must be explored much more fully before biology can be ruled out. A sulfur-based metabolism for cloud-based life on Venus has recently been proposed (Schulze-Makuch et al., 2004). While speculative, these arguments, along with the discovery of terrestrial extremophile organisms that point toward the plausibility of survival in the Venusian clouds, establish the credibility of astrobiological exploration of Venus. Arguments for the possible existence of life on Mars or Europa are, by convention and repetition, seen as more mainstream than arguments for life elsewhere, but their logical status is similar to plausibility arguments for life on Venus. With the launch of COROT in 2006 and Kepler in 2008 the demographics of Earth-sized planets in our galaxy should finally become known. Future plans for a Terrestrial Planet Finder or Darwin-type space-based spectrograph should provide the capability of studying the atmospheric composition and other properties of terrestrial planets. One of the prime rationales for building such instruments is the possibility of identifying habitable planets or providing more generalized observational constraints on the habitable zones of stellar systems. Given the prevalence of CO2 dominated atmospheres in our own solar system, it is quite likely that a large fraction of these will be Venus-like in composition and evolutionary history. 
We will be observing these planets at random times in their evolution. In analogy with our own solar system, it is just as likely that we will find representatives of early Venus and early Earth type planets from the first 2 billion years of their evolution as it is that we will find "mature Venus"- and "mature Earth"-type planets that are roughly 4.5 billion years old. Therefore, in order to be poised to use the results of these future observations of extrasolar planets to make valid, generalized inferences about the size, shape and evolution of stellar habitable zones it is vital that we obtain a much deeper understanding of the evolutionary histories and divergence of Earth and Venus. The Mars Exploration Rover findings of evidence for aqueous conditions on early Mars have intensified interest in the possible origin and evolution of life on early Mars. Yet the evidence suggests that these deposits were formed in a highly acidic and sulfur-rich environment. During this phase, Mars may well have had sulfuric acid clouds sustained by vigorous, sulfur-rich volcanism. This suggests that a greater understanding of the chemistry of the Venusian atmosphere and clouds, and surface/atmosphere interactions, may help to characterize the environment of Mars when life may have formed there. In turn, if signs of early life are found on Mars during the upcoming decades of intensive astrobiological exploration planned for that planet, it will strengthen arguments for the plausibility of life in an early and gradually acidifying Venusian environment. Of our two neighboring planets, Venus and Mars, it is not yet known which held on to its surface oceans, and early habitable conditions, for longer.
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2009-01-01
This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
Comparison of DNA extraction methods for meat analysis.
Yalçınkaya, Burhanettin; Yumbul, Eylem; Mozioğlu, Erkan; Akgoz, Muslum
2017-04-15
Preventing adulteration of meat and meat products with less desirable or objectionable meat species is important not only for economic, religious and health reasons but also for fair trade practices; therefore, several methods for identification of meat and meat products have been developed. In the present study, ten different DNA extraction methods, including the Tris-EDTA method, a modified cetyltrimethylammonium bromide (CTAB) method, the alkaline method, the urea method, the salt method, the guanidinium isothiocyanate (GuSCN) method, the Wizard method, the Qiagen method, the Zymogen method and the Genespin method, were examined to determine their relative effectiveness for extracting DNA from meat samples. The results show that the salt method is easy to perform, inexpensive and environmentally friendly. Additionally, it has the highest yield among all the isolation methods tested. We suggest this method as an alternative method for DNA isolation from meat and meat products. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Yanran; Chen, Duo; Zhang, Jiwei; Chen, Ning; Li, Xiaoqi; Gong, Xiaojing
2017-09-01
GIS (gas insulated switchgear) is an important piece of equipment in power systems. Partial discharge (PD) detection plays an important role in assessing the insulation performance of GIS, and the UHF method and the ultrasonic method are frequently used for this purpose. However, very few studies have investigated combining the two methods. From the viewpoint of safety, a new PD detection method for GIS that combines the UHF method and the ultrasonic method is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of fault localization. This paper presents a study aimed at clarifying the effectiveness of the combined UHF-ultrasonic method. Partial discharge tests were performed in a simulated laboratory environment. The results demonstrate the anti-interference capability of signal detection and the accuracy of fault localization achieved by the new combined method.
The multigrid preconditioned conjugate gradient method
NASA Technical Reports Server (NTRS)
Tatebe, Osamu
1993-01-01
A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner for the PCG method, is proposed. The multigrid method has inherent high parallelism and improves convergence of long-wavelength components, which is important in iterative methods. By using this method as a preconditioner for the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition for the multigrid method to satisfy the requirements of a PCG preconditioner is considered. Numerical experiments then show the behavior of the MGCG method and demonstrate that it is superior to both the ICCG method and the multigrid method in terms of convergence speed and parallelism. This fast convergence is understood in terms of the eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is realized that the MGCG method converges in very few iterations and the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
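The idea can be sketched compactly: run standard preconditioned CG, but apply one multigrid cycle wherever the preconditioner solve M⁻¹r is needed. The Python sketch below uses a 1D Poisson model problem and a symmetric two-grid V-cycle (weighted-Jacobi smoothing, exact coarse solve); the matrix, grid sizes and smoother settings are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def poisson1d(n):
    # Tridiagonal 1D Poisson matrix (Dirichlet boundaries)
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def jacobi_smooth(A, x, b, iters=2, omega=2/3):
    # Weighted-Jacobi smoothing iterations
    D = np.diag(A)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, n):
    # One V-cycle of a two-grid method: pre-smooth, coarse correction, post-smooth
    x = jacobi_smooth(A, np.zeros(n), b)
    r = b - A @ x
    nc = (n - 1) // 2                      # coarse grid size (n assumed odd)
    R = np.zeros((nc, n))
    for i in range(nc):
        R[i, 2*i:2*i+3] = [0.25, 0.5, 0.25]  # full-weighting restriction
    P = 2.0 * R.T                          # prolongation
    Ac = R @ A @ P                         # Galerkin coarse operator
    x = x + P @ np.linalg.solve(Ac, R @ r) # exact coarse solve
    return jacobi_smooth(A, x, b)

def mgcg(A, b, n, tol=1e-10, maxit=100):
    # PCG with the two-grid cycle playing the role of M^{-1}
    x = np.zeros(n)
    r = b - A @ x
    z = two_grid(A, r, n)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = two_grid(A, r, n)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 63
A = poisson1d(n)
b = np.ones(n)
x, iters = mgcg(A, b, n)   # converges in a handful of iterations
```

Because the V-cycle uses the same number of symmetric pre- and post-smoothing sweeps, the resulting preconditioner is symmetric positive definite, which is the compatibility condition with CG that the abstract alludes to.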
Energy minimization in medical image analysis: Methodologies and applications.
Zhao, Feng; Xie, Xianghua
2016-02-01
Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former include the Newton-Raphson method, the gradient descent method, the conjugate gradient method, the proximal gradient method, the coordinate descent method, and genetic algorithm-based methods, while the latter cover the graph cuts method, the belief propagation method, the tree-reweighted message passing method, the linear programming method, the maximum margin learning method, the simulated annealing method, and the iterated conditional modes method. We also discuss the minimal surface method, the primal-dual method, and multi-objective optimization methods. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
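As a concrete instance of the continuous methods surveyed above, the sketch below minimizes a simple convex denoising energy, a data-fidelity term plus a smoothness penalty, by plain gradient descent. The energy, step size, and 1D test signal are illustrative assumptions, not drawn from the survey.

```python
import numpy as np

def denoise_gradient_descent(y, lam=1.0, step=0.1, iters=500):
    # Minimize E(x) = 0.5*||x - y||^2 + 0.5*lam*||D x||^2 by gradient descent,
    # where D is the forward-difference operator (a classic smoothing energy).
    n = len(y)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]    # (n-1) x n finite differences
    x = y.copy()
    for _ in range(iters):
        grad = (x - y) + lam * (D.T @ (D @ x))  # gradient of E at x
        x = x - step * grad
    return x

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = signal + 0.3 * rng.standard_normal(100)
smooth = denoise_gradient_descent(noisy)        # lower-energy, smoother estimate
```

The step size is chosen below 2/L, where L bounds the gradient's Lipschitz constant (here L ≤ 1 + 4·lam), so the energy decreases monotonically.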
Li, Xuelin; Tang, Jinfa; Meng, Fei; Li, Chunxiao; Xie, Yanming
2011-10-01
To study the adverse reactions of Danhong injection using four methods (central monitoring, chart review, literature study, and spontaneous reporting), to compare the differences between them, and to explore an appropriate method for post-marketing safety evaluation of traditional Chinese medicine injections. Adverse reaction questionnaires were designed for the central monitoring, chart review, and literature study methods, and adverse reaction information was collected over a defined period. Danhong injection adverse reaction reports were collected from the Henan Province spontaneous reporting system. The data were then summarized and analyzed descriptively. With the central monitoring, chart review, literature study, and spontaneous reporting methods, the rates of adverse events were 0.993%, 0.336%, 0.515%, and 0.067%, respectively. Cyanosis, arrhythmia, hypotension, sweating, erythema, hemorrhagic dermatitis, rash, irritability, bleeding gums, toothache, tinnitus, asthma, elevated aminotransferases, constipation, and pain were newly discovered adverse reactions. The central monitoring method is the appropriate method for post-marketing safety evaluation of traditional Chinese medicine injections, as it can objectively reflect real-world clinical usage.
Ensemble Methods for MiRNA Target Prediction from Expression Data.
Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong
2015-01-01
microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been shown, in theory, to outperform each of their individual component methods. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method Pearson+IDA+Lasso, which combines methods from different approaches, including a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis of the results of this ensemble method shows that it can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. 
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
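One simple way to integrate results from several target-prediction methods, in the spirit of the ensembles described above, is average-rank (Borda-style) aggregation: each method ranks all candidate miRNA-mRNA pairs, and the ensemble orders pairs by mean rank. The paper's exact integration scheme is not given in the abstract, and the method names and scores below are hypothetical.

```python
import numpy as np

def ensemble_rank(score_lists):
    # Convert each method's scores to ranks (0 = best) and average the ranks
    # across methods; lower mean rank = stronger ensemble prediction.
    ranks = []
    for scores in score_lists:
        ranks.append(np.argsort(np.argsort(-np.asarray(scores))))
    return np.mean(ranks, axis=0)

# Three hypothetical methods scoring five candidate target genes
pearson = [0.9, 0.2, 0.7, 0.1, 0.5]
ida     = [0.8, 0.3, 0.9, 0.2, 0.4]
lasso   = [0.7, 0.1, 0.8, 0.0, 0.6]
mean_rank = ensemble_rank([pearson, ida, lasso])
best = int(np.argmin(mean_rank))   # candidate with the best mean rank
```

Rank aggregation sidesteps the problem that raw scores from different methods live on incomparable scales, which is one reason ensembles can behave more consistently across datasets.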
46 CFR 160.077-5 - Incorporation by reference.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...
46 CFR 160.077-5 - Incorporation by reference.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...
46 CFR 160.077-5 - Incorporation by reference.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...
46 CFR 160.077-5 - Incorporation by reference.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are covered, including the exact solutions for embedded elliptic cracks in infinite solids. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element methods, the stress difference method, the discretization-error method, the alternating method, the finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method requires modeling only the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.
Sharma, Sangita; Neog, Madhurjya; Prajapati, Vipul; Patel, Hiren; Dabhi, Dipti
2010-01-01
Five simple, sensitive, accurate and rapid visible spectrophotometric methods (A, B, C, D and E) have been developed for estimating Amisulpride in pharmaceutical preparations. These are based on the diazotization of Amisulpride with sodium nitrite and hydrochloric acid, followed by coupling with N-(1-naphthyl)ethylenediamine dihydrochloride (Method A), diphenylamine (Method B), beta-naphthol in an alkaline medium (Method C), resorcinol in an alkaline medium (Method D) and chromotropic acid in an alkaline medium (Method E) to form a colored chromogen. The absorption maxima, lambda(max), are at 523 nm for Method A, 382 and 490 nm for Method B, 527 nm for Method C, 521 nm for Method D and 486 nm for Method E. Beer's law was obeyed in the concentration range of 2.5-12.5 microg mL(-1) in Method A, 5-25 and 10-50 microg mL(-1) in Method B, 4-20 microg mL(-1) in Method C, 2.5-12.5 microg mL(-1) in Method D and 5-15 microg mL(-1) in Method E. The results obtained for the proposed methods are in good agreement with labeled amounts, when marketed pharmaceutical preparations were analyzed.
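The linear concentration ranges quoted above reflect Beer's law, A = εlc: within the linear range, a least-squares calibration line fitted to standards can be inverted to read an unknown concentration from its absorbance. The calibration data below are hypothetical, not the paper's measurements.

```python
import numpy as np

def calibrate(concs, absorbances):
    # Fit the Beer's-law calibration line A = m*c + b by least squares
    m, b = np.polyfit(concs, absorbances, 1)
    return m, b

# Hypothetical standards (microgram/mL) and absorbances at lambda_max
concs  = np.array([2.5, 5.0, 7.5, 10.0, 12.5])
absorb = np.array([0.11, 0.22, 0.33, 0.44, 0.55])
m, b = calibrate(concs, absorb)

# Invert the line to estimate the concentration of a sample with A = 0.40
unknown_conc = (0.40 - b) / m
```

In practice the fit would also be checked (correlation coefficient, residuals) to confirm Beer's law holds over the working range before inverting it.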
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
Implicit shape-based reconstruction methods in fluorescence molecular tomography (FMT) are capable of achieving higher image clarity than image-based reconstruction methods. However, the implicit shape method suffers from a low convergence speed and performs unstably due to its reliance on gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, so that the reconstruction can be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
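The core substitution can be illustrated with a minimal sketch: the level set function phi is mapped to an "inside/outside" indicator, classically via a smoothed Heaviside, here via a smooth cosine transition. The abstract does not give the paper's exact cosine parameterization, so the functional form below is an illustrative assumption.

```python
import numpy as np

def heaviside_smooth(phi, eps=1.0):
    # Classical smoothed Heaviside used in implicit level set reconstruction
    return 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))

def heaviside_cosine(phi, eps=1.0):
    # Cosine-based transition (illustrative form): smooth, bounded in [0, 1],
    # with a simple closed-form derivative suitable for Levenberg-Marquardt.
    t = np.clip(phi / (2 * eps) + 0.5, 0.0, 1.0)
    return 0.5 * (1 - np.cos(np.pi * t))

phi = np.linspace(-3, 3, 7)        # signed level set values across a boundary
classic = heaviside_smooth(phi)
inside = heaviside_cosine(phi)     # ~1 inside the shape, ~0 outside
```

Either mapping converts the shape description into a smooth image parameterization; the cosine form saturates exactly at 0 and 1 outside the transition band, unlike the arctan version.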
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. 
It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
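The generalized-pivotal-quantity idea can be illustrated with a short Monte Carlo sketch. Assuming normally distributed test results with observed mean bias `xbar` and standard deviation `s` from `n` validation runs, GPQs for the mean and variance yield a lower confidence bound on the proportion of future results falling within ±delta of the true value; the method is accepted if that bound exceeds the required proportion. All names and numbers here are illustrative and this is not the authors' implementation:

```python
import math
import numpy as np

def _phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gpq_lower_bound(xbar, s, n, delta, alpha=0.05, n_mc=5000, seed=0):
    """Lower confidence bound, via generalized pivotal quantities, on the
    proportion of future results within +/- delta of the true value,
    assuming a normal error model (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_mc)
    chi2 = rng.chisquare(n - 1, n_mc)
    g_sigma = np.sqrt((n - 1) * s ** 2 / chi2)   # GPQ draws for sigma
    g_mu = xbar - z * g_sigma / math.sqrt(n)     # GPQ draws for the bias
    # conforming proportion implied by each GPQ draw
    props = [_phi((delta - m) / sd) - _phi((-delta - m) / sd)
             for m, sd in zip(g_mu, g_sigma)]
    return float(np.quantile(props, alpha))      # 100*(1-alpha)% lower bound

# accept the method if the bound exceeds the required proportion, e.g. 0.90
lb = gpq_lower_bound(xbar=0.5, s=1.0, n=12, delta=4.0)
```

The bound tightens as the tolerance `delta` widens, which is the behavior a "fit for purpose" criterion should show.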
Method Engineering: A Service-Oriented Approach
NASA Astrophysics Data System (ADS)
Cauvet, Corine
In the past, a large variety of methods has been published, ranging from very generic frameworks to methods for specific information systems. Method Engineering has emerged as a research discipline for designing, constructing and adapting methods for Information Systems development. Several approaches have been proposed as paradigms in method engineering. The meta-modeling approach provides means for building methods by instantiation, while the component-based approach supports the development of methods using modularization constructs such as method fragments, method chunks and method components. This chapter presents an approach (SO2M) for method engineering based on the service paradigm. We consider services as autonomous computational entities that are self-describing, self-configuring and self-adapting. They can be described, published, discovered and dynamically composed for processing a consumer's demand (a developer's requirement). The method service concept is proposed to capture a development process fragment for achieving a goal. Goal orientation in service specification and the principle of dynamic service composition support method construction and method adaptation to different development contexts.
Ramadan, Nesrin K; El-Ragehy, Nariman A; Ragab, Mona T; El-Zeany, Badr A
2015-02-25
Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method ((1)DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method ((3)D). Linear correlation was obtained in the range of 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C, and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.
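The ratio difference (RD) principle named above can be sketched numerically: dividing a mixture spectrum by the spectrum of one component turns that component into a constant in the ratio spectrum, so the difference of ratio amplitudes at two wavelengths cancels it and varies linearly with the other analyte. The Gaussian "spectra" below are synthetic stand-ins, not PAN/ITH data:

```python
import numpy as np

wl = np.linspace(250, 350, 501)             # wavelength grid, nm (synthetic)

def band(center, width):
    """Synthetic Gaussian absorptivity profile."""
    return np.exp(-((wl - center) / width) ** 2)

eps_a = band(280, 12)                       # component A (the analyte)
eps_b = band(305, 15)                       # component B (the divisor)

def ratio_difference(mix, divisor, i1, i2):
    """Amplitude difference of the ratio spectrum at two wavelength indices.
    The divisor component contributes only a constant to the ratio spectrum,
    so it cancels in the difference; the result is linear in the analyte."""
    ratio = mix / divisor
    return ratio[i1] - ratio[i2]

# calibration: mixtures with varying A at a fixed level of B, divided by B
i1, i2 = 150, 300                           # indices of the two wavelengths
concs = [8, 16, 24, 32, 44]
signals = [ratio_difference(c * eps_a + 20 * eps_b, eps_b, i1, i2)
           for c in concs]
```

Because the B term cancels exactly, `signals` is proportional to `concs`, which is the linearity the method relies on.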
NASA Astrophysics Data System (ADS)
Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia
2015-02-01
Simple, accurate, and selective methods have been developed and validated for the simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD); method B is the double divisor-derivative ratio spectrophotometric method; method C is the derivative ratio spectrum-zero crossing method (DRZC); and method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated: methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLS), respectively. The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods, where no significant difference was observed regarding either accuracy or precision.
Methods for elimination of dampness in Building walls
NASA Astrophysics Data System (ADS)
Campian, Cristina; Pop, Maria
2016-06-01
Dampness elimination in building walls is a sensitive and costly problem. Several classes of methods are in use: chemical methods, electro-osmotic methods, and physical methods. The RECON method is a representative and sustainable method in Romania. The most radical method originates in Italy: the technology consists of cutting the brick walls, inserting a special plastic sheeting, and injecting a pre-mixed anti-shrinking mortar.
A comparison of several methods of solving nonlinear regression groundwater flow problems
Cooley, Richard L.
1985-01-01
Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
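For readers unfamiliar with the Marquardt method compared above, a minimal sketch of the iteration (on a toy exponential-fit problem, not a groundwater model) shows the key idea: a damping parameter blends the Gauss-Newton step with steepest descent, shrinking toward Gauss-Newton when a step succeeds and growing when it fails:

```python
import numpy as np

def marquardt(residual, jacobian, p0, lam=1e-3, n_iter=50):
    """Minimal Marquardt (Levenberg-Marquardt) least-squares iteration.
    residual(p) -> r of shape (m,), jacobian(p) -> J of shape (m, n).
    A hypothetical sketch, not the code used in the study."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        A = J.T @ J
        g = J.T @ r
        # damped normal equations: (J'J + lam * diag(J'J)) step = -J'r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5    # success: trust Gauss-Newton more
        else:
            lam *= 10.0                     # failure: lean toward descent
    return p

# toy nonlinear regression: fit y = a * exp(b * x)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
p_hat = marquardt(res, jac, [1.0, 0.0])
```

The per-iteration cost of forming and solving the damped normal equations grows with the number of parameters, which is consistent with the scaling behavior the study reports for the Marquardt method.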
Hybrid DFP-CG method for solving unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa
2017-09-01
The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as the Hessian approximation for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
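For context, the DFP update mentioned here maintains an approximation H to the inverse Hessian from successive steps s and gradient changes y. The sketch below implements plain DFP with a simple Armijo backtracking line search on a toy quadratic; it illustrates the update formula only, not the authors' DFP-CG hybrid:

```python
import numpy as np

def dfp_update(H, s, y):
    """Davidon-Fletcher-Powell update of the inverse-Hessian estimate:
    H+ = H + s s'/(s'y) - (H y)(H y)'/(y'H y)."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

def dfp_minimize(f, grad, x0, n_iter=100, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))
    g = grad(x)
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                        # quasi-Newton search direction
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5                      # Armijo backtracking
        s = t * d
        g_new = grad(x + s)
        H = dfp_update(H, s, g_new - g)   # s'y > 0 here, so H stays SPD
        x, g = x + s, g_new
    return x

# toy strongly convex quadratic: f(x) = 0.5 x'Ax - b'x, minimizer solves Ax = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = dfp_minimize(lambda x: 0.5 * x @ A @ x - b @ x,
                      lambda x: A @ x - b, [0.0, 0.0])
```

A hybrid of the kind the paper describes would replace the pure quasi-Newton direction `d` with a combined CG/quasi-Newton direction; the update formula itself is unchanged.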
Generalization of the Engineering Method to the UNIVERSAL METHOD.
ERIC Educational Resources Information Center
Koen, Billy Vaughn
1987-01-01
Proposes that there is a universal method for all realms of knowledge. Reviews Descartes's definition of the universal method, the engineering definition, and the philosophical basis for the universal method. Contends that the engineering method best represents the universal method. (ML)
Colloidal Electrolytes and the Critical Micelle Concentration
ERIC Educational Resources Information Center
Knowlton, L. G.
1970-01-01
Describes methods for determining the Critical Micelle Concentration of Colloidal Electrolytes; methods described are: (1) methods based on Colligative Properties, (2) methods based on the Electrical Conductivity of Colloidal Electrolytic Solutions, (3) Dye Method, (4) Dye Solubilization Method, and (5) Surface Tension Method. (BR)
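Of the approaches listed, the conductivity method lends itself to a short numerical illustration: specific conductivity rises roughly linearly with surfactant concentration, with a slope change at the CMC, so the CMC can be estimated by fitting straight lines to the two regions and intersecting them. The data below are synthetic, with an assumed break at 8 mM:

```python
import numpy as np

def cmc_from_conductivity(conc, kappa):
    """Estimate the critical micelle concentration from conductivity data by
    fitting lines to the pre- and post-break regions and intersecting them.
    A hypothetical sketch of the conductivity method: the break index is
    chosen by minimum total squared error over all candidate splits."""
    best = None
    for k in range(2, len(conc) - 2):        # at least 2 points per segment
        p1 = np.polyfit(conc[:k], kappa[:k], 1)
        p2 = np.polyfit(conc[k:], kappa[k:], 1)
        sse = (np.sum((np.polyval(p1, conc[:k]) - kappa[:k]) ** 2) +
               np.sum((np.polyval(p2, conc[k:]) - kappa[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, p1, p2)
    _, p1, p2 = best
    return (p2[1] - p1[1]) / (p1[0] - p2[0])  # intersection of the two lines

# synthetic data: slope drops at 8 mM (micelles conduct less efficiently)
c = np.linspace(1, 15, 15)
kap = np.where(c < 8, 6.0 * c, 6.0 * 8 + 2.5 * (c - 8))
cmc = cmc_from_conductivity(c, kap)
```

Real conductivity data are noisy, so the intersection is usually taken from regression over many points per branch rather than from two exact lines as here.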
Huang, Jianhua
2012-07-01
There are three methods for calculating thermal insulation of clothing measured with a thermal manikin, i.e. the global method, the serial method, and the parallel method. Under the condition of homogeneous clothing insulation, these three methods yield the same insulation values. If the local heat flux is uniform over the manikin body, the global and serial methods provide the same insulation value. In most cases, the serial method gives a higher insulation value than the global method. There is a possibility that the insulation value from the serial method is lower than the value from the global method. The serial method always gives higher insulation value than the parallel method. The insulation value from the parallel method is higher or lower than the value from the global method, depending on the relationship between the heat loss distribution and the surface temperatures. Under the circumstance of uniform surface temperature distribution over the manikin body, the global and parallel methods give the same insulation value. If the constant surface temperature mode is used in the manikin test, the parallel method can be used to calculate the thermal insulation of clothing. If the constant heat flux mode is used in the manikin test, the serial method can be used to calculate the thermal insulation of clothing. The global method should be used for calculating thermal insulation of clothing for all manikin control modes, especially for thermal comfort regulation mode. The global method should be chosen by clothing manufacturers for labelling their products. The serial and parallel methods provide more information with respect to the different parts of clothing.
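The three calculation methods can be written down compactly. With zone area fractions f_i, zone surface temperatures T_i, zone heat fluxes q_i and ambient temperature T_a, the definitions commonly given in the clothing-science literature are: global I = (Σf_i·T_i − T_a)/(Σf_i·q_i); serial I = Σf_i·(T_i − T_a)/q_i; parallel I = 1/Σ[f_i·q_i/(T_i − T_a)]. A sketch using these formulas (treat them as illustrative, not a substitute for the standard):

```python
def insulation(f, t_skin, q, t_amb):
    """Clothing insulation (m2.K/W) from zonal manikin data by the three
    calculation methods.  f: zone area fractions (sum to 1), t_skin: zone
    surface temperatures (C), q: zone heat fluxes (W/m2)."""
    t_bar = sum(fi * ti for fi, ti in zip(f, t_skin))   # area-weighted temp
    q_bar = sum(fi * qi for fi, qi in zip(f, q))        # area-weighted flux
    i_global = (t_bar - t_amb) / q_bar
    i_serial = sum(fi * (ti - t_amb) / qi
                   for fi, ti, qi in zip(f, t_skin, q))
    i_parallel = 1.0 / sum(fi * qi / (ti - t_amb)
                           for fi, ti, qi in zip(f, t_skin, q))
    return i_global, i_serial, i_parallel

# two zones with uniform surface temperature but non-uniform heat flux
g, s, p = insulation([0.6, 0.4], [34.0, 34.0], [40.0, 80.0], 20.0)
```

With uniform surface temperature the global and parallel values coincide (0.25 m²·K/W in this example) while the serial value is higher (0.28), matching the relationships stated in the abstract.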
Comparison of five methods for the estimation of methane production from vented in vitro systems.
Alvarez Hess, P S; Eckard, R J; Jacobs, J L; Hannah, M C; Moate, P J
2018-05-23
There are several methods for estimating methane production (MP) from feedstuffs in vented in vitro systems. One method (A; "gold standard") measures methane proportions in the incubation bottle's head space (HS) and in the vented gas collected in gas bags. Four other methods (B, C, D and E) measure methane proportion in a single gas sample from HS. Method B assumes the same methane proportion in the vented gas as in HS, method C assumes a constant methane to carbon dioxide ratio, method D has been developed based on empirical data and method E assumes constant individual venting volumes. This study aimed to compare the MP predictions from these methods to those of the gold standard method under different incubation scenarios, in order to validate the methods based on their concordance with the gold standard. Methods C, D and E had greater concordance (0.85, 0.88 and 0.81), lower root mean square error (RMSE) (0.80, 0.72 and 0.85) and lower mean bias (0.20, 0.35, -0.35) with the gold standard than did method B (concordance 0.67, RMSE 1.49 and mean bias 1.26). Methods D and E were simpler to perform than method C, and method D was slightly more accurate than method E. Based on precision, accuracy and simplicity of implementation, it is recommended that, when method A cannot be used, methods D and E are preferred to estimate MP from vented in vitro systems. This article is protected by copyright. All rights reserved.
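The agreement statistics reported here (concordance, RMSE, mean bias) are standard and easy to reproduce. The sketch below uses Lin's concordance correlation coefficient with made-up numbers, not the study's data:

```python
import math

def agreement_stats(pred, gold):
    """Lin's concordance correlation coefficient, RMSE and mean bias between
    a candidate method's estimates and gold-standard values.  A generic
    sketch of the reported statistics, not the authors' code."""
    n = len(gold)
    mx = sum(pred) / n
    my = sum(gold) / n
    sx2 = sum((x - mx) ** 2 for x in pred) / n
    sy2 = sum((y - my) ** 2 for y in gold) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(pred, gold)) / n
    ccc = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)   # concordance (Lin)
    rmse = math.sqrt(sum((x - y) ** 2 for x, y in zip(pred, gold)) / n)
    bias = mx - my                                  # mean bias
    return ccc, rmse, bias

gold = [10.0, 12.0, 15.0, 18.0, 22.0]   # hypothetical gold-standard MP values
pred = [10.5, 12.4, 14.8, 18.9, 21.6]   # hypothetical method-D-style estimates
ccc, rmse, bias = agreement_stats(pred, gold)
```

Unlike the Pearson correlation, the concordance coefficient penalizes both scale and location shifts, which is why it suits a validation against a gold standard.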
NASA Astrophysics Data System (ADS)
Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling
2017-11-01
Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied in selecting samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Square Discrimination Analysis (PLSDA) method is commonly used in the identification of blood species with spectroscopy methods. The Least Square Support Vector Machine (LSSVM) has proved well suited to discrimination analysis. In this research, both the PLSDA method and the LSSVM method were used for human blood discrimination. Compared with PLSDA, LSSVM enhanced the performance of the identification models. The overall results showed that the LSSVM method was more feasible for identifying human and animal blood species, and sufficiently demonstrated that LSSVM is a reliable and robust method for human blood identification that can be more effective and accurate.
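As an illustration of the LSSVM classifier family used here: a least-squares SVM replaces the inequality constraints of the classical SVM with equalities, so training reduces to solving one linear system. The sketch below (RBF kernel, synthetic two-cluster data standing in for spectra; all parameter values are illustrative) uses numpy only:

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of row vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LSSVM on +/-1 labels: solve [[0, 1'], [1, K + I/gamma]] [b; a] = [0; y].
    Illustrative sketch, not the paper's implementation or parameters."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                    # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf(X_new, X_train, sigma) @ alpha + b)

# synthetic stand-in for two blood classes: well-separated 2-D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)),
               rng.normal(+2.0, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
acc = np.mean(lssvm_predict(X, b, alpha, X) == y)
```

Because training is a single linear solve, LSSVM avoids the quadratic-programming step of the classical SVM, at the cost of losing sparsity in the coefficients.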
A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.
Liu, Chun-Han; Liu, Lian
2017-05-08
BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We proposed a combined method by merging existing methods. Firstly, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expressed method, and the pathway network approach), and differential pathways were evaluated through setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm and called the method the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from different methods were also different. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solved the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method by combining four existing methods based on a rank product algorithm, and identified 13 significant differential pathways based on it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
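The rank-based combination can be illustrated with the rank-product statistic: each pathway's combined score is the geometric mean of its ranks across methods, so pathways ranked highly by several methods float to the top. A toy sketch (the pathway names echo the abstract, but the scores are invented and this is not the authors' exact pipeline):

```python
import math

def rank_product(scores_by_method):
    """Combine per-method pathway scores via the rank product: the geometric
    mean of each pathway's rank across methods (rank 1 = most significant,
    low score = better).  Pathways missing from a method get the worst rank."""
    pathways = set().union(*[set(s) for s in scores_by_method])
    k = len(scores_by_method)
    worst = len(pathways)
    ranks = {p: [] for p in pathways}
    for scores in scores_by_method:
        ordered = sorted(scores, key=scores.get)
        r = {p: i + 1 for i, p in enumerate(ordered)}
        for p in pathways:
            ranks[p].append(r.get(p, worst))
    rp = {p: math.exp(sum(math.log(x) for x in ranks[p]) / k)
          for p in pathways}
    return dict(sorted(rp.items(), key=lambda kv: kv[1]))

# p-value-like scores from three hypothetical pathway methods
m1 = {"cell cycle": 0.001, "immune system": 0.02, "metabolism": 0.03}
m2 = {"cell cycle": 0.005, "metabolism": 0.01}
m3 = {"immune system": 0.002, "cell cycle": 0.04, "metabolism": 0.2}
combined = rank_product([m1, m2, m3])
```

Because ranks rather than raw scores are averaged, the aggregation is insensitive to the very different score scales the four underlying methods produce.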
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
NASA Astrophysics Data System (ADS)
Li, Yanran; Chen, Duo; Li, Li; Zhang, Jiwei; Li, Guang; Liu, Hongxia
2017-11-01
GIS (gas-insulated switchgear) is an important type of equipment in power systems. Partial discharge plays an important role in assessing the insulation performance of GIS. The UHF method and the ultrasonic method are frequently used for partial discharge (PD) detection in GIS. However, few studies have compared these two methods. From the viewpoint of safety, it is necessary to investigate the UHF method and the ultrasonic method for partial discharge in GIS. This paper presents a study aimed at clarifying the effectiveness of the UHF method and the ultrasonic method for partial discharge caused by free metal particles in GIS. Partial discharge tests were performed in a laboratory-simulated environment. The obtained results show the anti-interference capability of signal detection and the accuracy of fault localization for the UHF method and the ultrasonic method. A new method of PD detection for GIS, based on the UHF method and the ultrasonic method, is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of detection localization.
Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S
2008-05-09
Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (in situ or one-step method, saponification method, classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps had high recovery values, but its precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However, the classic method would be the method of choice for the determination of the different lipid classes.
26 CFR 1.381(c)(5)-1 - Inventories.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...
26 CFR 1.381(c)(5)-1 - Inventories.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...
46 CFR 160.076-11 - Incorporation by reference.
Code of Federal Regulations, 2011 CFR
2011-10-01
... following methods: (1) Method 5100, Strength and Elongation, Breaking of Woven Cloth; Grab Method, 160.076-25; (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method, 160.076-25; (3) Method 5134, Strength of Cloth, Tearing; Tongue Method, 160.076-25. Underwriters Laboratories (UL) Underwriters...
Downwind changes in grain size of aeolian dust; examples from marine and terrestrial archives
NASA Astrophysics Data System (ADS)
Stuut, Jan-Berend; Prins, Maarten
2013-04-01
Aeolian dust in the atmosphere may have a cooling effect when small particles in the high atmosphere block incoming solar energy (e.g., Claquin et al., 2003) but it may also act as a 'greenhouse gas' when larger particles in the lower atmosphere trap energy that was reflected from the Earth's surface (e.g., Otto et al., 2007). Therefore, it is of vital importance to have a good understanding of the particle-size distribution of aeolian dust in space and time. As wind is a very size-selective transport mechanism, the sediments it carries typically have a very well sorted grain-size distribution, which gradually fines from proximal to distal deposition sites. This fact has been used in numerous paleo-environmental studies to both determine source-to-sink changes in the particle size of aeolian dust (e.g., Weltje and Prins, 2003; Holz et al., 2004; Prins and Vriend, 2007) and to quantify mass-accumulation rates of aeolian dust (e.g., Prins and Weltje 1999; Stuut et al., 2002; Prins et al., 2007; Prins and Vriend, 2007; Stuut et al., 2007; Tjallingii et al., 2008; Prins et al., 2009). Studies on modern wind-blown particles have demonstrated that the particle size of dust is a function not only of lateral but also of vertical transport distance (e.g., Torres-Padron et al., 2002; Stuut et al., 2005). Nonetheless, there are still many unresolved questions related to the physical properties of wind-blown particles, such as the case of "giant" quartz particles found on Hawaii (Betzer et al., 1988), which can only originate from Asia yet are too large for the distance they travelled through the atmosphere. Here, we present examples of dust particle-size distributions from terrestrial (loess) as well as marine (deep-sea sediments) sedimentary archives and their spatial and temporal changes. With this contribution we hope to provide quantitative data for the modelling community in order to get a better grip on the role of wind-blown particles in the climate system.
Cited references:
Betzer, P.R., Carder, K.L., Duce, R.A., Merrill, J.T., Tindale, N.W., Uematsu, M., Costello, D.K., Young, R.W., Feely, R.A., Breland, J.A., Bernstein, R.E., Greco, A.M., 1988. Long-range transport of giant mineral aerosol particles. Nature 336, 568.
Claquin, T., Roelandt, C., Kohfeld, K.E., Harrison, S.P., Tegen, I., Prentice, I.C., Balkanski, Y., Bergametti, G., Hansson, M., Mahowald, N.M., Rodhe, H., Schulz, M., 2003. Radiative forcing of climate by ice-age atmospheric dust. Climate Dynamics 20, 193-202.
Holz, C., Stuut, J.-B.W., Henrich, R., 2004. Terrigenous sedimentation processes along the continental margin off NW-Africa: implications from grain-size analyses of surface sediments. Sedimentology 51, 1145-1154.
Otto, S., de Reus, M., Trautmann, T., Thomas, A., Wendisch, M., Borrmann, S., 2007. Atmospheric radiative effects of an in situ measured Saharan dust plume and the role of large particles. Atmos. Chem. Phys. 7, 4887-4903.
Prins, M.A., Weltje, G.J., 1999. End-member modeling of siliciclastic grain-size distributions: the Late Quaternary record of eolian and fluvial sediment supply to the Arabian Sea and its paleoclimatic significance, in: Harbaugh, J., Watney, L., Rankey, G., Slingerland, R., Goldstein, R., Franseen, E. (Eds.), Numerical experiments in stratigraphy: Recent advances in stratigraphic and sedimentologic computer simulations. SEPM Special Publication 62. Society for Sedimentary Geology, pp. 91-111.
Prins, M.A., Vriend, M., 2007. Glacial and interglacial eolian dust dispersal patterns across the Chinese Loess Plateau inferred from decomposed loess grain-size records. Geochemistry, Geophysics, Geosystems (G-cubed) 8, Q07Q05, doi:10.1029/2006GC001563.
Prins, M.A., Vriend, M., Nugteren, G., Vandenberghe, J., Lu, H., Zheng, H., Weltje, G.J., 2007. Late Quaternary aeolian dust input variability on the Chinese Loess Plateau: inferences from unmixing of loess grain-size records. Quaternary Science Reviews 26, 230-242.
Prins, M.A., Zheng, H., Beets, K., Troelstra, S., Bacon, P., Kamerling, I., Wester, W., Konert, M., Huang, X., Ke, W., Vandenberghe, J., 2009. Dust supply from river floodplains: The case of the lower Huang He (Yellow River) recorded in a loess-palaeosol sequence from the Mangshan Plateau. Journal of Quaternary Science 24, 75-84.
Stuut, J.-B.W., Prins, M.A., Schneider, R.R., Weltje, G.J., Jansen, J.H.F., Postma, G., 2002. A 300-kyr record of aridity and wind strength in southwestern Africa: inferences from grain-size distributions of sediments on Walvis Ridge, SE Atlantic. Marine Geology 180, 221-233.
Stuut, J.-B.W., Zabel, M., Ratmeyer, V., Helmke, P., Schefuß, E., Lavik, G., Schneider, R.R., 2005. Provenance of present-day eolian dust collected off NW Africa. Journal of Geophysical Research 110.
Stuut, J.-B.W., Kasten, S., Lamy, F., Hebbeln, D., 2007. Sources and modes of terrigenous sediment input to the Chilean continental slope. Quaternary International 161, 67-76.
Tjallingii, R., Claussen, M., Stuut, J.-B.W., Fohlmeister, J., Jahn, A., Bickert, T., Lamy, F., Rohl, U., 2008. Coherent high- and low-latitude control of the northwest African hydrological balance. Nature Geoscience 1, 670-675.
Torres-Padrón, M.E., Gelado-Caballero, M.D., Collado-Sánchez, C., Siruela-Matos, V.F., Cardona-Castellano, P.J., Hernández-Brito, J.J., 2002. Variability of dust inputs to the CANIGO zone. Deep Sea Research Part II: Topical Studies in Oceanography 49, 3455-3464.
Weltje, G.J., Prins, M.A., 2003. Muddled or mixed? Inferring palaeoclimate from size distributions of deep-sea clastics. Sedimentary Geology 162, 39-62.
NASA Astrophysics Data System (ADS)
Buchholz, Bernhard; Afchine, Armin; Klein, Alexander; Barthel, Jochen; Kallweit, Sören; Klostermann, Tim; Krämer, Martina; Schiller, Cornelius; Ebert, Volker
2013-04-01
Water vapor measurements, especially within clouds, are difficult, in particular due to numerous instrument-specific limitations in precision, time resolution and accuracy. Notably, the quantification of the ice and gas-phase water content in cirrus clouds, which play an important role in the global climate system, requires new high-speed hygrometer concepts capable of resolving large water vapor gradients. Previously we demonstrated a stationary concept for Tunable Diode Laser Absorption Spectroscopy (TDLAS)-based quantification of the ice/liquid water by independent, but simultaneous measurements of A) the gas-phase water in an open-path configuration (optical path 125 m) and B) the total water in an extractive version with a closed cell (30 m path) after evaporating the condensed water [1]. In this case we used laboratory TDLAS instrumentation in combination with long absorption paths and applied those to the AIDA cloud chamber [2]. Recently we developed an advanced, miniature version of the concept, suitable for mobile field applications and in particular for use on aircraft. First tests of our new, fiber-coupled open-path TDLAS cell [3] for airborne applications were combined with the experience from our extractive SEALDH instruments [4] and led to a new, multi-channel, "multi-phase TDL-hygrometer" called "HAI" ("Hygrometer for Atmospheric Investigations"). HAI, which is explicitly designed for the new German HALO (High Altitude and Long Range Research Aircraft) airplane, provides similar but improved functionality compared with the stationary multi-phase TDLAS developed for AIDA. However, HAI comes in a much more compact form: a six-height-unit, 30 kg electronics rack for the main unit and a new, completely fiber-coupled, compact, 21 kg dual-wavelength open-path TDL cell which is placed in the aircraft's skin.
HAI is much more complex and versatile than the AIDA precursor and can be seen as comprising four TDL spectrometers, as it simultaneously measures at two independent wavelengths (1.4 μm for the troposphere and 2.6 μm for the UT/LS, to permit full coverage of water vapor concentrations from ground level to the stratosphere), both of which are applied to two measurement scenarios: A) in two independent extractive, closed cells (1.5 m path, 300 ccm cell volume) for redundant total water measurements at 1.4 and 2.6 μm, and B) in a dual-wavelength open-path cell (4.3 m path length) for selective gas-phase water detection. All HAI channels except the 2.6 μm closed cell are fiber-coupled. Depending on the sampling inlet (forward direction, ram pressure borrowed) we achieve in the closed cells a flow of 7 slm at 120 hPa, which, under a bulk flow assumption, leads to a gas exchange time of 0.3 sec. Both lasers are synchronized and wavelength tuned at repetition frequencies of up to 1 kHz, depending on the spatial resolution needed. HAI runs autonomously [5], allowing almost maintenance-free operation even in harsh environments. HAI further builds on our long-term experience in TDLAS data evaluation [6], especially in rapidly changing and disturbed processes [7], [8], which leads to a highly precise, long-term stable, fast, accurate, calibration-free, interference-resistant hygrometer that can help to clarify several important issues - both from a technical perspective (e.g. influence of the sampling system) and from a scientific view (e.g. determination of the ice content of cirrus clouds). In the presentation we will discuss HAI's novel setup, its performance during the first tests, and show results from the first successful flights on HALO during the TACTS and EMSVAL campaigns in 2012. The HAI development was funded by DFG within the HALO-SPP 1294 and via internal funds from FZJ. - [1] B. J. Murray, T. W. Wilson, S. Dobbie, Z. Cui, S. M. R. K. Al-Jumur, O. Möhler, M. Schnaiter, R.
Wagner, S. Benz, M. Niemand, H. Saathoff, V. Ebert, S. Wagner, and B. Kärcher, "Heterogeneous nucleation of ice particles on glassy aerosols under cirrus conditions," Nature Geoscience, vol. 3, no. 4, pp. 233-237, Mar. 2010. [2] V. Ebert, C. Lauer, H. Saathoff, S. Hunsmann, and S. Wagner, "Simultaneous, absolute gas-phase and total water detection during cloud formation studies in the AIDA chamber using a dual 1.37 μm TDL-Spectrometer," Geophysical Research Abstracts, vol. 10, pp. 1-2, 2008. [3] T. Klostermann, Entwicklung und Erprobung des Hygrometer for Atmospheric Investigations, PhD Thesis, Universität Wuppertal, 2011, p. 118 [4] B. Buchholz, B. Kühnreich, H. G. J. Smit, and V. Ebert, "Validation of an extractive, airborne, compact TDL spectrometer for atmospheric humidity sensing by blind intercomparison," Applied Physics B, Sep. 2012. DOI: 10.1007/s00340-012-5143-1 [5] B. Buchholz, "Neue Hard- und Softwareentwicklungen für autonome, kompakte und leichte Feld-Diodenlaserspektrometer," Diploma thesis, Universität Heidelberg, 2010. [6] V. Ebert and J. Wolfrum, "Absorption spectroscopy," in OPTICAL MEASUREMENTS-Techniques and Applications, ed. F. Mayinger, Springer, 1994, pp. 273-312. [7] C. Schulz, A. Dreizler, V. Ebert, and J. Wolfrum, "Combustion Diagnostics," in Handbook of Experimental Fluid Mechanics, C. Tropea, A. L. Yarin, and J. F. Foss, Eds. Heidelberg: Springer Berlin Heidelberg, 2007, pp. 1241-1316. [8] V. Ebert, T. Fernholz, C. Giesemann, H. Pitz, H. Teichert, J. Wolfrum, and H. Jaritz, "Simultaneous diode-laser-based in situ detection of multiple species and temperature in a gas-fired power plant," Proc. Combust. Inst., 28, 1, pp. 423-430, 2000.
Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study
Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M
2017-01-01
Background The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). 
Conclusions In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. PMID:28249833
Interior-Point Methods for Linear Programming: A Review
ERIC Educational Resources Information Center
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most interior-point methods belong to one of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
The Relation of Finite Element and Finite Difference Methods
NASA Technical Reports Server (NTRS)
Vinokur, M.
1976-01-01
Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best suited to handling complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
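The relationship discussed above can be made concrete in the simplest setting: for the 1D problem -u'' = f on a uniform mesh, the stiffness matrix produced by piecewise-linear ("hat") finite elements coincides, up to a mesh-size factor, with the standard second-order finite difference Laplacian. A minimal sketch (not from the paper; function names are ours):

```python
import numpy as np

def fd_matrix(n, h):
    """Second-order finite difference approximation of -u'' on a uniform grid."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def fe_stiffness(n, h):
    """Stiffness matrix for piecewise-linear ('hat') finite elements on the
    same grid: entries are integrals of products of basis-function slopes."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

n, h = 5, 0.1
A, K = fd_matrix(n, h), fe_stiffness(n, h)
# On a uniform mesh the two discrete operators coincide up to the factor h,
# which is absorbed by the load vector on the finite element side.
```

On non-uniform meshes or with complex boundaries the two matrices diverge, which is where the global piecewise approximation of finite elements pays off.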
[Baseflow separation methods in hydrological process research: a review].
Xu, Lei-Lei; Liu, Jing-Lin; Jin, Chang-Jie; Wang, An-Zhi; Guan, De-Xin; Wu, Jia-Bing; Yuan, Feng-Hui
2011-11-01
Baseflow separation research is regarded as one of the most important and difficult issues in hydrology and ecohydrology, but it still lacks unified standards in both concepts and methods. This paper introduced the theories of baseflow separation based on the definitions of baseflow components, and analyzed the development of different baseflow separation methods. Among the methods developed, the graphical separation method is simple and applicable but arbitrary; the balance method accords with hydrological mechanisms but is difficult to apply; whereas the time series separation method and the isotopic method can overcome the subjectivity and arbitrariness of the graphical separation method and thus obtain the baseflow process quickly and efficiently. In recent years, hydrological modeling, digital filtering, and isotopic methods have been the main methods used for baseflow separation.
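As an illustration of the digital-filtering approach mentioned above, here is a minimal single-pass sketch of a one-parameter recursive filter of the Lyne-Hollick type; the parameter value 0.925 is a commonly cited choice, and the function name and example series are ours:

```python
import numpy as np

def baseflow_filter(q, alpha=0.925):
    """Single forward pass of a one-parameter recursive digital filter
    (Lyne-Hollick type): the filtered signal is quickflow, and baseflow is
    the remainder, constrained to 0 <= baseflow <= streamflow."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for i in range(1, len(q)):
        quick[i] = alpha * quick[i - 1] + 0.5 * (1.0 + alpha) * (q[i] - q[i - 1])
    quick = np.clip(quick, 0.0, q)   # quickflow cannot be negative or exceed flow
    return q - quick

# Illustrative daily streamflow series with a single storm peak
flow = np.array([5, 6, 30, 80, 40, 20, 12, 8, 6, 5], dtype=float)
base = baseflow_filter(flow)
```

Operational implementations usually run the filter in several forward and backward passes; a single pass is shown here only to expose the recursion.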
Semi top-down method combined with earth-bank, an effective method for basement construction.
NASA Astrophysics Data System (ADS)
Tuan, B. Q.; Tam, Ng M.
2018-04-01
Choosing an appropriate method of deep excavation plays a decisive role not only in the technical success but also in the economics of a construction project. Presently, we mainly rely on two key methods: the “Bottom-up” and the “Top-down” construction methods. This paper presents another method of construction, the “Semi top-down method combined with earth-bank”, which aims to take advantage of the strengths and limit the weaknesses of the above methods. The Bottom-up method was improved by using an earth-bank to stabilize the retaining walls instead of bracing steel struts. The Top-down method was improved by using the open-cut method for half of the earthwork quantities.
Klous, Miriam; Klous, Sander
2010-07-01
The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from the noisy measured motion of skin markers. Existing kinematic models for the reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account, and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies interconnected by spherical joints is presented (the chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category), regarding the effects of continuous noise simulating skin movement artifacts and of systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that the accuracy of the chain-method is higher than that of the Veldpaus-method and similar to that of the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence of this method is substantially higher than that of the Lu-method. 
With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.
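The least-squares pose estimation underlying methods of the first category (such as the Veldpaus-method above) can be sketched with a standard SVD-based solution for the rigid transform between two marker sets. This is a generic illustration of the problem, not the authors' algorithm, and all names are ours:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping marker rows of P
    onto Q, via SVD of the cross-covariance matrix (Kabsch-type solution)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # exclude reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Synthetic, noise-free check: move six markers by a known rotation/translation
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
a = 0.4
Rtrue = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
ttrue = np.array([1.0, -2.0, 0.5])
Q = P @ Rtrue.T + ttrue
R, t = rigid_transform(P, Q)
```

With noisy markers the same solution minimizes the sum of squared marker residuals per segment, which is exactly the setting the abstract's simulations probe.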
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham
2015-02-01
This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed, namely: the induced dual wavelength method (IDW), the dual wavelength resolution technique (DWRT), the advanced amplitude modulation method (AAM) and the induced amplitude modulation method (IAM). The results of the novel methods were compared to those of three well-established methods: the dual wavelength method (DW), Vierordt's method (VD) and the bivariate method (BV). The developed methods were applied to the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as a topical cream, accompanied by the determination of the methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within acceptable limits. The results obtained from the proposed methods were statistically compared with those of the official methods, and no significant difference was observed. The results also showed no significant difference from those of the reported HPLC method, demonstrating that the developed methods could be an alternative to HPLC techniques in quality control laboratories.
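The classical two-wavelength idea behind Vierordt's method can be sketched as a 2×2 linear system built from additive Beer-Lambert absorbances. The absorptivity and concentration values below are purely illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical molar absorptivities (L/mol/cm) of two components,
# rows = the two analytical wavelengths, columns = the two components.
E = np.array([[520.0,  80.0],
              [ 60.0, 410.0]])
c_true = np.array([2.0e-5, 5.0e-5])   # illustrative concentrations (mol/L)
A = E @ c_true                         # additive Beer-Lambert absorbances, 1 cm cell

# Vierordt's simultaneous-equation idea: two absorbances, two unknowns.
c_est = np.linalg.solve(E, A)
```

The four novel methods in the abstract refine this basic scheme to cope with severe spectral overlap, but all share the two-wavelength measurement model shown here.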
2014-01-01
In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one searching method for the critical slip surface is the Genetic Algorithm (GA), while the method used to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is only an approximate method, like the finite element method. This paper proposes a new way to determine the minimum slope safety factor: the safety factor is determined with an analytical solution, and the critical slip surface is searched with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random picking to realize mutation. A computer program that automates the search was developed for the Genetic-Traversal Random Method. Comparison with other methods, such as the slope/w software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
Feldsine, Philip T; Leung, Stephanie C; Lienau, Andrew H; Mui, Linda A; Townsend, David E
2003-01-01
The relative efficacy of the SimPlate Total Plate Count-Color Indicator (TPC-CI) method (SimPlate 35 degrees C) was compared with the AOAC Official Method 966.23 (AOAC 35 degrees C) for enumeration of total aerobic microorganisms in foods. The SimPlate TPC-CI method, incubated at 30 degrees C (SimPlate 30 degrees C), was also compared with the International Organization for Standardization (ISO) 4833 method (ISO 30 degrees C). Six food types were analyzed: ground black pepper, flour, nut meats, frozen hamburger patties, frozen fruits, and fresh vegetables. All foods tested were naturally contaminated. Nineteen laboratories throughout North America and Europe participated in the study. Three method comparisons were conducted. In general, there was <0.3 mean log count difference in recovery among the SimPlate methods and their corresponding reference methods. Mean log counts between the 2 reference methods were also very similar. Repeatability (Sr) and reproducibility (SR) standard deviations were similar among the 3 method comparisons. The SimPlate method (35 degrees C) and the AOAC method were comparable for enumerating total aerobic microorganisms in foods. Similarly, the SimPlate method (30 degrees C) was comparable to the ISO method when samples were prepared and incubated according to the ISO method.
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the difference between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is assessed relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
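A minimal sketch of the two sequential iterations compared above, applied to a small 2D Poisson problem with unit spacing and zero boundary values (illustrative code, not the authors' MATLAB implementation), shows the typical iteration-count advantage of Gauss-Seidel over Jacobi:

```python
import numpy as np

def solve_poisson(n, method, tol=1e-6, max_iter=20000):
    """Solve -lap(u) = 1 on an n x n interior grid (unit spacing, u = 0 on the
    boundary) with Jacobi or Gauss-Seidel sweeps; return (u, iterations)."""
    u = np.zeros((n + 2, n + 2))
    for it in range(1, max_iter + 1):
        if method == "jacobi":
            new = u.copy()        # Jacobi: all updates use the previous sweep
            new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                      + u[1:-1, :-2] + u[1:-1, 2:] + 1.0)
            diff = np.max(np.abs(new - u))
            u = new
        else:                     # Gauss-Seidel: overwrite in place, lexicographic sweep
            diff = 0.0
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    new = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1] + 1.0)
                    diff = max(diff, abs(new - u[i, j]))
                    u[i, j] = new
        if diff < tol:
            return u, it
    return u, max_iter

_, it_j = solve_poisson(8, "jacobi")
_, it_gs = solve_poisson(8, "gauss-seidel")
```

The trade-off the abstract studies follows directly: the Jacobi sweep is a single vectorized (and hence easily parallelized) update, whereas Gauss-Seidel needs fewer sweeps but updates sequentially.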
Sun, Shi-Hua; Jia, Cun-Xian
2014-01-01
Background This study aims to describe the specific characteristics of completed suicides by violent and non-violent methods in a rural Chinese population, and to explore the factors related to each type of method. Methods Data for this study came from an investigation of 199 completed suicide cases and their paired controls in rural areas of three different counties in Shandong, China, conducted by interviewing one informant for each subject using the Psychological Autopsy (PA) method. Results There were 78 (39.2%) suicides by violent methods and 121 (60.8%) suicides by non-violent methods. Ingesting pesticides, a non-violent method, was the most common suicide method (103, 51.8%). Hanging (73 cases, 36.7%) and drowning (5 cases, 2.5%) were the only violent methods observed. Storage of pesticides at home and a higher suicide intent score were significantly associated with the choice of violent methods. Risk factors for suicide death included negative life events and hopelessness. Conclusions Suicide by violent methods is associated with different factors than suicide by non-violent methods. Suicide methods should be considered in suicide prevention and intervention strategies. PMID:25111835
A review of propeller noise prediction methodology: 1919-1994
NASA Technical Reports Server (NTRS)
Metzger, F. Bruce
1995-01-01
This report summarizes a review of the literature on propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that a large number of noise prediction procedures are available, varying markedly in complexity. Deficiencies in the accuracy of methods may in many cases be related not to the methods themselves, but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy-to-use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods against the data base; (5) identify and correct weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of the improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.
A different approach to estimate nonlinear regression model using numerical methods
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article instead develops an analytical approach to the gradient algorithm methods. This paper describes a new iterative technique, a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
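A generic Gauss-Newton iteration for nonlinear regression can be sketched as follows; the exponential-decay model, starting values and data are illustrative choices of ours, not the paper's formulation:

```python
import numpy as np

def gauss_newton(t, y, beta0, n_iter=50):
    """Gauss-Newton iterations for the illustrative model y = b0*exp(-b1*t):
    at each step, linearize the residual via the Jacobian and solve the
    normal equations for the parameter update."""
    b = np.array(beta0, dtype=float)
    for _ in range(n_iter):
        pred = b[0] * np.exp(-b[1] * t)
        r = y - pred                          # residuals of the current fit
        J = np.column_stack([
            np.exp(-b[1] * t),                # d(pred)/d(b0)
            -b[0] * t * np.exp(-b[1] * t),    # d(pred)/d(b1)
        ])
        b += np.linalg.solve(J.T @ J, J.T @ r)
    return b

t = np.linspace(0.0, 4.0, 25)
b_true = np.array([2.5, 0.8])
y = b_true[0] * np.exp(-b_true[1] * t)        # noise-free data for the check
b_hat = gauss_newton(t, y, beta0=[2.2, 0.7])
```

The gradient-algorithm variants the paper surveys differ only in how this update direction is formed (full Hessian for Newton-Raphson, expected information for scoring, etc.).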
Sorting protein decoys by machine-learning-to-rank
Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen
2016-01-01
Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation has become one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on the learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms the other methods whose outputs are taken as its features, and that the quasi single-model method can further enhance performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Salissou, Yacoubou; Panneton, Raymond
2010-11-01
Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method which is not well known or discussed in the literature while exhibiting great potential is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more ISO 10534-2 standard-compliant. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges the alternative method yields results that are comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.
Methods for environmental change; an exploratory study.
Kok, Gerjo; Gottlieb, Nell H; Panne, Robert; Smerecnik, Chris
2012-11-28
While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change, especially how these are composed of methods for individual change ('bundling') and how, within one environmental level (organizations), methods differ when directed at the management ('at') or applied by the management ('from'). The first part of this online survey examined the 'bundling' of individual-level methods into methods at the environmental level. The question asked was to what extent the use of an environmental-level method would involve the use of certain individual-level methods. The second part of the survey asked whether there are differences between applying methods directed 'at' an organization (for instance, by a health promoter) and 'from' within an organization itself. All of the 20 respondents are experts in the field of health promotion. Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual-level methods are popular as part of most of the environmental-level methods, while others are chosen less often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches when targeting a level versus being targeted from a level: the health promoter will use combinations of motivation and facilitation, whereas the manager will use individual-level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. 
Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method.
A comparison theorem for the SOR iterative method
NASA Astrophysics Data System (ADS)
Sun, Li-Ying
2005-09-01
In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than that of the SOR method and the Gauss-Seidel method if the relaxation parameter ω ∈ (0,1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
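The spectral radii being compared can be illustrated numerically for the classical splittings (Jacobi, Gauss-Seidel and standard SOR; the IMGS variant itself is not reproduced here) on a 1D Laplacian test matrix, where the expected ordering holds:

```python
import numpy as np

def iteration_matrix(A, method, omega=1.0):
    """Iteration matrix of a classical splitting of A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    if method == "jacobi":
        return np.linalg.solve(D, L + U)
    if method == "gauss-seidel":
        return np.linalg.solve(D - L, U)
    # standard SOR with relaxation parameter omega
    return np.linalg.solve(D - omega * L, (1.0 - omega) * D + omega * U)

def rho(M):
    """Spectral radius: the largest eigenvalue magnitude."""
    return max(abs(np.linalg.eigvals(M)))

# 1D Laplacian test matrix (symmetric tridiagonal, diagonally dominant)
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r_j = rho(iteration_matrix(A, "jacobi"))
r_gs = rho(iteration_matrix(A, "gauss-seidel"))
r_sor = rho(iteration_matrix(A, "sor", omega=1.5))
```

A smaller spectral radius means faster asymptotic convergence, which is exactly the sense in which the paper ranks the IMGS method against SOR and Gauss-Seidel.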
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches offer a large design space with few variables. Parametric methods in common use are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their abilities in the design of the airfoil, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited; the most popular is the Free-form deformation method. Methods extended from two-dimensional parametric methods have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
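Of the two-dimensional approaches listed, the Hicks-Henne method has a particularly compact form: smooth "sine bump" basis functions that vanish at both ends of the chord are added to a baseline shape. A minimal sketch (the bump amplitudes, peak locations and flat baseline are illustrative):

```python
import numpy as np

def hicks_henne_bump(x, peak, width=3.0):
    """Hicks-Henne 'sine bump': zero at x = 0 and x = 1, maximum at x = peak."""
    m = np.log(0.5) / np.log(peak)   # exponent placing the peak at x = peak
    return np.sin(np.pi * x ** m) ** width

x = np.linspace(0.0, 1.0, 101)        # chordwise coordinate
baseline = np.zeros_like(x)           # flat baseline, for illustration only
amplitudes, peaks = [0.010, -0.005], [0.3, 0.7]
y = baseline + sum(a * hicks_henne_bump(x, p)
                   for a, p in zip(amplitudes, peaks))
```

Because each bump is local and the perturbation vanishes at the leading and trailing edges, the amplitudes map naturally onto optimization design variables, which is the property the survey evaluates.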
Wan, Xiaomin; Peng, Liubao; Li, Yuanjian
2015-01-01
Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers for conducting economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods, 1) the least squares method and 2) the graphical method; and two recently proposed methods, by 3) Hoyle and Henley and 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more bias was identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty estimate compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their substantial overestimation. When the Weibull distribution was used for the fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
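The first traditional approach, the least squares method, can be sketched for a Weibull model: digitized survival-curve points are linearized via ln(-ln S) = k ln t - k ln λ, fitted by ordinary least squares, and the mean survival then follows as λΓ(1 + 1/k). The data below are noise-free illustrative values, not from the study:

```python
import math
import numpy as np

# Illustrative noise-free points read off a published Weibull survival curve
k_true, lam_true = 1.5, 10.0
t = np.array([2.0, 4.0, 6.0, 8.0, 12.0, 16.0, 20.0])
S = np.exp(-(t / lam_true) ** k_true)

# Linearized Weibull model: ln(-ln S) = k*ln(t) - k*ln(lambda)
X = np.column_stack([np.log(t), np.ones_like(t)])
slope, intercept = np.linalg.lstsq(X, np.log(-np.log(S)), rcond=None)[0]
k_hat = slope
lam_hat = math.exp(-intercept / slope)

# Mean survival time of the fitted Weibull distribution
mean_survival = lam_hat * math.gamma(1.0 + 1.0 / k_hat)
```

With real digitized curves the points are noisy and censoring distorts the tail, which is why this simple fit can overestimate mean survival relative to the Hoyle-Henley and Guyot et al. reconstructions.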
NASA Astrophysics Data System (ADS)
Jaishree, J.; Haworth, D. C.
2012-06-01
Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. 
In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.
AN EULERIAN-LAGRANGIAN LOCALIZED ADJOINT METHOD FOR THE ADVECTION-DIFFUSION EQUATION
Many numerical methods use characteristic analysis to accommodate the advective component of transport. Such characteristic methods include Eulerian-Lagrangian methods (ELM), modified method of characteristics (MMOC), and operator splitting methods. A generalization of characteri...
Capital investment analysis: three methods.
Gapenski, L C
1993-08-01
Three cash flow/discount rate methods can be used when conducting capital budgeting financial analyses: the net operating cash flow method, the net cash flow to investors method, and the net cash flow to equity holders method. The three methods differ in how the financing mix and the benefits of debt financing are incorporated. This article explains the three methods, demonstrates that they are essentially equivalent, and recommends which method to use under specific circumstances.
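The common machinery behind all three cash flow/discount rate methods is discounting a cash flow stream at an appropriate rate. A minimal sketch of the shared NPV calculation, with an illustrative project that is not taken from the article:

```python
def npv(cash_flows, rate):
    """Net present value of cash flows CF_0..CF_n at a constant discount rate:
    NPV = sum_t CF_t / (1 + rate)^t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative project (not from the article): outlay of 100, then 60/yr for 2 years,
# discounted at 10%. The three methods differ in which cash flows and which
# rate are used, not in this discounting arithmetic.
value = npv([-100.0, 60.0, 60.0], 0.10)
```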
Effective description of a 3D object for photon transportation in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Suganuma, R.; Ogawa, K.
2000-06-01
Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that for conventional object description methods, a voxel-based (VB) method and an octree method, in the simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method became about 50% of that with the VB method and about 70% of that with the octree method for a high resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with the VB and octree methods.
Region of influence regression for estimating the 50-year flood at ungaged sites
Tasker, Gary D.; Hodge, S.A.; Barks, C.S.
1996-01-01
Five methods of developing regional regression models to estimate flood characteristics at ungaged sites in Arkansas are examined. The methods differ in the manner in which the State is divided into subregions. Each successive method (A to E) is computationally more complex than the previous method. Method A makes no subdivision. Methods B and C define two and four geographic subregions, respectively. Method D uses cluster/discriminant analysis to define subregions on the basis of similarities in watershed characteristics. Method E, the new region of influence method, defines a unique subregion for each ungaged site. Split-sample results indicate that, in terms of root-mean-square error, method E (38 percent error) is best. Methods C and D (42 and 41 percent error) were in a virtual tie for second, and methods B (44 percent error) and A (49 percent error) were fourth and fifth best.
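The region of influence idea (method E) can be sketched as selecting, for each ungaged site, the gaged sites nearest to it in watershed-characteristic space. The site names, characteristic values and distance measure below are illustrative assumptions, not the study's actual regression inputs:

```python
import math

def region_of_influence(target, gaged_sites, k=3):
    """Pick the k gaged sites most similar to an ungaged target site,
    by Euclidean distance in (standardized) watershed-characteristic space.
    A regression would then be fitted using only these sites."""
    ranked = sorted(gaged_sites, key=lambda s: math.dist(s["chars"], target))
    return [s["name"] for s in ranked[:k]]

# Hypothetical sites with two standardized characteristics (e.g. area, slope).
sites = [
    {"name": "A", "chars": [1.0, 0.2]},
    {"name": "B", "chars": [0.9, 0.25]},
    {"name": "C", "chars": [5.0, 3.0]},
]
nearest = region_of_influence([1.0, 0.3], sites, k=2)
```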
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
Designing Class Methods from Dataflow Diagrams
NASA Astrophysics Data System (ADS)
Shoval, Peretz; Kabeli-Shani, Judith
A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support a user task. The components and the process logic of each transaction are described in detail using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions, but using a simple linear test function, were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. The two methods were tested on various patch test problems, and both passed the patch tests successfully. The methods were then applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two, the method with radial basis trial functions is very attractive, as it is simple, accurate, and robust.
Leapfrog variants of iterative methods for linear algebra equations
NASA Technical Reports Server (NTRS)
Saylor, Paul E.
1988-01-01
Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
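The leapfrog idea for Richardson's method can be written out directly: composing two steps of x_{k+1} = x_k + ω(b − A·x_k) gives x_{k+2} = x_k + ω(2·r_k − ω·A·r_k) with r_k = b − A·x_k, so only even-numbered iterates are formed. A minimal sketch on an illustrative 2×2 system (not from the paper):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def richardson_leapfrog(A, b, x, omega, steps):
    """Advance Richardson's iteration two steps at a time, so only
    even-numbered iterates are computed:
      x_{k+2} = x_k + omega * (2*r_k - omega * A r_k),  r_k = b - A x_k.
    Algebraically identical to two conventional Richardson steps."""
    for _ in range(steps):
        Ax = matvec(A, x)
        r = [bi - axi for bi, axi in zip(b, Ax)]
        Ar = matvec(A, r)
        x = [xi + omega * (2.0 * ri - omega * ari)
             for xi, ri, ari in zip(x, r, Ar)]
    return x

# Illustrative symmetric positive definite system with solution (1, 1);
# omega = 0.5 keeps the spectral radius of (I - omega*A) below 1.
A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]
x = richardson_leapfrog(A, b, [0.0, 0.0], omega=0.5, steps=50)
```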
Development of a Coordinate Transformation method for direct georeferencing in map projection frames
NASA Astrophysics Data System (ADS)
Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao
2013-03-01
This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). In particular, the orientation angles in the map projection frame are derived from a sequence of coordinate transformations. The effectiveness of the orientation-angle transformation was verified by comparison with DG results obtained from conventional methods (the Legat method and the POSPac method) using empirical data. Moreover, the CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and the auxiliary point coordinates by coordinate transformation. The three methods were demonstrated and compared using empirical data. Empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames. The DG accuracy of the CT-method and the Legat method is at the same level. The DG results of all three methods have systematic errors in height due to inconsistent length projection distortion in the vertical and horizontal components, and these errors can be significantly reduced using the EO height correction technique in Legat's approach. Similar to the results obtained with empirical data, the effectiveness of the CT-method was also demonstrated with simulated data. POSPac method: the method is presented in the Applanix POSPac software technical note (Hutton and Savina, 1997) and is implemented in the POSEO module of the POSPac software.
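The core of deriving orientation angles from a sequence of coordinate transformations can be illustrated with plain rotation-matrix algebra. The composition order below, R = Rz(kappa)·Ry(phi)·Rx(omega), is a common photogrammetric convention used here purely for illustration; it is not necessarily the exact convention of the CT-method:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def angles_from_matrix(R):
    """Recover (omega, phi, kappa) from R = Rz(kappa) @ Ry(phi) @ Rx(omega):
    R[2][0] = -sin(phi), R[2][1]/R[2][2] = tan(omega), R[1][0]/R[0][0] = tan(kappa)."""
    phi = math.asin(-R[2][0])
    omega = math.atan2(R[2][1], R[2][2])
    kappa = math.atan2(R[1][0], R[0][0])
    return omega, phi, kappa

# Round trip: compose a rotation from known angles, then extract them back.
R = matmul(rot_z(0.3), matmul(rot_y(0.2), rot_x(0.1)))
omega, phi, kappa = angles_from_matrix(R)
```

In the CT-method's setting, the composed matrix would additionally include the rotation between the local tangent frame and the map projection frame before the angles are extracted.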
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M.; Ma, L.Q.
1998-11-01
It is critical to compare existing sample digestion methods for evaluating soil contamination and remediation. USEPA Methods 3050, 3051, 3051a, and 3052 were used to digest standard reference materials and representative Florida surface soils. Fifteen trace metals (Ag, As, Ba, Be, Cd, Cr, Cu, Hg, Mn, Mo, Ni, Pb, Sb, Se, and Zn) and six macro elements (Al, Ca, Fe, K, Mg, and P) were analyzed. Precise analysis was achieved for all elements except for Cd, Mo, Se, and Sb in NIST SRMs 2704 and 2709 by USEPA Methods 3050 and 3051, and for all elements except for As, Mo, Sb, and Se in NIST SRM 2711 by USEPA Method 3052. No significant differences were observed for the three NIST SRMs between the microwave-assisted USEPA Methods 3051 and 3051a and the conventional USEPA Method 3050 except for Hg, Sb, and Se. USEPA Method 3051a provided comparable values for NIST SRMs certified using USEPA Method 3050. However, based on method correlation coefficients and elemental recoveries in 40 Florida surface soils, USEPA Method 3051a was an overall better alternative to Method 3050 than was Method 3051. Among the four digestion methods, the microwave-assisted USEPA Method 3052 achieved satisfactory recoveries for all elements except As and Mg using NIST SRM 2711. This total-total digestion method provided greater recoveries for 12 elements (Ag, Be, Cr, Fe, K, Mn, Mo, Ni, Pb, Sb, Se, and Zn), but lower recoveries for Mg, in Florida soils than did the total-recoverable digestion methods.
[Comparative analysis between diatom nitric acid digestion method and plankton 16S rDNA PCR method].
Han, Jun-ge; Wang, Cheng-bao; Li, Xing-biao; Fan, Yan-yan; Feng, Xiang-ping
2013-10-01
To compare and explore the application value of the diatom nitric acid digestion method and the plankton 16S rDNA PCR method for drowning identification. Forty drowning cases from 2010 to 2011 were collected from the Department of Forensic Medicine of Wenzhou Medical University. Samples including lung, kidney, liver and field water from each case were tested with the diatom nitric acid digestion method and the plankton 16S rDNA PCR method, respectively. The diatom nitric acid digestion method and the plankton 16S rDNA PCR method required 20 g and 2 g of each organ, and 15 mL and 1.5 mL of field water, respectively. The inspection time and detection rate were compared between the two methods. The diatom nitric acid digestion method mainly detected two groups of diatoms, Centricae and Pennatae, while the plankton 16S rDNA PCR method amplified a 162 bp band. The average inspection time per case was (95.30 +/- 2.78) min for the diatom nitric acid digestion method, less than the (325.33 +/- 14.18) min of the plankton 16S rDNA PCR method (P < 0.05). The detection rates of the two methods for field water and lung were both 100%. For liver and kidney, the detection rate of the plankton 16S rDNA PCR method was 80% for both, higher than the 40% and 30%, respectively, of the diatom nitric acid digestion method (P < 0.05). The laboratory testing method needs to be selected appropriately according to the specific circumstances in the forensic appraisal of drowning. Compared with the diatom nitric acid digestion method, the plankton 16S rDNA PCR method has practical value, with advantages including smaller sample quantities, richer information and high specificity.
Reliable clarity automatic-evaluation method for optical remote sensing images
NASA Astrophysics Data System (ADS)
Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen
2015-10-01
Image clarity, which reflects the degree of sharpness at the edges of objects in an image, is an important quality-evaluation index for optical remote sensing images. Researchers have done a great deal of work on the estimation of image clarity. At present, common clarity-estimation methods for digital images include frequency-domain function methods, statistical parametric methods, gradient function methods and edge acutance methods. The frequency-domain function method is an accurate clarity-measurement approach; however, its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to the clarity of images, but their results are easily affected by the complexity of the image content. The edge acutance method is an effective approach to clarity estimation, but it requires the edges to be picked out manually. Owing to these limitations in accuracy, consistency or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method, based on the principle of the edge acutance algorithm, is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically locate object edges in images, and the calculation of edge sharpness is improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
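A gradient-function clarity measure of the kind surveyed above can be sketched as the mean squared gradient magnitude over an image; larger values indicate sharper edges. The toy step-edge and ramp images below are illustrative, not the article's test data:

```python
def gradient_energy(img):
    """Mean squared gradient magnitude (a simple gradient-function clarity
    measure) using forward differences; `img` is a 2D list of grey levels."""
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            total += gx * gx + gy * gy
            n += 1
    return total / n

step_edge = [[0, 0, 255, 255]] * 4  # sharp vertical edge
ramp = [[0, 85, 170, 255]] * 4      # the same edge, blurred into a ramp
```

Note that squaring the gradients is what separates the two cases: a plain mean of absolute gradients would score the step and the ramp identically, since their end-point grey levels match.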
26 CFR 1.412(c)(1)-2 - Shortfall method.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 5 2013-04-01 2013-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...
26 CFR 1.412(c)(1)-2 - Shortfall method.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...
26 CFR 1.412(c)(1)-2 - Shortfall method.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 5 2014-04-01 2014-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...
26 CFR 1.412(c)(1)-2 - Shortfall method.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 5 2011-04-01 2011-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
40 CFR 60.547 - Test methods and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...
The Dramatic Methods of Hans van Dam.
ERIC Educational Resources Information Center
van de Water, Manon
1994-01-01
Interprets for the American reader the untranslated dramatic methods of Hans van Dam, a leading drama theorist in the Netherlands. Discusses the functions of drama as a method, closed dramatic methods, open dramatic methods, and applying van Dam's methods. (SR)
Methods for environmental change; an exploratory study
2012-01-01
Background While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change, especially how these are composed of methods for individual change (‘Bundling’) and how, within one environmental level, organizations, methods differ when directed at the management (‘At’) or applied by the management (‘From’). Methods The first part of this online survey dealt with examining the ‘bundling’ of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed ‘at’ an organization (for instance, by a health promoter) versus ‘from’ within an organization itself. All of the 20 respondents are experts in the field of health promotion. Results Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches to targeting a level and to being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. 
Conclusions Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method. PMID:23190712
Implementation of an improved adaptive-implicit method in a thermal compositional simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, T.B.
1988-11-01
A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation and an inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, improved CPU time by up to 28% over the fully implicit method.
Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P
2015-09-01
Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.
Green, Carla A.; Duan, Naihua; Gibbons, Robert D.; Hoagwood, Kimberly E.; Palinkas, Lawrence A.; Wisdom, Jennifer P.
2015-01-01
Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings. PMID:24722814
Bond additivity corrections for quantum chemistry methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. F. Melius; M. D. Allendorf
1999-04-01
In the 1980's, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between the DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
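The bond-wise additive part of a BAC correction can be sketched as adding a fixed, bond-type-dependent term to the raw calculated energy. The correction values and bond list below are made up for illustration and are not the published BAC-G2 or BAC-MP4 parameters:

```python
def bac_corrected_energy(e_raw, bonds, bond_corrections):
    """Apply bond-wise additive corrections: each bond in the molecule
    contributes a fixed, bond-type-dependent term to the raw energy."""
    return e_raw + sum(bond_corrections[b] for b in bonds)

# Hypothetical correction parameters (kcal/mol) and bond list, for illustration only.
corrections = {"C-H": -0.4, "C-C": -1.1, "O-H": -0.3}
e = bac_corrected_energy(-100.0, ["C-H", "C-H", "C-C"], corrections)
```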
Comparison of different methods to quantify fat classes in bakery products.
Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook
2013-01-15
The definition of fat differs in different countries; thus whether fat is listed on food labels depends on the country. Some countries list crude fat content in the 'Fat' section on the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method. The results using these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was gravimetrically determined and assessed by fat class using capillary gas chromatography (GC). In most samples, fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, monounsaturated fat or saturated fat content determined by the AOAC 996.06 method was lowest. Almost no difference was observed between fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), monounsaturated fat, saturated fat, and trans fat content obtained by the automated Soxhlet method was higher than that obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.
A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
2014-01-01
Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
NASA Astrophysics Data System (ADS)
Kot, V. A.
2017-11-01
The modern state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of first importance is considered. Integral methods have found wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, boundary-layer problems, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, thermal characterization of nanoparticles and nanofluids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman heat-balance integral method, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of the weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than that of numerical solutions.
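The flavor of these integral methods can be shown with the classic heat-balance integral for a semi-infinite solid held at a fixed surface temperature: assuming a quadratic profile T/Ts = (1 − x/δ)², the integrated heat equation gives a penetration depth δ(t) = √(12αt), and the resulting surface heat flux overestimates the exact (error-function) value by the constant factor √(π/3) ≈ 1.023. A minimal sketch of that calculation:

```python
import math

def goodman_delta(alpha, t):
    """Penetration depth for the quadratic profile T/Ts = (1 - x/delta)^2 in a
    semi-infinite solid at fixed surface temperature: delta = sqrt(12*alpha*t)."""
    return math.sqrt(12.0 * alpha * t)

def surface_flux_ratio(alpha, t):
    """Approximate surface flux q = 2*k*Ts/delta divided by the exact
    q = k*Ts/sqrt(pi*alpha*t); k and Ts cancel, leaving sqrt(pi/3)."""
    return (2.0 / goodman_delta(alpha, t)) * math.sqrt(math.pi * alpha * t)
```

The ratio is independent of alpha and t, which is why the heat-balance integral's few-percent error is uniform in time for this problem.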
Method for producing smooth inner surfaces
Cooper, Charles A.
2016-05-17
The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm.sup.2 scan area. The invention also provides a method for preparing superconducting cavities comprising causing polishing media bound to a carrier to tumble within the cavities, and a method comprising causing polishing media in a slurry to tumble within the cavities.
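The root-mean-square roughness criterion cited above is the RMS deviation of surface heights from their mean over the scan area. A minimal sketch on an illustrative 1-D height profile (the sample values are made up):

```python
def rms_roughness(heights):
    """Root-mean-square roughness Rq: the RMS deviation of measured surface
    heights from their mean, as in the < 15 nm specification."""
    mean = sum(heights) / len(heights)
    return (sum((h - mean) ** 2 for h in heights) / len(heights)) ** 0.5

# Illustrative profilometer height samples in nm (not measured data).
profile_nm = [3.0, -2.0, 1.0, -1.0, -1.0]
rq = rms_roughness(profile_nm)
```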
A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods
Tan, Hanqing; Fujita, Hiroshi
2013-01-01
This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region-growing methods, which require placing the initial contour near the final object boundary, suffer from leakage into tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the sensitivity of the level set method to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty in our method is the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared with five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and less false segmentation in pancreas extraction. PMID:24066016
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
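The traditional mono-exponential back-extrapolation step that the abstract contrasts with the model-based method can be sketched as follows (illustrative code with made-up sample times and a synthetic decay, not the authors' implementation):

```python
import numpy as np

def plasma_volume_backextrapolation(times, concentrations, dose):
    """Estimate plasma volume by mono-exponential back-extrapolation.

    Fits a line to log-concentration vs. time and extrapolates to the
    injection time (t = 0) to obtain the notional fully mixed
    concentration C0; plasma volume is then dose / C0."""
    slope, intercept = np.polyfit(times, np.log(concentrations), 1)
    c0 = np.exp(intercept)  # back-extrapolated concentration at t = 0
    return dose / c0

# Synthetic decay C(t) = 10 * exp(-0.25 t) mg/L with dose 25 mg -> PV = 2.5 L
t = np.array([2.0, 3.0, 4.0, 5.0])
c = 10.0 * np.exp(-0.25 * t)
pv = plasma_volume_backextrapolation(t, c, dose=25.0)  # ≈ 2.5 L
```

The paper's point is that real early-time indocyanine green kinetics deviate from this single exponential, which is why the traditional extrapolation underestimates plasma volume.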
Ross, John; Keesbury, Jill; Hardee, Karen
2015-01-01
ABSTRACT The method mix of contraceptive use is severely unbalanced in many countries, with over half of all use provided by just 1 or 2 methods. That tends to limit the range of user options and constrains the total prevalence of use, leading to unplanned pregnancies and births or abortions. Previous analyses of method mix distortions focused on countries where a single method accounted for more than half of all use (the 50% rule). We introduce a new measure that uses the average deviation (AD) of method shares around their own mean and apply that to a secondary analysis of method mix data for 8 contraceptive methods from 666 national surveys in 123 countries. A high AD value indicates a skewed method mix while a low AD value indicates a more uniform pattern across methods; the values can range from 0 to 21.9. Most AD values ranged from 6 to 19, with an interquartile range of 8.6 to 12.2. Using the AD measure, we identified 15 countries where the method mix has evolved from a distorted one to a better balanced one, with AD values declining, on average, by 35% over time. Countries show disparate paths in method gains and losses toward a balanced mix, but 4 patterns are suggested: (1) rise of one method partially offset by changes in other methods, (2) replacement of traditional with modern methods, (3) continued but declining domination by a single method, and (4) declines in dominant methods with increases in other methods toward a balanced mix. Regions differ markedly in their method mix profiles and preferences, raising the question of whether programmatic resources are best devoted to better provision of the well-accepted methods or to deploying neglected or new ones, or to a combination of both approaches. PMID:25745119
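The AD measure is simple to state in code. A minimal sketch for 8 method shares (in percent of total use), reproducing the 0-to-21.9 range quoted above:

```python
def average_deviation(shares):
    """Average deviation (AD) of contraceptive method shares (percent).

    AD is the mean absolute deviation of the method shares around their
    own mean. With 8 methods summing to 100%, AD ranges from 0 (a
    perfectly uniform mix) to about 21.9 (all use in a single method)."""
    mean = sum(shares) / len(shares)
    return sum(abs(s - mean) for s in shares) / len(shares)

uniform = [12.5] * 8        # perfectly balanced mix  -> AD = 0
skewed = [100] + [0] * 7    # single-method dominance -> AD ≈ 21.9
```

A high AD flags a skewed mix; declining AD over time is the signal the authors use to identify countries whose mix has become better balanced.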
Wan, Xiaomin; Peng, Liubao; Li, Yuanjian
2015-01-01
In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods, 1) the least squares method and 2) the graphical method; and two recently proposed methods, by 3) Hoyle and Henley and 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, greater bias was identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their substantial overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.
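As a sketch of the traditional least-squares approach and the final mean-survival step, the code below fits a Weibull to points read off a (here synthetic) published survival curve via the log(-log S) linearization, then converts the fitted parameters to a mean survival time. This illustrates the general technique, not the exact procedures compared in the study:

```python
import math
import numpy as np

def fit_weibull_least_squares(times, survival):
    """Least-squares Weibull fit to digitized survival-curve points:
    for S(t) = exp(-(t/scale)**shape), log(-log S) is linear in log t."""
    y = np.log(-np.log(survival))
    slope, intercept = np.polyfit(np.log(times), y, 1)
    shape = slope
    scale = np.exp(-intercept / shape)
    return shape, scale

def weibull_mean_survival(shape, scale):
    """Mean of a Weibull distribution: scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

# Synthetic "digitized" points from a Weibull with shape 1.5, scale 10
t = np.array([2.0, 4.0, 6.0, 8.0, 12.0])
s = np.exp(-(t / 10.0) ** 1.5)
k, lam = fit_weibull_least_squares(t, s)
mean_surv = weibull_mean_survival(k, lam)  # ≈ 10 * Γ(1 + 1/1.5) ≈ 9.03
```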
Achieving cost-neutrality with long-acting reversible contraceptive methods.
Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna
2015-01-01
This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it aimed to also quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were average annual cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. The two least expensive methods were the copper IUD ($304 per woman per year) and LNG-IUS 20 mcg/24 h ($308). Cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy.
This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.
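The cost-neutrality logic reduces to a simple break-even calculation. The figures below are hypothetical round numbers, not the study's inputs:

```python
def breakeven_years(larc_upfront, larc_annual, sarc_annual):
    """Years of LARC use needed before cumulative LARC cost drops below
    cumulative SARC cost. Cumulative LARC cost = upfront + annual * t;
    cumulative SARC cost = annual * t; break-even where they are equal."""
    return larc_upfront / (sarc_annual - larc_annual)

# Hypothetical: $900 LARC insertion, $50/yr ongoing, vs $500/yr SARC
years = breakeven_years(larc_upfront=900.0, larc_annual=50.0,
                        sarc_annual=500.0)  # 2.0 years
```

The study's richer three-state model additionally folds in failure (unintended pregnancy) costs and discontinuation, but the break-even structure is the same.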
Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong
2014-10-01
In order to detect the oil yield of oil shale in situ using portable near-infrared spectroscopy, modeling and analysis methods for in-situ detection were studied with 66 rock core samples from well No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. Using 4 different modeling-data optimization methods: principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variable elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD; 2 modeling methods: partial least squares (PLS) and back-propagation artificial neural network (BPANN); and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analytical precision of the model. Whether or not an optimization method is used, reflectance or the K-M function is the proper spectrum format of the modeling database for the two modeling methods. With the two modeling methods and the four data optimization methods, the model precisions for the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the 3 spectrum formats. In addition, using the reflectance spectra and the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method.
The model built with reflectance spectra, the UVE optimization method and the BPANN modeling method achieves the highest analytical precision: its correlation coefficient (Rp) is 0.92 and its standard error of prediction (SEP) is 0.69%.
Hammack, Thomas S; Valentin-Bon, Iris E; Jacobson, Andrew P; Andrews, Wallace H
2004-05-01
Soak and rinse methods were compared for the recovery of Salmonella from whole cantaloupes. Cantaloupes were surface inoculated with Salmonella cell suspensions and stored for 4 days at 2 to 6 degrees C. Cantaloupes were placed in sterile plastic bags with a nonselective preenrichment broth at a 1:1.5 cantaloupe weight-to-broth volume ratio. The cantaloupe broths were shaken for 5 min at 100 rpm after which 25-ml aliquots (rinse) were removed from the bags. The 25-ml rinses were preenriched in 225-ml portions of the same uninoculated broth type at 35 degrees C for 24 h (rinse method). The remaining cantaloupe broths were incubated at 35 degrees C for 24 h (soak method). The preenrichment broths used were buffered peptone water (BPW), modified BPW, lactose (LAC) broth, and Universal Preenrichment (UP) broth. The Bacteriological Analytical Manual Salmonella culture method was compared with the following rapid methods: the TECRA Unique Salmonella method, the VIDAS ICS/SLM method, and the VIDAS SLM method. The soak method detected significantly more Salmonella-positive cantaloupes (P < 0.05) than did the rinse method: 367 Salmonella-positive cantaloupes of 540 test cantaloupes by the soak method and 24 Salmonella-positive cantaloupes of 540 test cantaloupes by the rinse method. Overall, BPW, LAC, and UP broths were equivalent for the recovery of Salmonella from cantaloupes. Both the VIDAS ICS/SLM and TECRA Unique Salmonella methods detected significantly fewer Salmonella-positive cantaloupes than did the culture method: the VIDAS ICS/SLM method detected 23 of 50 Salmonella-positive cantaloupes (60 tested) and the TECRA Unique Salmonella method detected 16 of 29 Salmonella-positive cantaloupes (60 tested). The VIDAS SLM and culture methods were equivalent: both methods detected 37 of 37 Salmonella-positive cantaloupes (60 tested).
Temperature Profiles of Different Cooling Methods in Porcine Pancreas Procurement
Weegman, Brad P.; Suszynski, Thomas M.; Scott, William E.; Ferrer, Joana; Avgoustiniatos, Efstathios S.; Anazawa, Takayuki; O’Brien, Timothy D.; Rizzari, Michael D.; Karatzas, Theodore; Jie, Tun; Sutherland, David ER.; Hering, Bernhard J.; Papas, Klearchos K.
2014-01-01
Background Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. Methods This study examines the effect of 4 different cooling methods on core porcine pancreas temperature (n=24) and histopathology (n=16). All methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all 3 cooling methods. Results Surface cooling alone (Method A) gradually decreased core pancreas temperature to < 10 °C after 30 minutes. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature by 15–20 °C within the first 2 minutes of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between methods (p=0.36). Histological scores differed between the cooling methods (p=0.02) and were worst with Method A. There were differences in histological scores between Methods A and C (p=0.02) and Methods A and D (p=0.02), but not between Methods C and D (p=0.95), which may highlight the importance of early cooling using an intraductal infusion. Conclusions In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata.
Additional cooling with an intravascular flush and intraductal infusion improves both core porcine pancreas temperature profiles during procurement and histopathology scores. These data may also have implications for human pancreas procurement, since use of an intraductal infusion is not common practice. PMID:25040217
Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon
2017-01-01
In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility. PMID:28187177
Estimating Tree Height-Diameter Models with the Bayesian Method
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinctive advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands for the predicted values in comparison with the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
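To illustrate the core Bayesian point, that an informative prior narrows the credible band relative to a flat prior (whose posterior essentially matches the classical estimate), here is a deliberately simplified conjugate-normal sketch with a linear stand-in for the paper's nonlinear height-diameter models; the data and prior values are made up:

```python
import numpy as np

def posterior_slope(d, h, sigma=1.0, prior_mean=0.0, prior_sd=1e6):
    """Conjugate normal posterior for the slope b in H = 1.3 + b * D,
    with known noise sd. Returns (posterior mean, posterior sd).
    A small prior_sd encodes an informative prior; a huge prior_sd
    gives a near-flat prior whose posterior matches least squares."""
    x = np.asarray(d, dtype=float)
    y = np.asarray(h, dtype=float) - 1.3   # subtract breast height offset
    prec = 1.0 / prior_sd**2 + np.sum(x * x) / sigma**2
    mean = (prior_mean / prior_sd**2 + np.sum(x * y) / sigma**2) / prec
    return float(mean), float(1.0 / np.sqrt(prec))

# Noise-free toy data with true slope 0.6 (diameters in cm, heights in m)
d = [10.0, 15.0, 20.0, 25.0, 30.0]
h = [1.3 + 0.6 * x for x in d]
m_flat, sd_flat = posterior_slope(d, h)                              # ~classical
m_inf, sd_inf = posterior_slope(d, h, prior_mean=0.6, prior_sd=0.05) # informative
```

With the informative prior the posterior standard deviation is strictly smaller, mirroring the narrower credible bands reported in the abstract.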
Wong, M S; Cheng, J C Y; Lo, K H
2005-04-01
The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, twenty for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. An initial control of apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle or apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide similar results in the initial stage of treatment compared with the manual method.
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
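A minimal example of the resampling-with-replacement idea the review describes: a percentile bootstrap confidence interval for a mean (an illustrative sketch, not tied to any particular point-pattern statistic):

```python
import random
import statistics

def bootstrap_ci_mean(data, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean: resample the data with
    replacement n_boot times and take empirical quantiles of the
    resampled means. Distribution-free, as the review emphasizes."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy sample with mean 10.5; the interval brackets the sample mean
lo, hi = bootstrap_ci_mean(list(range(1, 21)))
```

The same resampling loop applies to any scalar summary statistic of replicated point patterns; only the statistic computed per resample changes.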
Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study.
Christensen, Tina; Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M
2017-03-01
The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). 
The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. ©Tina Christensen, Anders H Riis, Elizabeth E Hatch, Lauren A Wise, Marie G Nielsen, Kenneth J Rothman, Henrik Toft Sørensen, Ellen M Mikkelsen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.03.2017.
Ouyang, Ying; Mansell, Robert S; Nkedi-Kizza, Peter
2004-01-01
A high performance liquid chromatography (HPLC) method with UV detection was developed to analyze paraquat (1,1'-dimethyl-4,4'-dipyridinium dichloride) herbicide content in soil solution samples. The analytical method was compared with the liquid scintillation counting (LSC) method using 14C-paraquat. Agreement obtained between the two methods was reasonable. However, the detection limit for paraquat analysis was 0.5 mg L(-1) by the HPLC method and 0.05 mg L(-1) by the LSC method. The LSC method was, therefore, 10 times more precise than the HPLC method for solution concentrations less than 1 mg L(-1). In spite of the high detection limit, the UV (nonradioactive) HPLC method provides an inexpensive and environmentally safe means for determining paraquat concentration in soil solution compared with the 14C-LSC method.
Hybrid finite element and Brownian dynamics method for diffusion-controlled reactions.
Bauler, Patricia; Huber, Gary A; McCammon, J Andrew
2012-04-28
Diffusion is often the rate determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. This paper proposes a new hybrid diffusion method that couples the strengths of each of these two methods. The method is derived for a general multidimensional system, and is presented using a basic test case for 1D linear and radially symmetric diffusion systems.
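The stochastic half of such a hybrid can be sketched on its own: a Brownian dynamics ensemble whose mean-squared displacement should approach the continuum result 2Dt that a finite element solver would reproduce (parameter values below are arbitrary):

```python
import random

def msd_1d_brownian(n_particles=20000, n_steps=50, dt=0.01, D=1.0, seed=1):
    """Brownian dynamics estimate of the 1D mean-squared displacement.

    Each step draws a Gaussian increment with variance 2*D*dt; the
    ensemble MSD after time t = n_steps*dt should approach the
    continuum diffusion result 2*D*t."""
    rng = random.Random(seed)
    sigma = (2.0 * D * dt) ** 0.5
    total = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
        total += x * x
    return total / n_particles

msd = msd_1d_brownian()  # t = 0.5, so the continuum prediction is 2*D*t = 1.0
```

A hybrid scheme in the spirit of the paper would run such trajectories only where particle-level detail matters and hand the flux to a continuum (finite element) description elsewhere.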
Noh, Jaesung; Lee, Kun Mo
2003-05-01
A relative significance factor (f(i)) of an impact category is the external weight of the impact category. The objective of this study is to propose a systematic and easy-to-use method for the determination of f(i). Multiattribute decision-making (MADM) methods including the analytical hierarchy process (AHP), the rank-order centroid method, and the fuzzy method were evaluated for this purpose. The results and practical aspects of using the three methods are compared. Each method shows the same trend, with minor differences in the value of f(i). Thus, all three methods can be applied to the determination of f(i). The rank-order centroid method reduces the number of pairwise comparisons by placing the alternatives in order, although it has an inherent weakness relative to the fuzzy method in expressing the degree of vagueness associated with assigning weights to criteria and alternatives. The rank-order centroid method is considered a practical method for the determination of f(i) because it is easier and simpler to use compared to the AHP and the fuzzy method.
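The rank-order centroid weights follow mechanically from the ranking alone, which is why the method needs fewer pairwise comparisons than the AHP:

```python
def rank_order_centroid(n):
    """Rank-order centroid weights for n criteria ranked from most to
    least important: w_i = (1/n) * sum_{k=i}^{n} 1/k. Only the ordering
    is elicited from the decision maker; the weights are then fixed."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

# For 3 impact categories: [11/18, 5/18, 1/9] ≈ [0.611, 0.278, 0.111]
weights = rank_order_centroid(3)
```

The weights always sum to 1 and decrease with rank, giving a quick f(i) once the impact categories are ordered by importance.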
Zenita, O.; Basavaiah, K.
2011-01-01
Two titrimetric and two spectrophotometric methods are described for the assay of famotidine (FMT) in tablets using N-bromosuccinimide (NBS). The first titrimetric method is direct, in which FMT is titrated directly with NBS in HCl medium using methyl orange as indicator (method A). The remaining three methods are indirect, in which the unreacted NBS is determined after the complete reaction between FMT and NBS by iodometric back titration (method B) or by reacting with a fixed amount of either indigo carmine (method C) or neutral red (method D). Methods A and B are applicable over the ranges of 2–9 mg and 1–7 mg, respectively. In the spectrophotometric methods, Beer's law is obeyed over the concentration ranges of 0.75–6.0 μg mL−1 (method C) and 0.3–3.0 μg mL−1 (method D). The applicability of the developed methods was demonstrated by the determination of FMT in the pure drug as well as in tablets. PMID:21760785
Twostep-by-twostep PIRK-type PC methods with continuous output formulas
NASA Astrophysics Data System (ADS)
Cong, Nguyen Huu; Xuan, Le Ngoc
2008-11-01
This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) yield a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), the parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods), and the sequential explicit RK codes DOPRI5 and DOP853 available in the literature.
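The predict-then-iterate structure underlying PC methods can be shown with a simple sequential scalar analogue: an explicit Euler predictor followed by a fixed number of trapezoidal corrector iterations. The PIRK-type methods apply the same idea, in parallel, to the stage values of a collocation RK corrector; this sketch only illustrates the shared PC pattern:

```python
def pc_trapezoid(f, t0, y0, t_end, h, n_corr=3):
    """Predictor-corrector integration of y' = f(t, y):
    explicit Euler predictor, then n_corr fixed-point iterations of the
    implicit trapezoidal corrector per step."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        fy = f(t, y)
        yp = y + h * fy                           # predictor: explicit Euler
        for _ in range(n_corr):                   # corrector iterations
            yp = y + 0.5 * h * (fy + f(t + h, yp))
        t, y = t + h, yp
    return y

# y' = -y, y(0) = 1; the exact value at t = 1 is e**-1 ≈ 0.3679
approx = pc_trapezoid(lambda t, y: -y, 0.0, 1.0, 1.0, 0.05)
```

In the parallel setting, each corrector iteration evaluates all RK stages concurrently, which is where the speedup over sequential codes such as DOPRI5 comes from.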
Pérez de Isla, Leopoldo; Casanova, Carlos; Almería, Carlos; Rodrigo, José Luis; Cordeiro, Pedro; Mataix, Luis; Aubele, Ada Lia; Lang, Roberto; Zamorano, José Luis
2007-12-01
Several studies have shown wide variability among different methods of determining the valve area in patients with rheumatic mitral stenosis. Our aim was to evaluate whether 3D-echo planimetry is more accurate than the Gorlin method for measuring the valve area. Twenty-six patients with mitral stenosis underwent 2D and 3D echocardiographic examinations and catheterization. Valve area was estimated by different methods. The median of the mitral valve area measurements from three classical non-invasive methods (2D planimetry, pressure half-time, and the PISA method) was used as the reference, and it was compared with 3D-echo planimetry and Gorlin's method. Our results showed that 3D-echo planimetry is more accurate than the Gorlin method for the assessment of mitral valve area. We should keep in mind that 3D-echo planimetry may be a better reference method than the Gorlin method for assessing the severity of rheumatic mitral stenosis.
Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza
2018-01-01
The aim of this study was to evaluate and compare the analytical performance characteristics of two creatinine methods, one based on the Jaffe reaction and one enzymatic. The two original creatinine methods, Jaffe and enzymatic, were evaluated on an Architect c16000 automated analyzer for limit of detection (LOD), limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. The method comparison and bias estimation using patient samples according to the CLSI guideline were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were determined as 0.1 mg/dL for both serum methods, and as 0.25 and 0.07 mg/dL for the Jaffe and enzymatic urine methods, respectively. The LOQs were similar, at 0.05 mg/dL, for both serum methods, and the enzymatic urine method had a lower LOQ than the Jaffe urine method (0.5 and 2 mg/dL, respectively). Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision data were within desirable levels for both methods. High correlations were found between the two methods in serum and urine (r = .9994 and r = .9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements for routine use. However, the enzymatic method showed better performance at low creatinine levels. © 2017 Wiley Periodicals, Inc.
Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M
2012-07-01
Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood culture has become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. The objective was to compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood cultures from cases of sepsis were analyzed using two blood culture methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional method using trypticase soy broth and the lysis centrifugation method using saponin, with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable with the conventional method with respect to Gram-negative bacilli. In this study, the sensitivity of the lysis centrifugation method relative to the conventional blood culture method was 49.75%, the specificity was 98.21%, and the diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, a difference that was statistically significant (P < 0.001). Contamination by lysis centrifugation was minimal, while that by the conventional method was high. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
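The sensitivity, specificity, and diagnostic accuracy figures quoted above follow from standard confusion-matrix arithmetic; a minimal sketch (the counts in the usage example are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and diagnostic accuracy of a test
    method judged against a reference method, computed from
    confusion-matrix counts (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For example, hypothetical counts tp=40, fp=2, tn=110, fn=48 give an accuracy of (40+110)/200 = 0.75.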
Amin, Alaa S.; Kassem, Mohammed A.
2012-01-01
Aim and Background: Three simple, accurate, and sensitive spectrophotometric methods were developed for the determination of finasteride in pure form, in dosage forms, and in biological samples, in the presence of its oxidative degradates. Materials and Methods: These methods are indirect: a known excess of oxidant in acid medium (potassium permanganate for method A, ceric sulfate [Ce(SO4)2] for method B, and N-bromosuccinimide (NBS) for method C) is added to finasteride, and the unreacted oxidant is determined from the decrease in absorbance of methylene blue for method A, chromotrope 2R for method B, and amaranth for method C, at λmax of 663, 528, and 520 nm, respectively. The reaction conditions for each method were optimized. Results: Regression analysis of the Beer's-law plots showed good correlation in the concentration ranges of 0.12–3.84 μg mL–1 for method A, 0.12–3.28 μg mL–1 for method B, and 0.14–3.56 μg mL–1 for method C. The apparent molar absorptivity, Sandell sensitivity, and detection and quantification limits were evaluated. The stoichiometric ratio between finasteride and the oxidant was estimated. The validity of the proposed methods was tested by analyzing dosage forms and biological samples containing finasteride, with relative standard deviation ≤ 0.95. Conclusion: The proposed methods could successfully determine the studied drug in the presence of varying excesses of its oxidative degradation products, with recoveries between 99.0 and 101.4, 99.2 and 101.6, and 99.6 and 101.0% for methods A, B, and C, respectively. PMID:23781478
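The Beer's-law regression underlying methods like these is an ordinary least-squares line of absorbance against concentration; a minimal sketch (function name and the data in the example are illustrative, not the paper's values):

```python
import numpy as np

def calibration_line(conc, absorbance):
    """Least-squares Beer's-law calibration line A = m*c + b, returning
    the slope, intercept, and correlation coefficient. An unknown is
    then quantified from its absorbance as c = (A - b) / m."""
    m, b = np.polyfit(conc, absorbance, 1)
    r = np.corrcoef(conc, absorbance)[0, 1]
    return m, b, r
```

A correlation coefficient near 1 over the working range is what "good correlation in the concentration ranges" refers to.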
John Butcher and hybrid methods
NASA Astrophysics Data System (ADS)
Mehdiyeva, Galina; Imanova, Mehriban; Ibrahimov, Vagif
2017-07-01
As is known, there are mainly two classes of numerical methods for solving ODEs, commonly called one-step and multistep methods. Each of these classes has certain advantages and disadvantages, so it is natural to construct, at the junction of the two, methods that combine their better properties. In the middle of the XX century, Butcher and Gear constructed such methods at the junction of the Runge-Kutta and Adams methods, which are called hybrid methods. Here we consider the construction of certain generalizations of hybrid methods with high order of accuracy and explore their application to solving ordinary differential, Volterra integral, and integro-differential equations. We have also constructed some specific hybrid methods with degree p ≤ 10.
Critical study of higher order numerical methods for solving the boundary-layer equations
NASA Technical Reports Server (NTRS)
Wornom, S. F.
1978-01-01
A fourth-order box method is presented for calculating numerical solutions of parabolic partial differential equations in two variables or ordinary differential equations. The method, the natural extension of the second-order box scheme to fourth order, is demonstrated by application to the incompressible laminar and turbulent boundary-layer equations. The efficiency of the present method is compared with that of two-point and three-point higher-order methods, namely the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three-point spline method, and a modified finite-element method. For equivalent accuracy, numerical results show the present method to be more efficient than the other higher-order methods for both laminar and turbulent flows.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short-term load forecasting. The new method is based on an optimal template-temperature match between future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting this method can yield results as good as the rather complicated Box-Jenkins transfer function method and better than the Box-Jenkins method; for peak load prediction, it is comparable in accuracy to a neural network with back propagation and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in the method.
[Isolation and identification methods of enterobacteria group and its technological advancement].
Furuta, Itaru
2007-08-01
In the last half-century, isolation and identification methods for enterobacteria have markedly improved through technological advancement. Clinical microbiology testing has changed over time from tube methods to commercial identification kits and automated identification. Tube methods are the original methods for the identification of enterobacteria and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as carbohydrate utilization, indole, methyl red, citrate, and urease tests. Commercial identification kits and automated instruments with computer-based analysis are also discussed as current methods; these provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be developed further.
Comparison of three commercially available fit-test methods.
Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J
2002-01-01
American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.
A Tale of Two Methods: Chart and Interview Methods for Identifying Delirium
Saczynski, Jane S.; Kosar, Cyrus M.; Xu, Guoquan; Puelle, Margaret R.; Schmitt, Eva; Jones, Richard N.; Marcantonio, Edward R.; Wong, Bonnie; Isaza, Ilean; Inouye, Sharon K.
2014-01-01
Background: Interview and chart-based methods for identifying delirium have been validated. However, the relative strengths and limitations of each method have not been described, nor has a combined approach (using both interview and chart) been systematically examined. Objectives: To compare chart- and interview-based methods for identification of delirium. Design, Setting and Participants: Participants were 300 patients aged 70+ undergoing major elective surgery (the majority orthopedic surgery) who were interviewed daily during hospitalization for delirium using the Confusion Assessment Method (CAM; interview-based method) and whose medical charts were reviewed for delirium using a validated chart-review method (chart-based method). We examined the rate of agreement between the two methods and the characteristics of the patients identified by each approach. Predictive validity for clinical outcomes (length of stay, postoperative complications, discharge disposition) was compared. In the absence of a gold standard, predictive value could not be calculated. Results: The cumulative incidence of delirium was 23% (n = 68) by the interview-based method, 12% (n = 35) by the chart-based method, and 27% (n = 82) by the combined approach. Overall agreement was 80%; kappa was 0.30. The methods differed in detection of psychomotor features and time of onset. The chart-based method missed delirium in CAM-identified patients lacking features of psychomotor agitation or inappropriate behavior. The CAM-based method missed chart-identified cases occurring during the night shift. The combined method had high predictive validity for all clinical outcomes. Conclusions: Interview- and chart-based methods have specific strengths for identification of delirium. A combined approach captures the largest number and the broadest range of delirium cases. PMID:24512042
Inventory Management for Irregular Shipment of Goods in Distribution Centre
NASA Astrophysics Data System (ADS)
Takeda, Hitoshi; Kitaoka, Masatoshi; Usuki, Jun
2016-01-01
The shipping amounts of commodity goods (foods, confectionery, dairy products, cosmetics, pharmaceutical products, and the like) change irregularly at a distribution center dealing with general consumer goods. Because the shipment times and amounts are irregular, demand forecasting becomes very difficult, and inventory control becomes difficult as well; conventional inventory control methods cannot be applied to the shipment of such commodities. This paper proposes an inventory control method based on the cumulative flow curve, which is used to decide the order quantity. Three forecasting methods are proposed: (1) a power method, (2) a polynomial method, and (3) a revised Holt's linear method, a variant of exponential smoothing that forecasts data with trends. The paper compares the economics of the conventional method, which is managed by experienced staff, with the three newly proposed methods, and the effectiveness of the proposed methods is verified by numerical calculations.
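Of the three forecasting options, the revised Holt's linear method builds on a standard technique; a minimal sketch of plain Holt's linear (double) exponential smoothing, on which that revision is based (smoothing parameter values are illustrative, and the paper's revision is not reproduced here):

```python
def holts_linear_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear method (double exponential smoothing) for data
    with a trend. alpha smooths the level, beta smooths the trend;
    the h-step-ahead forecast is level + h * trend."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        # Update the level toward the new observation.
        level = alpha * x + (1 - alpha) * (level + trend)
        # Update the trend toward the latest level change.
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend
```

On a perfectly linear series the method reproduces the trend exactly, which makes it a reasonable baseline before handling the irregular-shipment behavior the paper addresses.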
Computational Methods in Drug Discovery
Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens
2014-01-01
Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information to predict activity based on similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign, are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed, with successful examples from the literature. PMID:24381236
[Primary culture of human normal epithelial cells].
Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun
2017-11-28
The traditional primary culture methods for normal human epithelial cells suffer from low activity of the cultured cells, low success rates, and complicated operation. To solve these problems, researchers have studied the culture process of normal human primary epithelial cells extensively. In this paper, we mainly introduce methods used for the separation and purification of normal human epithelial cells, such as the tissue separation method, enzyme digestion method, mechanical brushing method, red blood cell lysis method, and Percoll density gradient separation method. We also review methods used in culture and subculture, including serum-free medium combined with low-mass-fraction serum culture, mouse tail collagen coating, and glass culture bottles combined with plastic culture dishes. The biological characteristics of normal human epithelial cells and the methods of immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting culture are summarized: aseptic operation, the conditions of the extracellular environment during culture, the number of differential adhesions, and the selection and dosage of additives.
A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization
Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao
2016-01-01
The Scalar Triangulation and Ranging (STAR) method, which is based on the unique properties of magnetic gradient contraction, is a real-time ferromagnetic target localization method. Only one measurement point is required in the STAR method, and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by asphericity errors, and an inaccurate position estimate leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed in which the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate, which meets the requirement of real-time localization. Simulations and field experiments were performed to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than those of the traditional STAR method. PMID:27999322
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
Robust numerical solution of the reservoir routing equation
NASA Astrophysics Data System (ADS)
Fiorentini, Marcello; Orlandini, Stefano
2013-09-01
The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed-order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive, and it does not overcome the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that the water level remains inside the domains of the storage function and the outflow rating curve. Incorporating a simple backstepping procedure implementing this control into method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A
2012-03-15
To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
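LASSO's variable selection comes from its l1 penalty, which drives small coefficients exactly to zero. A minimal numpy sketch of LASSO fitted by cyclic coordinate descent for a linear model (the study used it within logistic NTCP models under a repeated cross-validation scheme; this simplified linear version and all names are illustrative):

```python
import numpy as np

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """LASSO by cyclic coordinate descent with soft-thresholding.
    Minimizes (1/2n)||y - Xb||^2 + lam * ||b||_1; features are assumed
    roughly standardized. Sparsity in b performs variable selection."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            # Soft-threshold: small correlations are zeroed out.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b
```

When only one predictor truly matters, the fit shrinks its coefficient slightly and sets the irrelevant coefficients to zero, which is the interpretability advantage the abstract notes over BMA.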
NASA Astrophysics Data System (ADS)
Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.
2014-10-01
In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category, see [18]-[40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.
O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon
2007-01-01
Background Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study – often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Methods Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. Results 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods – particularly surveys and individual interviews – but used methods in a wide range of roles. Conclusion Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. 
Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations. PMID:17570838
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor-Pashow, K.; Fondeur, F.; White, T.
Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two, methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising and was selected for further development as the primary method. ¹H NMR also showed promising results in the screening experiments, and this method was selected for further development as the secondary method. Other methods, including ³⁶Cl radiocounting and ion chromatography, also showed promise; however, due to their similarity to the primary method (titration) and their inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, ¹H NMR was selected over these methods. Analysis of radioactive samples obtained from real-waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. ¹H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify it using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.
Issa, M. M.; Nejem, R. M.; El-Abadla, N. S.; Al-Kholy, M.; Saleh, Akila. A.
2008-01-01
A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron (III) (method I) and the oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron (II) then reacts with potassium ferricyanide to form a Prussian blue color with maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron (III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron (II) at 302.1 nm. The reactions were spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0, and 0.1-1.0 μg/ml for method I, method II, and the atomic absorption spectrometric method, respectively. High sensitivities were recorded for methods I and II and the atomic absorption spectrometric method: 0.05, 0.022, and 0.012 μg/ml, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied in a pharmacokinetic study using salivary samples from normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%. PMID:20046743
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method in which the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate parameter sensitivities.
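When the optimizer is treated as a black box, the simplest baseline for parameter sensitivities is a differencing formula around the re-solved optimum; a minimal central-difference sketch (the `solve` interface is illustrative, and this baseline ignores the function-evaluation savings that the RQP-based approach targets):

```python
def parameter_sensitivity(solve, p, dp=1e-4):
    """Central-difference estimate of d(x*)/dp, where solve(p) returns
    the optimizer x* of the underlying problem at parameter value p.
    Each sensitivity costs two extra solves, one per perturbation."""
    x_plus = solve(p + dp)
    x_minus = solve(p - dp)
    return (x_plus - x_minus) / (2 * dp)
```

For instance, if the solved optimum happens to be x*(p) = p^2, the estimated sensitivity at p = 3 is 6, matching the analytic derivative; the cost of the two extra solves per parameter is what motivates the cheaper RQP-based estimates in the paper.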
Rowlands, J A; Hunter, D M; Araj, N
1991-01-01
A new digital image readout method for electrostatic charge images on photoconductive plates is described. The method can be used to read out images on selenium plates similar to those used in xeromammography. The readout method, called the air-gap photoinduced discharge method (PID), discharges the latent image pixel by pixel and measures the charge. The PID readout method, like electrometer methods, is linear. However, the PID method permits much better resolution than scanning electrometers while maintaining quantum limited performance at high radiation exposure levels. Thus the air-gap PID method appears to be uniquely superior for high-resolution digital imaging tasks such as mammography.
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
A comparative study of interface reconstruction methods for multi-material ALE simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kucharik, Milan; Garimella, Rao; Schofield, Samuel
2009-01-01
In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.
Digital photography and transparency-based methods for measuring wound surface area.
Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh
2013-04-01
To compare and determine a credible method of measurement of wound surface area by linear, transparency, and photographic methods for monitoring progress of wound healing accurately, and to ascertain whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by these three methods (linear, transparency, and photographic) simultaneously on alternate days. The linear method is statistically significantly different from the transparency and photographic methods (P value <0.05), but there is no significant difference between the transparency and photographic methods (P value >0.05). The photographic and transparency methods provided measurements of wound surface area with equivalent results, and there was no statistically significant difference between these two methods.
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
[An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].
Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang
2014-07-01
Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra, intended to enhance the ability to resolve overlapping peaks and to improve adaptivity. We introduced the ridge peak detection method based on the continuous wavelet transform to LIBS, discussed the choice of the mother wavelet, and optimized the scale factor and the shift factor. The method also improves on ridge peak detection with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in the ability to distinguish overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
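SciPy ships a ridge-based CWT peak finder in the same spirit as the method described (though not the paper's corrected-ridge variant); a small sketch on a synthetic two-line spectrum, where the widths range plays the role of the scale factor:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

x = np.arange(200)
# synthetic spectrum: two partially overlapping Gaussian lines at channels 80 and 100
y = (np.exp(-((x - 80) ** 2) / (2 * 5 ** 2))
     + 0.8 * np.exp(-((x - 100) ** 2) / (2 * 5 ** 2)))

# widths should span the expected line widths; ridges persisting
# across scales mark true peaks and suppress narrow noise spikes
peaks = find_peaks_cwt(y, widths=np.arange(2, 12))
print(peaks)  # indices near the two line centres
```

The multi-scale ridge criterion is what gives CWT-based detection its robustness to background and its ability to separate overlapping lines, compared with simple derivative or threshold tests.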
A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis
Kang, Mengjun
2015-01-01
A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691
Anatomically-aided PET reconstruction using the kernel method
NASA Astrophysics Data System (ADS)
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
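The kernel formulation can be sketched on a toy problem (all dimensions, the 1-D stand-in "anatomical" feature, and the random system matrix below are invented for illustration): the image is parameterized as x = K·alpha, where K is a similarity kernel built from anatomical features, and the standard ML-EM multiplicative update is applied to the coefficients alpha instead of the voxels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 16, 32

# 1-D stand-in for an anatomical feature image; Gaussian-similarity kernel matrix
feat = np.linspace(0.0, 1.0, n_pix)
K = np.exp(-((feat[:, None] - feat[None, :]) ** 2) / (2 * 0.1 ** 2))
K /= K.sum(axis=1, keepdims=True)          # row-normalise the kernel

A = rng.uniform(0.0, 1.0, (n_det, n_pix))  # toy projection (system) matrix
x_true = np.where(feat > 0.5, 4.0, 1.0)    # activity aligned with the feature
y = A @ x_true                             # noiseless projection data for the sketch

alpha = np.ones(n_pix)                     # kernel coefficients, image is x = K @ alpha
sens = K.T @ (A.T @ np.ones(n_det))        # sensitivity term K^T A^T 1
res0 = np.linalg.norm(A @ (K @ alpha) - y)
for _ in range(200):                       # ML-EM update on the coefficients
    ybar = A @ (K @ alpha)
    alpha *= (K.T @ (A.T @ (y / ybar))) / sens
x_hat = K @ alpha
res = np.linalg.norm(A @ x_hat - y)
```

Because the update acts on alpha through K, the reconstruction is regularized by anatomical similarity without any segmentation or penalty term, which is why the method stays within the plain ML formulation and remains amenable to ordered subsets.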
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
The theory, method and application of Method R for estimation of (co)variance components are reviewed to support its appropriate use. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used in larger datasets. It is necessary to study its theoretical properties and to broaden its application range further.
NASA Technical Reports Server (NTRS)
Wood, C. A.
1974-01-01
For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well-known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN IV and comparisons in time and accuracy are given.
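The core of the G.C.D. idea can be sketched with a modern computer algebra system (this shows the general technique, not the report's FORTRAN code): dividing p by gcd(p, p') strips the repeated factors, leaving a polynomial with the same distinct zeros, all simple, on which Newton-type iteration is stable.

```python
import sympy as sp

x = sp.symbols('x')
# polynomial with a triple zero at 1 and a simple zero at -2: (x - 1)**3 * (x + 2)
p = sp.expand((x - 1)**3 * (x + 2))

g = sp.gcd(p, sp.diff(p, x))   # gcd(p, p') collects the repeated factors, here (x - 1)**2
q = sp.quo(p, g)               # p / gcd(p, p'): same distinct zeros, each now simple

roots = sorted(sp.nroots(q), key=lambda r: sp.re(r))  # numerically stable root finding
```

The "repeated" variant applies the same reduction again to g itself, which recovers the multiplicity of each zero as well as its location.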
Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M
2006-04-01
To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.
Wilsonian methods of concept analysis: a critique.
Hupcey, J E; Morse, J M; Lenz, E R; Tasón, M C
1996-01-01
Wilsonian methods of concept analysis--that is, the method proposed by Wilson and Wilson-derived methods in nursing (as described by Walker and Avant; Chinn and Kramer [Jacobs]; Schwartz-Barcott and Kim; and Rodgers)--are discussed and compared in this article. The evolution and modifications of Wilson's method in nursing are described and research that has used these methods, assessed. The transformation of Wilson's method is traced as each author has adopted his techniques and attempted to modify the method to correct for limitations. We suggest that these adaptations and modifications ultimately erode Wilson's method. Further, the Wilson-derived methods have been overly simplified and used by nurse researchers in a prescriptive manner, and the results often do not serve the purpose of expanding nursing knowledge. We conclude that, considering the significance of concept development for the nursing profession, the development of new methods and a means for evaluating conceptual inquiry must be given priority.
NASA Astrophysics Data System (ADS)
Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen
2013-08-01
We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method rests on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so that their characteristic scales are significantly different. We can therefore distinguish them easily in wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also performs significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
Study report on a double isotope method of calcium absorption
NASA Technical Reports Server (NTRS)
1978-01-01
Some of the pros and cons of three methods to study gastrointestinal calcium absorption are briefly discussed. The methods are: (1) a balance study; (2) a single isotope method; and (3) a double isotope method. A procedure for the double isotope method is also included.
2012-01-01
Background A single-step blending approach allows genomic prediction using information from genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference in scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively.
In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
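The scale adjustment described above can be sketched in numpy (a toy illustration with made-up genotypes and an identity matrix standing in for the pedigree relationships A22): solve for a and b so that G* = a + b·G matches A22 in mean diagonal and mean off-diagonal, then blend with the relative weight w.

```python
import numpy as np

rng = np.random.default_rng(1)
n_animal, n_snp = 20, 500

# toy genotypes coded 0/1/2; VanRaden method-1 genomic relationship matrix
M = rng.integers(0, 3, size=(n_animal, n_snp)).astype(float)
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

A22 = np.eye(n_animal)              # stand-in pedigree relationships (unrelated animals)

# affine adjustment a + b*G so mean diagonal and mean off-diagonal match A22
off = ~np.eye(n_animal, dtype=bool)
b = (A22.diagonal().mean() - A22[off].mean()) / (G.diagonal().mean() - G[off].mean())
a = A22.diagonal().mean() - b * G.diagonal().mean()
G_adj = a + b * G

w = 0.20                            # relative weight on the pedigree matrix
G_w = (1.0 - w) * G_adj + w * A22   # combined matrix used in the blending model
```

Putting the two matrices on the same scale before blending is what removes the bias the study attributes to unadjusted genomic relationships.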
Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele
2012-01-01
The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 +/- 2 degrees C for 24-28 h. Comparison of Roka's method to reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain. Each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. For the ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic), the samples were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples, compared to 332 positives for the reference methods. Overall, probability-of-detection analysis of the results showed better or equivalent performance compared to the reference methods.
NASA Astrophysics Data System (ADS)
Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan
2015-10-01
The choice of propagation simulation method and of mesh grid are both very important for obtaining correct propagation results in wave-optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh-choosing method based on wave characteristics is proposed together with the introduced propagation method. Appropriate mesh grids on the target board can then be calculated to obtain satisfactory results, and for a complex initial wave field, or for propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally by the above method. Finally, comparison with theoretical results shows that the simulation results of the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel-number conditions; that is, it can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. It can therefore provide better support for wave-propagation applications such as atmospheric optics and laser propagation.
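The fixed-grid building block that the paper generalizes can be sketched as a standard FFT angular-spectrum propagator (a minimal version without the alterable-mesh machinery; the beam parameters below are arbitrary examples):

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate field u0 (n x n samples, spacing dx) a distance z in free space."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies of the grid
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                    # pure-phase transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)   # filter in the frequency domain

# Gaussian beam, 633 nm, propagated 10 mm; a unitary transfer function conserves power
n, dx, wl = 128, 10e-6, 633e-9
xg = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(xg, xg)
u0 = np.exp(-(X ** 2 + Y ** 2) / (2 * (100e-6) ** 2))
u1 = angular_spectrum(u0, wl, dx, 10e-3)
```

In this plain form the output grid spacing is locked to the input spacing dx, which is exactly the limitation the alterable-mesh method removes.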
Hanks, Andrew S; Wansink, Brian; Just, David R
2014-03-01
Measuring food waste is essential to determine the impact of school interventions on what children eat. There are multiple methods used for measuring food waste, yet it is unclear which method is most appropriate in large-scale interventions with restricted resources. This study examines which of three visual tray waste measurement methods is most reliable, accurate, and cost-effective compared with the gold standard of individually weighing leftovers. School cafeteria researchers used the following three visual methods to capture tray waste in addition to actual food waste weights for 197 lunch trays: the quarter-waste method, the half-waste method, and the photograph method. Inter-rater and inter-method reliability were highest for on-site visual methods (0.90 for the quarter-waste method and 0.83 for the half-waste method) and lowest for the photograph method (0.48). This low reliability is partially due to the inability of photographs to determine whether packaged items (such as milk or yogurt) are empty or full. In sum, the quarter-waste method was the most appropriate for calculating accurate amounts of tray waste, and the photograph method might be appropriate if researchers only wish to detect significant differences in waste or consumption of selected, unpackaged food. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Karamon, Jacek; Ziomko, Irena; Cencek, Tomasz; Sroka, Jacek
2008-10-01
A modification of the flotation method for the examination of diarrhoeic piglet faeces for the detection of Isospora suis oocysts was developed. The method is based on removing the fat fraction from the faecal sample by centrifugation with a 25% Percoll solution. The investigations were carried out in comparison with the McMaster method. Of five variants of the Percoll flotation method, the best results were obtained when 2 ml of flotation liquid per 1 g of faeces were used. The limit of detection of the Percoll flotation method was 160 oocysts per 1 g, better than that of the McMaster method. The efficacy of the modified method was confirmed by results obtained in the examination of I. suis-infected piglets. Over all faecal samples, the Percoll flotation method yielded twice as many positive samples as the routine method. Oocysts were first detected by the Percoll flotation method on day 4 post-invasion, i.e. one day earlier than with the McMaster method. During the experiment (except for 3 days), the extensity of I. suis invasion in the litter examined by the Percoll flotation method was higher than with the McMaster method. The obtained results show that the modified flotation method using Percoll could be applied in the diagnosis of suckling piglet isosporosis.
Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S
2015-12-01
Hookworm infection accounts for around 700 million infections worldwide, especially in developing nations, due in part to the increased use of wastewater for crop production. The effective recovery of hookworm ova from wastewater matrices is difficult because of their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods from municipal wastewater, and (ii) two concentration methods from sludge samples. Ancylostoma caninum ova were used as a surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentrations of A. caninum ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples by centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D). For sludge samples, flotation (Method E) and direct DNA extraction (Method F) were used. Among the four methods tested, the filtration method (Method C) consistently recovered the highest concentrations of A. caninum ova from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rates ranging from 0.2 to 40% for treated and raw wastewater samples. The recovery rates for sludge samples were poor (0.02-4.7%), although Method F (direct DNA extraction) provided a recovery rate 1-2 orders of magnitude higher than Method E (flotation). Based on our results, it can be concluded that the recovery of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, the choice of concentration method is vital for the sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Achieving cost-neutrality with long-acting reversible contraceptive methods
Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna
2014-01-01
Objectives This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. Study design A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon for 1000 women aged 20–29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were the annual average cost per method and the minimum duration of LARC use needed to achieve cost-savings compared to SARC methods. Results The two least expensive methods were the copper IUD ($304 per woman, per year) and LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman, per year. A minimum of 2.1 years of LARC use would result in cost-savings compared to SARC use. Conclusions This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Implications Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. 
This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. PMID:25282161
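The break-even logic reduces to comparing cumulative costs over time. A sketch with hypothetical round numbers (not the study's model inputs, which also account for failure and discontinuation costs):

```python
# hypothetical illustrative figures: LARC as one upfront cost, SARC as a recurring cost
larc_upfront, larc_annual = 800.0, 0.0   # device plus insertion, then ~no annual cost
sarc_annual = 450.0                      # e.g. an injection method, per year

def cumulative_cost(upfront, annual, years):
    # total spend on a method after a given number of years of use
    return upfront + annual * years

# smallest whole year at which cumulative LARC cost drops below cumulative SARC cost
breakeven = next(t for t in range(1, 11)
                 if cumulative_cost(larc_upfront, larc_annual, t)
                 < cumulative_cost(0.0, sarc_annual, t))
print(breakeven)
```

With these toy figures the crossover lands at year 2; the study's richer three-state model, which adds discontinuation and unintended-pregnancy costs, arrives at a comparable horizon of about 2.1 years.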
Crawford, Charles G.; Martin, Jeffrey D.
2017-07-21
In October 2012, the U.S. Geological Survey (USGS) began measuring the concentration of the pesticide fipronil and three of its degradates (desulfinylfipronil, fipronil sulfide, and fipronil sulfone) by a new laboratory method using direct aqueous-injection liquid chromatography tandem mass spectrometry (DAI LC–MS/MS). This method replaced the previous method—in use since 2002—that used gas chromatography/mass spectrometry (GC/MS). The performance of the two methods is not comparable for fipronil and the three degradates. Concentrations of these four chemical compounds determined by the DAI LC–MS/MS method are substantially lower than the GC/MS method. A method was developed to correct for the difference in concentrations obtained by the two laboratory methods based on a methods comparison field study done in 2012. Environmental and field matrix spike samples to be analyzed by both methods from 48 stream sites from across the United States were sampled approximately three times each for this study. These data were used to develop a relation between the two laboratory methods for each compound using regression analysis. The relations were used to calibrate data obtained by the older method to the new method in order to remove any biases attributable to differences in the methods. The coefficients of the equations obtained from the regressions were used to calibrate over 16,600 observations of fipronil, as well as the three degradates determined by the GC/MS method retrieved from the USGS National Water Information System. The calibrated values were then compared to over 7,800 observations of fipronil and to the three degradates determined by the DAI LC–MS/MS method also retrieved from the National Water Information System. The original and calibrated values from the GC/MS method, along with measures of uncertainty in the calibrated values and the original values from the DAI LC–MS/MS method, are provided in an accompanying data release.
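The calibration step described above amounts to regressing paired field-study measurements from the two methods and applying the fitted relation to the archived data. A sketch with simulated paired data (the sample size, relation, and noise level below are invented, not the USGS study's values):

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical paired field-study concentrations: the new method (DAI LC-MS/MS)
# reads systematically lower than the old method (GC/MS)
old = rng.uniform(5.0, 100.0, 60)
new = 0.6 * old - 1.0 + rng.normal(0.0, 1.0, 60)   # assumed true relation plus noise

# fit new ~ old by ordinary least squares on the paired samples
b, a = np.polyfit(old, new, 1)

def calibrate(old_values):
    # map archived old-method observations onto the new method's scale
    return a + b * np.asarray(old_values)
```

Once fitted, `calibrate` can be applied to the full archive of old-method observations, which is how a bias between two laboratory methods is removed from a long monitoring record; the regression residuals also supply the uncertainty measure attached to each calibrated value.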
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-14
... Office 37 CFR Part 42 Transitional Program for Covered Business Method Patents--Definitions of Covered Business Method Patent and Technological Invention; Final Rule. Federal Register / Vol. 77, No. 157... Business Method Patents-- Definitions of Covered Business Method Patent and Technological Invention AGENCY...
24 CFR 291.90 - Sales methods.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...
A Review of Methods for Missing Data.
ERIC Educational Resources Information Center
Pigott, Therese D.
2001-01-01
Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…
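The contrast the abstract draws, between ad hoc fixes and model-based methods, can be illustrated with a minimal sketch (hypothetical data, plain least squares standing in for the full EM/multiple-imputation machinery): mean imputation ignores the relationship between variables, while a regression-based fill uses it.

```python
# Minimal sketch (hypothetical data): ad hoc mean imputation versus a
# model-based flavour of imputation from the complete cases.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, None), (4.0, 8.2), (5.0, None)]

obs = [(x, y) for x, y in data if y is not None]

# Ad hoc: fill missing y with the observed mean (distorts the x-y relationship).
mean_y = sum(y for _, y in obs) / len(obs)
mean_filled = [y if y is not None else mean_y for _, y in data]

# Model-based flavour: impute from a least-squares line fit on complete cases.
n = len(obs)
mx = sum(x for x, _ in obs) / n
my = sum(y for _, y in obs) / n
slope = sum((x - mx) * (y - my) for x, y in obs) / sum((x - mx) ** 2 for x, _ in obs)
intercept = my - slope * mx
reg_filled = [y if y is not None else intercept + slope * x for x, y in data]

print(mean_filled)
print(reg_filled)
```

The regression fill preserves the upward trend in y that mean imputation flattens, which is the basic argument for model-based methods the review makes.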
ERIC Educational Resources Information Center
Kitis, Emine; Türkel, Ali
2017-01-01
The aim of this study is to determine Turkish pre-service teachers' views on the effectiveness of the cluster method as a method of teaching writing. The cluster method can be defined as a connotative creative writing method: the person brainstorms on the connotations of a word or a concept in the absence of any kind of…
Assay of fluoxetine hydrochloride by titrimetric and HPLC methods.
Bueno, F; Bergold, A M; Fröehlich, P E
2000-01-01
Two alternative methods were proposed for the assay of fluoxetine hydrochloride: a titrimetric method and an HPLC method using water (pH 3.5):acetonitrile (65:35) as the mobile phase. These methods were applied to the determination of fluoxetine as such or in formulations (capsules). The titrimetric method is an alternative for pharmacies and small industries. Both methods showed good accuracy and precision and are an alternative to the official methods.
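The calculation behind a titrimetric assay like this one can be sketched as follows. The numbers (titrant volume, molarity, sample mass) are hypothetical, a 1:1 stoichiometry is assumed, and the molar mass of fluoxetine hydrochloride is approximately 345.79 g/mol.

```python
# Minimal sketch (hypothetical numbers): percent assay from a direct
# titration, assuming 1 mol of titrant reacts with 1 mol of analyte.

MW_FLUOXETINE_HCL = 345.79      # g/mol, approximate molar mass of the salt

def assay_percent(v_titrant_ml, molarity, sample_mass_mg):
    """Percent of the sample mass found by titration (1:1 stoichiometry)."""
    mmol = v_titrant_ml * molarity            # mmol of titrant consumed
    mass_found_mg = mmol * MW_FLUOXETINE_HCL  # mg of analyte recovered
    return 100.0 * mass_found_mg / sample_mass_mg

# e.g. 7.20 mL of 0.1 M titrant against a 250.0 mg sample:
print(round(assay_percent(7.20, 0.1, 250.0), 1))   # → 99.6
```

Accuracy and precision of the method would then be judged from replicate assays of samples of known content.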
1970-01-01
design and experimentation. I. The Shock-Tube Method. Smiley [546] introduced the use of shock waves...one of the greatest disadvantages of this technique. Both the unique adaptability of the shock-tube method for high-temperature measurement of...Line-Source Flow Method H. The Hot-Wire Thermal Diffusion Column Method I. The Shock-Tube Method J. The Arc Method K. The Ultrasonic Method.
NASA Technical Reports Server (NTRS)
Banyukevich, A.; Ziolkovski, K.
1975-01-01
A number of hybrid methods for solving Cauchy problems are described, based on an evaluation of the advantages of single- and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the recursive Taylor-Steffensen power-series method.
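The recursive power-series idea mentioned above can be sketched for the simplest Cauchy problem, y' = y with y(0) = 1, whose exact solution is exp(t); for this equation the Taylor coefficients obey the recursion c[k+1] = c[k]/(k+1). This is a minimal illustration of the series approach, not a reconstruction of the paper's algorithms.

```python
import math

# Minimal sketch of a recursive power-series (Taylor) step for the
# Cauchy problem y' = y, y(0) = 1, exact solution exp(t).

def taylor_step(y0, h, order=10):
    """Advance y' = y by one step h, generating Taylor terms recursively."""
    term = y0
    y = term
    for k in range(order):
        term = term * h / (k + 1)   # next series term at step size h
        y += term
    return y

t, y = 0.0, 1.0
for _ in range(10):                 # ten steps of h = 0.1 up to t = 1
    y = taylor_step(y, 0.1)
    t += 0.1

print(abs(y - math.exp(1.0)) < 1e-12)   # → True
```

For a high-order series and a small step, the truncation error per step is of order h**(order+1)/(order+1)!, which is why such methods can be very cheap per digit of accuracy when the derivatives are easy to generate recursively.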
Comparison of measurement methods for capacitive tactile sensors and their implementation
NASA Astrophysics Data System (ADS)
Tarapata, Grzegorz; Sienkiewicz, Rafał
2015-09-01
This paper presents a review of the ideas behind, and implementations of, measurement methods used for capacitance measurement in tactile sensors. The paper describes the technical method, the charge amplification method, and the generation and integration methods. Three selected methods were implemented in a dedicated measurement system and used for capacitance measurements of tactile sensors made by the authors. The tactile sensors tested in this work were fully fabricated with inkjet printing technology. The test results are presented and summarised. The charge amplification method (CDC) was selected as the best method for measuring these tactile sensors.
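The integration-style measurement mentioned above can be sketched with idealised numbers: charge the unknown capacitor with a constant current and infer C from the time the voltage takes to reach a threshold, since Q = I·t and C = Q/V. The component values below are hypothetical, not taken from the paper.

```python
# Minimal sketch (idealised numbers) of an integration-style capacitance
# measurement: with a constant charging current I, the voltage ramps
# linearly and C = I * t / V at the threshold crossing.

def capacitance_from_ramp(i_charge, t_cross, v_threshold):
    """C = Q / V, with Q = I * t for a constant charging current."""
    return i_charge * t_cross / v_threshold

# e.g. a 1 uA charging current reaching a 1.0 V threshold after 100 us:
c = capacitance_from_ramp(1e-6, 100e-6, 1.0)
print(c)   # → 1e-10  (i.e. 100 pF)
```

In a real front end the charging current, comparator offset, and parasitic capacitance of the wiring all enter the error budget, which is one reason the paper compares several measurement methods.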